What is the sequence in which ASP.NET events are processed? The events occur in the following sequence: o Page_Init o Page_Load o Control events o Page_Unload
The Page_Init event occurs only when the page is started for the first time, whereas Page_Load occurs on subsequent requests of the page as well.
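The ordering above can be sketched with a plain C# class; this is a minimal simulation of the lifecycle for illustration only, not the real System.Web pipeline (the class name and log strings here are made up):

```csharp
using System;
using System.Collections.Generic;

// Minimal simulation of the ASP.NET page event ordering described above.
// Illustrative only; the real pipeline lives in System.Web.UI.Page.
class MiniPage
{
    public readonly List<string> Log = new List<string>();

    public void ProcessRequest(bool isPostBack)
    {
        Log.Add("Page_Init");          // controls exist, but viewstate is not yet restored
        Log.Add("Page_Load");          // viewstate restored; safe to work with controls
        if (isPostBack)
            Log.Add("Control events"); // e.g. a Button.Click raised between Load and Unload
        Log.Add("Page_Unload");        // cleanup
    }
}

class Program
{
    static void Main()
    {
        var page = new MiniPage();
        page.ProcessRequest(isPostBack: true);
        Console.WriteLine(string.Join(" -> ", page.Log));
        // Page_Init -> Page_Load -> Control events -> Page_Unload
    }
}
```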
In which event are the controls fully loaded? The Page_Load event guarantees that all controls are fully loaded. Controls can also be accessed in the Page_Init event, but you will see that the viewstate is not fully loaded during this event. How can we identify that the page is a postback? The Page object has an "IsPostBack" property which can be checked to know whether the page has been posted back. What is event bubbling? Event bubbling is when events raised by child controls are handled by the parent control. Example: consider a datagrid as a parent control containing several child controls. There can be a column of link buttons, and each link button has a click event. Instead of writing an event routine for each link button, you write one routine for the parent which handles the click events of the child link buttons. The parent can know which child actually triggered the event through the arguments passed to the event routine. If we want to make sure that no one has tampered with View State, how do we ensure it? By setting EnableViewStateMac to true in the @Page directive. This attribute checks the encoded and encrypted viewstate for tampering. What is the use of the @Register directive? The @Register directive informs the compiler of any custom server control added to the page. Directives in ASP.NET are used to set attributes for a page. The @Register directive is used to register user-defined controls on a web page. A user-created server control has an .ascx extension. These controls inherit from System.Web.UI.UserControl, which in turn inherits from System.Web.UI.Control. A user control may be embedded in an .aspx web page using the @Register directive. A user control cannot be executed on its own, but may be registered on a web page and then used there. Below is the syntax for registering a user control with the @Register directive in an .aspx page: <%@ Register TagPrefix="UC1" TagName="UserControl1" Src="UserControl1.ascx" %> The TagPrefix attribute is used to specify a unique namespace for the user control.
The TagName is a name used to refer to a user control uniquely by its name. If we want to use this control in a web page, we may use the code below: <UC1:UserControl1 runat="server"/> What is the use of the Smart Navigation property? It's a feature provided by ASP.NET to prevent flickering and redrawing when the page is posted back. It is supported by Internet Explorer only. What is the appSettings section in the "Web.config" file? The Web.config file defines the configuration for a web project. Using the "appSettings" section we can define user-defined values. For example, we can define a "ConnectionString" entry which will be used throughout the project for database connections. <configuration> <appSettings> <add key="ConnectionString" value="server=xyz;pwd=www;database=testing" /> </appSettings> </configuration> Where is View State information stored? In HTML hidden fields. ViewState allows the state of (serializable) objects to be stored in a hidden field on the page. ViewState is transported to the
client and back to the server, and is not stored on the server or any other external source. ViewState is used to retain the state of server-side objects between postbacks. What is the use of the @OutputCache directive in ASP.NET? It is basically used for caching. ASP.NET supports two important forms of caching: page (including sub-page) caching, and data caching. Page caching allows the server to store a copy of the output of a dynamic page, and to use this to respond to requests instead of running the dynamic code again. Sub-page caching does the same thing, but for parts of pages. Data caching allows a web application to store commonly used objects so that they are available for use across all pages in the site. While it was possible to do this kind of thing in ASP using the Application object, it's a whole lot better in ASP.NET, which provides easier methods to control caching. You can use the @OutputCache directive to control page output caching in ASP.NET. Use the HttpCachePolicy class to store arbitrary objects, such as datasets, in server memory. The cache can live in several locations, such as the client browser, a proxy server, or Microsoft Internet Information Services (IIS). By using the Cache-Control HTTP header, you can control caching. You can use the @OutputCache directive to cache, or you can cache programmatically through code by using Visual Basic .NET or Visual C# .NET. The @OutputCache directive contains a Location attribute. This attribute determines the location for the cached item. You can specify the following locations: Any - This stores the output cache in the client's browser, on the proxy server (or any other server) that participates in the request, or on the server where the request is processed. By default, Any is selected. Client - This stores the output cache in the client's browser. Downstream - This stores the output cache in any cache-capable devices (other than the origin server) that participate in the request.
Server - This stores the output cache on the Web server. None - This turns off the output cache. How can we create custom controls in ASP.NET? Custom controls can be created in any of the following three ways. Creating a composite control: this method combines existing controls to give custom functionality that can be used across different projects by adding it to the control library. It can provide for event bubbling from child controls to the parent container, custom event handling, and properties. The CreateChildControls function of the Control class should be overridden when creating this kind of custom control. It can also support design-time rendering of the control. Deriving from an existing control: this method derives from an existing ASP.NET control and customizes the properties that we need. It can also support custom event handling, properties, etc. Creating a control from scratch: this method requires the most programming. Even the HTML for the custom control must be written by the programmer, and it may also require implementing the IPostBackDataHandler and IPostBackEventHandler interfaces. What are the validation controls? A set of server controls included with ASP.NET that test user input in HTML and Web server controls for programmer-defined requirements. Validation controls perform input checking in server code. If the user is working with a browser that supports DHTML, the validation controls can also perform validation using client script. Can you explain what the "AutoPostBack" feature in ASP.NET is? AutoPostBack is built into the form-based server controls and, when enabled, automatically posts the page back to the server whenever the value of the control in question is changed. How many types of validation controls are available in ASP.NET?
There are six main types of validation controls: RequiredFieldValidator - checks whether the control has any value; used when the control should not be empty. RangeValidator - checks whether the value in the validated control falls within a specific range; for example, TxtCustomerCode should not be more than eight characters long. CompareValidator - checks that the value in one control matches the value in another control; for example, textbox TxtPie should be equal to the value of another control. RegularExpressionValidator - used when the control's value should match a specific regular expression. CustomValidator - used to define user-defined validation. ValidationSummary - displays a summary of all current validation errors. How can you enable automatic paging in the DataGrid? To use default paging, you set properties to enable paging, set the page size, and specify the style of the paging controls. Paging controls are LinkButton controls. You can choose from these types: next and previous buttons, whose captions can be any text you want, or page numbers, which allow users to jump to a specific page. You can specify how many numbers are displayed; if there are more pages, an ellipsis (…) is displayed next to the numbers. You must also create an event-handling method that responds when users click a navigation control. To use the built-in paging controls: set the control's AllowPaging property to true; set the PageSize property to the number of items to display per page; to set the appearance of the paging buttons, include a PagerStyle element in the page as a child of the DataGrid control (for syntax, see DataGrid Control Syntax); and create a handler for the grid's PageIndexChanged event to respond to a paging request. The DataGridPageChangedEventArgs argument contains the NewPageIndex property, which is the page the user would like to browse to. Set the grid's CurrentPageIndex property to e.NewPageIndex, then rebind the data. What is the use of the "GLOBAL.ASAX" file? It allows handling ASP.NET application-level events and setting application-level variables. The Global.asax file, which is derived from the HttpApplication class, maintains a pool of HttpApplication objects and assigns them to applications as needed. The Global.asax file contains the following events: Application_Init: Fired when an application initializes or is first called.
It's invoked for all HttpApplication object instances. Application_Disposed: Fired just before an application is destroyed. This is the ideal location for cleaning up previously used resources. Application_Error: Fired when an unhandled exception is encountered within the application. Application_Start: Fired when the first instance of the HttpApplication class is created. It allows you to create objects that are accessible by all HttpApplication instances. Application_End: Fired when the last instance of an HttpApplication class is destroyed. It's fired only once during an application's lifetime. Application_BeginRequest: Fired when an application request is received. It's the first event fired for a request, which is often a page request (URL) that a user enters. Application_EndRequest: The last event fired for an application request. Application_PreRequestHandlerExecute: Fired before the ASP.NET page framework begins executing an event handler such as a page or Web service. Application_PostRequestHandlerExecute: Fired when the ASP.NET page framework has finished executing an event handler. Application_PreSendRequestHeaders: Fired before the ASP.NET page framework sends HTTP headers to a requesting client (browser). Application_PreSendContent: Fired before the ASP.NET page framework sends content to a requesting client (browser). Application_AcquireRequestState: Fired when the ASP.NET page framework gets the current state (session state) related to the current request. Application_ReleaseRequestState: Fired when the ASP.NET page framework completes execution of all event handlers. This results in all state modules saving their current state data. Application_ResolveRequestCache: Fired when the ASP.NET page framework completes an authorization request. It allows caching modules to serve the request from the cache, thus bypassing handler execution.
Application_UpdateRequestCache: Fired when the ASP.NET page framework completes handler execution, to allow caching modules to store responses to be used in handling subsequent requests. Application_AuthenticateRequest: Fired when the security module has established the current user's identity as valid. At this point, the user's credentials have been validated. Application_AuthorizeRequest: Fired when the security module has verified that a user can access resources. Session_Start: Fired when a new user visits the application Web site. Session_End: Fired when a user's session times out, ends, or the user leaves the application Web site. What is the difference between "Web.config" and "Machine.config"? The settings made in the web.config file are applied to that particular web application only, whereas the settings in the machine.config file are applied to all ASP.NET applications on the machine. What are the SESSION and APPLICATION objects? ASP solves the problem of statelessness by creating a unique cookie for each user. The cookie is sent to the client and contains information that identifies the user. This interface is called the Session object. The Session object is used to store information about, or change settings for, a user session. Variables stored in the Session object hold information about one single user and are available to all pages in one application. Common information stored in session variables includes name, id, and preferences. The server creates a new Session object for each new user, and destroys the Session object when the session expires. An application on the Web may be a group of ASP files that work together to perform some purpose. The Application object in ASP is used to tie these files together. The Application object is used to store and access variables from any page, just like the Session object. The difference is that ALL users share one Application object, while with sessions there is one Session object for EACH user.
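The Session-versus-Application distinction above can be sketched with plain dictionaries: one shared bag for the whole site versus one bag per session id. This is an illustrative simulation only; the real types are HttpSessionState and HttpApplicationState, and the names below are made up:

```csharp
using System;
using System.Collections.Generic;

// Sketch: Application state is one bag shared by ALL users;
// Session state is one bag PER user, keyed by the session cookie.
static class StateDemo
{
    // One Application bag for the whole site.
    public static readonly Dictionary<string, object> Application = new Dictionary<string, object>();

    // One Session bag per user, keyed by session id.
    static readonly Dictionary<string, Dictionary<string, object>> sessions =
        new Dictionary<string, Dictionary<string, object>>();

    public static Dictionary<string, object> SessionFor(string sessionId)
    {
        if (!sessions.ContainsKey(sessionId))
            sessions[sessionId] = new Dictionary<string, object>();
        return sessions[sessionId];
    }
}

class Program
{
    static void Main()
    {
        StateDemo.Application["SiteName"] = "Shop";   // visible to every user
        StateDemo.SessionFor("user-a")["Name"] = "Alice"; // visible to user-a only
        StateDemo.SessionFor("user-b")["Name"] = "Bob";   // user-b has his own copy
        Console.WriteLine(StateDemo.SessionFor("user-a")["Name"]); // Alice
    }
}
```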
What is the difference between Server.Transfer and Response.Redirect? Server.Transfer transfers page processing from one page directly to the next page without making a round trip back to the client's browser. This provides a faster response with a little less overhead on the server. Server.Transfer does not update the client's URL history list or current URL. Response.Redirect is used to redirect the user's browser to another page or site. It performs a trip back to the client, where the client's browser is redirected to the new page. The user's browser history list is updated to reflect the new address. What is the difference between authentication and authorization? Authentication is verifying a user's identity using credentials such as a username and password. Authorization is allowing an authenticated user to access a page or resource. It can be configured in the web.config file. What is impersonation in ASP.NET? By default, ASP.NET executes in the security context of a restricted user account on the local machine. Sometimes you need to access network resources such as a file on a shared drive, which requires additional permissions. One way to overcome this restriction is to use impersonation. With impersonation, ASP.NET can execute the request using the identity of the client who is making the request, or ASP.NET can impersonate a specific account you specify in web.config. When using impersonation, ASP.NET applications can execute with the Windows identity (user account) of the user making the request. Impersonation is commonly used in applications that rely on Microsoft Internet Information Services (IIS) to authenticate the user. ASP.NET impersonation is disabled by default. If impersonation is enabled for an ASP.NET application, that application runs in the context of the identity whose access token IIS passes to ASP.NET.
That token can be either an authenticated user token, such as a token for a logged-in Windows user, or the token that IIS provides for anonymous users (typically, the IUSR_MACHINENAME identity). Can you explain in brief how the ASP.NET authentication process works? ASP.NET does not run by itself; it runs inside the IIS process. So there are two authentication layers in the ASP.NET system: first authentication happens at the IIS level, and then at the ASP.NET level, depending on the WEB.CONFIG file: o IIS first checks to make sure the incoming request comes from an IP address that is allowed access to the domain. If not, it denies the request. o Next, IIS performs its own user authentication if it is configured to do so. By default IIS allows anonymous access, so requests are automatically authenticated, but you can change this default on a per-application basis within IIS. o If the request is passed to ASP.NET with an authenticated user, ASP.NET checks to see whether impersonation is enabled. If impersonation is enabled, ASP.NET acts as though it were the authenticated user. If not, ASP.NET acts with its own configured account. o Finally, the resulting identity is used to request resources from the operating system. If ASP.NET can obtain all the necessary resources, it grants the user's request; otherwise the request is denied. Resources can include much more than just the ASP.NET page itself; you can also use .NET's code access security features to extend this authorization step to disk files, registry keys, and other resources. What are the various authentication techniques in ASP.NET? Selecting an authentication provider is as simple as making an entry in the web.config file for the application. You can use one of these entries to select the corresponding built-in authentication provider: o <authentication mode="Windows"> o <authentication mode="Passport"> o <authentication mode="Forms"> o Custom authentication, where you might install an ISAPI filter in IIS that compares incoming requests to a list of source IP addresses and considers requests to be authenticated if they come from an acceptable address. In that case, you would set the authentication mode to None to prevent any of the .NET authentication providers from being triggered. How does authorization work in ASP.NET? ASP.NET impersonation is controlled by entries in the application's web.config file. The default setting is "no impersonation".
You can explicitly specify that ASP.NET should not use impersonation by including the following code in the file: <identity impersonate="false"/> This means that ASP.NET will not perform any impersonation and runs with its own privileges. By default ASP.NET runs as an unprivileged account named ASPNET. You can change this by making a setting in the processModel section of the machine.config file. When you make this setting, it automatically applies to every site on the server. To use a high-privileged system account instead of a low-privileged one, set the userName attribute of the processModel element to SYSTEM. Using this setting is a definite security risk, as it elevates the privileges of the ASP.NET process to a point where it can do damage to the operating system.
When you disable impersonation, all requests run in the context of the account running ASP.NET: either the ASPNET account or the system account. This is true whether you are using anonymous access or authenticating users in some fashion. After the user has been authenticated, ASP.NET uses its own identity to request access to resources.
The second possible setting is to turn on impersonation: <identity impersonate="true"/>.
In this case, ASP.NET takes on the identity IIS passes to it. If you are allowing anonymous access in IIS, this means ASP.NET will impersonate the IUSR_ComputerName account that IIS itself uses. If you aren't allowing anonymous access, ASP.NET will take on the credentials of the authenticated user and make requests for resources as if it were that user. Thus, by turning impersonation on and using a non-anonymous method of authentication in IIS, you can let users log on and use their identities within your ASP.NET application.
Finally, you can specify a particular identity to use for all authenticated requests: <identity impersonate="true" userName="DOMAIN\username" password="password"/>.
With this setting, all requests are made as the specified user (assuming the password in the configuration file is correct). So, for example, you could designate a user for a single application and use that user's identity every time someone authenticates to the application. The drawback of this technique is that you must embed the user's password in the web.config file in plain text. Although ASP.NET won't allow anyone to download this file, it is still a security risk if anyone can get the file by other means.
What is the difference between DataGrid, DataList, and Repeater? DataGrid, DataList, and Repeater are all ASP.NET data Web controls. They have many things in common, such as the DataSource property, the DataBind method, and the ItemDataBound and ItemCreated events. When you assign the DataSource property of a DataGrid to a DataSet, each DataRow present in the DataRow collection of the DataTable is assigned to a corresponding DataGridItem, and this is the same for the other two controls as well. But the HTML code generated for a DataGrid has an HTML table row (TR) element created for each DataRow, and it is a table-form representation with columns and rows.
For a DataList, it is an array of rows; based on the template selected and the RepeatColumns property value, we can specify how many DataSource records should appear per HTML <table> row. In short, with a DataGrid we have one record per row, but with a DataList we can have several records per row.
For a Repeater control, the data records to be displayed depend upon the templates specified, and the only HTML generated is that of the templates.
In addition, the DataGrid has built-in support for sorting, filtering, and paging the data, which is not available in the DataList; for a Repeater control we would need to write explicit code to do paging.
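The paging a DataGrid gives you for free, and that a Repeater makes you hand-code, boils down to slicing the data source by page index. A rough sketch of that hand-written paging logic follows; the helper name is illustrative, not an ASP.NET API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the manual paging a Repeater needs (a DataGrid does this internally).
static class Paging
{
    // pageIndex is zero-based, mirroring DataGrid.CurrentPageIndex.
    public static List<T> GetPage<T>(IList<T> source, int pageIndex, int pageSize)
    {
        return source.Skip(pageIndex * pageSize).Take(pageSize).ToList();
    }
}

class Program
{
    static void Main()
    {
        var rows = Enumerable.Range(1, 10).ToList();
        // Second page (index 1) of three rows per page: 4, 5, 6.
        Console.WriteLine(string.Join(",", Paging.GetPage(rows, 1, 3)));
        // 4,5,6
    }
}
```

In a real Repeater you would call such a helper before DataBind, re-binding with the new slice each time the user moves to another page.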
From a performance point of view, how do DataGrid, DataList, and Repeater compare? The Repeater is fastest, followed by the DataList, and finally the DataGrid.
What is the method to customize columns in a DataGrid? Use a TemplateColumn. How can we format data inside a DataGrid? Use the DataFormatString property.
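DataFormatString uses standard .NET format strings, so the same formatting can be reproduced with string.Format. A small sketch, with the culture pinned to invariant so the output is predictable (the value being formatted is made up):

```csharp
using System;
using System.Globalization;

class FormatDemo
{
    static void Main()
    {
        // Equivalent of DataFormatString="{0:F2}" on a bound DataGrid column:
        // format the cell value with two decimal places.
        string price = string.Format(CultureInfo.InvariantCulture, "{0:F2}", 1234.5);
        Console.WriteLine(price); // 1234.50
    }
}
```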
How do we decide whether to use a DataGrid, DataList, or Repeater? The DataGrid provides the ability to allow the end user to sort, page, and edit its data, but this comes at a cost in speed. Also, its display format is simple: rows and columns. Real-life scenarios can be more demanding than that. With its templates, the DataList provides more control over the look and feel of the displayed data than the DataGrid, and it offers better performance than the DataGrid.
The Repeater control allows for complete and total control. With the Repeater, the only HTML emitted is the values of the data-binding statements in the templates along with the HTML markup specified in the templates; no "extra" HTML is emitted, as with the DataGrid and DataList. By requiring the developer to specify the complete generated HTML markup, the Repeater often requires the longest development time, and it does not provide editing features like the DataGrid, so everything has to be coded by the programmer. However, the Repeater boasts the best performance of the three data Web controls: the Repeater is fastest, followed by the DataList, and finally the DataGrid.
Difference between ASP and ASP.NET? ASP.NET supports new features:
Better Language Support
New ADO.NET concepts have been implemented. ASP.NET supports full languages (C#, VB.NET, C++), not just simple scripting like VBScript.
Better controls than ASP
ASP.NET covers large sets of HTML controls, and better display grids like the DataGrid, Repeater, and DataList. Many of the display grids have paging support.
Controls have events support
All ASP.NET controls support events. Load, Click, and Change events handled by code make coding much simpler and much better organized.
The first request for an ASP.NET page on the server will compile the ASP.NET code and keep a cached copy in memory. The result of this is greatly increased performance.
Better Authentication Support
ASP.NET supports forms-based user authentication, including cookie management and automatic redirecting of unauthorized logins. (You can still do your custom login page and custom user checking).
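A minimal web.config fragment for the forms-based authentication described above might look like the following; the loginUrl and timeout values are illustrative, not defaults you must use:

```xml
<configuration>
  <system.web>
    <!-- Redirect unauthenticated users to Login.aspx; values are illustrative. -->
    <authentication mode="Forms">
      <forms loginUrl="Login.aspx" timeout="30" />
    </authentication>
    <authorization>
      <!-- "?" stands for anonymous users: deny them access. -->
      <deny users="?" />
    </authorization>
  </system.web>
</configuration>
```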
User Accounts and Roles
ASP.NET allows for user accounts and roles, to give each user (with a given role) access to different server code and executables.
Server to server communication has been greatly enhanced, making it possible to scale an application over several servers. One example of this is the ability to run XML parsers, XSL transformations and even resource hungry session objects on other servers.
Configuration of ASP.NET is done with plain text files. Configuration files can be uploaded or changed while the application is running. No need to restart the server, deal with metabase or registry.
No more server restart to deploy or replace compiled code. ASP.NET simply redirects all new requests to the new code. In what order are the major events in the GLOBAL.ASAX file triggered? They're triggered in the following order: o Application_BeginRequest o Application_AuthenticateRequest o Application_AuthorizeRequest o Application_ResolveRequestCache o Application_AcquireRequestState o Application_PreRequestHandlerExecute o Application_PreSendRequestHeaders
How do we upload a file in ASP.NET? The following code saves an uploaded file to a folder named temp on your server: string strdir = "D:\\temp\\"; string strfilename = Path.GetFileName(txtFile.PostedFile.FileName); txtFile.PostedFile.SaveAs(strdir + strfilename); Make sure to create the specified folder and change the path before attempting to execute the program. In the above code, you can use the HTML File control so that users can browse for the required file. As you may know, an HTML control can be converted into an ASP.NET server control with the addition of the runat="server" attribute. The code retrieves and saves the file using the PostedFile property. Because the HTML File control is used, you have to explicitly give the enctype attribute of the Form tag:
<form method = "post" name = "frmemail" runat = "server" enctype = "multipart/form-data" onSubmit = "return Tocheck(this)">
Visual Studio 2005 ships with a built-in control named "FileUpload", so the use of the HTML File control can be avoided. Also, there is no need to give the enctype attribute as shown above; the new control automatically handles the encoding.
How do I send an email message from ASP.NET? To send an email from your ASP.NET page, you need to: o import the System.Web.Mail namespace in your ASP.NET page o create an instance of the MailMessage class o set all the properties of the MailMessage instance o send the message with the SmtpMail.Send method. What are the different IIS isolation levels? IIS has three levels of isolation: o Low (IIS process): the main IIS process and the ASP.NET application run in the same process, so if one crashes the other is also affected. All applications and the IIS process run in the same process, and if any website crashes it affects everyone. o Medium (Pooled): IIS and the web applications run in different processes, so there are two processes: one running IIS and one running all web applications. o High (Isolated): every web application runs in its own process. This consumes more memory but has the highest reliability. ASP used the STA threading model; what is the threading model used for ASP.NET? ASP.NET uses the MTA threading model. What is the use of the <%@ Page aspcompat=true %> attribute? This attribute works like a compatibility option. As mentioned before, ASP worked in the STA model and ASP.NET works in the MTA model, but what if your ASP.NET application is using a VB COM component? In order for the VB COM component to run properly in the ASP.NET threading model, we have to set this attribute. After defining the ASPCOMPAT directive attribute, ASP.NET pages run in the STA model, thus providing compatibility between ASP.NET and old COM components that do not support the MTA model. Explain the differences between server-side and client-side code. Server-side code executes on the server and generates the HTML or XML displayed in the client browser, because the browser understands only HTML and XML. Server-side code requires a round trip to the server, while client-side code executes in the browser without any trip to the server. What is the main difference between GridLayout and FlowLayout? GridLayout provides absolute positioning for controls placed on the page. Developers who have their roots in rich-client development environments like Visual Basic will find it easier to develop their pages using absolute positioning, because they can place items exactly where they want them. On the other hand, FlowLayout positions items down the page like traditional HTML. Experienced Web developers favor this approach because it results in pages that are compatible with a wider range of browsers. If you look into the HTML code created by absolute positioning, you will notice a lot of DIV tags, while in FlowLayout you will see HTML tables used to position elements, which is compatible with a wide range of browsers. If cookies are not enabled at the browser end, does forms authentication work? No, it does not work. What is tracing in ASP.NET? Tracing allows us to view how the code was executed in detail. How do we enable tracing? At the application level we can add the directive <trace enabled="true"/> to the web.config file between the <system.web>…</system.web> elements. At the page level we can add Trace="true" to the @Page directive at the beginning of the page. How do I sign out in forms authentication? In Logout.aspx, call Request.Cookies.Clear(), Response.Cookies.Clear(), System.Web.Security.FormsAuthentication.SignOut(), and Session.Abandon(). Note that when the response is redirected and execution is passed to Application_AuthenticateRequest in Global.asax, the auth cookies are still in Context.Request.Cookies. What is the difference between "Web farms" and a "Web garden"? "Web farms" are used to provide redundancy and minimize failures.
A web farm consists of two or more web servers with the same configuration, streaming the same content. When any request comes in, switching/routing logic decides which web server from the farm handles the request. For instance, say we have two servers, "Server1" and "Server2", with the same configuration and content, and a special switch standing between these servers and the users: a router in between takes each request, sees which server is least loaded, and forwards the request to that server. So request1 is routed to Server1, request2 to Server2, request3 to Server1 again, and request4 to Server2. Because we have a web farm in place, Server1 and Server2 are each loaded with two requests rather than one server taking the full load. One more advantage of this kind of architecture is that if one of the servers goes down, we can still run with the other server, thus providing 24x7 uptime. The routing logic can be a number of different options: o Round-robin: each node gets a request sent to it "in turn". So Server1 gets a request, then Server2, then Server1 again, then Server2 again. o Least active: whichever node shows the lowest number of current connections gets new connections sent to it. This is good for keeping the load balanced between the server nodes. o Fastest reply: whichever node replies faster is the one that gets new requests. This is also a good option, especially if the nodes are not "equal" in performance: if one performs better than the other, more requests are sent there rather than to the one that is responding slowly.
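The round-robin option above is simple to sketch: keep a counter and hand each request to the next server in turn. This is an illustrative toy dispatcher, not load-balancer code; the server names are made up:

```csharp
using System;

// Minimal round-robin dispatcher, as described for web-farm routing above.
class RoundRobin
{
    private readonly string[] servers;
    private int next; // index of the server that gets the next request

    public RoundRobin(params string[] servers)
    {
        this.servers = servers;
    }

    public string NextServer()
    {
        string chosen = servers[next];
        next = (next + 1) % servers.Length; // wrap around after the last server
        return chosen;
    }
}

class Program
{
    static void Main()
    {
        var lb = new RoundRobin("Server1", "Server2");
        Console.WriteLine(lb.NextServer()); // Server1
        Console.WriteLine(lb.NextServer()); // Server2
        Console.WriteLine(lb.NextServer()); // Server1 again
    }
}
```

A least-active or fastest-reply policy would replace the counter with per-server connection counts or response-time measurements.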
Web garden: all requests to IIS are routed to "aspnet_wp.exe" for IIS 5.0 and "w3wp.exe" for IIS 6.0. In the normal case, i.e. without a web garden, we have one worker process instance ("aspnet_wp.exe" / "w3wp.exe") across all requests, and this one instance of the worker process uses the CPU processors as directed by the operating system. But when we enable a web garden for a web server, it creates different instances of the worker process, and each of these worker processes runs on a different CPU. In short, a model in which multiple worker processes run on multiple CPUs in a single server machine is known as a web garden.
How do we configure "Web Garden"? A web garden can be configured by using the process model settings in the "machine.config" or "Web.config" file. The configuration section is named <processModel> and is shown in the following example. The process model is enabled by default (enable="true"). Below is the snippet from the config file: <processModel enable="true" timeout="infinite" idleTimeout="infinite" shutdownTimeout="0:00:05" requestLimit="infinite" requestQueueLimit="5000" memoryLimit="80" webGarden="false" cpuMask="12" userName="" password="" logLevel="errors" clientConnectedCheck="0:00:05" /> From the above <processModel> section, for a web garden we are concerned with only two attributes: "webGarden" and "cpuMask".
webGarden : Controls CPU affinity. True indicates that processes should be affinitized to the
corresponding CPU. The default is False.
cpuMask : Specifies which processors on a multiprocessor server are eligible to run ASP.NET
processes. The cpuMask value specifies a bit pattern that indicates the CPUs eligible to run ASP.NET threads. ASP.NET launches one worker process for each eligible CPU. If webGarden is set to false, cpuMask is ignored and only one worker process will run regardless of the number of processors in the machine. If webGarden is set to true, ASP.NET launches one worker process for each CPU that corresponds to a set bit in cpuMask. The default value of cpuMask is 0xffffffff.
Use 1 for a processor that you want to use for ASP.NET. Use 0 for a processor that you do not want to use for ASP.NET. For example, if you want to use the first two processors of a four-processor computer for ASP.NET, type the bit pattern 1100.
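Putting the two attributes together, a minimal sketch of enabling a web garden looks like the fragment below. It reuses the cpuMask value from the snippet above (12, i.e. the bit pattern 1100); all other <processModel> attributes keep their defaults.

```xml
<!-- machine.config sketch: turn the web garden on and restrict the
     worker processes to the CPUs whose bits are set in cpuMask -->
<processModel enable="true"
              webGarden="true"
              cpuMask="12" />  <!-- bit pattern 1100, as in the example above -->
```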
What are user controls and custom controls? Custom controls:
A control authored by a user or a third-party software vendor that does not belong to the .NET
Framework class library. This is a generic term that includes user controls. A custom server control is used in Web Forms (ASP.NET pages). A custom client control is used in Windows Forms applications. User Controls: In ASP.NET, a user control is a user-authored server control that enables an ASP.NET page to be re-used as a server control. An ASP.NET user control is authored declaratively and persisted as a text file with an .ascx extension. The ASP.NET page framework compiles a user control on the fly to a class that derives from the System.Web.UI.UserControl class. What are the advantages and limitations of View State? Advantages: * No server resources are required because state is contained in a structure in the page code. * Simplicity. * States are retained automatically. * The values in view state are hashed, compressed, and encoded, thus representing a higher state of security than hidden fields. * View state is good for caching data in Web farm configurations because the data is cached on the client. Limitations: * View state is stored in the page itself, so storing large values makes the page slower to display and slower to post back. * It does not transfer from page to page; it is available only within the same page across postbacks. What is code behind technology? Code-behind is a concept where the contents of a page are in one file and the server-side code is in another. This allows different people to work on the same page at the same time, and also allows either part of the page to be easily redesigned with no changes required in the other. An Inherits attribute is added to the @Page directive to specify the location of the code-behind file to the ASP.NET page. An ASP.NET page is made of two components: page layout information and procedural code that ties controls and literals together. The code can be injected in the same .aspx file through the server-side <script> tag or placed in an externally bound class file. This technique of keeping code and layout separate is known as code-behind technology. The following code shows how to bind a page to a source class file: <%@ Page Language="C#" Src="MyBasePage.cs" %> How do we enable and disable View State? 
You can specify that a control should not save its view state, or the view state of its child controls, by setting the control's EnableViewState property to False (the default is True). The EnableViewState property is defined in the System.Web.UI.Control class, so all server controls have this property, including the Page class. You can therefore indicate that an entire page's view state need not be saved by setting the Page class's EnableViewState to False. (This can be done either in the code-behind class with Page.EnableViewState = false; or as a page-level directive: <%@ Page EnableViewState="False" %>.) What is ViewState and how does it work? View state's purpose is simple: it is there to persist state across postbacks. (For an ASP.NET Web page, its state is the property values of the controls that make up its control hierarchy.) This raises the question, "What sort of state needs to be persisted?" To answer that question, let's start by looking at what state doesn't need to be persisted across postbacks. Recall that in the instantiation stage of the page life cycle, the control hierarchy is created and those properties that are specified in the declarative syntax are assigned. Since these declarative properties are automatically reassigned on each postback when the control hierarchy is constructed, there is no need to store these property values in the view state. ADO.NET
What is the namespace in which .NET has the data functionality classes? System.Data. This
namespace contains the basic objects used for accessing and storing relational data, such as DataSet, DataTable, and DataRelation. Each of these is independent of the type of data source and the type of the connection.
System.Data.OleDb: Contains the objects that we use to connect to a data source via an
OLE-DB provider, such as OleDbConnection, OleDbCommand, etc. These objects inherit from the common base classes, and so have the same properties, methods, and events as the SqlClient equivalents.
System.Data.SqlClient: This contains the objects that we use to connect to a data source via the
Tabular Data Stream (TDS) interface of Microsoft SQL Server (only). This can generally provide better performance as it removes some of the intermediate layers required by an OLE-DB connection.
System.Xml: Contains the basic objects required to create, read, store, write, and manipulate
XML documents according to W3C recommendations.
Can you give an overview of ADO.NET architecture? ADO.NET provides consistent access to
data sources such as Microsoft SQL Server and XML, as well as to data sources exposed through OLE DB and ODBC. Data-sharing consumer applications can use ADO.NET to connect to these data sources and retrieve, manipulate, and update the data that they contain. ADO.NET separates data access from data manipulation into discrete components that can be used separately or in tandem. ADO.NET includes .NET Framework data providers for connecting to a database, executing commands, and retrieving results. Those results are either processed directly, placed in an ADO.NET DataSet object in order to be exposed to the user in an ad hoc manner, combined with data from multiple sources, or remoted between tiers. The ADO.NET DataSet object can also be used independently of a .NET Framework data provider to manage data local to the application or sourced from XML. The ADO.NET classes are found in System.Data.dll, and are integrated with the XML classes found in System.Xml.dll. When compiling code that uses the System.Data namespace, reference both System.Data.dll and System.Xml.dll. For an example of an ADO.NET application that connects to a database, retrieves data from it, and then displays that data in a command prompt, see ADO.NET Sample Application. ADO.NET provides functionality to developers writing managed code similar to the functionality provided to native component object model (COM) developers by ActiveX Data Objects (ADO). For a discussion of the differences between ADO and ADO.NET, see ADO.NET for the ADO Programmer on MSDN. We recommend that you use ADO.NET, not ADO for accessing data in your .NET applications. What are the two fundamental objects in ADO.NET? DataReader and Dataset. The DataTable Select Method - This method is overloaded to accept arguments to filter and sort data rows returning an array of DataRow objects. 
The DataView object's sort, filter, and find methods - This object uses the same filter arguments supported by the Select method, but the DataView extrudes structures that can be bound to data-aware controls. What is the difference between DataSet and DataReader? Following are some major differences between DataSet and DataReader: o DataReader provides forward-only and read-only access to data, while the DataSet object provides random access and can hold more than one table (in other words, more than one rowset) from the same data source, as well as the relationships between them. o DataSet is a disconnected architecture while DataReader is a connected architecture. o DataSet can persist contents while DataReader cannot. What are the major differences between classic ADO and ADO.NET? o The client-side and server-side cursors of classic ADO are no longer present in ADO.NET. Note it's a disconnected model, so they are no longer applicable. o Locking is not supported, due to the disconnected model. o All data is stored as XML in ADO.NET, as compared to classic ADO where data could also be stored in binary format.
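The contrast above can be sketched in code. This is an illustrative sketch only; the connection string and the Products table are assumptions for the example, not part of the original text.

```csharp
using System.Data;
using System.Data.SqlClient;

class DataAccessSketch
{
    const string ConnStr = "Data Source=.;Initial Catalog=Shop;Integrated Security=true";

    // Connected, forward-only, read-only access: the connection must stay
    // open for as long as the DataReader is being iterated.
    static void ReadWithDataReader()
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT Name FROM Products", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    System.Console.WriteLine(reader.GetString(0));
            } // connection is needed only up to this point
        }
    }

    // Disconnected access: the adapter opens/closes the connection itself,
    // and the DataSet keeps the rows (and can keep several tables) in memory.
    static DataSet ReadWithDataSet()
    {
        var ds = new DataSet();
        using (var adapter = new SqlDataAdapter("SELECT Name FROM Products", ConnStr))
            adapter.Fill(ds, "Products");
        return ds; // usable long after the connection has been closed
    }
}
```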
What is the use of the connection object? Connection objects are used to connect to a data source and make it available to a Command object.
o An OleDbConnection object is used with an OLE-DB provider.
o A SqlConnection object uses Tabular Data Stream (TDS) with MS SQL Server.
What is the use of command objects and what are the methods provided by the command
object? The ADO Command object is used to execute a single query against a database. The query can perform actions like creating, adding, retrieving, deleting, or updating records. If the query is used to retrieve data, the data is returned as a RecordSet object. This means that the retrieved data can be manipulated by properties, collections, methods, and events of the Recordset object. The major feature of the Command object is the ability to use stored queries and procedures with parameters. The methods of the Command object are: i) Cancel - Cancels execution of a method ii) CreateParameter - Creates a new Parameter object iii) Execute - Executes the query, SQL statement, or procedure in the CommandText property. What is the use of a data adapter? A data adapter is the component that sits between the local repository (dataset) and the physical database. It contains the four different commands (SELECT, INSERT, UPDATE, and DELETE). It uses these commands to fetch data from the database and fill it into the dataset, and to push updates made in the dataset back to the physical database. It is the data adapter that is responsible for opening and closing the database connection and for communicating with the dataset. Why do we need a Command object? It is used to connect the Connection object to a DataReader or DataSet. Following are the methods provided by the Command object: o ExecuteNonQuery : Executes the command defined in the CommandText property against the connection defined in the Connection property, for a query that does not return any rows (an UPDATE, DELETE, or INSERT). Returns an integer indicating the number of rows affected by the query. o ExecuteReader : Executes the command defined in the CommandText property against the connection defined in the Connection property. Returns a "reader" object that is connected to the resulting rowset within the database, allowing the rows to be retrieved. 
o ExecuteScalar : Executes the command defined in the CommandText property against the connection defined in the Connection property. Returns only a single value (effectively the first column of the first row of the resulting rowset); any other returned columns and rows are discarded. It is fast and efficient when only a single value is required. What are the basic methods of DataAdapter? A DataAdapter is the object that bridges between the source data and the dataset object so that retrieval and updates can occur. It connects one or more Command objects to a DataSet object, providing the logic that gets data from the data store and populates the tables in the DataSet, or pushes the changes in the DataSet back into the data store. o An OleDbDataAdapter object is used with an OLE-DB provider. o A SqlDataAdapter object uses Tabular Data Stream with MS SQL Server. DataAdapter has the following properties: i) AcceptChangesDuringFill - (read/write) a value indicating whether AcceptChanges is called on a DataRow after it is added to the DataTable. ii) TableMappings - (read) a collection that provides the master mapping between a source table and a DataTable. Methods: i) Fill - adds or refreshes rows in the DataSet to match those in the data source, using the DataSet name, and creates a DataTable named "Table". ii) FillSchema - adds a DataTable named "Table" to the specified DataSet and configures the schema to match that in the data source, based on the specified SchemaType. iii) GetFillParameters - retrieves the parameters set by the user when executing a SQL SELECT statement.
iv) Update - calls the respective INSERT, UPDATE, or DELETE statements for the respective action in the specified DataSet, from a DataTable named "Table". What is the DataSet object? A dataset is the local repository of the data, used to store the tables and disconnected record sets. When using the disconnected architecture, all updates are made locally to the dataset and then performed against the database as a batch. What are the various objects in a DataSet? DataSet objects are in-memory representations of data. They contain multiple DataTable objects, which contain columns and rows, just like normal database tables. You can even define relations between tables to create parent-child relationships. The DataSet is specifically designed to help manage data in memory and to support disconnected operations on data, when such a scenario makes sense. The DataSet is an object that is used by all of the data providers, which is why it does not have a data-provider-specific prefix. How do we connect to SQL Server, and which namespace do we use? System.Data.SqlClient, e.g. Dim con As New SqlConnection("Data Source=.;Initial Catalog=ddd;User ID=sa;Password=") How do we use a stored procedure in ADO.NET, and how do we provide parameters to stored procedures? Create a command object (SqlCommand, etc.) and specify the stored procedure name. Set the CommandType property of the command object to the CommandType.StoredProcedure enumeration value. This tells the runtime that the command used here is a stored procedure. SqlCommand insertCommand = new SqlCommand("InsertProc", conn); insertCommand.CommandType = CommandType.StoredProcedure; dataAdapter.InsertCommand = insertCommand; insertCommand.UpdatedRowSource = UpdateRowSource.None; How can we force the connection object to close after the DataReader is closed? The Command method ExecuteReader takes a parameter called CommandBehavior, where we can specify that the connection should be closed automatically after the DataReader is closed. 
pobjDataReader = pobjCommand.ExecuteReader(CommandBehavior.CloseConnection) How do we force the DataReader to return only the schema of the data store rather than data? reader = cmd.ExecuteReader(CommandBehavior.SchemaOnly) Which is the best place to store the connection string in .NET projects? The Web.config file might be a good place. In the System.Configuration namespace, you can find the appropriate methods to access this file in your application. How can we fine-tune the command object when we are expecting a single row? Pass CommandBehavior.SingleRow to ExecuteReader; this hints to the provider that the query returns a single row so that it can optimize accordingly. What are the steps involved to fill a dataset? Defining the connection string for the database server. Defining the connection (SqlConnection, OleDbConnection, etc.) to the database using the connection string. Defining the command (SqlCommand, OleDbCommand, etc.) or command string that contains the query. Defining the data adapter (SqlDataAdapter, OleDbDataAdapter, etc.) using the command string and the connection object. Creating a new DataSet object. If the command is a SELECT, filling the dataset object with the result of the query through the data adapter. Reading the records from the DataTables in the dataset using the DataRow and DataColumn objects. If the command is an UPDATE, INSERT, or DELETE, updating the dataset through the data adapter. Accepting the changes to save them from the dataset to the database. SqlDataAdapter objDataAdapter = new SqlDataAdapter("Select CompanyName, ContactName, City, Country, Region from Suppliers", objConnect); DataSet objDS = new DataSet(); objDataAdapter.Fill(objDS); What are the various methods provided by the dataset object to generate XML? o ReadXml: Reads an XML document into the DataSet. o GetXml: A function which returns a string containing the XML document. 
o WriteXml: Writes XML data to disk. How can we save all data from a dataset? Use the DataAdapter's Update method to push the pending changes to the database; the DataSet's AcceptChanges method then commits all the changes made since the last time AcceptChanges was executed. E.g. DataTable dt = ds.Tables["Article"]; dt.Rows[0]["lines"] = 600; da.Update(ds, "Article"); How can we check that some changes have been made to the dataset since it was loaded? For tracking down changes, the DataSet has two members which come to the rescue: GetChanges and HasChanges. o GetChanges: Returns a DataSet containing the rows changed since the dataset was loaded or since AcceptChanges was executed. o HasChanges: This property indicates whether any changes have been made since the dataset was loaded or AcceptChanges was executed. If we want to revert or abandon all changes since the dataset was loaded, use RejectChanges. How can we add/remove rows in the "DataTable" object of a "DataSet"? "DataTable" provides the "NewRow" method to add a new row to a "DataTable". "DataTable" has a "DataRowCollection" object which holds all rows in a "DataTable" object. Following are the methods provided by the "DataRowCollection" object: o Add: Adds a new row to the DataTable. o Remove: Removes a "DataRow" object from the "DataTable". o RemoveAt: Removes a "DataRow" object from the "DataTable" by its index position. What is the basic use of "DataView"? A "DataView" represents a complete table or a small section of its rows, depending on some criteria. It is best used for sorting and finding data within a "DataTable". DataView has the following methods: o Find: Takes an array of values and returns the index of the row. o FindRows: Also takes an array of values but returns a collection of "DataRow" objects. If we want to manipulate the data of a "DataTable" object, create a "DataView" (using the "DefaultView" property of the "DataTable" object) and use the following functionality: o AddNew: Adds a new row to the "DataView" object. o Delete: Deletes the specified row from the "DataView" object. How can we load multiple tables in a DataSet? 
objCommand.CommandText = "Table1" objDataAdapter.Fill(objDataSet, "Table1") objCommand.CommandText = "Table2" objDataAdapter.Fill(objDataSet, "Table2")
Above is a sample code which shows how to load multiple "DataTable" objects into one "DataSet" object. The sample code shows two tables, "Table1" and "Table2", in the object objDataSet.
lstdata.DataSource = objDataSet.Tables("Table1").DefaultView
In order to refer to the "Table1" DataTable, use the Tables collection of the DataSet; the DefaultView object will give you the necessary output. How can we add a relation between tables in a DataSet? Dim objRelation As DataRelation objRelation = New DataRelation("CustomerAddresses", _ objDataSet.Tables("Customer").Columns("Custid"), objDataSet.Tables("Addresses").Columns("Custid_fk")) objDataSet.Relations.Add(objRelation)
Relations can be added between "DataTable" objects using the "DataRelation" object. The sample code above builds a relationship between the "Customer" and "Addresses" "DataTable" objects using the "CustomerAddresses" "DataRelation" object.
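Once such a relation exists, it can be used to navigate from a parent row to its child rows. Below is a hedged C# sketch of the same idea; the table and column names follow the sample above, while the in-memory rows are invented purely for illustration.

```csharp
using System;
using System.Data;

class RelationSketch
{
    static void Main()
    {
        var ds = new DataSet();

        var customers = ds.Tables.Add("Customer");
        customers.Columns.Add("Custid", typeof(int));
        customers.Columns.Add("Name", typeof(string));

        var addresses = ds.Tables.Add("Addresses");
        addresses.Columns.Add("Custid_fk", typeof(int));
        addresses.Columns.Add("City", typeof(string));

        // Same relation as in the sample answer above
        var relation = new DataRelation("CustomerAddresses",
            ds.Tables["Customer"].Columns["Custid"],
            ds.Tables["Addresses"].Columns["Custid_fk"]);
        ds.Relations.Add(relation);

        customers.Rows.Add(1, "Northwind");   // illustrative data
        addresses.Rows.Add(1, "Seattle");
        addresses.Rows.Add(1, "Tacoma");

        // Navigate parent -> children through the relation
        foreach (DataRow customer in customers.Rows)
            foreach (DataRow addr in customer.GetChildRows("CustomerAddresses"))
                Console.WriteLine(customer["Name"] + " -> " + addr["City"]);
    }
}
```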
What is the use of CommandBuilder? A CommandBuilder builds "Parameter" objects automatically. Below is a simple snippet which uses a CommandBuilder to load its parameter objects: Dim pobjCommandBuilder As New OleDbCommandBuilder(pobjDataAdapter) pobjCommandBuilder.DeriveParameters(pobjCommand) Be careful while using the "DeriveParameters" method, as it needs an extra round trip to the data store, which can be very inefficient.
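The other common use of a command builder is to let it generate the INSERT/UPDATE/DELETE commands for a DataAdapter from its SELECT statement. A hedged C# sketch follows; the connection string and the Products table are assumptions for the example.

```csharp
using System.Data;
using System.Data.SqlClient;

class CommandBuilderSketch
{
    static void Main()
    {
        const string connStr = "Data Source=.;Initial Catalog=Shop;Integrated Security=true";

        using (var adapter = new SqlDataAdapter("SELECT Id, Name FROM Products", connStr))
        {
            // The builder derives InsertCommand/UpdateCommand/DeleteCommand
            // from the adapter's SELECT statement automatically.
            // (The SELECT must include a key column, here Id.)
            var builder = new SqlCommandBuilder(adapter);

            var ds = new DataSet();
            adapter.Fill(ds, "Products");

            ds.Tables["Products"].Rows[0]["Name"] = "Renamed";

            // Uses the builder-generated UPDATE command behind the scenes.
            adapter.Update(ds, "Products");
        }
    }
}
```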
What's the difference between "Optimistic" and "Pessimistic" locking? In pessimistic locking, when a user wants to update data, the record is locked, and until then no one else can update it. Other users can only view the data when there is pessimistic locking. In optimistic locking, multiple users can open the same record for updating, thus increasing maximum concurrency. The record is only locked while it is actually being updated. This is the most preferred way of locking in practice. Nowadays browser-based applications are very common, and pessimistic locking is not a practical solution for them. How many ways are there to implement locking in ADO.NET? Following are the ways to implement locking using ADO.NET: o When we call the "Update" method of the DataAdapter, it handles locking internally. If the DataSet values do not match the current data in the database, it raises a concurrency exception. We can easily trap this error using a Try..Catch block and raise an appropriate error message to the user. o Define a datetime stamp field in the table. When actually firing the UPDATE SQL statements, compare the current timestamp with the one existing in the database. Below is a sample SQL statement which checks the timestamp before updating; on any mismatch in the timestamp it will not update the record. This is the practice most commonly used in industry for locking. Update table1 set field1=@test where LastTimeStamp=@CurrentTimeStamp o Check the original values stored in SQL Server against the actual changed values. In a stored procedure, check before updating that the old data is the same as the current. In the example below, before updating field1 we check whether the old field1 value is the same. If not, someone else has updated it and the necessary action has to be taken. Update table1 set field1=@test where field1 = @oldfield1value
Locking can be handled on the ADO.NET side or on the SQL Server side, i.e. in stored procedures.
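The first option above (letting DataAdapter.Update detect the conflict) can be sketched as follows. Catching DBConcurrencyException is the standard way to trap the conflict; the "Article" table and "id" column are illustrative assumptions, not from the original text.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class OptimisticLockingSketch
{
    static void SaveChanges(SqlDataAdapter adapter, DataSet ds)
    {
        try
        {
            // Update compares the rows' original values against the database;
            // if another user changed a row in the meantime, it throws.
            adapter.Update(ds, "Article");
        }
        catch (DBConcurrencyException ex)
        {
            // The row that caused the conflict is exposed on the exception.
            Console.WriteLine("Concurrency conflict on row: " + ex.Row["id"]);
            // Typical handling: reload the row from the database and
            // ask the user to re-apply their changes.
        }
    }
}
```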
How can we perform transactions in .NET? The most common sequence of steps that would be performed while developing a transactional application is as follows: o Open a database connection using the Open method of the connection object. o Begin a transaction using the BeginTransaction method of the connection object. This method provides us with a transaction object that we will use later to commit or roll back the transaction. Note that changes caused by any queries executed before calling the BeginTransaction method will be committed to the database immediately after they execute. o Set the Transaction property of the command object to the above-mentioned transaction object. o Execute the SQL commands using the command object. We may use one or more command objects for this purpose, as long as the Transaction property of all the objects is set to a valid transaction object. o Commit or roll back the transaction using the Commit or Rollback method of the transaction object. o Close the database connection. What is the difference between DataSet.Clone and DataSet.Copy? Clone: It only copies the structure; it does not copy the data. Copy: Copies both structure and data. Can you explain the difference between an ADO.NET DataSet and an ADO Recordset? Following are the main differences between a Recordset and a DataSet: o With a DataSet you can retrieve data from two databases, such as Oracle and SQL Server, and merge them in one DataSet; with a Recordset this is not possible. o All representation of a DataSet uses XML, while a Recordset uses COM. o A Recordset cannot be transmitted over HTTP, while a DataSet can be. Explain in detail the fundamentals of connection pooling? When a connection is opened for the first time, a connection pool is created, based on an exact match of the connection string given to create the connection object. Connection pooling only works if the connection string is the same. If the connection string is different, then a new connection will be opened, and connection pooling won't be used. What is Maximum Pool Size in an ADO.NET connection string? 
The maximum number of connections allowed in the pool. By default, the max pool size is 100. If we try to obtain more connections than the max pool size, ADO.NET waits for the Connection Timeout period for a connection to become free in the pool. If even after that no connection is available, we get the exception: "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached." SQL SERVER
What is normalization? What are the different types of normalization? Database normalization is a
process by which an existing schema is modified to bring its component tables into compliance with a series of progressive normal forms. The goal of database normalization is to ensure that every non-key column in every table is directly dependent on the key, the whole key and nothing but the key and with this goal come benefits in the form of reduced redundancies, fewer anomalies, and improved efficiencies. First normal form: The first normal form (or 1NF) requires that the values in each column of a table are atomic. By atomic we mean that there are no sets of values within a column. Second Normal Form: Where the First Normal Form deals with atomicity of data, the Second Normal Form (or 2NF) deals with relationships between composite key columns and non-key
columns. In the second normal form (2NF), any non-key column must depend on the entire primary key. In the case of a composite primary key, this means that a non-key column cannot depend on only part of the composite key. Third Normal Form: Third Normal Form (3NF) requires that all columns depend directly on the primary key. Tables violate the Third Normal Form when one column depends on another column, which in turn depends on the primary key (a transitive dependency). One way to identify transitive dependencies is to look at your table and see if any columns would require updating if another column in the table was updated. If such a column exists, it probably violates 3NF. Boyce-Codd Normal Form (BCNF): A relation R is said to be in BCNF if whenever X -> A holds in R, and A is not in X, then X is a candidate key for R. It should be noted that most relations that are in 3NF are also in BCNF. Infrequently, a 3NF relation is not in BCNF, and this happens only if (a) the candidate keys in the relation are composite keys (that is, they are not single attributes), (b) there is more than one candidate key in the relation, and (c) the keys are not disjoint, that is, some attributes in the keys are common. What is denormalization? Denormalization is the process of attempting to optimize the performance of a database by adding redundant data. It is sometimes necessary because current DBMSs implement the relational model poorly. A true relational DBMS would allow for a fully normalized database at the logical level, while providing physical storage of data that is tuned for high performance. What is a candidate key? In the relational model, a candidate key of a relvar (relation variable) is a set of attributes of that relvar such that (1) at all times it holds in the relation assigned to that variable that there are no two distinct tuples with the same values for these attributes, and (2) there is not a proper subset of this set of attributes for which (1) holds.
Since a superkey is defined as a set of attributes for which (1) holds, we can also define a candidate key as a minimal superkey, i.e. a superkey of which no proper subset is also a superkey.
What are the different types of joins? What is the difference between them? Join actually puts
data from two or more tables into a single result set. Types of joins: INNER JOIN: It is the most common type of join. Inner joins return all rows from multiple tables where the join condition is met. For example: SELECT suppliers.supplier_id, suppliers.supplier_name, orders.order_date FROM suppliers, orders WHERE suppliers.supplier_id = orders.supplier_id; CROSS JOIN: A cross join of two tables produces all possible combinations of rows from the two tables. A cross join is also called a cross product or Cartesian product. Each row of the first table appears once with each row of the second table. Hence, the number of rows in the result set is the product of the number of rows in the first table and the number of rows in the second table, minus any rows that are omitted because of restrictions in a WHERE clause. You cannot use an ON phrase with cross joins; however, you can put restrictions in a WHERE clause. OUTER JOIN: SQL Server supports three types of outer joins: LEFT, RIGHT, and FULL. Left join: A left outer join includes all rows from the table referenced to the left of LEFT OUTER JOIN. E.g. USE pubs Select a.Au_fname, a.Au_lname, p.Pub_name from Authors a LEFT OUTER JOIN Publishers p on a.city = p.city Order by p.Pub_name ASC, a.Au_lname ASC, a.Au_fname ASC Right join: A right outer join includes all rows from the table referenced to the right of RIGHT OUTER JOIN. E.g. USE pubs Select a.Au_fname, a.Au_lname, p.Pub_name from Authors a RIGHT OUTER JOIN Publishers p on a.city = p.city Order by p.Pub_name ASC, a.Au_lname ASC, a.Au_fname ASC Full join: A full outer join includes all rows from both tables, regardless of whether the tables have matching values. USE pubs Select a.Au_fname, a.Au_lname, p.Pub_name from Authors a FULL OUTER JOIN Publishers p on a.city = p.city Order by p.Pub_name ASC, a.Au_lname ASC, a.Au_fname ASC
What are indexes? What is the difference between clustered and non-clustered indexes? Indexes are structured to facilitate the rapid return of result sets. When queries are run against a database, an index helps by determining how the data is sorted to process the query, so data retrieval is much faster when we have an index. The difference is that a clustered index is unique for any given table: we can have only one clustered index on a table. The leaf level of a clustered index is the actual data, and the data is re-sorted into the clustered index order. In the case of a non-clustered index, the leaf level is actually a pointer to the data rows, so we can have as many non-clustered indexes as needed on the table. How can you increase SQL performance? To increase the speed of a SQL SELECT query, you can analyze the following issues: 1. The Request Live property value. 2. Available indexes for conditions in the WHERE clause. 3. Rewriting a query with OR conditions as a UNION. 4. Available indexes for JOIN conditions. 5. Available indexes for the ORDER BY clause. 6. Available indexes for the GROUP BY clause. 7. Selecting from in-memory tables. 8. SELECT INTO vs INSERT SELECT. What is the use of OLAP? OLAP stands for On-Line Analytical Processing, a series of protocols used mainly for business reporting. Using OLAP, businesses can analyze data in all manner of different ways, including budgeting, planning, simulation, data warehouse reporting, and trend analysis. A main component of OLAP is its ability to make multidimensional calculations,
allowing a wide and lightning-fast array of possibilities. In addition, the bigger the business, the bigger its business reporting needs. Multidimensional calculations enable a large business to complete in seconds what it otherwise would have waited a handful of minutes to receive. One main benefit of OLAP is consistency of calculations: no matter how fast data is processed through OLAP software or servers, the resulting reporting is presented in a consistent way. What is a measure in OLAP? An OLAP measure is defined precisely like an OLAP dimension, i.e., it is a concept hierarchy. Normally, however, a measure has a numeric type so that its items can be arithmetically aggregated. In the general case the measure could be non-numeric, or even multidimensional, where it is a product of several concepts (like dimensions). What are dimensions in OLAP? An OLAP dimension is a concept hierarchy where super-concepts correspond to an abstract representation with low detail, while sub-concepts correspond to a detailed representation. The items are values of this OLAP dimension, representing objects with different levels of detail. What are levels in dimensions? We consider the dimensions of a data object to be those quantities which are used to select and categorize the data values. To understand the nature of these dimensions, it may help to identify them with the indices of a multi-dimensional array. (This paradigm has its limits, though: data which cannot be described in terms of a regular array can nevertheless be described in terms of these dimensions.) Four levels of these dimensions have been proposed: Level 0 dimensions describe the components of the data (these dimensions are unrelated to the position of a datum in a coordinate space). Simple scalar data such as temperature would have a single Level 0 dimension containing a single grid point (i.e., a Level 0 dimension of order 1 with a rank of 1). 
A two-component horizontal wind vector would also have a single Level 0 dimension, but with two grid points (order of 1, rank of 2). A wind stress tensor (a matrix) would have two Level 0 dimensions, each of which would have three grid points (order of 2, ranks of 3 and 3). In terms of array indices, Level 0 dimensions select components at a fixed position. That is, a wind stress tensor is considered to be a datum, and two Level 0 indices are needed to select a single component of the tensor. Likewise, a wind vector as a whole is a single item, but a Level 0 index would select which component of the wind (North-South or East-West) is desired. For temperature, there is only one component to select; this would correspond to a single array index which can take only a single value. (Because an array with such a superfluous unit-length index is indistinguishable from the same array without it, the index can be omitted.) Level 1 dimensions are those that occur within a single data record. Level 2 dimensions are those that occur across data records. The Level 1 dimensions, together with the Level 2 dimensions, locate a datum in a coordinate space. The difference between the two levels is that Level 1 dimensions vary within a data record in the dataset, while Level 2 dimensions vary between data records. If each data record were read into a separate array variable, then every array would have to have an index for each Level 0 and Level 1 dimension. Suppose, for example, that a set of temperatures is written out as a series of two-dimensional longitude-latitude grids, one for each day. The resulting dataset would have two Level 1 dimensions (longitude and latitude) plus one Level 2 dimension: time.
This distinction between the two levels may seem artificial, but it has two advantages: first, it enables a program to read the data either as a single large array variable (whose indices would correspond to each of the Level 0, Level 1, and Level 2 dimensions), or as a series of separate variables (whose array indices correspond to each of the Level 0 and Level 1 dimensions, with the number of arrays calculated from the Level 2 dimensions). Second, if the data are read in as separate arrays, then those arrays may be of different sizes. That is, the Level 1 dimensions may have different structures over different ranges of Level 2 dimensions. For example, the first five days of the longitude-latitude wind fields might consist of five arrays, while the next ten days might be a series of ten arrays. Level 3 dimensions specify how the data in the dataset have been averaged, integrated, or summed over subsets of the Level 0, Level 1, and Level 2 dimensions. These are "virtual dimensions", in that they do not correspond to any indices in a data array, but otherwise their structure is very similar to that of the other, "real" dimensions. A set of monthly averaged data, for example, would have a Level 3 dimension corresponding to time, detailing the days of the month over which the average was taken as well as the averaging method used. (Note that instantaneous or point data, consisting of observations or calculations that are considered to occur at fixed points in coordinate space, involve no averaging and hence have no Level 3 dimensions.)
What is drill down and roll up? Drill down corresponds to choosing a sub-concept of the current concept involved in the OLAP dimension. Thus we move to a more detailed view, because sub-concepts have more detailed items. Roll up is the opposite operation of moving to a super-concept of the current concept. Thus, by changing the current concept, we can change the level of detail. Generally there is more than one super-concept and sub-concept defined for an OLAP dimension, so the choice may be more complex.
What is an OLAP cube? An OLAP cube is the product of the current concepts chosen for each dimension, which consists of all combinations of items from the current concepts. Each such combination of items is called a cell. By applying drill down and roll up we can change the current concepts and thus change the OLAP cube and its granularity.
What is an OLAP representation? 
An OLAP representation is a function which assigns one measure value to each item (cell) of the OLAP cube. This means that for each combination of items from the concepts selected in the OLAP dimensions, this function finds some item from the measure concept. In the general case the measure is not necessarily numeric, and then the function assigns an arbitrary item to each cell in the OLAP cube.
What is DTS? I suspect that the term 'DTS' was a marketing invention for a collection of COM-based methods for transferring data between ODBC or OLE DB data sources. In fact, it is one of Microsoft's better ideas, and is usually the ideal way of transferring data. DTS (Data Transformation Services) allows you to get data in and out of SQL Server and between any ODBC and OLE DB data sources. That includes DB2, Access, Excel and text files, as well as Oracle, MySQL, PostgreSQL and even SQLite: in fact, anything with an ODBC driver. Not only does it do the transfer, but it also allows you to fill in missing data, do column mappings and so on. In other words, you can manipulate the data as it passes through. Most of the time, you'll create DTS packages using the DTS wizards. These can be stored, edited and reused. You can also run a saved package as a scheduled event from the command line using the DTSRUN.EXE utility. There is a DTS designer for developing and maintaining packages, and there are DTS COM objects that allow you to integrate DTS functionality into any scripting language that can drive a COM object. You can use DTS from within VB, Perl, PHP or SQL Server itself: in fact, almost all modern scripting languages will support it.
What is fill factor? When creating an index, you can specify a fill factor to leave extra gaps and reserve a percentage of free space on each leaf-level page of the index, to accommodate future growth of the table's data and reduce the potential for page splits. The fill factor value is a percentage from 0 to 100 that specifies how full to make the data pages when the index is created.
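For instance (the table, column and index names here are hypothetical), a fill factor can be specified when the index is built:

```sql
-- Build a non-clustered index, leaving 20% free space on each leaf page
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
WITH FILLFACTOR = 80
```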
A value of 100 means the pages will be full and will take the least amount of storage space. This setting should be used only when there will be no changes to the data, for example, on a read-only table. A lower value leaves more empty space on the data pages, which reduces the need to split data pages as indexes grow, but requires more storage space. This setting is more appropriate when there will be changes to the data in the table. The fill factor option is provided for fine-tuning performance. However, the server-wide default fill factor, specified using the sp_configure system stored procedure, is the best choice in the majority of situations.
What is RAID and how does it work? RAID combines two or more physical hard disks into a single logical unit by using either special hardware or software. Hardware solutions are often designed to present themselves to the attached system as a single hard drive, so the operating system is unaware of the technical workings. Software solutions are typically implemented in the operating system, and again present the RAID drive as a single drive to applications. There are three key concepts in RAID: mirroring, the copying of data to more than one disk; striping, the splitting of data across more than one disk; and error correction, where redundant data is stored to allow problems to be detected and possibly fixed (known as fault tolerance). There are a number of different RAID levels:
Level 0 -- Striped Disk Array without Fault Tolerance: Provides data striping (spreading out blocks of each file across multiple disk drives) but no redundancy. This improves performance but does not deliver fault tolerance. If one drive fails, all data in the array is lost.
Level 1 -- Mirroring and Duplexing: Provides disk mirroring. Level 1 provides twice the read transaction rate of single disks and the same write transaction rate as single disks.
Level 2 -- Error-Correcting Coding: Not a typical implementation and rarely used; Level 2 stripes data at the bit level rather than the block level.
Level 3 -- Bit-Interleaved Parity: Provides byte-level striping with a dedicated parity disk. Level 3, which cannot service simultaneous multiple requests, is also rarely used.
Level 4 -- Dedicated Parity Drive: A commonly used implementation of RAID, Level 4 provides block-level striping (like Level 0) with a parity disk. If a data disk fails, the parity data is used to re-create the failed disk. A disadvantage of Level 4 is that the parity disk can create write bottlenecks.
Level 5 -- Block-Interleaved Distributed Parity: Provides block-level data striping with parity information distributed across the disks. This results in excellent performance and good fault tolerance. Level 5 is one of the most popular implementations of RAID.
Level 6 -- Independent Data Disks with Double Parity: Provides block-level striping with two sets of parity data distributed across all disks.
Level 0+1 -- A Mirror of Stripes: Not one of the original RAID levels; two RAID 0 stripes are created, and a RAID 1 mirror is created over them. Used for both replicating and sharing data among disks.
Level 10 -- A Stripe of Mirrors: Not one of the original RAID levels; multiple RAID 1 mirrors are created, and a RAID 0 stripe is created over them.
Level 7: A trademark of Storage Computer Corporation that adds caching to Levels 3 or 4.
RAID S (also called Parity RAID): EMC Corporation's proprietary striped-parity RAID system used in its Symmetrix storage systems.
What is the difference between the DELETE TABLE and TRUNCATE TABLE commands? TRUNCATE is a DDL command: it removes all the rows in a table while keeping the table structure, it deallocates the data pages so the space is released back to the server, it is minimally logged, and you cannot use a WHERE clause with it. (Although TRUNCATE is often described as impossible to roll back, in SQL Server it can in fact be rolled back if it is issued inside an explicit transaction.) DELETE is a DML command: each deleted row is fully logged and can be rolled back, and you can use a WHERE clause to delete only selected rows. DROP, by contrast, removes the entire table from the database.
What are the different locks in SQL SERVER? There are different types of locks in SQL Server 2000 and 2005, applied in different situations. Here is the list of locks and the situations in which they are used:
SHARED - This lock is applied for read operations where the data is not updated; a good example is a SELECT statement.
UPDATE - This lock is placed on resources that can be updated. It prevents the common form of deadlock that occurs when multiple sessions are reading a resource with the intention of updating it later.
EXCLUSIVE - Used for data-modification operations, such as INSERT, UPDATE, or DELETE. Ensures that multiple updates cannot be made to the same resource at the same time.
INTENT - Used to establish a lock hierarchy. The different types of intent locks are: intent shared, intent exclusive, and shared with intent exclusive.
SCHEMA - Used when an operation dependent on the schema of a table is executing. The different types of schema locks are: schema modification and schema stability.
BULK UPDATE - This lock is applied when data is being bulk copied into a table and the TABLOCK hint is specified.
KEY-RANGE - Protects the range of rows read by a query when using the serializable transaction isolation level. Ensures that other transactions cannot insert rows that would qualify for the queries of the serializable transaction if the queries were run again.
Can we suggest locking hints to SQL SERVER? We can give locking hints that help you override the default decisions made by SQL Server. For instance, you can specify the ROWLOCK hint with an UPDATE statement to convince SQL Server to lock each row affected by that data modification. Whether it is prudent to do so is another story: what will happen if your UPDATE affects 95% of the rows in the table? If the table contains 1000 rows, SQL Server will have to acquire 950 individual locks, which is likely to cost a lot more in terms of memory than acquiring a single table lock. So think twice before you bombard your code with ROWLOCK hints.
What is LOCK escalation? Lock escalation is the process of converting many low-level locks (like row locks and page locks) into higher-level locks (like table locks). Every lock is a memory structure, so too many locks would mean more memory being occupied by locks. To prevent this from happening, SQL Server escalates the many fine-grain locks into fewer coarse-grain locks.
What are the different ways of moving data between databases in SQL Server? 
There are lots of options available; you have to choose the option that suits your requirements. Some of the options you have are: BACKUP/RESTORE, detaching and attaching databases, replication, DTS, BCP, log shipping, INSERT...SELECT, SELECT...INTO, and creating INSERT scripts to generate the data.
What is the difference between a HAVING CLAUSE and a WHERE CLAUSE? A HAVING clause is used only together with the GROUP BY clause in a query, and filters groups after they are formed. A WHERE clause is applied to each row before the rows take part in the GROUP BY operation.
What is the difference between UNION and UNION ALL SQL syntax? UNION: the UNION command is used to select related information from two tables, much like the JOIN command. However, when using the UNION command all selected columns need to be of compatible data types. With UNION, only distinct values are selected. UNION ALL: the UNION ALL command is equivalent to the UNION command, except that UNION ALL selects all values. The difference between UNION and UNION ALL is that UNION ALL will not eliminate duplicate rows; instead it just pulls all rows from all tables fitting your query specifics and combines them into a table.
What is the ACID fundamental? What are transactions in SQL SERVER? ACID (Atomicity, Consistency, Isolation, and Durability) is a set of properties that guarantee that database transactions are processed reliably. In the context of databases, a single logical operation on the data is called a transaction. An example of a transaction is a transfer of funds from one account to another, even though it might consist of multiple individual operations (such as debiting one account and crediting another). Transactions in SQL Server are mostly implemented within stored procedures. Normally three statements are used for implementing transactions:
1) BEGIN TRANSACTION: indicates the start of a new transaction.
2) COMMIT TRANSACTION: indicates the successful completion of a transaction. Upon commit, the transaction is considered successfully completed and all changes made to the database are made permanent.
3) ROLLBACK TRANSACTION: indicates the unsuccessful completion of a transaction. Upon receiving a rollback statement, all changes are discarded and objects are returned to the state they were in at the most recent BEGIN TRANSACTION.
What is DBCC? DBCC stands for Database Consistency Checker. We use these commands to check the consistency of the databases, i.e., for maintenance, validation tasks and status checks. E.g. DBCC CHECKDB ensures that the tables in the database and their indexes are correctly linked; DBCC CHECKALLOC checks that all pages in a database are correctly allocated; DBCC CHECKFILEGROUP checks all tables in a filegroup for any damage.
What is the purpose of Replication? SQL Server replication allows database administrators to distribute data to various servers throughout an organization. You may wish to implement replication in your organization for a number of reasons, such as: Load balancing -- replication allows you to disseminate your data to a number of servers and then distribute the query load among those servers. Offline processing -- you may wish to manipulate data from your database on a machine that is not always connected to the network. Redundancy -- replication allows you to build a fail-over database server that is ready to pick up the processing load at a moment's notice.
What are the different types of replication supported by SQL SERVER? Replication is the process of copying/moving data between databases on the same or different servers. Replication has two important roles: publisher and subscriber. Publisher: the database server that makes data available for replication is called the Publisher. Subscriber: the database servers that get data from the publisher are called Subscribers. There are three types of replication supported by SQL Server. Snapshot Replication: takes a snapshot of one database and moves it to the other database. After the initial load, the data can be refreshed periodically.
The only disadvantage of this type of replication is that all data has to be copied each time the table is refreshed. Transactional Replication: in transactional replication the data is copied the first time as in snapshot replication, but afterwards only the transactions are synchronized rather than replicating the whole database. You can specify that it run continuously or on a periodic basis. Merge Replication: merge replication combines data from multiple sources into a single central database. As usual, the initial load is like a snapshot, but afterwards it allows data to change both on the subscriber and the publisher; when they come online it detects the changes, combines them and updates the data accordingly.
What is a trigger? Triggers are basically used to implement business rules. Triggers are similar to stored procedures; the difference is that a trigger is activated automatically when data is added, edited or deleted in the table on which it is defined.
What are the different types of triggers in SQL SERVER 2000? Triggers are special types of stored procedures that are defined to execute automatically in place of or after data modifications. They can be executed automatically on the INSERT, DELETE and UPDATE triggering actions. There are two different types of triggers in Microsoft SQL Server 2000: INSTEAD OF triggers and AFTER triggers. These triggers differ from each other in terms of their purpose and when they are fired.
If we have multiple AFTER triggers on a table, how can we define the sequence of the triggers? The sp_settriggerorder system stored procedure can be used to mark one AFTER trigger as the first and one as the last to fire for a given action; the firing order of any remaining AFTER triggers is undefined.
What is SQL injection? SQL injection is a technique that exploits a security vulnerability occurring in the database layer of an application. The vulnerability is present when user input is either incorrectly filtered for string literal escape characters embedded in SQL statements, or user input is not strongly typed and is thereby unexpectedly executed.
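As a brief sketch (the Users table and the @name input are hypothetical), the difference between vulnerable and parameterized code looks like this:

```sql
-- Vulnerable: user input is concatenated directly into the statement
DECLARE @name NVARCHAR(50), @sql NVARCHAR(200)
SET @name = N'x'' OR ''1''=''1'     -- a typical injected value
SET @sql = N'SELECT * FROM Users WHERE Name = ''' + @name + N''''
EXEC (@sql)                         -- the injected OR clause returns every row

-- Safer: the input is passed as a strongly typed parameter
EXEC sp_executesql
    N'SELECT * FROM Users WHERE Name = @name',
    N'@name NVARCHAR(50)',
    @name = @name
```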
It is in fact an instance of a more general class of vulnerabilities that can occur whenever one programming or scripting language is embedded inside another.
What is log shipping? Can we do log shipping with SQL Server 7.0? Log shipping is a new feature of SQL Server 2000; we need two SQL Server 2000 Enterprise Editions. From Enterprise Manager we can configure log shipping. In log shipping, the transaction log from one server is automatically applied to the standby database on the other server. If one server fails, the other server will have the same database, and we can use this as the DR (disaster recovery) plan.
What is a Linked Server? Linked Servers is a concept in SQL Server by which we can add other SQL Server instances to a group and query the databases of both SQL Servers using T-SQL statements.
What is SQL Profiler? The SQL Profiler utility allows us to track connections to SQL Server and to monitor activities such as which SQL scripts are running, failed jobs, etc.
What do you mean by COLLATION? Collation is basically the sort order. There are three types of sort order: dictionary case sensitive, dictionary case insensitive, and binary.
What are cursors? Cursors help us perform an operation row by row on a set of data that we retrieve with a command such as SELECT columns FROM table. For example, if we have duplicate records in a table, we can remove them by declaring a cursor that checks the records one by one during retrieval and removes the rows that have duplicate values.
What is a view? If we have several tables in a database and we want to view only specific columns from specific tables, we can go for views. Views also serve security needs, at times allowing specific users to see only specific columns based on the permissions that we configure on the view. Views also reduce the effort required to write queries that access specific columns every time.
What is a Stored Procedure? Can you give an example of a Stored Procedure? It is nothing but a set of T-SQL statements combined to perform a single task or several tasks. It is basically like a macro: when you invoke the stored procedure, you actually run the whole set of statements. sp_helpdb, sp_who2 and sp_renamedb are examples of system defined stored procedures.
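For instance, a stored procedure is run with the EXEC statement:

```sql
-- Execute a system stored procedure that lists current server activity
EXEC sp_who2
```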
We can also have user defined stored procedures, which can be called in a similar way.
What is the BCP utility in SQL SERVER? The bcp utility copies data between an instance of Microsoft SQL Server 2000 and a data file in a user-specified format. The Bulk Copy Program (BCP) is a command-line utility that ships with SQL Server 2000. With BCP, you can import and export large amounts of data in and out of SQL Server 2000 databases. Having BCP in your arsenal of DBA tools will add to your skill set and make you a better-rounded DBA. As an example, a BCP statement that exports a table to a file looks like this: bcp pubs.dbo.authors out c:\temp\authors.bcp
What is the difference between a Stored Procedure (SP) and a User Defined Function (UDF)?
- An SP may or may not return a value, but a UDF must always return a value.
- An SP can return more than one value through output parameters; a UDF returns only one value (although that value can be a table).
- An SP returns a result set and must be executed explicitly; UDFs can be used inline in queries, e.g. SELECT GETDATE() AS date can be written directly inside a query.
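As a sketch of the difference in how the two are invoked (the object names and the 10% rate are hypothetical):

```sql
-- A scalar UDF: must return a value and can be used inline in a query
CREATE FUNCTION dbo.AddTax (@amount MONEY)
RETURNS MONEY
AS
BEGIN
    RETURN @amount * 1.1
END
GO

SELECT dbo.AddTax(100) AS Total   -- used inline, just like GETDATE()

-- A stored procedure: invoked with EXEC, can return values via OUTPUT parameters
CREATE PROCEDURE dbo.GetTotal
    @amount MONEY,
    @total  MONEY OUTPUT
AS
    SET @total = dbo.AddTax(@amount)
GO

DECLARE @t MONEY
EXEC dbo.GetTotal 100, @t OUTPUT
SELECT @t AS Total
```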