									Spring 2009 Assignment # 01

Software Engineering Process - Design Quality Attributes
Software Engineering Process SEN-647

S.M. Saiful Islam ID # 0712004 Program – Master of Software Engineering

January 29, 2009

1. Introduction
   1.1 Question
   1.2 Background
   1.3 Purpose or Output
2. Description
   2.1 Design Quality Attributes
      2.1.1 Main Design Quality Attributes
      2.1.2 Sub Design Quality Attributes
      2.1.3 Relationship of Quality Attributes
   2.2 Recommended Policies
   2.3 Procedures
      2.3.1 Quality Attribute Issues Consideration
      2.3.2 Design Procedure Outline
      2.3.3 Design System
   2.4 Standards
      2.4.1 System Architecture Definition
      2.4.2 Alternate Architecture Assessment
      2.4.3 Modular Decomposition
      2.4.4 Cohesion, Coupling
      2.4.5 Interface Description
   2.5 Knowledge, Skill, Environment and Resource
      (Lifecycle, Requirements, Architecture, Design, Coding, Peer Review, Testing, Deployment)
   2.6 Tools
   2.7 Measurement
      2.7.1 Complexity
      2.7.2 Coupling
      2.7.3 Cohesion
   2.8 Control
   2.9 Improvement
3. Conclusion

1. Introduction
1.1 Question
Define major design quality attributes or parameters, along with their sub-attributes and relationships, to make them measurable. Recommend policies to achieve certain values of those attributes. Define actions and their sequences, that is, the procedure to follow to comply with those policies. Recommend standards for performing those actions. Determine the requirements of knowledge, skill and environment needed to perform those actions in conformance with the standards. Recommend the tools needed to fully and/or partly automate the execution of those actions, keeping time and cost to a minimum. Recommend measurement practices to collect, analyze and process data to measure those quality attributes of the code blocks being produced. Recommend control actions to deal with QTC variations. Recommend improvements to process elements in order to improve one or more of those code quality parameters.

1.2 Background
The software engineering process is the set of tools, methods, and practices used to produce a software product. The software products could be requirements, design, and code, as well as an executable software application. In short, the software process is the set of actions required to efficiently transform users' needs into an effective software solution. Here, process guidelines have been developed covering all the process elements of the software design process.

1.3 Purpose or output
The purpose of this document is to define major design quality attributes or parameters along with sub-attributes and their relationships to make them measurable and to achieve certain values of those attributes. The output of this process would be design quality manual.

2. Description
2.1 Design Quality attributes
[Q. Define major design quality attributes or parameters along with sub-attributes and their relationships to make them measurable] The major design quality attributes, their sub-attributes and their relationships are described in the following sections.

2.1.1 Main Design Quality attributes
Following is the list of the main software design quality attributes.

Functionality
Functionality is the essential purpose of the software design product. The design functionality is expressed as the totality of essential functions that the design provides: the capability of the design to provide functions and properties which meet stated and implied needs when the design is used under specified conditions. Other characteristics can only be measured (and are assumed to exist) when the functionality of a given system is present. For example, a system cannot possess usability characteristics if the system does not function correctly.

Reliability
Once a software system is functioning as specified and delivered, the reliability characteristic defines the capability of the system to maintain its service provision under defined conditions for defined periods of time. One aspect of this characteristic is fault tolerance, the ability of a system to withstand component failure. In short, reliability is the capability of the design to maintain a specified level of performance when used under specified conditions.

Usability
Usability only exists with regard to functionality and refers to the ease of use of a given function. The ability to learn how to use a system (learnability) is also a major sub-characteristic of usability. Usability is the capability of the design to be understood, learned and liked by the user when used under specified conditions.

Efficiency
This characteristic is concerned with the system resources used when providing the required functionality. The amount of disk space, memory, network bandwidth, etc. provides a good indication of this characteristic. As with a number of these characteristics, there are overlaps. Design efficiency is the capability of the design to provide appropriate performance, relative to the amount of resources used, under stated conditions.

Maintainability
The ability to identify and fix a fault within a software component is what the maintainability characteristic addresses. Maintainability is affected by code readability and complexity, as well as by modularization. Anything that helps with identifying the cause of a fault and then fixing the fault is the concern of maintainability. The ability to verify (or test) a system, i.e. testability, is also one of the sub-characteristics of maintainability. Maintainability is the capability of the design to be modified. Modifications may include corrections, improvements or adaptation of the design to changes in environments, and in requirements and functional specifications.

Portability
This characteristic refers to how well the software can adapt to changes in its environment or in its requirements. The sub-characteristics of this characteristic include adaptability. Object-oriented design and implementation practices can contribute to the extent to which this characteristic is present in a given system. Portability is the capability of the design to be transferred from one environment to another.

2.1.2 Sub Design Quality attributes
The following table is the full list of design quality attributes and sub-attributes (ISO 9126-1 Quality Model).

Table 1: Design Quality Attributes & Sub-attributes

Functionality
  Suitability: This is the essential Functionality characteristic and refers to the appropriateness (to specification) of the functions of the software design.
  Accurateness: This refers to the correctness of the functions; an ATM may provide a cash dispensing function, but is the amount correct?
  Interoperability: A given software component or system does not typically function in isolation. This sub-attribute concerns the ability of a software component to interact with other components or systems.
  Compliance: Where appropriate, certain industry (or government) laws and guidelines need to be complied with, e.g. SOX. This sub-characteristic addresses the compliant capability of software.
  Security: This sub-characteristic relates to unauthorized access to the software functions.

Reliability
  Maturity: This sub-characteristic concerns the frequency of failure of the software.
  Fault tolerance: The ability of software to withstand (and recover from) component, or environmental, failure.
  Recoverability: The ability to bring back a failed system to full operation, including data and network connections.

Usability
  Understandability: Determines the ease with which the system's functions can be understood; relates to user mental models in Human Computer Interaction methods.
  Learnability: Learning effort for different users, i.e. novice, expert, casual, etc.
  Operability: The ability of the software to be easily operated by a given user in a given environment.

Efficiency
  Time behavior: Characterizes response times for a given throughput, i.e. transaction rate.
  Resource behavior: Characterizes resources used, i.e. memory, CPU, disk and network usage.

Maintainability
  Analyzability: Characterizes the ability to identify the root cause of a failure within the software.
  Changeability: Characterizes the amount of effort needed to change a system.
  Stability: Characterizes the sensitivity to change of a given system, that is, the negative impact that may be caused by system changes.
  Testability: Characterizes the effort needed to verify (test) a system change.

Portability
  Adaptability: Characterizes the ability of the system to change to new specifications or operating environments.
  Installability: Characterizes the effort required to install the software.
  Conformance: Similar to compliance for functionality, but this characteristic relates to portability. One example would be Open SQL conformance, which relates to portability of the database used.
  Replaceability: Characterizes the plug-and-play aspect of software components, that is, how easy it is to exchange a given software component within a specified environment.
  Reusability: The suitability of the design for reuse in a different context or application.

2.1.3 Relationship of Quality attributes
The individual measures of quality attributes do not provide an overall measure of design quality. For this, the individual measures are combined or aggregated. Occasionally the individual measures of quality may conflict with each other, and compromises may have to be reached. The table below summarizes the relationships of quality attributes.

Table 2: Quality Attribute Relationship

                  Functionality  Reliability  Usability  Efficiency  Maintainability
Reliability             +
Usability               +             +
Efficiency              -             0            -
Maintainability         +             +            0           -
Portability             0             0            0           -             +

From the above table, three types of interaction can be recognized. The definitions of the relationships among the attributes are given below.
1. Positive (+), i.e. a good value of one attribute results in a good value of the other (synergistic goals). If characteristic A is enhanced, then characteristic B is likely to be enhanced.
2. Negative (-), i.e. a good value of one attribute results in a bad value of the other (conflicting goals). If characteristic A is enhanced, then characteristic B is likely to be degraded.
3. Independent (0), i.e. the attributes do not affect each other. If characteristic A is enhanced, then characteristic B is unlikely to be affected.

The interactions of the relationships are listed pair by pair in the following table.

Table 3: Interactions of quality attributes
 1. Functionality vs. Reliability      Positive
 2. Functionality vs. Usability        Positive
 3. Functionality vs. Efficiency       Negative
 4. Functionality vs. Maintainability  Positive
 5. Functionality vs. Portability      Independent
 6. Reliability vs. Usability          Positive
 7. Reliability vs. Efficiency         Independent
 8. Reliability vs. Maintainability    Positive
 9. Reliability vs. Portability        Independent
10. Usability vs. Efficiency           Negative
11. Usability vs. Maintainability      Independent
12. Usability vs. Portability          Independent
13. Efficiency vs. Maintainability     Negative
14. Efficiency vs. Portability         Negative
15. Maintainability vs. Portability    Positive

The relationships among the quality attributes are also demonstrated in the following graph.

Figure 1: Relationships between quality attributes
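The aggregation of individual measures mentioned in Section 2.1.3 can be sketched as follows. This is only an illustration: the 0-to-1 score scale, the weighting scheme, and the conflict threshold are assumptions, not something ISO 9126 prescribes; the interaction signs are taken directly from Table 3.

```python
# Sketch: combining per-attribute quality scores (assumed on a 0-1 scale) into
# one weighted index, and flagging pairs of heavily weighted attributes whose
# goals conflict, using the pairwise interaction signs from Table 3.

ATTRIBUTES = ["Functionality", "Reliability", "Usability",
              "Efficiency", "Maintainability", "Portability"]

# Pairwise interactions from Table 3: +1 positive, -1 negative, 0 independent.
INTERACTIONS = {
    ("Functionality", "Reliability"): +1,
    ("Functionality", "Usability"): +1,
    ("Functionality", "Efficiency"): -1,
    ("Functionality", "Maintainability"): +1,
    ("Functionality", "Portability"): 0,
    ("Reliability", "Usability"): +1,
    ("Reliability", "Efficiency"): 0,
    ("Reliability", "Maintainability"): +1,
    ("Reliability", "Portability"): 0,
    ("Usability", "Efficiency"): -1,
    ("Usability", "Maintainability"): 0,
    ("Usability", "Portability"): 0,
    ("Efficiency", "Maintainability"): -1,
    ("Efficiency", "Portability"): -1,
    ("Maintainability", "Portability"): +1,
}

def overall_quality(scores, weights):
    """Weighted average of the per-attribute scores."""
    total = sum(weights[a] for a in ATTRIBUTES)
    return sum(scores[a] * weights[a] for a in ATTRIBUTES) / total

def conflicting_pairs(weights, threshold=0.5):
    """Pairs of heavily weighted attributes with conflicting goals,
    i.e. places where a design compromise will have to be reached."""
    return [(a, b) for (a, b), sign in INTERACTIONS.items()
            if sign == -1 and weights[a] >= threshold and weights[b] >= threshold]
```

For instance, a design that weights both Efficiency and Maintainability highly is flagged as a compromise point, matching the negative interaction in Table 3.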

2.2 Recommended Policies
[Q. Recommend policies to achieve certain values of those attributes] Meeting the software design quality goals is a critical requirement for success in delivering a quality software product or service. The underlying coding, testing, and maintenance are directly subject to design quality. Therefore, the best, proven, industry-standard design methodology will be used to come up with several design alternatives. Through verification, validation, comparative analysis and trade-off analysis, the final design will be produced. Adequate effort would be employed to precisely understand the quality attributes, so that a design can be conceived to address them. Part of the difficulty is that quality attributes are not always explicitly stated in the requirements, or adequately captured by the requirements engineering team. That is why an architect would be associated with the requirements-gathering exercise for the system, so that they can ask the right questions to expose and nail down the quality attributes that must be addressed. Understanding the quality attribute requirements is merely a necessary prerequisite to designing a solution to satisfy them. Conflicting quality attributes are a reality in every application of even mediocre complexity. Creating solutions that choose a point in the design space that adequately satisfies these requirements is remarkably difficult, both technically and socially. The latter involves communication with stakeholders to discuss design tolerances, discovering scenarios where certain quality requirements can be safely relaxed, and clearly communicating design compromises so that the stakeholders understand what they are agreeing upon. From a high-level perspective, the following are the policy directives to achieve the desired values of the quality attributes:

- A clear definition of the quality attributes would be provided
- A framework for reasoning about quality would be developed
- A set of architectural and design tactics & guidelines that enhance quality would be identified and/or developed
- Component-based software engineering (CBSE), or component-based development (CBD), would be used as a rule of thumb
- Highly suited techniques and standards would be adopted by studying proven industry standards, techniques and best practices
- Multiple alternatives would be developed, verified and analyzed
- The one best-suited approach would be selected
- A structured evaluation process based on quantitative analysis would be used
- Best-suited solutions would be validated through application and prototyping
- Validated techniques would be used in practice and in real implementation

2.3 Procedures
[Q. Define actions and their sequences, that is the procedure, to follow to comply with those policies.] For achieving the quality goals, the details of each quality attribute, the corresponding key issues, and the key decisions required are described first. Then a generalized design procedure is given to meet the quality goals as a whole.

2.3.1 Quality attribute issues consideration

2.3.1.1 Availability
Availability defines the proportion of time that the system is functional and working. It can be measured as a percentage of the total system downtime over a predefined period. Availability will be affected by system errors, infrastructure problems, malicious attacks, and system load. Use the techniques listed below to maximize availability for your application.

Key Issues
- A physical tier such as the database server or application server can fail or become unresponsive, causing the entire system to fail.
- Security vulnerabilities can allow Denial of Service (DoS) attacks, which prevent authorized users from accessing the system.
- Inappropriate use of resources can reduce availability. For example, resources acquired too early and held for too long cause resource starvation and an inability to handle additional concurrent user requests.
- Bugs or faults in the application can cause a system-wide failure.
- Frequent updates, such as security patches and user application upgrades, can reduce the availability of the system.
- A network fault can cause the application to be unavailable.

Key Decisions
- How to design failover support related to different tiers in the system.
- How to decide if there is a need for a geographically separate redundant site to fail over to in case of natural disasters such as earthquakes or tornados.
- How to design for run-time upgrades.
- How to design for proper exception handling in order to reduce application failures.
- How to handle unreliable network connections.
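The downtime-based measurement described above reduces to a simple ratio; a minimal sketch follows, where the 30-day period and the downtime figure are hypothetical example values.

```python
# Sketch: availability as the fraction of a measurement period the system
# was functional. The example period and downtime are illustrative values.

def availability(period_hours, downtime_hours):
    """Proportion of the period during which the system was up."""
    return (period_hours - downtime_hours) / period_hours

# Example: 43.2 minutes of downtime over a 30-day (720-hour) period.
a = availability(720, 43.2 / 60)
print(f"{a:.3%}")  # 99.900% -- commonly called "three nines"
```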

2.3.1.2 Conceptual Integrity
Conceptual integrity defines the consistency and coherence of the overall design. This includes the way that components or modules are designed, as well as factors such as coding style and variable naming. A coherent system makes it easier to resolve issues, because it is clear what is consistent with the overall design. Conversely, a system without conceptual integrity will constantly be affected by changing interfaces, frequently deprecated modules, and a lack of consistency in how tasks are performed.

Key Issues
- Mixing different areas of concern together within your design.
- Not using, or inconsistently using, a development process.
- Collaboration and communication between different groups involved with the application lifecycle.
- Lack of design and coding standards.
- Existing (legacy) system demands that prevent both refactoring and progression toward a new platform or paradigm.

Key Decisions
- How to identify areas of concern and group them into logical layers.
- How to manage the development process.
- How to facilitate collaboration and communication throughout the application lifecycle.
- How to establish and enforce design and coding standards.
- How to create a migration path away from legacy technologies.
- How to isolate applications from external dependencies.

2.3.1.3 Flexibility
Flexibility is the ability of a system to adapt to varying environments and situations, and to cope with changes in business policies and rules. A flexible system is one that can be easily modified in response to different user and system requirements.

Key Issues
- The code base is large, unmanageable, and fragile.
- Refactoring is burdensome due to regression requirements for a large and growing code base.
- The existing code is over-complex.
- The same logic is implemented in many different ways.

Key Decisions
- How to handle dynamic business rules, such as changes related to authorization, data, or process.
- How to handle a dynamic user interface (UI), such as changes related to authorization, data, or process.
- How to respond to changes in data and logic processing.
- How to ensure that components and services have well-defined responsibilities and relationships.
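One common way to address the "dynamic business rules" decision is to hold the rules as data that can change without modifying the evaluation code. The sketch below assumes this approach; the discount rules and their values are hypothetical illustrations, not part of the original document.

```python
# Sketch: business rules kept as data (name, condition, effect) so that adding
# or changing a rule does not require changing the engine that applies them.
# The discount figures below are invented example values.

RULES = [
    {"name": "bulk",    "condition": lambda order: order["quantity"] >= 100, "discount": 0.10},
    {"name": "loyalty", "condition": lambda order: order["years"] >= 5,      "discount": 0.05},
]

def applicable_discount(order, rules=RULES):
    """Sum the discounts of every rule whose condition matches the order."""
    return sum(r["discount"] for r in rules if r["condition"](order))
```

Changing a policy (say, lowering the bulk threshold) is then a data edit rather than a code change, which is the flexibility the section asks for.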

2.3.1.4 Interoperability
Interoperability is the ability of diverse components of a system, or of different systems, to operate successfully by exchanging information, often by using services. An interoperable system allows you to exchange and reuse information internally as well as externally. Communication protocols, interfaces, and data formats are the key considerations for interoperability. Standardization is also an important aspect to be considered when designing an interoperable system.

Key Issues
- Interaction with external or legacy systems that use different data formats.
- Boundary blurring, which allows artifacts from one layer, tier, or system to diffuse into another.

Key Decisions
- How to handle different data formats from external or legacy systems.
- How to enable systems to interoperate while evolving separately or even being replaced.
- How to isolate systems through the use of service interfaces.
- How to isolate systems through the use of mapping layers.
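The "mapping layers" decision can be sketched as a translation function at the system boundary, so the legacy format and the internal format can evolve separately. The field names on both sides are hypothetical.

```python
# Sketch: a mapping layer that converts a legacy record (hypothetical
# fixed-field names) into the internal representation at the boundary.
# Only this function needs to change if either format changes.

def from_legacy(record):
    """Map a legacy customer record to the internal representation."""
    return {
        "customer_id": int(record["CUSTNO"]),
        "name": record["CUSTNAME"].strip().title(),
    }
```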

2.3.1.5 Maintainability
Maintainability is the ability of a system to undergo changes to its components, services, features, and interfaces as may be required when adding or changing functionality, fixing bugs, and meeting new business requirements. Maintainability can be measured in terms of the time it takes to restore the system to its operational status following a failure or removal from operation for upgrading. Improving system maintainability increases efficiency and reduces run-time defects.

Key Issues
- Excessive dependencies between components and layers prevent easy replacement, updates, and changes.
- Use of direct communication prevents changes to the physical deployment of components and layers.
- Reliance on custom implementations of features such as authentication and authorization prevents reuse and hampers maintenance.
- Mixing the implementation of cross-cutting concerns with application-specific components makes maintenance harder and reuse difficult.
- Components are not cohesive, which makes them difficult to replace and causes unnecessary dependencies on child components.

Key Decisions
- How to reduce dependencies between components and layers.
- How to implement a pluggable architecture that allows easy upgrades and maintenance, and improved testing capabilities.
- How to separate the functionality for cross-cutting concerns from application-specific code.
- How to choose an appropriate communication model, format, and protocol.
- How to create cohesive components.
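The "reduce dependencies" and "pluggable architecture" decisions are commonly addressed by having callers depend on an abstract interface rather than a concrete class. A minimal sketch, using the authentication example the section itself mentions; all class and method names are illustrative assumptions.

```python
# Sketch: a pluggable authentication component. LoginService depends only on
# the Authenticator interface, so any concrete provider can be swapped in
# (or stubbed out for testing) without changing the caller.

from abc import ABC, abstractmethod

class Authenticator(ABC):
    @abstractmethod
    def authenticate(self, user: str, password: str) -> bool: ...

class InMemoryAuthenticator(Authenticator):
    """One interchangeable implementation; an LDAP or database provider
    would implement the same interface."""
    def __init__(self, accounts: dict):
        self._accounts = accounts

    def authenticate(self, user, password):
        return self._accounts.get(user) == password

class LoginService:
    def __init__(self, auth: Authenticator):   # dependency is the interface
        self._auth = auth

    def login(self, user, password):
        return "welcome" if self._auth.authenticate(user, password) else "denied"
```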

2.3.1.6 Manageability
Manageability is the ease with which an application can be managed, achieved by exposing sufficient and useful instrumentation for use in monitoring systems and for debugging and performance tuning.

Key Issues
- Lack of diagnostic information.
- Lack of troubleshooting tools.
- Lack of performance and scale metrics.
- Lack of tracing ability.
- Lack of health monitoring.

Key Decisions
- How to enable the system behavior to change based on operational environment requirements, such as infrastructure or deployment changes.
- How to enable the system behavior to change at run time based on system load; for example, by queuing requests and processing them when the system is available.
- How to create a snapshot of the system's state to use for troubleshooting.
- How to monitor aspects of the system's operation and health.
- How to create custom instrumentation to provide detailed operational reports.
- How to discover details of the requests sent to the system.
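The "custom instrumentation" and "state snapshot" decisions can be sketched as a small counter registry plus a snapshot method; the metric names and structure are assumptions for illustration.

```python
# Sketch: minimal instrumentation for manageability -- named counters that
# components increment, and a snapshot of system state for troubleshooting.

import time

class Instrumentation:
    def __init__(self):
        self.counters = {}
        self._started = time.time()

    def incr(self, metric, by=1):
        """Increment a named counter (e.g. 'requests', 'errors')."""
        self.counters[metric] = self.counters.get(metric, 0) + by

    def snapshot(self):
        """Point-in-time state for health monitoring and troubleshooting."""
        return {"uptime_s": round(time.time() - self._started, 3),
                "counters": dict(self.counters)}
```

A monitoring system would poll `snapshot()` periodically; real deployments would typically hand this off to an established metrics library rather than roll their own.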

2.3.1.7 Performance
Performance is an indication of the responsiveness of a system executing specific actions in a given time interval. It can be measured in terms of latency or throughput. Latency is the time taken to respond to any event. Throughput is the number of events that take place in a given amount of time. Factors affecting system performance include the demand for a specific action and the system's response to that demand.

Key Issues
- Increased client response time, reduced throughput, and server resource over-utilization.
- Increased memory consumption, resulting in reduced performance, failure to find data in the cache, and increased data store access.
- Increased database server processing may cause reduced throughput.
- Increased network bandwidth consumption may cause delayed response times and increased load for client and server systems.
- Inefficient queries, or fetching all of the data when only a portion is displayed, may incur unnecessary load on the database server, failure to meet performance objectives, and costs in excess of budget allocations.
- Poor resource management can result in the creation of multiple instances of resources, with the corresponding connection overhead, and can increase the application's response time.

Key Decisions
- How to determine a caching strategy.
- How to design high-performance communication between layers.
- How to choose effective types of transactions, locks, threading, and queuing.
- How to structure the application.
- How to manage resources effectively.
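The two indicators defined above, latency and throughput, can be measured directly with a timing harness. A minimal sketch (the `action`/`events` parameters are illustrative; `events` must be non-empty):

```python
# Sketch: measuring mean latency (seconds per event) and throughput
# (events per second) of an arbitrary action over a batch of events.

import time

def measure(action, events):
    """Run `action` once per event; return (mean latency s, throughput ev/s)."""
    latencies = []
    start = time.perf_counter()
    for e in events:
        t0 = time.perf_counter()
        action(e)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return sum(latencies) / len(latencies), len(events) / elapsed
```

Note the two figures are related but distinct: a batched or concurrent system can raise throughput without lowering per-event latency, which is why the section names both.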

2.3.1.8 Reliability
Reliability is the ability of a system to continue operating as expected over time. Reliability is measured as the probability that a system will not fail and that it will perform its intended function for a specified time interval. Improving the reliability of a system may lead to a more secure system, because it helps to prevent the types of failures that a malicious user may exploit.

Key Issues
- The system may crash.
- The system becomes unresponsive at times.
- Output is inconsistent.
- The system fails because of the unavailability of external dependencies such as other systems, networks, and databases.

Key Decisions
- How to handle unreliable external systems.
- How to detect failures and automatically initiate a failover.
- How to redirect load under extreme circumstances.
- How to take the system offline but still queue pending requests.
- How to handle failed communications.
- How to handle failed transactions.
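The probability-over-an-interval definition above can be computed under the common simplifying assumption of a constant failure rate (the exponential model), which is a modeling choice added here for illustration, not something the document prescribes.

```python
# Sketch: reliability R(t) = exp(-t / MTBF) -- the probability of no failure
# during an interval t, assuming a constant failure rate with mean time
# between failures MTBF. Both figures are in the same unit (e.g. hours).

import math

def reliability(t_hours, mtbf_hours):
    """Probability that the system performs its function for t_hours."""
    return math.exp(-t_hours / mtbf_hours)
```

As expected, reliability is 1 at t = 0 and improves with a larger MTBF for any fixed interval.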

2.3.1.9 Reusability
Reusability is the probability that a component will be used in other components or scenarios to add new functionality with little or no change. Reusability minimizes the duplication of components and also the implementation time. Identifying the common attributes between various components is the first step in building small reusable components of a larger system.

Key Issues
- Using different code or components to achieve the same result in different places.
- Using multiple similar methods, instead of parameters, to implement tasks that vary slightly.
- Using several systems to implement the same feature or function.

Key Decisions
- How to reduce duplication of similar logic in multiple components.
- How to reduce duplication of similar logic in multiple layers or subsystems.
- How to reuse functionality in another system.
- How to share functionality across multiple systems.
- How to share functionality across different subsystems within an application.
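The key issue "multiple similar methods instead of parameters" can be illustrated with a single parameterized function replacing near-duplicate variants; the report-rendering example and its parameters are hypothetical.

```python
# Sketch: one reusable function, parameterized by separator and header,
# in place of near-duplicate csv_report() / tsv_report() / ... variants.

def render_report(rows, sep=",", header=None):
    """Render rows of values as delimited text; sep and header vary per use."""
    lines = []
    if header:
        lines.append(sep.join(header))
    lines.extend(sep.join(str(v) for v in row) for row in rows)
    return "\n".join(lines)

# One function covers what several slightly-different methods would duplicate:
csv_text = render_report([[1, 2], [3, 4]], sep=",", header=["a", "b"])
tsv_text = render_report([[1, 2]], sep="\t")
```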

2.3.1.10 Scalability
Scalability is an attribute of a system that describes its ability to function well as demand changes. Typically, the system should be able to handle increases in size or volume. The aim is to maintain the system's availability, reliability, and performance even when the load increases. There are two methods for improving scalability: scaling vertically and scaling horizontally. Scaling vertically adds more resources, such as CPU, memory, and disk, to a single system. Scaling horizontally adds more machines to serve the application.

Key Issues
- Applications cannot handle increasing load.
- Users incur delays in response and longer completion times.
- The system fails.
- The system cannot queue excess work and process it during periods of reduced load.

Key Decisions
- How to design layers and tiers for scalability.
- How to scale up or scale out an application.
- How to scale the database.
- How to scale the UI.
- How to handle spikes in traffic and load.
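Two of the concerns above, spreading load across horizontally scaled workers and queuing excess work, can be sketched together. This is a toy in-process model under stated assumptions (round-robin dispatch, a bounded queue that discards the oldest request when full); real systems would use a load balancer and a message queue.

```python
# Sketch: scale-out dispatch. Requests are queued (absorbing spikes) and then
# drained round-robin across identical workers. Worker count and queue bound
# are illustrative tuning knobs; workers here are plain callables.

from collections import deque
from itertools import cycle

class ScaleOutDispatcher:
    def __init__(self, workers, max_queued=100):
        self._workers = cycle(workers)          # round-robin over workers
        self._queue = deque(maxlen=max_queued)  # when full, oldest is dropped

    def submit(self, request):
        """Accept work even under a spike; it waits until drained."""
        self._queue.append(request)

    def drain(self):
        """Process queued work, one request per worker turn."""
        results = []
        while self._queue:
            worker = next(self._workers)
            results.append(worker(self._queue.popleft()))
        return results
```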

2.3.1.11 Security
Security is an attribute of a system that must be protected from disclosure or loss of information. Securing a system aims to protect its assets and to prevent unauthorized modification of information. The factors affecting system security are confidentiality, integrity, and availability. Authentication, encryption, and auditing and logging are the features used for securing systems.

Key Issues
- Spoofing of user identity.
- Tampering with data.
- Repudiation.
- Information disclosure.
- Denial of service (DoS).

Key Decisions
- How to address authentication and authorization.
- How to protect against malicious input.
- How to protect sensitive data.
- How to protect against SQL injection.
- How to protect against cross-site scripting.
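The "protect against SQL injection" decision has a standard answer: pass user input as bound parameters, never by string concatenation. A minimal sketch using Python's built-in sqlite3 module; the table and data are illustrative.

```python
# Sketch: parameterized queries versus SQL injection. The ? placeholder makes
# the driver treat the value purely as data, so a crafted input such as
# "' OR '1'='1" cannot alter the query's structure.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name):
    # Safe: the name is bound as a parameter, not concatenated into the SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

The same principle applies to any database driver; the placeholder syntax varies (`?`, `%s`, `:name`) but the mechanism is the same.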

2.3.1.12 Supportability
Supportability is the ability to provide support to a system when it fails to work correctly.

Key Issues
- Lack of diagnostic information.
- Lack of troubleshooting tools.
- Lack of performance and scale metrics.
- Lack of tracing ability.
- Lack of health monitoring.

Key Decisions
- How to monitor system activity.
- How to monitor system performance.
- How to implement tracing.
- How to provide troubleshooting support.
- How to design auditing and logging.

2.3.1.13 Testability
Testability is a measure of how well a system or its components allow you to create test criteria and execute tests to determine whether the criteria are met. Testability allows faults in a system to be isolated in a timely and effective manner.

Key Issues
¬ Complex applications with many processing permutations are not tested consistently.
¬ Automated or granular testing cannot be performed because the application has a monolithic design.
¬ Lack of test planning.
¬ Poor test coverage — manual as well as automated.
¬ Input inconsistencies; for the same input, the output is not the same.
¬ Output inconsistencies — the output does not fully cover the output domain, even though all known variations of input are provided.

Key Decisions
¬ How to ensure an early start to testing during the development life cycle.
¬ How to automate user interaction tests.
¬ How to handle test automation and detailed reporting for highly complex functionality, rules, or calculations.
¬ How to separately test each layer or tier.
¬ How to make it easy to specify and understand system inputs and outputs to facilitate the construction of test cases.
¬ How to clearly define component and communication interfaces.
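The last two Key Decisions — easy-to-specify inputs and outputs, and clearly defined interfaces — can be sketched with a small, well-specified unit. The function and its validation rule are invented for the example:

```python
# A component with an explicit input domain and output is easy to test.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; inputs outside range are rejected."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("input outside the specified domain")
    return round(price * (1 - percent / 100), 2)

# Because the interface is explicit, test cases are easy to construct:
assert apply_discount(100.0, 25) == 75.0
assert apply_discount(80.0, 0) == 80.0
try:
    apply_discount(-1, 10)
except ValueError:
    pass  # improper input is handled predictably, so it can be tested
```

A monolithic design, by contrast, would force every such check to go through the whole application.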

2.3.1.14 User Experience / Usability
The application interfaces must be designed with the user and consumer in mind so that they are intuitive, can be localized and globalized, provide access to disabled users, and provide a good overall user experience.

Key Issues
¬ Too much interaction (an excessive number of "clicks") is required for a task.
¬ There is an incorrect flow through the interface.
¬ Data elements and controls are poorly grouped.
¬ Feedback to the user is poor, especially for errors and exceptions.
¬ The application is unresponsive.

Key Decisions
¬ How to leverage effective interaction patterns.
¬ How to determine user experience acceptance criteria.
¬ How to improve responsiveness for the user.
¬ How to determine the most effective UI technology.
¬ How to enhance the visual experience.

2.3.2 Design Procedure Outline
The following is the recommended design procedure to achieve the quality goals. The design process is divided into Architectural Design (or High-Level Design) and Detailed Design. These are outlined below:

2.3.2.1 Architectural Design:
• The role of the software within the system and the relationships between this software and other software components.
• Assumptions about the environment including operating system, user interface, program, data management, data interchange, graphics, and network services, especially assumptions on which safety functions and computer security needs may be based.
• Architectural Design Description. The design of the software including:
  o Logical or functional decomposition.
  o Description of the modules including mention of the safety and computer security functions.
  o Design of the modules in terms of execution control and data flow including flagging safety and security functions.
  o Design requirements/constraints on the modules.
  o Relationships and interactions between the modules.
  o A schema for the allocation of functions to modules.
  o Logical data design - conceptual schema.
  o Entity/data identification and relationships.
  o Timing and sequencing.
  o Implementation constraints.
  o Each state and mode the software operates in and the modules that execute in each state and mode.
  o Execution control and data flow between modules in the different states and modes.
  o Memory and processing time allocation to the modules.
• External Interface Design. Allocation of the software's external interface requirements to modules. The design of each interface in terms of:
  o Information description.
  o Initiation criteria.
  o Expected response.
  o Protocol and conventions.
  o Error identification, handling and recovery.
  o Queuing.
  o Implementation constraints.
  o Requirements relative to safety and computer security.
• Software requirements allocation to modules.
• Database description. Reference should be made to the document describing the database, its platform, dependencies, etc., or describe it here.
• Human interaction. Screens, types of alert mechanisms.
• Tracking to any other systems in the plant that have safety functions.

2.3.2.2 Detailed Design:
• Modules Design and Traceability to Architectural Design. The design of the software into its modules and traceability between the architectural design modules and software modules. The traceability with flags on safety features and computer security requirements. Descriptions of the modules should include:
  o Inputs and outputs.
  o Functions.
  o Data descriptions and relationships.
  o Diagrams.
  o Control and signal flow.
  o Error handling/messages.
  o Interfaces between modules.
  o Packaging details (placement of modules).
  o Flags on safety, computer security functions.
• Detailed Design of Modules. Design information necessary to code the modules. The information should include:
  o Detailed design to the lowest level.
  o Functions or operations.
  o Algorithms.
  o Specific data definitions including data conversions.
  o Local and global data.
  o Parameters for initiation and adaptation.
  o Logic flow (control flow; timing variations; priority assignments; interrupt priorities and handling).
  o Error detection and handling.
  o Physical data design (internal schema; query language; access method; key, record, and data element definition and structure).
  o Device interface.
  o Interrupts and signals.
  o Limitations that restrict performance of modules.
• External Interface Detailed Design. The traceability of the external interface design of the software into its modules. A description of each external interface including:
  o Type and purpose.
  o Description of the data transmitted across the interface including purpose, source and destination, data type, data representation, size, units of measure, limit/range, accuracy, precision/resolution.
  o Messages transmitted and the assignment of data elements to each message.
  o Priority of interfaces and messages transmitted across them.
  o Protocol (fragmentation and reassembly of messages; error control and recovery procedures; synchronization; flow control; data transfer rate and minimum transfer rate; routing, addressing and naming conventions; transmission services; status, identification, notification and other reporting features; security).
  o Description of user responses to system/software safety/security violations.
• Coding and Implementation Notes. Information such as stubs for incremental development and use of compiler options.
• Information on Planning, Designing, and Executing Tests.

2.3.3 Design System

2.3.3.1 Design Description
Using a schematic, depict and describe the major system components. Describe how these components interact with each other. Document the technologies used for implementation.

2.3.3.2 Design Alternatives
Describe the alternate designs that were considered. Discuss the rationale or criteria for evaluating the design, including the critical issues and trade-offs, the evaluation done, and the final selection.

2.3.3.3 Environment Specifications
Include hardware, software, third party software specifications, operating system, network, development tools, and documentation tools.

2.3.3.4 Design System Decomposition
Provide the decomposition of the system by partitioning it into design entities - sub-systems, modules, processes, and data. Explain how the system is structured, and briefly explain each design entity that is identified. For describing the design entities, use a representation suitable for the design methodology. Graphical representations such as hierarchical decomposition diagrams make the design easier to understand; supplement graphical depictions with natural language descriptions. Examples of representations that may be used include data and program structures, requirements traceability matrices, data flow diagrams, structure charts, finite-state machines, and OOD diagrams.

2.3.3.4.1 Functional Decomposition
Describe the decomposed processes/modules in the system. Appropriate diagrams and hierarchy charts may be used for this purpose. Perform this for all processes/modules. Explain how the system will support the operational concepts and scenarios.

2.3.3.4.2 Data Decomposition
Using the data entities in the Requirement Analysis document, identify all tables, views and the indexing required.

2.3.3.5 Dependency Description
Specify the relationships between the design entities - which entities are dependent, the type of coupling, information shared, order of execution, parameter interfaces, etc. The method chosen to depict this part of the high-level design should adequately represent the type of coupling so that coupling and cohesion can be checked. Typical tools used include data flow diagrams, structure charts, transaction diagrams, finite-state machines, and Object-Oriented Design diagrams.

2.3.3.6 Interface Description
Define the criteria for interfaces. For each design entity, give all the information that will be needed to use the functions provided by the entity. This includes external and internal interface specifications. Interface description is used to ensure that the design entities work with each other. With this component of the design, the detailed design of each design entity has a common base. This also gives the information required for preparing user documentation. Also define the external interfaces. Examples of means used to represent this are DFDs, transform specifications, screen layouts, structure charts, scenarios, data dictionary, finite-state machine, OOD diagrams.

2.3.3.7 Traceability Matrix
Give a cross-reference that maps each requirement in the analysis document to each corresponding section(s) in this document. This should include Functionality and Design Criteria and Constraints documented in the Requirement Analysis document.

2.3.3.8 Make, Buy, Reuse Analysis
Identify the components to be developed, reused or acquired (including COTS) to implement the design. Explain the rationale for the decision.

2.3.3.9 User Interface Description
Describe the general functionality of the system from the user’s perspective.

2.3.3.10 Input Processing
Describe the screen layouts, the screen flow, validations, the processing logic and the user-system dialogues.

2.3.3.11 Output Processing
Describe the output layouts, the parameters required to generate the output and the processing logic to generate the output.

2.3.3.12 Data Design
Explain the need for a database and the considerations that led to the choice of a particular type of database. Include a short description of the data stored there, an estimate of the size and frequency of updates, and special considerations such as security requirements, recovery, interfacing with external systems, report generation, etc.

2.3.3.13 Data Description
Describe the database(s) which is/are part of the system. The tables, indexes, views, referential integrity checks, etc. are specified here.

2.3.3.14 Data Structures
Describe any data structures that are a major part of this system.

2.3.3.15 File and Data Formats
Describe the external data stored in files, the configuration files, and the imported or exported data files. List the files as well as which module reads/writes them, at what instances, and for what purpose. Give the name and a detailed description of each file format.

2.3.3.16 Design Integration Plan
In this section, identify resources required, identify responsibilities, identify the components and subsystems to be integrated, define the integration environment, specify the integration sequence, and define the integration methods, procedures and criteria. Describe the evaluation criteria.

2.4 Standards
[Q. Recommend standards in performing those actions.]
There are several quality models that can be used as reference standards. Each model describes certain quality attributes. These are mentioned below.

ISO/IEC 9126-1:2001 (ISO/IEC, 2001)
¬ Functionality
¬ Reliability
¬ Usability
¬ Efficiency
¬ Maintainability
¬ Portability

McCall/GE Model
¬ Product Operation
  • Accuracy
  • Reliability
    o Error Tolerance
    o Consistency
    o Simplicity
  • Efficiency
  • Integrity
  • Usability
¬ Product Revision
  • Maintainability
  • Flexibility
  • Testability
¬ Product Transition
  • Interface Facility
  • Reusability
  • Transferability

Boehm Model
¬ Validity
¬ Clarity
¬ Understandability
¬ Modifiability
¬ Modularity
¬ Generality
¬ Economy
¬ Resilience
¬ Documentation

These are summarized in the following table:
Table 4: Quality Attribute Standards

After analyzing all standards, one should decide which standards to comply with and can accordingly generate one's own standards that best fit. The following checklists will help in deciding the standards for design outputs.

Completeness
¬ Are the SRS requirements fulfilled?
¬ Is there enough data (logic diagrams, algorithms, storage allocation charts, etc.) available to ensure design integrity?
¬ Are algorithms and equations adequate, accurate, and complete?
¬ Are requirements for the support and test software and hardware to be used in the development of the product included?
¬ Does the design implement required program behavior with respect to each program interface?
¬ Are all program inputs, outputs, and database elements identified and described to the extent needed to code the program?
¬ Does the SDD describe the operational environment into which the program must fit?
¬ Are all required processing steps included?
¬ Are all possible outcomes of each decision point designated?
¬ Does the design take into account all expected situations and conditions?
¬ Does the design specify appropriate behavior in the face of unexpected or improper inputs and other anomalous conditions?
¬ Does the SDD reference all desired programming standards?

Consistency
¬ Are standard terminology and definitions used throughout the SDD?
¬ Are the style of presentation and the level of detail consistent throughout the document?
¬ Does the design configuration ensure integrity of changes?
¬ Is there compatibility of the interfaces?
¬ Is the test documentation compatible with the test requirements of the SRS?
¬ Is the SDD free of internal contradictions?
¬ Are the models, algorithms, and numerical techniques that are specified mathematically compatible?
¬ Are input and output formats consistent to the extent possible?
¬ Are the designs for similar or related functions consistent?
¬ Are the accuracies and units of inputs, database elements, and outputs that are used together in computations or logical decisions compatible?

Correctness
¬ Does the SDD conform to design documentation standards?
¬ Does the design perform only that which is specified in the SRS, unless additional functionality is justified?
¬ Is the test documentation current and technically accurate?
¬ Is the design logic sound - will the program do what is intended?
¬ Is the design consistent with documented descriptions and known properties of the operational environment into which the program must fit?
¬ Do interface designs agree with documented descriptions and known properties of the interfacing elements?
¬ Does the design correctly accommodate all inputs, outputs, and database elements whose format, content, data rate, etc. are not at the discretion of the designer?

Feasibility
¬ Are the specified models, algorithms, and numerical techniques accepted practices for use within nuclear power plants?
¬ Can they be implemented within the constraints imposed on the system and on the development effort?
¬ Are the functions as designed implementable within the available resources?

Modularity
¬ Is there a schema for modularity, e.g., model-based?
¬ Is the design structured so that it comprises relatively small, hierarchically related programs or sets of programs, each performing a particular, unique function?
¬ Does the design use specific criteria to limit program size?

Predictability
¬ Does the design contain programs which provide the required response to identified error conditions?
¬ Does the design schedule computer resources in a manner that is primarily deterministic and predictable rather than dynamic?
¬ Does the design contain a minimum number of interrupts and event-driven software? Is justification given for uses of these features?
¬ Is plausibility checking performed on the execution of programs to uncover errors associated with the frequency and/or order of program execution and the permissiveness of program execution?

Robustness
¬ Are all SRS requirements related to fault tolerance and graceful degradation addressed in the design?

Structuredness
¬ Does the design use a logical hierarchical control structure?

Understandability
¬ Does the SDD avoid unnecessarily complex designs and design representations?
¬ Is the SDD written to allow unambiguous interpretation?

Verifiability/Testability
¬ Does the SDD describe each function using well-defined notation so that the SDD can be verified against the SRS and the code can be verified against the SDD?
¬ Are conditions and constraints identified quantitatively so that tests may be designed?

The recommended standards for performing design are described below.

2.4.1 System Architecture Definition

2.4.1.1 System Structuring
There are many different standard architectural models that could be used, such as the Repository Model, Client-server Model, Abstract Machine Model, etc. Select the one model that best fits the requirements. Depict the architecture using a block diagram where each box represents a sub-system. Use boxes within boxes to indicate sub-systems decomposed into further sub-systems. Use arrows to denote data and/or control flow between sub-systems in the direction of the arrows.

2.4.1.2 Control Modeling
To work as a system, sub-systems must be controlled so that their services are delivered to the right place at the right time. Do not use structural models to convey control information; rather, use control models at the architectural level to describe control flow between sub-systems. Use one or both of the two general approaches to control flow: centralized control and event-based control.

2.4.1.3 System Decomposition
Depict the hierarchical decomposition of the system.

2.4.2 Alternate Architecture Assessment
Use DAR (Decision Analysis and Resolution) to assess the alternate architectures. Use the Architecture Tradeoff Analysis Method (ATAM) developed by the Software Engineering Institute (SEI).

2.4.3 Modular Decomposition
For functional decomposition, use UML to model the classes and components. The most suitable diagram here is the component diagram, which describes the physical partitioning of the software into code components. In addition to the component diagram, a high-level class diagram and other diagrams can be used as appropriate. For data decomposition, use DFD and ER diagrams. Supplement the diagrams with textual descriptions.

2.4.4 Cohesion, Coupling
Component-level design should be functionally independent. The functionality delivered by a component should be cohesive - that is, it should focus on one and only one function or sub-function. Components should be loosely coupled to one another and to the external environment. Component coupling should be kept as low as is reasonable.

2.4.5 Interface Description
The interface specification must be unambiguous, as it allows a sub-system to be used without knowledge of its operation. Services are allocated to different components, and the interfaces of these components are designed. Interfaces define the boundaries of the system [external, from requirements] and the sub-systems [internal, from HLD]. They are natural points for integration. Mention the communication standard between the components.
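The cohesion and coupling guidance above can be sketched as follows. The `Storage` interface and the classes behind it are hypothetical names invented for the example:

```python
from abc import ABC, abstractmethod

# The report component depends only on a narrow, abstract interface,
# not on any concrete storage class - i.e., loose coupling.
class Storage(ABC):
    @abstractmethod
    def load(self) -> list: ...

class InMemoryStorage(Storage):          # one cohesive concrete component
    def __init__(self, rows):
        self._rows = rows
    def load(self):
        return list(self._rows)

def summarize(storage: Storage) -> int:  # coupled only to the interface
    return sum(storage.load())

total = summarize(InMemoryStorage([1, 2, 3]))  # -> 6
```

Swapping in a database-backed `Storage` later would not touch `summarize`, which is exactly what keeping coupling low buys.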

2.5 Knowledge, Skill, Environment and Resource
[Q. Determine requirements of knowledge, skill and environment to perform those actions by conforming to standards.]
Knowledge of software architecture, system decomposition, UML, object-oriented technology, database design, DFD diagrams, ER diagrams, etc. is necessary. Here is a base list of the knowledge needed for successful execution of a project:

Lifecycle
It is very important to have a clear understanding of the development process. It is necessary to choose the appropriate development lifecycle for a given project because all other activities are derived from this process. Having a well-defined process is usually better than having none at all, and in many cases it is less important what process is used than how well it is executed.

Requirements
Everyone needs to be on the same page before jumping into programming. This is a fundamental truth in almost any endeavor, and even more so in a group-driven programming task.

Architecture
Choosing the appropriate architecture for the application is key. One has to know what one is building on before starting a project.

Design
Even if one feels confident about knowing the architecture of the target platform, without a good design the project is going to be sunk. One should not, however, fall into the trap of over-designing the application. Therefore, good design knowledge and skill are very important.

Coding
Building the code is really just a small part of the total project effort, even though it is what most people equate with the whole process, since it is the most visible. The other pieces are equally or even more important. A best practice for coding involves daily builds and testing.

Peer Review
One needs a clear understanding of reviewing. By looking at other people's work, one can learn from it. If one has a problem, chances are someone else has already had and resolved the same problem. One should return the favor by letting others see his/her code and learn from it.

Testing
Testing is an integral part of software development that needs to be planned. It is also important that testing is done proactively, meaning that test cases are planned before coding starts and are developed while the application is being designed and coded.

Deployment
Deployment is the final stage of releasing an application to users.

Following are the knowledge requirements at a glance:
¬ Knowledge of programming languages
¬ Knowledge of current technologies (trends, practices, etc.)
¬ Knowledge of vendor tools
¬ Knowledge of hardware/platforms
¬ Knowledge of software engineering design methodologies such as RUP, Agile, etc.
¬ Knowledge of object modeling languages such as UML
¬ Knowledge of software quality processes
¬ Knowledge of general software development - design patterns, algorithms, security requirements, etc.

The following skills are required for successful execution of software development:
¬ Design skill
¬ Programming skill
¬ Articulate and organized written and verbal communication skills - willingness and ability to communicate
¬ Reasoning and decision-making skill
¬ Analytical skill - abstract thinking and analysis
¬ Critical thinking and analysis skill
¬ Adaptation skill
¬ Negotiation and conflict resolution - experience negotiating detail
¬ Self-starter; motivated to solve design problems
¬ Training/mentoring skills
¬ Change-management skills
¬ Leadership skill
¬ Technological know-how
¬ Organizational skills
¬ Basic project management skills
¬ Drive to learn
¬ Problem-space modeling and creative problem solving
¬ Risk identification and management
¬ Strategic (long-term) and tactical (short-term) project planning
¬ Continuous research on trends and technologies

Below is a list of the main skills:
¬ Software development languages (implementation on Windows and Linux platforms)
¬ Engineering and process modelling languages
¬ System engineering standards
¬ Application of the appropriate development lifecycle
¬ System engineering lifecycle management
¬ Operational analysis
¬ Teaming with other solution providers to develop best-of-breed applications

2.6 Tools
[Q. Needed tools to fully and/or partly automate the execution of those actions for keeping time and cost to a minimum level]
Usually, Microsoft Visio or Rational Rose is used for architectural and component design, ER Studio for data design, and Mind Manager for brainstorming. The following are the tools that are or can be used for software design purposes.

SN  Type of Tool                        Name of Tools
1   Modeling & Simulation               Petri Net: HPSim, MapleSim, ExSpect
2   Architecture and Component Design   Microsoft Visio, Rational Rose
3   Database Design                     ER Studio
4   Unit Testing Tools                  TinyUnit, csUnit, nUnit, Roaster, Specter, jUnit
5   Configuration Management            CVS, ClearCase, CMZ, VSS
6   Brainstorming                       Pex
7   Project Management                  Microsoft Project, Project Insight

2.7 Measurement
[Q. Recommend measurement practices to collect, analyze and process data to measure those quality attributes of code blocks, which are being produced]
A measurement framework has to be developed for all the performance parameters. Here are some guidelines for measurement.

2.7.1 Complexity
For determining design complexity, three different complexities would be measured - structural complexity, data complexity, and system structural complexity. Size is also measured as part of measuring complexity.

2.7.2 Coupling
Coupling is the physical connection between elements of the object-oriented design, i.e., the number of collaborations between classes or the number of messages passed between objects within an object-oriented system. Measure the coupling between object classes (CBO).

2.7.3 Cohesion
Like its counterpart in conventional software, an object-oriented component should be designed so that all operations work together to achieve a single, well-defined purpose. The cohesiveness of a class is determined by examining the degree to which the set of properties it possesses is part of the problem or design domain. Measure the lack of cohesion in methods (LCOM).

Following is a list of some attributes and their sub-attributes which can be used for measurement. Here, measurement can be conducted on the sub-attributes using a scale such as 0-9, where 0 means no compliance and 9 means full compliance.

Readability
Sub-attributes of code readability:
• Using meaningful attribute (function, variable, class, etc.) names
• Using comments efficiently, neither too brief nor too elaborate
• Minimizing logical complexity
• Avoiding frequent function calls or variable calls
• Using a consistent indentation and coding style

Policy for code readability:
• All coders should use similar naming conventions for variables, functions, classes, objects, etc.
• Put adequate comments; in the case of complex functions, give an elaborate explanation
• Minimize logical complexity; if needed, take help from the team leader or senior personnel
• Have a periodic code review plan

Process for code readability:
• The team leader will briefly give an overview of code readability and its performance parameters to the programmers at the beginning of the process
• Do the programming following the standards and in terms of the performance parameters
• Periodic code review in the first week of every month
• Individual code should be submitted 2-3 days before the deadline, after self-review
• If someone fails to be up to the mark, the team leader will guide him to improve

Standard:
• Standard naming conventions for variables, functions, classes, objects, etc.
• Standardized comment size and content, e.g., a one-line comment for a variable declaration and a two-line comment stating the purpose for a function
• Standards can be set for the maximum LOC for a function, the maximum LOC for a source file, the maximum number of global or local variables, etc.

Reusability
Sub-attributes of code reusability:
• Determine project-dependent and project-independent classes
• Use dynamic variables
• Avoid calling functions from within a function
• Prepare common-purpose functions more generously
• Map variables to functions
• Adequate documentation is a must

Policy for code reusability:
• Use dynamic variables and prepare independent functions (not dependent on other functions)
• Prepare project-independent functions/classes and common-purpose functions/classes more generously and independently so that their maximum reuse is confirmed
• Map variables to functions so that it will be easier to reuse them

Process for code reusability:
• Prepare documentation elaborately
• Prepare functions conforming to the performance parameters
• Map variables to functions and functions to classes

Standard:
• Make documentation standards
• Define standard sizes for different functions; e.g., the size may differ for project-dependent functions and common-purpose functions

Execution Time
Sub-attributes of execution time:
• Proper variable declaration
• Frequency of variable declarations within a function
• Frequency of function calls from within a function
• Never nest loops more than two levels deep (each loop contributes a factor of n to the complexity, so deeper nesting is bad programming in terms of time complexity)
• Releasing variables after use
• For complex query processing, use stored procedures to improve execution time

Policy for execution time:
• Handle variables effectively (declaration and release)
• Minimize variable declarations within functions and maximize variable reuse
• Minimize or avoid calling functions within functions, i.e., construct independent functions

Process for execution time:
• Define proper data types; linked lists can be used rather than arrays
• Maximize reuse of variables and release variables after use
• Avoid large and complex query processing; minimize it as much as possible
• Test performance periodically

Standard:
• Define standards for variable declaration and release
• Define acceptable query size and complexity
• Define loading time, instruction execution time, and processing/reporting time
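The LCOM measurement recommended in section 2.7.3 can be sketched in a simplified form. This follows one common variant of the metric (pairs of methods sharing no attribute minus pairs sharing at least one, floored at zero); the class and attribute names below are hypothetical:

```python
from itertools import combinations

# LCOM sketch: for each pair of methods, compare the attribute sets they
# use. P = pairs sharing no attribute, Q = pairs sharing at least one;
# LCOM = max(P - Q, 0). Higher values suggest lower cohesion.
def lcom(method_attrs: dict) -> int:
    p = q = 0
    for a, b in combinations(method_attrs.values(), 2):
        if set(a) & set(b):
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Hypothetical class: two methods share 'balance'; one touches only 'log'.
usage = {"deposit": ["balance"], "withdraw": ["balance"], "audit": ["log"]}
print(lcom(usage))  # one disjoint pair beyond the sharing ones -> 1
```

A result of 0 (e.g., when every method pair shares state) indicates a cohesive class; a positive result flags a class that may be doing more than one job.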

2.8 Control
[Q. Recommend the control actions to deal with QTC variations]
The design should be developed iteratively. For each iteration, a technical review should be conducted, the measurement data analyzed, and modifications made. With each iteration, the designer should strive for greater simplicity.

Following are some recommended control actions:

¬ Update past status
¬ Analyze the impact of new changes
¬ Act on the variance between actual and planned performance
¬ Publish schedule changes
¬ Inform senior management
¬ Analyze which jobs are behind schedule and how they will affect the project's completion date
¬ Figure out what caused these jobs to fall behind schedule
¬ Record what steps have been taken, are being taken, or should be taken to correct the situation, and what the result of these actions has been so far
¬ Decide and record what further actions will be necessary to correct the situation

2.9 Improvement
[Q. Recommend improvements of process elements to be made in order to improve one or more of those code quality parameters]
Improve the above process elements to achieve the desired QTC, and apply this learning to other projects as well. Following is the list of improvement recommendations:

1. Use formal methodologies in every stage of the software development life cycle.
2. Use a disciplined and organized project plan, and monitor it.
3. The architects and developers need to understand the requirements correctly and unambiguously in order to assess whether they are achievable in the final product.
4. Testing almost always occurs late in projects, usually when the system is close to being complete. This should not happen. Quality assurance has to be put in place rather than just quality control.

5. There are different packages on the market for managing project milestones quickly and easily. These tools should be used or put in place. The following are the advantages of such software:
¬ Scheduled dates
¬ Gantt charts
¬ Milestone reports
¬ Resource allocation
¬ Project costs
¬ Cash flow schedules
¬ Work breakdown structure

3. Conclusion
The information produced here is partly based on information gathered from industry practices, from different reference materials, from the lectures delivered in the school, from self-judgement, and from some assumptions. If this information is to be used as a reference, the process elements need to be updated accordingly.


								