Software tester (SQA) interview questions

These questions are used for software tester or SQA (Software Quality Assurance) positions. Refer to The Real World of Software Testing for more information in the field.

      Top management felt that whenever there were changes in the technology
   being used, development schedules, etc., it was a waste of time to update the
   Test Plan. Instead, they emphasized that you should put your time into testing
   rather than working on the test plan. Your Project Manager asked for your
   opinion. You have argued that the Test Plan is very important and needs to be
   updated from time to time: it is not a waste of time, and testing activities are
   more effective when the plan is clear. Using some metrics, how would you
   support your argument that the test plan should be kept consistently up to date?
    The QAI is starting a project to put the CSTE certification online. They will
   use an automated process for recording candidate information, scheduling
   candidates for exams, keeping track of results and sending out certificates. Write a
   brief test plan for this new project.
    The project had a very high cost of testing. After going into the details, someone
   found out that the testers are spending their time on software that doesn’t have too
   many defects. How will you make sure that this is correct?
    What are the disadvantages of overtesting?
    What happens to the test plan if the application has a functionality not
   mentioned in the requirements?
    You are given two scenarios to test. Scenario 1 has only one terminal for entry
   and processing whereas scenario 2 has several terminals where the data input can
   be made. Assuming that the processing work is the same, what would be the
   specific tests that you would perform in Scenario 2 that you would not carry out
   on Scenario 1?
    Your customer does not have experience in writing Acceptance Test Plans. How
   will you do that in coordination with the customer? What will be the contents of
   the Acceptance Test Plan?
    How do you know when to stop testing?
    What can you do if the requirements are changing continuously?
    What is the need for Test Planning?
    Define and explain any three aspects of code review?
    Explain 5 risks in an e-commerce project. Identify the personnel that must be
   involved in the risk analysis of a project and describe their duties. How will you
   prioritize the risks?
   What are the various status reports that you need to generate for Developers and
Senior Management?
 You have been asked to design a Defect Tracking system. Think about the
fields you would specify in the defect tracking system?
 Write a sample Test Policy?
 Explain the various types of testing after arranging them in chronological order.
 Explain what test tools you will need for client-server testing and why?
 Explain what test tools you will need for Web app testing and why?
 Explain the pros and cons of testing done by the development team and testing by an
independent team?
 Differentiate Validation and Verification?
 Explain Stress, Load and Performance testing?
 Describe automated capture/playback tools and list their benefits?
 How can software QA processes be implemented without stifling productivity?
 How is testing affected by object-oriented designs?
 What is extreme programming and what does it have to do with testing?
 Write a test transaction for a scenario where a 6.2% tax deduction must be applied
to the first $62,000 of income.
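One way to answer this question is to pin down the boundary around the $62,000 cap. A minimal sketch in Python; the function name and the integer-cents representation are assumptions made for this example, not part of the question:

```python
def tax_deduction_cents(income_cents: int) -> int:
    """6.2% deduction applied to the first $62,000 of income.

    Computed in integer cents to avoid floating-point rounding surprises.
    """
    CAP_CENTS = 6_200_000        # $62,000.00 expressed in cents
    taxable = min(income_cents, CAP_CENTS)
    return taxable * 62 // 1000  # 6.2% == 62/1000, exact in integer math

# Boundary-value test transactions around the cap
print(tax_deduction_cents(0))          # 0
print(tax_deduction_cents(6_199_999))  # 384399 ($3,843.99, just below the cap)
print(tax_deduction_cents(6_200_000))  # 384400 ($3,844.00, exactly at the cap)
print(tax_deduction_cents(6_200_001))  # 384400 (above the cap, deduction stops growing)
```

The transactions at, just below, and just above the cap are exactly the boundary cases an interviewer is looking for.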
 What would be the Test Objective for Unit Testing? What are the quality
measurements to assure that unit testing is complete?
 Prepare a checklist for the developers on Unit Testing before the application
comes to testing department.
 Draw a pictorial diagram of a report you would create for developers to
determine project status.
 Draw a pictorial diagram of a report you would create for users and
management to determine project status.
 What 3 tools would you purchase for your company for use in testing? Justify
the need?
 Put the following concepts in order and provide a brief description
of each:
        o system testing
        o acceptance testing
        o unit testing
        o integration testing
        o benefits realization testing
 What are two primary goals of testing?
 If your company is going to conduct a review meeting, who should be on the
review committee and why?
 Name any three attributes that will impact the testing process.
    What activity is done in Acceptance Testing that is not done in System Testing?
 You are a tester for testing a large system. The system data model is very large
with many attributes and there are a lot of inter-dependencies within the fields.
What steps would you use to test the system and also what are the effects of the
steps you have taken on the test plan?
 Explain and provide examples for the following black box techniques?
         o Boundary Value testing
         o Equivalence testing
         o Error Guessing
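As a hedged illustration of the first two techniques, here is a small Python sketch that derives boundary-value and equivalence-class inputs for a range-limited field; the function names and the 18..65 age range are invented for the example:

```python
def boundary_values(lo: int, hi: int) -> list[int]:
    """Classic boundary-value cases for an inclusive [lo, hi] integer field."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_classes(lo: int, hi: int) -> dict[str, int]:
    """One representative input per equivalence class: below, inside, above."""
    return {
        "invalid_below": lo - 10,
        "valid": (lo + hi) // 2,
        "invalid_above": hi + 10,
    }

# e.g., an age field that accepts values 18..65
print(boundary_values(18, 65))      # [17, 18, 19, 64, 65, 66]
print(equivalence_classes(18, 65))  # {'invalid_below': 8, 'valid': 41, 'invalid_above': 75}
```

Error guessing, by contrast, is experience-driven (empty input, very long strings, special characters) and does not reduce to a formula.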
 What are the product standards for?
         o Test Plan
         o Test Script and Test Report
 You are the test manager starting on system testing. The development team
says that due to a change in the requirements, they will be able to deliver the
system for SQA 5 days past the deadline. You cannot change the resources (work
hours, days, or test tools). What steps will you take to be able to finish the testing
in time?
 Your company is about to roll out an e-commerce application. It’s not possible
to test the application on all types of browsers on all platforms and operating
systems. What steps would you take in the testing environment to reduce the
business risks and commercial risks?
 In your organization, testers are delivering code for system testing without
performing unit testing. Give an example of test policy:
         o Policy statement
         o Methodology
         o Measurement
 Testers in your organization are performing tests on the deliverables even after
significant defects have been found. This has resulted in unnecessary testing of
little value, because re-testing needs to be done after defects have been rectified.
You are going to update the test plan with recommendations on when to halt
testing. What recommendations are you going to make?
 How do you measure:
         o Test Effectiveness
         o Test Efficiency
 You found out that senior testers are making more mistakes than junior testers;
you need to communicate this aspect to the senior tester. Also, you don’t want to
lose this tester. How should one go about constructive criticism?
 You are assigned to be the test lead for a new program that will automate
take-offs and landings at an airport. How would you write a test strategy for this
new program?
SQL Servers
What is a major difference between SQL Server 6.5 and 7.0, platform-wise?
   SQL Server 6.5 runs only on Windows NT Server. SQL Server 7.0 runs on
Windows NT Server, workstation and Windows 95/98.

Is SQL Server implemented as a service or an application?
   It is implemented as a service on Windows NT server and workstation and as an
application on Windows 95/98.

What is the difference in Login Security Modes between v6.5 and 7.0?
   7.0 doesn't have Standard Mode, only Windows NT Integrated mode and Mixed
mode, which consists of both Windows NT Integrated and SQL Server authentication.

What is a traditional Network Library for SQL Servers?
   Named Pipes

What is the default TCP/IP socket assigned for SQL Server?
   Port 1433.

If you encounter this kind of error message, what do you need to look into to solve
the problem? "[Microsoft][ODBC SQL Server Driver][Named Pipes]Specified SQL
Server not found."
   1. Check whether the MS SQL Server service is running on the computer you are trying to log into.
   2. Check the Client Configuration utility; the client and the server have to be in sync.

What are the two options the DBA has to assign a password to sa?
   a) use a SQL statement:
Use master
Exec sp_password NULL, 'new_password', 'sa'   -- 'new_password' is a placeholder
b) use the Query Analyzer utility

What is the new philosophy for database devices in SQL Server 7.0?
   There are no devices anymore in SQL Server 7.0; it uses the file system now.
When you create a database, how is it stored?
   It is stored in two separate files: one file contains the data, system tables, and
other database objects; the other file stores the transaction log.

Let's assume you have data that resides on SQL Server 6.5. You have to move it to
SQL Server 7.0. How are you going to do it?
   You have to use the transfer command.


Have you ever tested 3-tier applications?

Do you know anything about DirectConnect software? Who is the vendor of the software?

What platform does it run on?

How did you use it? What kind of tools have you used to test connection?
   SQL Server or Sybase client tools.

How do you set up permissions for a 3-tier application?
   Contact the System Administrator.

What UNIX command do you use to connect to a UNIX server?
   ftp <server name>

Do you know how to configure DB2 side of the application?
   Set up an application ID, create RACF group with tables attached to this group,
attach the ID to this RACF group.
Web Application

What LAN types do you know?
   Ethernet networks and token ring networks.

What is the difference between them?
  With Ethernet, any devices on the network can send data in a packet to any location
on the network at any time. With Token Ring, data is transmitted in 'tokens' from
computer to computer in a ring or star configuration.

Steve Dalton from ExchangeTechnology: "This is such a common mistake that
people make about Token Ring that I didn't want it to be propagated further!"
Token ring is the IEEE 802.5 standard that connects computers together in a closed
ring. Devices on the ring cannot transmit data until permission is received from the
network in the form of an electronic 'token'. The token is a short message that can be
passed around the network when the owner is finished. At any time, one node owns
the token and is free to send messages. As with Ethernet the messages are packetized.
The packet = start_flag + address + header + message + checksum + stop_flag. The
message packets circulate around the ring until the addressed recipient receives them.
When the sender is finished sending the full message (normally many packets), he
sends the token.
An Ethernet message is sent in packets too. The sending protocol goes like this:
· wait until you see no activity on the network
· begin sending your message packet
· while sending, check simultaneously for interference (another node wants to send)
· as long as all is clear, continue sending your message
· if you detect interference, abort your transmission, wait a random length of time,
and try again.

Token Ring speed is 4/16 Mbit/sec; Ethernet is 10/100 Mbit/sec.
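The packet layout described above (start_flag + address + header + message + checksum + stop_flag) can be sketched as a toy Python function. This is purely illustrative: the flag byte, the two-byte length header, and the one-byte additive checksum are invented for the example and do not match the real IEEE 802.5 frame format:

```python
def make_frame(address: bytes, message: bytes) -> bytes:
    """Build a toy frame: start_flag + address + header + message + checksum + stop_flag.

    The 0x7E flag byte, 2-byte big-endian length header, and 1-byte additive
    checksum are invented for illustration, not the real IEEE 802.5 layout.
    """
    START = STOP = b"\x7e"
    header = len(message).to_bytes(2, "big")   # toy header: message length
    body = address + header + message
    checksum = bytes([sum(body) & 0xFF])        # toy additive checksum
    return START + body + checksum + STOP

frame = make_frame(b"\x01\x02", b"hi")
print(frame.hex())
```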

What protocol do both networks use? What does it stand for?
   TCP/IP. Transmission Control Protocol/ Internet Protocol.

How many bits does an IP address consist of?
   An IP Address is a 32-bit number.
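Since an IPv4 address is just a 32-bit number, the familiar dotted-quad notation and the integer form are interconvertible. A small sketch using only the Python standard library:

```python
import socket
import struct

def ip_to_int(dotted: str) -> int:
    """Pack a dotted-quad IPv4 address into its 32-bit integer form."""
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

def int_to_ip(n: int) -> str:
    """Unpack a 32-bit integer back into dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("!I", n))

print(ip_to_int("192.168.0.1"))  # 3232235521
print(int_to_ip(3232235521))     # 192.168.0.1
```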

How many layers is the TCP/IP protocol composed of?
   Five (Application, Transport, Internet, Data Link, Physical).

How do you define testing of network layers?
   Review with your developers to identify the layers of the network architecture
that your Web client and Web server application interact with. Determine the
hardware and software configuration dependencies for the application under test.

How do you test proper TCP/IP configuration on a Windows machine?
   Run WINIPCFG (Windows 95/98) or IPCONFIG (Windows NT), then ping the
server by IP address and by host name.

What is a component-based architecture? How do you approach testing of a
component-based application?
· Define how many and what kind of components your application has
· Identify how server-side components are distributed
· Identify how server-side software components interact with each other
· Identify how Web-to-Database connectivity is implemented
· Identify how processing load is distributed between client and server to prepare for
load, stress, and performance testing
· Prepare for compatibility and reliability testing

How do you maintain browser settings?
   Go to Control Panel, Internet Options.

What kinds of testing considerations do you have to keep in mind for security testing?
   In a client/server system, every component carries its own security weaknesses.
The primary components that need to be tested are:
· application software
· the database
· servers
· the client workstations
· the network

How to Hire a QA Person
What criteria do people use to select QA engineers? It’s natural to think that the right
kinds of people to hire are people just like you—but this can be a mistake. In fact,
every job requires its own unique set of skills and personality types, and the skills that
make you successful in your field may have significant differences from the skills
needed for the QA job.
If you read many job posting specifications for QA roles, you’ll find that they
commonly describe skills that are much more appropriate for a developer, including
specific knowledge of the company's unique technology. Some specifications are so
unique and lofty that it seems the only qualified candidates would be former heads of
development.

Realistically, the QA person you seek should have the adaptability, intelligence, and
QA-specific skills that will enable them to come up to speed on your project quickly.
Relevant experience includes testing procedures, test writing, puzzle solving,
follow-through, communication, and the "QA mindset."

Unless they are testing a programming interface or scripting language, a QA person’s
role is to test the product from the end user’s perspective. Contrast this with
developers, who look at the product from a code perspective. Consider the difference
between being focused on making the code perform in a very specific way and
wondering what would happen if you did "this" instead of "that" through the user
interface.

It’s remarkable that the people who are assigned to interview QA candidates tend to
be anything but QA people themselves. Most often, developers and HR people do the
bulk of the interviewing. Yet QA is a unique discipline, and candidates need to be
evaluated from a QA point of view, as would accountants, advertising staff, and other
specialized professionals. QA people often have the feeling that they need to have two
sets of skills: those that interview well with development engineers, and those that
they actually need once they get the job.

What Not to Do
The first mistake you can make is to assume that you don’t really need a QA person.
Code-based unit tests do not represent the end user’s interaction with the product. If
you tell your boss that you "just know" it works, or base your assumptions on unit
tests, she probably won’t feel reassured. Before the big rollout, she is going to want
metrics, generated by a professional.

The second mistake is to conduct the interview as you would for a development
position. Even though more and more QA people are getting into programming, most
of them aren’t developers. If you give most QA people a C++ test, they will fail.

Quite often, developers are tagged and thrown into a room with a QA candidate just to
round out the interview process and make sure that everyone on the team feels
comfortable with the choice. But many developers only know how to interview from a
developer’s perspective. When asked to interview someone, they will usually give
them a programming test, which might eliminate candidates who have the best QA
skills.

Unless they are testing from the API level, most QA people don’t go near the code.
They approach the product from a user’s perspective. You are not looking for a
programmer; you are looking for someone to represent the user and evaluate the
product from their perspective.

What QA People Do
If the actual requirements of QA almost never involve any experience with the
programming language, environment, and operating system, and very little to do with
the type of program being created, what criteria should we be looking for? If QA
people aren’t programmers, what do they do?

1. They Are Sleuths. Perhaps most important, a QA person needs to be an investigator
in order to define the job and understand the project. There may or may not be a
product specification (spec) available defining the project. Too often the spec is
nonexistent, bare bones, or woefully out of date. Furthermore, the difference between
the original bare-bones spec and the current but undocumented reality is known and
discussed only in private development meetings at which QA people are usually not
present. QA is usually not deliberately excluded, just overlooked because
development’s focus is to accomplish the task, not necessarily to share their
information with everyone.

Thus a QA person needs to have the investigative skills to seek out information
through all available means: manuals, specs, interviews, emails, and good old trial and
error. What is the product really supposed to do? What is the customer expectation?
How will management know when the product is ready to ship? What measurable
standards must be met? What are the individual developers working on now and what
are they most concerned about? This investigation is the job of all QA people. Some
experienced developers may find this in conflict with their experience, as some
organizations set development tasks in a hierarchical way, with job specifications
coming down from the architect and individual contributors dealing with specific
focused subsets. It may seem natural to expect QA to work the same way, but in fact
each QA person needs to be an independent investigator with a broad picture. Where
developers are making code conform to specifications, QA people are looking for the
needle-in-a-haystack problems and unexpected scenarios, in addition to verifying that
the product actually works as expected.

2. They Know How to Plan. A QA person needs to plan and set priorities. There is a
definable project to test. Given all the possible combinations of expected uses, as well
as all the potential unexpected scenarios including human and mechanical failure, one
can imagine an infinite number of possibilities. Organizing one’s activity to get the
most effective results in the (usually) limited time available is of paramount
importance.

Further, this is an ever-changing evaluation. In ideal circumstances, QA is on pace
with or even ahead of development. QA should be included in the earliest planning so
that at the same time developers are figuring out how to build the code, QA is figuring
out how to test the code, anticipating resource needs and planning training. But more
likely, QA is brought to the project late in its development and is racing to catch up.
This requires planning and prioritization with a vengeance.

Consider also that each new build represents a threat to established code. Code that
worked in previous builds can suddenly fail because of coding errors, new conflicts,
miscommunication, and even compiler errors introduced in the latest build. Therefore,
each new build needs to be verified again to assure that good code remains good. A
useful prioritization of tasks would be to

· spot-check the new build for overall integrity before accepting it for general testing
· verify that new bug fixes have indeed been fixed
· exercise new code that was just added, as this is the area most likely to have problems
· revalidate the established code in general as much as you can before the next build arrives

Outside of straightforward functional testing, there may be requirements for
performance testing, platform testing, and compatibility testing that should run in
environments separate from the standard development and test environment. That’s a
lot of testing to manage. QA people have to be able to react at a moment’s notice to
get on top of sudden changes in priority, then return to the game plan again after the
emergency has been met.

3. They See the Big Picture. A QA person needs the "QA mindset." Generally, a
development engineer needs to be a focused person who drives toward a specific goal
with a specific portion of the larger program. Highly focused and detail-oriented
persons tend to do well at this. QA, however, is not a good place for a highly focused
person. QA in fact needs to have multiple perspectives and the ability to approach the
task at many levels from the general to the specific, not to mention from left field. A
highly focused person could miss too many things in a QA role by exhaustively
testing, say, the math functions, but not noticing that printing doesn’t work.

4. They Know How to Document. A major portion of the QA role involves writing.
Plans need to be written, both the master plan kind and the detailed test script kind. As
the project evolves, these documents need to be updated as well. A good QA person
can write testing instructions so that any intelligent person with basic user skills can
pick up the script and test the product unaided. Bug reports are another major
communication tool and QA people need to have the ability to define the bug in steps
that are easy to understand and reproduce. It would be good to ask a candidate to
bring samples of bug reports and testing instructions to the interview. Lacking these,
look for any technical writing samples that show that the candidate can clearly and
economically communicate technical subject matter.

5. They Care About the Whole Project. It’s also important for the candidate to have a
passion for getting things right. Ultimately, QA is entrusted with watching the process
with a big-picture perspective, to see that it all comes together as well as possible.
Everyone has that goal, but most are too busy working on their individual trees to see
how the forest is doing. QA candidates should exhibit a passion for making the project
successful, for fighting for the right thing when necessary, yet with the practical
flexibility to know when to let go and ship the project.

How to Hire Right
So how do you evaluate a complete stranger for QA skills?

Here’s one idea. Find a simple and familiar window dialog such as a print dialog, and
ask your candidates to describe how they would go about writing a test for it. Look
for thoroughness and for the ability to approach the items from many angles. A good
QA person will consider testing that the buttons themselves work (good QA people
don’t trust things that are supposed to work without question), then that the functions
are properly hooked up to the buttons. They should suggest various kinds of print jobs.
They should suggest testing the same dialog on various supported platforms and
exception testing if the network is down or a printer is out of paper. They should
mention the appearance and perhaps the working of the dialog. Performance testing
may also come up, as well as the handling of various kinds of content. The more
variations on a theme they come up with, the stronger a candidate they are.

Another idea is to present them with a function-testing scenario in which there is no
specification from which to develop a test plan. Ask them how they would learn about
the function and the test. Their answers should include documentation, old tests,
marketing people, conversations with the developers, reading the bug database, trial
and error, and writing up presumptions to be presented to developers for evaluation
and correction. Again, look for variety and creativity in finding solutions.

QA people need to be creative problem solvers. They like to grab onto a problem and
figure out the solution by whatever means they can. They will peek at the answers of
a crossword puzzle to break a deadlock. They will come up with a new solution to an
old problem. They are aware of the details when finding a solution, yet they have the
ability to think outside the box, to appreciate new and experimental aspects. Some
successful interviewers keep one or two "brain teaser" types of puzzles on hand for
the candidates to work out. Candidates are asked to solve the problem and explain
their thinking as they go. Whether they find the answer is not as important. Listen to
their thinking process as they work. If they are able to attack the problem from many
directions and not give up after the first failures, they are showing the right thinking
style. Particularly look to see if they dig into the problem with real enjoyment. A true
QA person would.

Of course, QA people need to be intuitively technical. They can usually program a
VCR and use most technical equipment without needing the instructions (at least for
basic functionality). Other people go to them for help with technical things. Listen for
examples of this in their conversation. For example, if they are computer inquisitive,
they don't just use software, they tinker with it. They inquire into the details and
obscure corners of functionality and try things to see how they work. They may have
stories of some creative accomplishment using software differently than intended by
the developers, such as using a spreadsheet to write documents.

Good QA people are always learning, whether advancing their technical skills or
learning something entirely new. Listen for signs of self-directed learning in the
interview.

Good QA people have a sense of ownership and follow-through in their work. They
are directly responsible for their work and its contribution to the overall process. They
are good at taking a general instruction and fleshing out the details of their work on
their own. They will work long and hard at it. Let them tell stories of their
achievements and successes in overcoming bad situations. Look for the passion, the
ownership, and the pride.

The key thing to remember is that the kinds of skills and mindset needed for QA work
are different from those needed for other roles. Spend some time getting to know good
QA people in your organization and getting to know what characteristics make them
successful. Seek out their opinions on what to look for. Develop a consistent
interviewing approach that you use over and over so that you become familiar with
the range of responses from various candidates. And for goodness’ sake, use your own
QA people, even the new ones, to evaluate new candidates.

About the Author

Bill Bliss is a QA manager and consultant whose clients include Lotus Development,
Digital, and Dragon Systems.

Mitch Allen is an author and consultant whose many clients have included Fleet,
Caterpillar, IBM, Lotus Development and Dragon Systems. He is currently working
on a book about Flash programming, due to be published by the end of 2002.

What is 'Software Quality Assurance'?
Software QA involves the entire software development PROCESS - monitoring and
improving the process, making sure that any agreed-upon standards and procedures
are followed, and ensuring that problems are found and dealt with. It is oriented to
'prevention'. (See the Bookstore section's 'Software QA' category for a list of useful
books on Software Quality Assurance.)
What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and
evaluating the results (e.g., 'if the user is in interface A of the application while using
hardware B, and does C, then D should happen'). The controlled conditions should
include both normal and abnormal conditions. Testing should intentionally attempt to
make things go wrong to determine if things happen when they shouldn't or things
don't happen when they should. It is oriented to 'detection'. (See the Bookstore
section's 'Software Testing' category for a list of useful books on Software Testing.)

Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they're the combined responsibility of one group or individual. Also
common are project teams that include a mix of testers and developers who work
closely together, with overall QA processes monitored by project managers. It will
depend on what best fits an organization's size and business structure.
What are some recent major computer system failures caused by software bugs?

A September 2006 news report indicated problems with software used in a state
government's primary election: periodic unexpected rebooting of voter check-in
machines (separate from the electronic voting machines) resulted in confusion and
delays at voting sites. The problem was reportedly due to insufficient testing.
In August of 2006 a U.S. government student loan service erroneously made public
the personal data of as many as 21,000 borrowers on its web site, due to a software
error. The bug was fixed and the government department subsequently offered to
arrange for free credit monitoring services for those affected.
A software error reportedly resulted in overbilling of up to several thousand dollars to
each of 11,000 customers of a major telecommunications company in June of 2006. It
was reported that the software bug was fixed within days, but that correcting the
billing errors would take much longer.
News reports in May of 2006 described a multi-million dollar lawsuit settlement paid
by a healthcare software vendor to one of its customers. It was reported that the
customer claimed there were problems with the software they had contracted for,
including poor integration of software modules, and problems that resulted in missing
or incorrect data used by medical personnel.
In early 2006 problems in a government's financial monitoring software resulted in
incorrect election candidate financial reports being made available to the public. The
government's election finance reporting web site had to be shut down until the
software was repaired.
Trading on a major Asian stock exchange was brought to a halt in November of 2005,
reportedly due to an error in a system software upgrade. The problem was rectified
and trading resumed later the same day.
A May 2005 newspaper article reported that a major hybrid car manufacturer had to
install a software fix on 20,000 vehicles due to problems with invalid engine warning
lights and occasional stalling. In the article, an automotive software specialist
indicated that the automobile industry spends $2 billion to $3 billion per year fixing
software problems.
Media reports in January of 2005 detailed severe problems with a $170 million
high-profile U.S. government IT systems project. Software testing was one of the five
major problem areas according to a report of the commission reviewing the project. In
March of 2005 it was decided to scrap the entire project.
In July 2004 newspapers reported that a new government welfare management system
in Canada costing several hundred million dollars was unable to handle a simple
benefits rate increase after being put into live operation. Reportedly the original
contract allowed for only 6 weeks of acceptance testing and the system was never
tested for its ability to handle a rate increase.
Millions of bank accounts were impacted by errors due to installation of inadequately
tested software code in the transaction processing system of a major North American
bank, according to mid-2004 news reports. Articles about the incident stated that it
took two weeks to fix all the resulting errors, that additional problems resulted when
the incident drew a large number of e-mail phishing attacks against the bank's
customers, and that the total cost of the incident could exceed $100 million.
A bug in site management software utilized by companies with a significant
percentage of worldwide web traffic was reported in May of 2004. The bug resulted in
performance problems for many of the sites simultaneously and required disabling of
the software until the bug was fixed.
According to news reports in April of 2004, a software bug was determined to be a
major contributor to the 2003 Northeast blackout, the worst power system failure in
North American history. The failure involved loss of electrical power to 50 million
customers, forced shutdown of 100 power plants, and economic losses estimated at $6
billion. The bug was reportedly in one utility company's vendor-supplied power
monitoring and management system, which was unable to correctly handle and report
on an unusual confluence of initially localized events. The error was found and
corrected after examining millions of lines of code.
In early 2004, news reports revealed the intentional use of a software bug as a
counter-espionage tool. According to the report, in the early 1980's one nation
surreptitiously allowed a hostile nation's espionage service to steal a version of
sophisticated industrial software that had intentionally-added flaws. This eventually
resulted in major industrial disruption in the country that used the stolen flawed software.
A major U.S. retailer was reportedly hit with a large government fine in October of
2003 due to web site errors that enabled customers to view one another's online orders.
News stories in the fall of 2003 stated that a manufacturing company recalled all their
transportation products in order to fix a software problem causing instability in certain
circumstances. The company found and reported the bug itself and initiated the recall
procedure in which a software upgrade fixed the problems.
In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage
company could proceed; the lawsuit reportedly involved claims that the company was
not fixing system problems that sometimes resulted in failed stock trades, based on
the experiences of 4 plaintiffs during an 8-month period. A previous lower court's
ruling that "...six miscues out of more than 400 trades does not indicate negligence."
was invalidated.
In April of 2003 it was announced that a large student loan company in the U.S. made
a software error in calculating the monthly payments on 800,000 loans. Although
borrowers were to be notified of an increase in their required payments, the company
will still reportedly lose $8 million in interest. The error was uncovered when
borrowers began reporting inconsistencies in their bills.
News reports in February of 2003 revealed that the U.S. Treasury Department mailed
50,000 Social Security checks without any beneficiary names. A spokesperson
indicated that the missing names were due to an error in a software change.
Replacement checks were subsequently mailed out with the problem corrected, and
recipients were then able to cash their Social Security checks.
In March of 2002 it was reported that software bugs in Britain's national tax system
resulted in more than 100,000 erroneous tax overcharges. The problem was partly
attributed to the difficulty of testing the integration of multiple systems.
A newspaper columnist reported in July 2001 that a serious flaw was found in
off-the-shelf software that had long been used in systems for tracking certain U.S.
nuclear materials. The same software had been recently donated to another country to
be used in tracking their own nuclear materials, and it was not until scientists in that
country discovered the problem, and shared the information, that U.S. officials
became aware of the problems.
According to newspaper stories in mid-2001, a major systems development contractor
was fired and sued over problems with a large retirement plan management system.
According to the reports, the client claimed that system deliveries were late, the
software had excessive defects, and it caused other systems to crash.
In January of 2001 newspapers reported that a major European railroad was hit by the
aftereffects of the Y2K bug. The company found that many of their newer trains
would not run due to their inability to recognize the date '31/12/2000'; the trains were
started by altering the control system's date settings.
News reports in September of 2000 told of a software vendor settling a lawsuit with a
large mortgage lender; the vendor had reportedly delivered an online mortgage
processing system that did not meet specifications, was delivered late, and didn't work.
In early 2000, major problems were reported with a new computer system in a large
suburban U.S. public school district with 100,000+ students; problems included
10,000 erroneous report cards and students left stranded by failed class registration
systems; the district's CIO was fired. The school district decided to reinstate its
original 25-year old system for at least a year until the bugs were worked out of the
new system by the software vendors.
A review board concluded that the NASA Mars Polar Lander failed in December 1999
due to software problems that caused improper functioning of retro rockets utilized by
the Lander as it entered the Martian atmosphere.
In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was
believed to be lost in space due to a simple data conversion error. It was determined
that spacecraft software used certain data in English units that should have been in
metric units. Among other tasks, the orbiter was to serve as a communications relay
for the Mars Polar Lander mission, which failed for unknown reasons in December
1999. Several investigating panels were convened to determine the process failures
that allowed the error to go undetected.
Bugs in software supporting a large commercial high-speed data network affected
70,000 business customers over a period of 8 days in August of 1999. Among those
affected was the electronic trading system of the largest U.S. futures exchange, which
was shut down for most of a week as a result of the outages.
In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military
satellite launch, the costliest unmanned accident in the history of Cape Canaveral
launches. The failure was the latest in a string of launch failures, triggering a complete
military and industry review of U.S. space launch programs, including software
integration and testing processes. Congressional oversight hearings were requested.
A small town in Illinois in the U.S. received an unusually large monthly electric bill
of $7 million in March of 1999. This was about 700 times larger than its normal bill.
It turned out to be due to bugs in new software that had been purchased by the local
power company to deal with Y2K software issues.
In early 1999 a major computer game company recalled all copies of a popular new
product due to software problems. The company made a public apology for releasing
a product before it was ready.
The computer system of a major online U.S. stock trading service failed during
trading hours several times over a period of days in February of 1999 according to
nationwide news reports. The problem was reportedly due to bugs in a software
upgrade intended to speed online trade confirmations.
In April of 1998 a major U.S. data communications network failed for 24 hours,
crippling a large part of some U.S. credit card transaction authorization systems as
well as other large U.S. bank, retail, and government data systems. The cause was
eventually traced to a software bug.
January 1998 news reports told of software problems at a major U.S.
telecommunications company that resulted in no charges for long distance calls for a
month for 400,000 customers. The problem went undetected until customers called up
with questions about their bills.
In November of 1997 the stock of a major health industry company dropped 60% due
to reports of failures in computer billing systems, problems with a large database
conversion, and inadequate software testing. It was reported that more than
$100,000,000 in receivables had to be written off and that multi-million dollar fines
were levied on the company by government agencies.
A retail store chain filed suit in August of 1997 against a transaction processing
system vendor (not a credit card company) due to the software's inability to handle
credit cards with year 2000 expiration dates.
In August of 1997 one of the leading consumer credit reporting companies reportedly
shut down their new public web site after less than two days of operation due to
software problems. The new site allowed web site visitors instant access, for a small
fee, to their personal credit reports. However, a number of initial users ended up
viewing each other's reports instead of their own, resulting in irate customers and
nationwide publicity. The problem was attributed to "...unexpectedly high demand
from consumers and faulty software that routed the files to the wrong computers."
In November of 1996, newspapers reported that software bugs caused the 411
telephone information system of one of the U.S. RBOCs to fail for most of a day.
Most of the 2000 operators had to search through phone books instead of using their
13,000,000-listing database. The bugs were introduced by new software modifications
and the problem software had been installed on both the production and backup
systems. A spokesman for the software vendor reportedly stated that 'It had nothing to
do with the integrity of the software. It was human error.'
On June 4 1996 the first flight of the European Space Agency's new Ariane 5 rocket
failed shortly after launching, resulting in an estimated uninsured loss of a half billion
dollars. It was reportedly due to the lack of exception handling of a floating-point
error in a conversion from a 64-bit floating-point value to a 16-bit signed integer.
Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be
credited with $924,844,208.32 each in May of 1996, according to newspaper reports.
The American Bankers Association claimed it was the largest such error in banking
history. A bank spokesman said the programming errors were corrected and all funds
were recovered.
Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear
war in 1983, according to news reports in early 1999. The software was supposed to
filter out false missile detections caused by Soviet satellites picking up sunlight
reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet
commander, based on what he said was a '...funny feeling in my gut', decided the
apparent missile attack was a false alarm. The filtering software code was rewritten.
Does every software project need testers?
While all projects will benefit from testing, some projects may not require
independent test staff to succeed.

Which projects may not need independent test staff? The answer depends on the size
and context of the project, the risks, the development methodology, the skill and
experience of the developers, and other factors. For instance, if the project is a
short-term, small, low risk project, with highly experienced programmers utilizing
thorough unit testing or test-first development, then test engineers may not be
required for the project to succeed.
In some cases an IT organization may be too small or new to have a testing staff even
if the situation calls for it. In these circumstances it may be appropriate to instead use
contractors or outsourcing, or adjust the project management and development
approach (by switching to more senior developers and agile test-first development, for
example). Inexperienced managers sometimes gamble on the success of a project by
skipping thorough testing or having programmers do post-development functional
testing of their own work, a decidedly high risk gamble.

For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually
necessary. As in any business, the use of personnel with specialized skills enhances an
organization's ability to be successful in large, complex, or difficult tasks. It allows for
both a) deeper and stronger skills and b) the contribution of differing perspectives. For
example, programmers typically have the perspective of 'what are the technical issues
in making this functionality work?'. A test engineer typically has the perspective of
'what might go wrong with this functionality, and how can we ensure it meets
expectations?'. Technical people who can be highly effective in approaching tasks
from both of those perspectives are rare, which is why, sooner or later, organizations
bring in test specialists.

Why does software have bugs?

miscommunication or no communication - as to specifics of what an application
should or shouldn't do (the application's requirements).
software complexity - the complexity of current software applications can be difficult
to comprehend for anyone without experience in modern-day software development.
Multi-tiered applications, client-server and distributed applications, data
communications, enormous relational databases, and sheer size of applications have
all contributed to the exponential growth in software/system complexity.
programming errors - programmers, like anyone else, can make mistakes.
changing requirements (whether documented or undocumented) - the end-user may
not understand the effects of changes, or may understand and request them anyway -
redesign, rescheduling of engineers, effects on other projects, work already completed
that may have to be redone or thrown out, hardware requirements that may be affected,
etc. If there are many minor changes or any major changes, known and unknown
dependencies among parts of the project are likely to interact and cause problems, and
the complexity of coordinating changes may result in errors. Enthusiasm of
engineering staff may be affected. In some fast-changing business environments,
continuously modified requirements may be a fact of life. In this case, management
must understand the resulting risks, and QA and test engineers must adapt and plan
for continuous extensive testing to keep the inevitable bugs from running out of
control - see 'What can be done if requirements are changing continuously?' in the
LFAQ. Also see information about 'agile' approaches such as XP, in Part 2 of the FAQ.
time pressures - scheduling of software projects is difficult at best, often requiring a
lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
egos - people prefer to say things like:
   'no problem', 'piece of cake', 'I can whip that out in a few hours', 'it should
be easy to update that old code'
instead of:
   'that adds a lot of complexity and we could end up making a lot of mistakes',
'we have no idea if we can do that; we'll wing it', 'I can't estimate how long it
will take until I take a close look at it', 'we can't figure out what that old
spaghetti code did in the first place'
If there are too many unrealistic 'no problem's', the result is bugs.
poorly documented code - it's tough to maintain and modify code that is badly written
or poorly documented; the result is bugs. In many organizations management provides
no incentive for programmers to document their code or write clear, understandable,
maintainable code. In fact, it's usually the opposite: they get points mostly for quickly
turning out code, and there's job security if nobody else can understand it ('if it was
hard to write, it should be hard to read').
software development tools - visual tools, class libraries, compilers, scripting tools,
etc. often introduce their own bugs or are poorly documented, resulting in added bugs.
How can new Software QA processes be introduced in an existing organization?

A lot depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious
management buy-in is required and a formalized QA process is necessary.
Where the risk is lower, management and organizational buy-in and QA
implementation may be a slower, step-at-a-time process. QA processes should be
balanced with productivity so as to keep bureaucracy from getting out of hand.
For small groups or projects, a more ad-hoc process may be appropriate, depending
on the type of customers and projects. A lot will depend on team leads or managers,
feedback to developers, and ensuring adequate communications among customers,
managers, developers, and testers.
   The most value for effort will often be in (a) requirements management processes,
with a goal of clear, complete, testable requirement specifications embodied in
requirements or design documentation, or in 'agile'-type environments extensive
continuous coordination with end-users, (b) design inspections and code inspections,
and (c) post-mortems/retrospectives.
Other possibilities include incremental self-managed team approaches such as
'Kaizen' methods of continuous process improvement, the Deming-Shewhart
Plan-Do-Check-Act cycle, and others.
Also see 'How can QA processes be implemented without reducing productivity?' in
the LFAQ section.
(See the Bookstore section's 'Software QA', 'Software Engineering', and 'Project
Management' categories for useful books with more information.)
What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans,
code, requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings. Validation typically involves actual testing
and takes place after verifications are completed. The term 'IV & V' refers to
Independent Verification and Validation.

What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes.
Little or no preparation is usually required.

What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people
including a moderator, reader, and a recorder to take notes. The subject of the
inspection is typically a document such as a requirements spec or a test plan, and the
purpose is to find problems and see what's missing, not to fix anything. Attendees
should prepare for this type of meeting by reading through the document; most problems
will be found during this preparation. The result of the inspection meeting should be a
written report. Thorough preparation for inspections is difficult, painstaking work, but
is one of the most cost effective methods of ensuring quality. Employees who are
most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often
hard for organizations to get serious about quality assurance?'. Their skill may have
low visibility but they are extremely valuable to any software development
organization, since bug prevention is far more cost-effective than bug detection.

What kinds of testing should be considered?

Black box testing - not based on any knowledge of internal design or code. Tests are
based on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code.
Tests are based on coverage of code statements, branches, paths, conditions.
unit testing - the most 'micro' scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. Not always easily done unless the
application has a well-designed architecture with tight code; may require developing
test driver modules or test harnesses.
incremental integration testing - continuous testing of an application as new
functionality is added; requires that various aspects of an application's functionality be
independent enough to work separately before all parts of the program are completed,
or that test drivers be developed as needed; done by programmers or by testers.
integration testing - testing of combined parts of an application to determine if they
function together correctly. The 'parts' can be code modules, individual applications,
client and server applications on a network, etc. This type of testing is especially
relevant to client/server and distributed systems.
functional testing - black-box type testing geared to functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that the
programmers shouldn't check that their code works before releasing it (which of
course applies to any stage of testing.)
system testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
end-to-end testing - similar to system testing; the 'macro' end of the test scale;
involves testing of a complete application environment in a situation that mimics
real-world use, such as interacting with a database, using network communications, or
interacting with other hardware, applications, or systems if appropriate.
sanity testing or smoke testing - typically an initial testing effort to determine if a new
software version is performing well enough to accept it for a major testing effort. For
example, if the new software is crashing systems every 5 minutes, bogging down
systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough
condition to warrant further testing in its current state.
regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed,
especially near the end of the development cycle. Automated testing tools can be
especially useful for this type of testing.
acceptance testing - final testing based on specifications of the end-user or customer,
or based on use by end-users/customers over some limited period of time.
load testing - testing an application under heavy loads, such as testing of a web site
under a range of loads to determine at what point the system's response time degrades
or fails.
stress testing - term often used interchangeably with 'load' and 'performance' testing.
Also used to describe such tests as system functional testing while under unusually
heavy loads, heavy repetition of certain actions or inputs, input of large numerical
values, large complex queries to a database system, etc.
performance testing - term often used interchangeably with 'stress' and 'load' testing.
Ideally 'performance' testing (and any other 'type' of testing) is defined in
requirements documentation or QA or Test Plans.
usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will
depend on the targeted end-user or customer. User interviews, surveys, video
recording of user sessions, and other techniques can be used. Programmers and testers
are usually not appropriate as usability testers.
install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
recovery testing - testing how well a system recovers from crashes, hardware failures,
or other catastrophic problems.
failover testing - typically used interchangeably with 'recovery testing'.
security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc; may require sophisticated testing techniques.
compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they
test it.
ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.
context-driven testing - testing driven by an understanding of the environment, culture,
and intended use of software. For example, the testing approach for life-critical
medical equipment software would be completely different than that for a low-cost
computer game.
user acceptance testing - determining if software is satisfactory to an end-user or
customer.
comparison testing - comparing software weaknesses and strengths to competing
products.
alpha testing - testing of an application when development is nearing completion;
minor design changes may still be made as a result of such testing. Typically done by
end-users or others, not by programmers or testers.
beta testing - testing when development and testing are essentially completed and
final bugs and problems need to be found before final release. Typically done by
end-users or others, not by programmers or testers.
mutation testing - a method for determining if a set of test data or test cases is useful,
by deliberately introducing various code changes ('bugs') and retesting with the
original test data/cases to determine if the 'bugs' are detected. Proper implementation
requires large computational resources.
(See the Bookstore section's 'Software Testing' category for useful books on Software
Testing.)
What are 5 common problems in the software development process?

poor requirements - if requirements are unclear, incomplete, too general, and not
testable, there will be problems.
unrealistic schedule - if too much work is crammed in too little time, problems are
inevitable.
inadequate testing - no one will know whether or not the program is any good until
the customer complains or systems crash.
featuritis - requests to pile on new features after development is underway; extremely
common.
miscommunication - if developers don't know what's needed or customers have
erroneous expectations, problems are guaranteed.
(See the Bookstore section's 'Software QA', 'Software Engineering', and 'Project
Management' categories for useful books with more information.)
What are 5 common solutions to software development problems?
solid requirements - clear, complete, detailed, cohesive, attainable, testable
requirements that are agreed to by all players. Use prototypes to help nail down
requirements. In 'agile'-type environments, continuous close coordination with
customers/end-users is necessary.
realistic schedules - allow adequate time for planning, design, testing, bug fixing,
re-testing, changes, and documentation; personnel should be able to complete the
project without burning out.
adequate testing - start testing early on, re-test after fixes or changes, plan for
adequate time for testing and bug-fixing. 'Early' testing ideally includes unit testing by
developers and built-in testing and diagnostic capabilities.
stick to initial requirements as much as possible - be prepared to defend against
excessive changes and additions once development has begun, and be prepared to
explain consequences. If changes are necessary, they should be adequately reflected in
related schedule changes. If possible, work closely with customers/end-users to
manage expectations. This will provide them a higher comfort level with their
requirements decisions and minimize excessive changes later on.
communication - require walkthroughs and inspections when appropriate; make
extensive use of group communication tools - groupware, wikis, bug-tracking tools
and change management tools, intranet capabilities, etc.; ensure that
information/documentation is available and up-to-date - preferably electronic, not
paper; promote teamwork and cooperation; use prototypes and/or continuous
communication with end-users if possible to clarify expectations.
(See the Bookstore section's 'Software QA', 'Software Engineering', and 'Project
Management' categories for useful books with more information.)
What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously
a subjective term. It will depend on who the 'customer' is and their overall influence in
the scheme of things. A wide-angle view of the 'customers' of a software development
project might include end-users, customer acceptance testers, customer contract
officers, customer management, the development organization's
management/accountants/testers/salespeople, future software maintenance engineers,
stockholders, magazine columnists, etc. Each type of 'customer' will have their own
slant on 'quality' - the accounting department might define quality in terms of profits
while an end-user might define quality as user-friendly and bug-free. (See the
Bookstore section's 'Software QA' category for useful books with more information.)

What is 'good code'?
'Good code' is code that works, is bug free, and is readable and maintainable. Some
organizations have coding 'standards' that all developers are supposed to adhere to,
but everyone has different ideas about what's best, or what is too many or too few
rules. There are also various theories and metrics, such as McCabe Complexity
metrics. It should be kept in mind that excessive use of standards and rules can stifle
productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can
be used to check for problems and enforce standards.
For C and C++ coding, here are some typical ideas to consider in setting
rules/standards; these may or may not apply to a particular situation:

minimize or eliminate use of global variables.
use descriptive function and method names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive (use
of more than 20 characters is not out of line); be consistent in naming conventions.
use descriptive variable names - use both upper and lower case, avoid abbreviations,
use as many characters as necessary to be adequately descriptive (use of more than 20
characters is not out of line); be consistent in naming conventions.
function and method sizes should be minimized; less than 100 lines of code is good,
less than 50 lines is preferable.
function descriptions should be clearly spelled out in comments preceding a function's
code.
organize code for readability.
use whitespace generously - vertically and horizontally.
each line of code should contain 70 characters max.
one code statement per line.
coding style should be consistent throughout a program (e.g., use of brackets,
indentations, naming conventions, etc.)
in adding comments, err on the side of too many rather than too few comments; a
common rule of thumb is that there should be at least as many lines of comments
(including header blocks) as lines of code.
no matter how small, an application should include documentation of the overall
program function and flow (even a few paragraphs is better than nothing); or if
possible a separate flow chart and detailed program documentation.
make extensive use of error handling procedures and status and error logging.
for C++, to minimize complexity and increase maintainability, avoid too many levels
of inheritance in class hierarchies (relative to the size and complexity of the
application). Minimize use of multiple inheritance, and minimize use of operator
overloading (note that the Java programming language eliminates multiple inheritance
and operator overloading.)
for C++, keep class methods small, less than 50 lines of code per method is preferable.
for C++, make liberal use of exception handlers.
What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal
design'. Good internal design is indicated by software code whose overall structure is
clear, understandable, easily modifiable, and maintainable; is robust with sufficient
error-handling and status logging capability; and works correctly when implemented.
Good functional design is indicated by an application whose functionality can be
traced back to customer and end-user requirements. (See further discussion of
functional and internal design in 'What's the big deal about requirements?' in FAQ #2.)
For programs that have a user interface, it's often a good idea to assume that the end
user will have little computer knowledge and may not read a user manual or even the
on-line help; some common rules-of-thumb include:

the program should act in a way that least surprises the user
it should always be evident to the user what can be done next and how to exit
the program shouldn't let the users do something stupid without warning them.
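The last rule of thumb can be sketched in a few lines of C++ (confirmDelete is a hypothetical helper, not part of any real UI framework): by requiring an explicit "yes" before a destructive operation, a stray Enter press or a typo is treated as a refusal rather than consent.

```cpp
#include <cassert>
#include <string>

// Hypothetical confirmation check for a destructive action:
// only an explicit "yes" proceeds; anything else, including
// an empty answer, is treated as "no".
bool confirmDelete(const std::string& answer) {
    return answer == "yes";
}
```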
What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the
U.S. Defense Department to help improve software development processes.
CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity
Model Integration'), developed by the SEI. It's a model of 5 levels of process
'maturity' that determine effectiveness in delivering quality software. It is geared to
large organizations such as large U.S. Defense Department contractors. However,
many of the QA processes involved are appropriate to any organization, and if
reasonably applied can be helpful. Organizations can receive CMMI ratings by
undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by
individuals to successfully complete projects. Few if any processes in place;
successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning,
and configuration management processes are in place; successful practices can be
repeated.
Level 3 - standard software development and maintenance processes are integrated
throughout an organization; a Software Engineering Process Group is in place to
oversee software processes, and training programs are used to ensure understanding
and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new
processes and technologies can be predicted and effectively implemented when
required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed.
Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5.
(For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3,
2% at 4, and 0.4% at 5.) The median size of organizations was 100 software
engineering/maintenance personnel; 32% of organizations were U.S. federal
contractors or agencies. For those rated at Level 1, the most problematical key
process area was in Software Quality Assurance.
ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard
(which replaces the previous standard of 1994) concerns quality systems that are
assessed by outside auditors, and it applies to many kinds of production and
manufacturing organizations, not just software. It covers documentation, design,
development, production, testing, installation, servicing, and other processes. The full
set of standards consists of: (a)Q9001-2000 - Quality Management Systems:
Requirements; (b)Q9000-2000 - Quality Management Systems: Fundamentals and
Vocabulary; (c)Q9004-2000 - Quality Management Systems: Guidelines for
Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses
an organization, and certification is typically good for about 3 years, after which a
complete reassessment is required. Note that ISO certification does not necessarily
indicate quality products - it indicates only that documented processes are followed.
Also see the ISO web site for the latest information; in the U.S. the standards can
be purchased via the ASQ web site.
ISO 9126 defines six high level quality characteristics that can be used in software
evaluation. It includes functionality, reliability, usability, efficiency, maintainability,
and portability.
IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates
standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI
Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008),
'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730),
and others.
ANSI = 'American National Standards Institute', the primary industrial standards body
in the U.S.; publishes some software-related standards in conjunction with the IEEE
and ASQ (American Society for Quality).
Other software development/IT management process assessment methods besides
CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, and MOF.
See the 'Other Resources' section for further information available on the web.
What is the 'software life cycle'?
The life cycle begins when an application is first conceived and ends when it is no
longer in use. It includes aspects such as initial concept, requirements analysis,
functional design, internal design, documentation planning, test planning, coding,
document preparation, integration, testing, maintenance, updates, retesting, phase-out,
and other aspects. (See the Bookstore section's 'Software QA', 'Software Engineering',
and 'Project Management' categories for useful books with more information.)