									Week 01 | Readings

         "Computer Society and ACM Approve Software Engineering Code of Ethics" Don Gotterbarn, Keith Miller, Simon
          Rogerson. Executive Committee, IEEE-CS/ACM Joint Task Force on Software Engineering Ethics and Professional
          Practices. IEEE Computer, October 1999. http://www.computer.org/computer/code-of-ethics.pdf (Disponible en la
          biblioteca digital)


                                       Computer Society and ACM
                                           Approve Software
                                       Engineering Code of Ethics
                              Don Gotterbarn, Keith Miller, Simon Rogerson
                           Executive Committee, IEEE-CS/ACM Joint Task Force
                         on Software Engineering Ethics and Professional Practices
Software engineering has evolved over the past several years from an activity of computer engineering to a discipline in
its own right. With an eye toward formalizing the field, the IEEE Computer Society has engaged in several activities to
advance the professionalism of software engineering, such as establishing certification requirements for software
developers. To complement this work, a joint task force of the Computer Society and the ACM has recently established
another linchpin of professionalism for software engineering: a code of ethics.

After an extensive review process, version 5.2 of the Software Engineering Code of Ethics and Professional Practice,
recommended last year by the IEEE-CS/ACM Joint Task Force on Software Engineering Ethics and Professional
Practices, was adopted by both the IEEE Computer Society and the ACM.

PURPOSE
The Software Engineering Code of Ethics and Professional Practice, intended as a standard for teaching and practicing
software engineering, documents the ethical and professional obligations of software engineers. The code should instruct
practitioners about the standards society expects them to meet, about what their peers strive for, and about what to expect
of one another. In addition, the code should inform the public about the responsibilities that are important to the
profession.

Adopted by the Computer Society and the ACM—two leading international computing societies—the code of ethics
is intended as a guide for members of the evolving software engineering profession. The code was developed by a
multinational task force with additional input from other professionals from industry, government posts, military
installations, and educational professions.

CHANGES TO THE CODE
Major revisions were made between version 3.0—widely distributed through Computer (Don Gotterbarn, Keith Miller,
and Simon Rogerson, "Software Engineering Code of Ethics, Version 3.0," November 1997, pp. 88-92) and
Communications of the ACM—and version 5.2, the recently approved version. The preamble was significantly revised to
include specific standards that can help professionals make ethical decisions.

To facilitate a quick review of the principles, a shortened version of the code was added to the front of the full version.
This shortened version is not intended to be a standalone abbreviated code. The details of the full version are necessary to
provide clear guidance for the practical application of these ethical principles. In addition to these changes, the eight
principles were reordered to reflect the order in which software professionals should consider their ethical obligations:
Version 3.0's first principle concerned the product, while version 5.2 begins with the public. The primacy of well-being
and quality of life of the public in all decisions related to software engineering is emphasized throughout the code. This
obligation is the final arbiter in all decisions: "In all these judgments concern for the health, safety and welfare of the
public is primary; that is, the 'Public Interest' is central to this Code." For example, the whistle-blowing clauses
(6.11-6.13) describe a software engineer's obligations when public safety is threatened by defective software development
and describe steps to meet those obligations.

The code now contains an open-ended clause (8.07) against using prejudices or bias in any decision making, written
broadly enough to include consideration of new social concerns. Finally, the code includes specific language
about the importance of ethical behavior during the maintenance phase of software development. The new text
reflects the amount of time a computer professional spends modifying and improving existing software and also
makes clear that we need to treat maintenance with the same professionalism as new development. The quality of
maintenance depends upon the professionalism of the software engineer, because maintenance is more likely to be
scrutinized only locally, whereas new development is generally reviewed at a broader corporate level.

In the same spirit that created the code of ethics, the Computer Society and the ACM continue to support the software
engineering profession through the Software Engineering Professionalism and Ethics Project (http://computer.org/
tab/swecc/Sepec.htm). This project will help make the code an effective practical tool by publishing case studies,
supporting further corporate adoption of the code, developing curriculum material, running workshops, and collaborating
with licensing bodies and professional societies.
SHORT VERSION: PREAMBLE

The short version of the code summarizes aspirations at a high level of abstraction. The clauses that are included in the
full version give examples and details of how these aspirations change the way we act as software engineering
professionals. Without the aspirations, the details can become legalistic and tedious; without the details, the aspirations
can become high-sounding but empty; together, the aspirations and the details form a cohesive code. Software engineers
shall commit themselves to making the analysis, specification, design, development, testing, and maintenance of software
a beneficial and respected profession. In accordance with their commitment to the health, safety, and welfare of the public,
software engineers shall adhere to the following eight Principles:

     1.   Public. Software engineers shall act consistently with the public interest.
     2.   Client and employer. Software engineers shall act in a manner that is in the best interests of their client and
          employer, consistent with the public interest.
     3.   Product. Software engineers shall ensure that their products and related modifications meet the highest
          professional standards possible.
     4.   Judgment. Software engineers shall maintain integrity and independence in their professional judgment.
     5.   Management. Software engineering managers and leaders shall subscribe to and promote an ethical approach to
          the management of software development and maintenance.
     6.   Profession. Software engineers shall advance the integrity and reputation of the profession consistent with the
          public interest.
     7.   Colleagues. Software engineers shall be fair to and supportive of their colleagues.
     8.   Self. Software engineers shall participate in lifelong learning regarding the practice of their profession and
          shall promote an ethical approach to the practice of the profession.



                                           Software Engineering Code of
                                          Ethics and Professional Practice

FULL VERSION: PREAMBLE

Computers have a central and growing role in commerce, industry, government, medicine, education, entertainment, and
society at large. Software engineers are those who contribute, by direct participation or by teaching, to the analysis,
specification, design, development, certification, maintenance, and testing of software systems. Because of their roles in
developing software systems, software engineers have significant opportunities to do good or cause harm, to enable others
to do good or cause harm, or to influence others to do good or cause harm. To ensure, as much as possible, that their
efforts will be used for good, software engineers must commit themselves to making software engineering a beneficial
and respected profession. In accordance with that commitment, software engineers shall adhere to the following Code
of Ethics and Professional Practice. The Code contains eight Principles related to the behavior of and decisions
made by professional software engineers, including practitioners, educators, managers, supervisors, and policy makers, as
well as trainees and students of the profession. The Principles identify the ethically responsible relationships in which
individuals, groups, and organizations participate and the primary obligations within these relationships. The Clauses of
each Principle are illustrations of some of the obligations included in these relationships. These obligations are founded in
the software engineer's humanity, in special care owed to people affected by the work of software engineers, and in the
unique elements of the practice of software engineering. The Code prescribes these as obligations of anyone claiming to
be or aspiring to be a software engineer. It is not intended that the individual parts of the Code be used in isolation to
justify errors of omission or commission. The list of Principles and Clauses is not exhaustive. The Clauses should not be
read as separating the acceptable from the unacceptable in professional conduct in all practical situations. The Code is not
a simple ethical algorithm that generates ethical decisions. In some situations, standards may be in tension with each other
or with standards from other sources. These situations require the software engineer to use ethical judgment to act in a
manner that is most consistent with the spirit of the Code of Ethics and Professional Practice, given the circumstances.
Ethical tensions can best be addressed by thoughtful consideration of fundamental principles, rather than blind
reliance on detailed regulations. These Principles should influence software engineers to consider broadly who is affected
by their work; to examine if they and their colleagues are treating other human beings with due respect; to consider how
the public, if reasonably well informed, would view their decisions; to analyze how the least empowered will be affected
by their decisions; and to consider whether their acts would be judged worthy of the ideal professional working as
a software engineer. In all these judgments concern for the health, safety and welfare of the public is primary; that is, the
"Public Interest" is central to this Code. The dynamic and demanding context of software engineering requires a code
that is adaptable and relevant to new situations as they occur. However, even in this generality, the Code provides support
for software engineers and managers of software engineers who need to take positive action in a specific case by
documenting the ethical stance of the profession. The Code provides an ethical foundation to which individuals within
teams and the team as a whole can appeal. The Code helps to define those actions that are ethically improper to
request of a software engineer or teams of software engineers. The Code is not simply for adjudicating
the nature of questionable acts; it also has an important educational function. As this Code expresses the consensus of
the profession on ethical issues, it is a means to educate both the public and aspiring professionals about the ethical
obligations of all software engineers.
PRINCIPLES
Principle 1: Public

Software engineers shall act consistently with the public interest. In particular, software engineers shall, as appropriate:
1.01 Accept full responsibility for their own work.
1.02 Moderate the interests of the software engineer, the employer, the client, and the users with the public good.
1.03 Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests,
     and does not diminish quality of life, diminish privacy, or harm the environment. The ultimate effect of the work
     should be to the public good.
1.04 Disclose to appropriate persons or authorities any actual or potential danger to the user, the public, or the
     environment, that they reasonably believe to be associated with software or related documents.
1.05 Cooperate in efforts to address matters of grave public concern caused by software, its installation, maintenance,
     support, or documentation.
1.06 Be fair and avoid deception in all statements, particularly public ones, concerning software or related documents,
     methods, and tools.
1.07 Consider issues of physical disabilities, allocation of resources, economic disadvantage, and other factors that can
     diminish access to the benefits of software.
1.08 Be encouraged to volunteer professional skills to good causes and to contribute to public education concerning the
     discipline.

Principle 2: Client and employer

Software engineers shall act in a manner that is in the best interests of their client and employer, consistent with the
public interest. In particular, software engineers shall, as appropriate:

     2.01 Provide service in their areas of competence, being honest and forthright about any limitations of their
          experience and education.
     2.02 Not knowingly use software that is obtained or retained either illegally or unethically.
     2.03 Use the property of a client or employer only in ways properly authorized, and with the client's or employer's
          knowledge and consent.
     2.04 Ensure that any document upon which they rely has been approved, when required, by someone authorized to
          approve it.
     2.05 Keep private any confidential information gained in their professional work, where such confidentiality is
          consistent with the public interest and consistent with the law.
     2.06 Identify, document, collect evidence, and report to the client or the employer promptly if, in their opinion, a
          project is likely to fail, to prove too expensive, to violate intellectual property law, or otherwise to be
          problematic.
     2.07 Identify, document, and report significant issues of social concern, of which they are aware, in software or
          related documents, to the employer or the client.
     2.08 Accept no outside work detrimental to the work they perform for their primary employer.
     2.09 Promote no interest adverse to their employer or client, unless a higher ethical concern is being compromised; in
          that case, inform the employer or another appropriate authority of the ethical concern.

Principle 3: Product

Software engineers shall ensure that their products and related modifications meet the highest professional standards
possible. In particular, software engineers shall, as appropriate:

3.01 Strive for high quality, acceptable cost, and a reasonable schedule, ensuring significant tradeoffs are clear to and
     accepted by the employer and the client, and are available for consideration by the user and the public.
3.02 Ensure proper and achievable goals and objectives for any project on which they work or propose.
3.03 Identify, define, and address ethical, economic, cultural, legal, and environmental issues related to work projects.
3.04 Ensure that they are qualified for any project on which they work or propose to work, by an appropriate combination
     of education, training, and experience.
3.05 Ensure that an appropriate method is used for any project on which they work or propose to work.
3.06 Work to follow professional standards, when available, that are most appropriate for the task at hand, departing from
     these only when ethically or technically justified.
3.07 Strive to fully understand the specifications for software on which they work.
3.08 Ensure that specifications for software on which they work have been well documented, satisfy the user's
     requirements, and have the appropriate approvals.
3.09 Ensure realistic quantitative estimates of cost, scheduling, personnel, quality, and outcomes on any project on which
     they work or propose to work and provide an uncertainty assessment of these estimates.
3.10 Ensure adequate testing, debugging, and review of software and related documents on which they work.
3.11 Ensure adequate documentation, including significant problems discovered and solutions adopted, for any project on
     which they work.
3.12 Work to develop software and related documents that respect the privacy of those who will be affected by that
     software.
3.13 Be careful to use only accurate data derived by ethical and lawful means, and use it only in ways properly authorized.
3.14 Maintain the integrity of data, being sensitive to outdated or flawed occurrences.
3.15 Treat all forms of software maintenance with the same professionalism as new development.
Principle 4: Judgment

Software engineers shall maintain integrity and independence in their professional judgment. In particular, software
engineers shall, as appropriate:

4.01 Temper all technical judgments by the need to support and maintain human values.
4.02 Only endorse documents either prepared under their supervision or within their areas of competence and with which
     they are in agreement.
4.03 Maintain professional objectivity with respect to any software or related documents they are asked to evaluate.
4.04 Not engage in deceptive financial practices such as bribery, double billing, or other improper financial practices.
4.05 Disclose to all concerned parties those conflicts of interest that cannot reasonably be avoided or escaped.
4.06 Refuse to participate, as members or advisors, in a private, governmental, or professional body concerned with
     software-related issues in which they, their employers, or their clients have undisclosed potential conflicts of interest.

Principle 5: Management

Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of
software development and maintenance. In particular, those managing or leading software engineers shall, as appropriate:

5.01 Ensure good management for any project on which they work, including effective procedures for promotion of
     quality and reduction of risk.
5.02 Ensure that software engineers are informed of standards before being held to them.
5.03 Ensure that software engineers know the employer's policies and procedures for protecting passwords, files, and
     information that is confidential to the employer or confidential to others.
5.04 Assign work only after taking into account appropriate contributions of education and experience tempered with a
     desire to further that education and experience.
5.05 Ensure realistic quantitative estimates of cost, scheduling, personnel, quality, and outcomes on any project on which
     they work or propose to work, and provide an uncertainty assessment of these estimates.
5.06 Attract potential software engineers only by full and accurate description of the conditions of employment.
5.07 Offer fair and just remuneration.
5.08 Not unjustly prevent someone from taking a position for which that person is suitably qualified.
5.09 Ensure that there is a fair agreement concerning ownership of any software, processes, research, writing, or other
     intellectual property to which a software engineer has contributed.
5.10 Provide for due process in hearing charges of violation of an employer's policy or of this Code.
5.11 Not ask a software engineer to do anything inconsistent with this Code.
5.12 Not punish anyone for expressing ethical concerns about a project.

Principle 6: Profession

Software engineers shall advance the integrity and reputation of the profession consistent with the public interest. In
particular, software engineers shall, as appropriate:

6.01 Help develop an organizational environment favorable to acting ethically.
6.02 Promote public knowledge of software engineering.
6.03 Extend software engineering knowledge by appropriate participation in professional organizations, meetings, and
     publications.
6.04 Support, as members of a profession, other software engineers striving to follow this Code.
6.05 Not promote their own interest at the expense of the profession, client, or employer.
6.06 Obey all laws governing their work, unless, in exceptional circumstances, such compliance is inconsistent with the
     public interest.
6.07 Be accurate in stating the characteristics of software on which they work, avoiding not only false claims but also
     claims that might reasonably be supposed to be speculative, vacuous, deceptive, misleading, or doubtful.
6.08 Take responsibility for detecting, correcting, and reporting errors in software and associated documents on which
     they work.
6.09 Ensure that clients, employers, and supervisors know of the software engineer's commitment to this Code of Ethics,
     and the subsequent ramifications of such commitment.
6.10 Avoid associations with businesses and organizations which are in conflict with this Code.
6.11 Recognize that violations of this Code are inconsistent with being a professional software engineer.
6.12 Express concerns to the people involved when significant violations of this Code are detected unless this is
     impossible, counterproductive, or dangerous.
6.13 Report significant violations of this Code to appropriate authorities when it is clear that consultation with people
     involved in these significant violations is impossible, counterproductive, or dangerous.
Principle 7: Colleagues

Software engineers shall be fair to and supportive of their colleagues. In particular, software engineers shall, as
appropriate:

7.01 Encourage colleagues to adhere to this Code.
7.02 Assist colleagues in professional development.
7.03 Credit fully the work of others and refrain from taking undue credit.
7.04 Review the work of others in an objective, candid, and properly documented way.
7.05 Give a fair hearing to the opinions, concerns, or complaints of a colleague.
7.06 Assist colleagues in being fully aware of current standard work practices including policies and procedures for
     protecting passwords, files, and other confidential information, and security measures in general.
7.07 Not unfairly intervene in the career of any colleague; however, concern for the employer, the client, or public interest
     may compel software engineers, in good faith, to question the competence of a colleague.
7.08 In situations outside of their own areas of competence, call upon the opinions of other professionals who have
     competence in those areas.

Principle 8: Self

Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an
ethical approach to the practice of the profession. In particular, software engineers shall continually endeavor to:

8.01 Further their knowledge of developments in the analysis, specification, design, development, maintenance, and
     testing of software and related documents, together with the management of the development process.
8.02 Improve their ability to create safe, reliable, and useful quality software at reasonable cost and within a reasonable
     time.
8.03 Improve their ability to produce accurate, informative, and well-written documentation.
8.04 Improve their understanding of the software and related documents on which they work and of the environment in
     which they will be used.
8.05 Improve their knowledge of relevant standards and the law governing the software and related documents on which
     they work.
8.06 Improve their knowledge of this Code, its interpretation, and its application to their work.
8.07 Not give unfair treatment to anyone because of any irrelevant prejudices.
8.08 Not influence others to undertake any action that involves a breach of this Code.
8.09 Recognize that personal violations of this Code are inconsistent with being a professional software engineer.

       "Education for Sustainable Development Toolkit" Rosalyn McKeown,Ph.D. Version 2, July 2002.
        http://www.esdtoolkit.org/discussion/default.htm


Education is an essential tool for achieving sustainability. People around the world recognize that current
economic development trends are not sustainable and that public awareness, education, and training are
key to moving society toward sustainability. Beyond that, there is little agreement. People argue about the
meaning of sustainable development and whether or not it is attainable. They have different visions of what
sustainable societies will look like and how they will function. These same people wonder why educators
have not moved more quickly to develop education for sustainability (EfS) programs. The lack of agreement
and definition has stymied efforts to move education for sustainable development (ESD) forward.

It is curious to note that while we have difficulty envisioning a sustainable world, we have no difficulty
identifying what is unsustainable in our societies. We can rapidly create a laundry list of problems -
inefficient use of energy, lack of water conservation, increased pollution, abuses of human rights, overuse of
personal transportation, consumerism, etc. But we should not chide ourselves because we lack a clear
definition of sustainability. Indeed, many truly great concepts of the human world - among them democracy
and justice - are hard to define and have multiple expressions in cultures around the world.

In the Toolkit, we use three terms synonymously and interchangeably: education for sustainable
development (ESD), education for sustainability (EfS), and sustainability education (SE). We use ESD most
often, because it is the terminology used frequently at the international level and within UN documents.
Locally or nationally, the ESD effort may be named or described in many ways because of language and
cultural differences. As with all work related to sustainable development, the name and the content must be
locally relevant and culturally appropriate.

An important distinction is the difference between education about sustainable development and education
for sustainable development. The first is an awareness lesson or theoretical discussion. The second is the use
of education as a tool to achieve sustainability. In our opinion, more than a theoretical discussion is needed
at this critical juncture in time. While some people argue that "for" indicates indoctrination, we think "for"
indicates a purpose. All education serves a purpose or society would not invest in it. Driver education, for
example, seeks to make our roads safer for travelers. Fire-safety education seeks to prevent fires and tragic
loss of lives and property. ESD promises to make the world more livable for this and future generations. Of
course, a few will abuse or distort ESD and turn it into indoctrination. This would be antithetical to the
nature of ESD, which, in fact, calls for giving people knowledge and skills for lifelong learning to help them
find new solutions to their environmental, economic, and social issues.

Sustainable Development

Sustainable development is a difficult concept to define; it is also continually evolving, which makes it
doubly difficult to define. One of the original descriptions of sustainable development is credited to the
Brundtland Commission: "Sustainable development is development that meets the needs of the present
without compromising the ability of future generations to meet their own needs" (World Commission on
Environment and Development, 1987, p 43). Sustainable development is generally thought to have three
components: environment, society, and economy. The well-being of these three areas is intertwined, not
separate. For example, a healthy, prosperous society relies on a healthy environment to provide food and
resources, safe drinking water, and clean air for its citizens. The sustainability paradigm rejects the
contention that casualties in the environmental and social realms are inevitable and acceptable
consequences of economic development. Thus, the authors consider sustainability to be a paradigm for
thinking about a future in which environmental, societal, and economic considerations are balanced in the
pursuit of development and improved quality of life.
Principles of Sustainable Development

Many governments and individuals have pondered what sustainable development means beyond a simple
one-sentence definition. The Rio Declaration on Environment and Development fleshes out the definition by
listing 18 principles of sustainability.

People are entitled to a healthy and productive life in harmony with nature.

Development today must not undermine the development and environment needs of present and future
generations.

Nations have the sovereign right to exploit their own resources, but without causing environmental damage
beyond their borders.

Nations shall develop international laws to provide compensation for damage that activities under their
control cause to areas beyond their borders.

Nations shall use the precautionary approach to protect the environment. Where there are threats of
serious or irreversible damage, scientific uncertainty shall not be used to postpone cost-effective measures
to prevent environmental degradation.

In order to achieve sustainable development, environmental protection shall constitute an integral part of
the development process, and cannot be considered in isolation from it. Eradicating poverty and reducing
disparities in living standards in different parts of the world are essential to achieve sustainable
development and meet the needs of the majority of people.

Nations shall cooperate to conserve, protect and restore the health and integrity of the Earth's ecosystem.
The developed countries acknowledge the responsibility that they bear in the international pursuit of
sustainable development in view of the pressures their societies place on the global environment and of the
technologies and financial resources they command.

Nations should reduce and eliminate unsustainable patterns of production and consumption, and promote
appropriate demographic policies.

Environmental issues are best handled with the participation of all concerned citizens. Nations shall facilitate
and encourage public awareness and participation by making environmental information widely available.

Nations shall enact effective environmental laws, and develop national law regarding liability for the victims
of pollution and other environmental damage. Where they have authority, nations shall assess the
environmental impact of proposed activities that are likely to have a significant adverse impact.

Nations should cooperate to promote an open international economic system that will lead to economic
growth and sustainable development in all countries. Environmental policies should not be used as an
unjustifiable means of restricting international trade.

The polluter should, in principle, bear the cost of pollution.

Nations shall warn one another of natural disasters or activities that may have harmful transboundary
impacts.

Sustainable development requires better scientific understanding of the problems. Nations should share
knowledge and innovative technologies to achieve the goal of sustainability.

The full participation of women is essential to achieve sustainable development. The creativity, ideals and
courage of youth and the knowledge of indigenous people are needed too. Nations should recognize and
support the identity, culture and interests of indigenous people.

Warfare is inherently destructive of sustainable development, and Nations shall respect international laws
protecting the environment in times of armed conflict, and shall cooperate in their further establishment.
Peace, development and environmental protection are interdependent and indivisible.

The "Rio principles" give us parameters for envisioning locally relevant and culturally appropriate
sustainable development for our own nations, regions, and communities. These principles help us to grasp
the abstract concept of sustainable development and begin to implement it.

Sustainability

Here are some effective explanations of sustainable development created for different audiences.

Sustainable development has three components: environment, society, and economy. If you consider the
three to be overlapping circles of the same size, the area of overlap in the center is human well-being. As the
environment, society, and economy become more aligned, the area of overlap increases, and so does
human well-being.

The National Town Meeting on Sustainability (May 1999) in Detroit, Michigan, established that the term
"sustainable development," although frequently used, is not well understood. We believe that it means new
technologies and new ways of doing business, which allow us to improve quality of life today in all
economic, environmental, and social dimensions, without impairing the ability of future generations to
enjoy quality of life and opportunity at least as good as ours.

The human rights community says that sustainability is attainable through and supported by peace, justice,
and democracy.

The Great Law of the Hau de no sau nee (Six Nations Iroquois Confederation) says that in every deliberation
we must consider the impact on the seventh generation.

Economics educators say sustainability is living on the interest rather than the principal.

History of Education for Sustainable Development

From the time sustainable development was first endorsed at the UN General Assembly in 1987, the parallel
concept of education to support sustainable development has also been explored. From 1987 to 1992, the
concept of sustainable development matured as committees discussed, negotiated, and wrote the 40
chapters of Agenda 21. Initial thoughts concerning ESD were captured in Chapter 36 of Agenda 21,
"Promoting Education, Public Awareness, and Training."

Unlike most education movements, ESD was initiated by people outside of the education community. In
fact, one major push for ESD came from international political and economic forums (e.g., United Nations,
Organization for Economic Co-operation and Development, Organization of American States). As the
concept of sustainable development was discussed and formulated, it became apparent that education is
key to sustainability. In many countries, ESD is still being shaped by those outside the education community.
The concepts and content of ESD in these cases are developed by ministries, such as those of environment
and health, and then given to educators to deliver. Conceptual development independent of educator input
is a problem recognized by international bodies as well as educators.
Education: Promise and Paradox

Two of the major issues in the international dialog on sustainability are population and resource
consumption. Increases in population and resource use are thought to jeopardize a sustainable future, and
education is linked both to fertility rate and resource consumption. Educating females reduces fertility rates
and therefore population growth. By reducing fertility rates and the threat of overpopulation a country also
facilitates progress toward sustainability. The opposite is true for the relationship between education and
resource use. Generally, more highly educated people, who have higher incomes, consume more resources
than poorly educated people, who tend to have lower incomes. In this case, more education increases the
threat to sustainability.

Unfortunately, the most educated nations leave the deepest ecological footprints, meaning they have the
highest per-capita rates of consumption. This consumption drives resource extraction and manufacturing
around the world. The figures from the United Nations Educational, Scientific and Cultural Organization
(UNESCO) Statistical Yearbook and World Education Report, for example, show that in the United States
more than 80 percent of the population has some post-secondary education, and about 25 percent of the
population has a four-year degree from a university. Statistics also show that per-capita energy use and
waste generation in the United States are nearly the highest in the world. In the case of the United States,
more education has not led to sustainability. Clearly, simply educating citizenry to higher levels is not
sufficient for creating sustainable societies. The challenge is to raise the education levels without creating an
ever-growing demand for resources and consumer goods and the accompanying production of pollutants.
Meeting this challenge depends on reorienting curriculums to address the need for more-sustainable
production and consumption patterns.

Every nation will need to reexamine curriculum at all levels (i.e., pre-school to professional education).
While it is evident that it is difficult to teach environmental literacy, economics literacy, or civics without
basic literacy, it is also evident that simply increasing basic literacy, as it is currently taught in most
countries, will not support a sustainable society.

Thresholds of Education and Sustainability

Consider for instance, that when education levels are low, economies are often limited to resource
extraction and agriculture. In many countries, the current level of basic education is so low that it severely
hinders development options and plans for a sustainable future. A higher education level is necessary to
create jobs and industries that are "greener" (i.e., those having lower environmental impacts) and more
sustainable.

The relationship between education and sustainable development is complex. Generally, research shows
that basic education is key to a nation's ability to develop and achieve sustainability targets. Research has
shown that education can improve agricultural productivity, enhance the status of women, reduce
population growth rates, enhance environmental protection, and generally raise the standard of living. But
the relationship is not linear. For example, four to six years of education is the minimum threshold for
increasing agricultural productivity. Literacy and numeracy allow farmers to adapt to new agricultural
methods, cope with risk, and respond to market signals. Literacy also helps farmers mix and apply chemicals
(e.g., fertilizers and pesticides) according to manufacturers' directions, thereby reducing the risks to the
environment and human health. A basic education also helps farmers gain title to their land and apply for
credit at banks and other lending institutions. Effects of education on agriculture are greatest when the
proportion of females educated to threshold level equals that of males.

Education benefits a woman in life-altering ways. An educated woman gains higher status and an enhanced
sense of efficacy. She tends to marry later and have greater bargaining power and success in the "marriage
market." She also has greater bargaining power in the household after marriage. An educated woman tends
to desire a smaller family size and seek the health care necessary to do so. She has fewer and healthier
children. An educated woman has high educational and career expectations of her children, both boys and
girls. For females, education profoundly changes their lives, how they interact with society, and their
economic status. Educating women creates more equitable lives for women and their families and increases
their ability to participate in community decision making and work toward achieving local sustainability
goals.
Another educational threshold is primary education for women. At least a primary education is required
before birthrate drops and infant health and children's education improve. Nine to 12 years of education are
required for increased industrial productivity. This level of education also increases the probability of
employment in a changing economy. Few studies have been carried out on how education affects
environmental stewardship, but one study suggests that a lower-secondary education (or approximately
nine years) is necessary to intensify use of existing land and to provide alternative off-farm employment and
migration from rural areas. Finally, a subtle combination of higher education, research, and life-long learning
is necessary for a nation to shift to an information or knowledge-based economy, which is fueled less by
imported technology and more by local innovation and creativity (UNESCO-ACEID, 1997).

Education directly affects sustainability plans in the following three areas:

Implementation. An educated citizenry is vital to implementing informed and sustainable development. In
fact, a national sustainability plan can be enhanced or limited by the level of education attained by the
nation's citizens. Nations with high illiteracy rates and unskilled workforces have fewer development
options. For the most part, these nations are forced to buy energy and manufactured goods on the
international market with hard currency. To acquire hard currency, these countries need international trade;
usually this leads to exploitation of natural resources or conversion of lands from self-sufficient family-based
farming to cash-crop agriculture. An educated workforce is key to moving beyond an extractive and
agricultural economy.

Decision making. Good community-based decisions - which will affect social, economic, and environmental
well-being - also depend on educated citizens. Development options, especially "greener" development
options, expand as education increases. For example, a community with an abundance of skilled labor and
technically trained people can persuade a corporation to locate a new information-technology and software-
development facility nearby. Citizens can also act to protect their communities by analyzing reports and data
that address community issues and helping shape a community response. For example, citizens who were
concerned about water pollution reported in a nearby watershed started monitoring the water quality of
local streams. Based on their data and information found on the World Wide Web, they fought against the
development of a new golf-course, which would have used large amounts of fertilizer and herbicide in
maintenance of the grounds.

Quality of life. Education is also central to improving quality of life. Education raises the economic status of
families; it improves life conditions, lowers infant mortality, and improves the educational attainment of the
next generation, thereby raising the next generation's chances for economic and social well-being. Improved
education holds both individual and national implications.
        "How to Achieve Sustainable Software Development." Kevin Tate, January 27, 2007. Sample chapter provided
         courtesy of Addison-Wesley Professional. http://www.informit.com/articles/printerfriendly.asp?p=433344


Very little software is written once, installed, and then never changed over the course of its lifetime. And
yet, the most prevalent development practices used in the industry treat change as an afterthought. This
chapter will teach you to not only anticipate change in your software but develop specifically with change in
mind.




Sustainable software development is a mindset (principles) and an accompanying set of practices that
enable a team to achieve and maintain an optimal development pace indefinitely. I feel that the need for
sustainable development is an important but unrecognized issue facing software organizations and teams
today. One of the more interesting paradoxes in the high-tech sector is that while the pace of innovation is
increasing, the expected lifetime of successful software applications is not decreasing, at least not in a
related way. This chapter outlines the value of sustainable development, while the next chapter discusses
the pitfalls of unsustainable development.

The more successful an application or tool is, the greater the demands placed on the development team to
keep up the pace of innovation and feature development. Think of products like Adobe Photoshop,
PowerPoint, SAP, or Oracle. These products are all successful and continue to be successful because their
development teams have been able to meet users' needs over a long period of time despite persistent
competitive pressures and changing technology and market conditions.

Unfortunately, there are too many projects where there is a myopic focus on the features in the next
release, the next quarter, and the current issues such as defects and escalations reported by customers. The
software is both brittle and fragile as a result of factors such as over- (or under-) design, a code first then fix
defects later (code-then-fix) mentality, too many dependencies between code modules, the lack of
safeguards such as automated tests, and supposedly temporary patches or workarounds that are never
addressed. These are projects that are unknowingly practicing unsustainable development.

In unsustainable development, teams are primarily reactive to changes in their ecosystem. By and large,
these teams are caught in a vicious cycle of reacting to events and working harder and longer hours akin to
being on a treadmill or walking up a down escalator. The result is a project death spiral, where the rapidity
of descent depends on the amount of complexity faced by the team and its principles and practices and
discipline.

In sustainable development, teams are able to be proactive about changes in their ecosystem. Their ability to
be proactive is enabled by their attention to doing the work that is of the highest value to customers with
high quality and reliability and an eye toward continual improvement despite increasing complexity. These
teams are in a virtuous cycle: the more a team is able to improve itself and how its members work
together, the greater its ability to deal with increasing complexity and change.

Underlying sustainable development is a mindset that the team is in it for the long haul. The team adopts
and fosters principles and practices that help them continually increase their efficiency, so that as the
project gets larger and more complex and customer demands increase, the team can continue at the same
pace while keeping quality high and sanity intact. They do this by continually minimizing complexity,
revisiting their plans, and paying attention to the health of their software and its ability to support change.
Sustainable Development

Sustainable development is a mindset (principles) and an accompanying set of practices that enable a team
to achieve and maintain an optimal development pace indefinitely. Note that optimal doesn't mean
fastest; that would be pure coding, such as for a prototype.

Sustainable development is about efficiency and balancing the needs of the short and long term. It means
doing just the right amount of work to meet the needs of customers in the short term while using practices
that support the needs of the long term. There are not enough software projects today where over time a
team can stay the same size (or even shrink) and still deal with the increasing complexity of its software and
its ecosystem and increasing customer demands. In sustainable development, the needs of the short term
are met by regularly producing software that has the highest possible value to customers. This is done while
keeping the cost of change as low as possible, which lays the foundation for future changes and makes it
possible to quickly respond to changes in the ecosystem.

Sustainable development, as depicted in Figure 1-1, is a liberating experience for the lucky teams who can
achieve it. While they have to deal with stress in the form of constant change, they have the advantage that
they are in control of the situation and can out-ship their competitors because they are able to respond
more rapidly and at a much lower cost. They are also able to be proactive about new technologies or new
opportunities in any form.




Figure 1-1 In sustainable development, the cost of change stays low over time. The team is able to respond
to changing requirements and changes to the software's ecosystem. This is a pace that the team can
maintain indefinitely. Key indicators of sustainable development are an ability to keep the number of
defects relatively constant over time while recognizing that the software must be modified to keep the cost
of change under control.

Chemical Manufacturing and Sustainable Development

Some software companies are able to periodically reinvent themselves and their products. These companies
don't need to completely rewrite their software products and in fact are able over time to add to their
product line, usually with the same underlying technology and implementations. How do these companies
do it? For some of the answers, let's look at some interesting research from a seemingly unrelated industry:
chemical manufacturing.

Some interesting research into productivity at chemical manufacturing plants has parallels in software
development [Repenning and Sterman 2001]. This research focused on chemical plants that were in deep
trouble. These are plants that had low productivity, low employee morale, etc. The companies that owned
the plants were seriously considering, or in the process of, closing them down and moving operations to
another location with higher returns on investment.

What the researchers found is that in the plants in question the first response to trouble is to ask people to
work harder, usually through longer hours. However, while working harder results in a definite short-term
increase in overall capability, the long-term effect is actually declining capability, as shown in Figure 1-2.
Figure 1-2 Working harder (more hours) results in declining capability over time. Working smarter, with an
emphasis on continual improvement, leads to increasing capability. From [Repenning and Sterman 2001].

One of the reasons for declining capability over the long term when working harder is the resulting vicious
cycle or death spiral. This cycle is due to unanticipated side effects of the decision to work harder: As more
hours are worked, more mistakes are made, and there is a greater emphasis on quick fixes and responding
to problems. These mistakes and quick fixes lead to the requirement for more work.

In a chemical plant a mechanic might incorrectly install a seal on a pump. The seal might rupture hours or
days later. When it does, the entire pump would need to be replaced, which takes an entire section of the
plant offline while leaking more chemicals into the environment. People will think they aren't working hard
enough, so they'll put in more hours. Then, the extra costs of all the overtime and environmental cleanups
kick in and costs are cut in other areas such as carrying fewer spare parts to compensate. When parts aren't
available when needed, the plant is down longer. The longer the plant is down, the greater the reduction in
revenue and the higher the costs. The greater the reduction in revenue, the greater the pressures to further
reduce costs. Eventually, people are going to be laid off, and the fewer people available, the less the ability
of the plant to produce output. And so it goes.

Parking Lot Managers

I believe there are too many people in the software industry, managers especially, who judge the morale or
productivity of their company by how many hours their employees work on evenings and weekends on a
regular basis. I call these people parking lot managers because they're often proud of the fact that their
company's parking lot is still full at midnight and on weekends. However, very few of these managers realize
that full parking lot effort is not sustainable, that working harder may be valid in the short term when a
concerted effort is required, but it is definitely not in the best long-term interests of the company.

Companies need people who treasure the contribution they make when at work and who are passionate
about the success of the company. This has no correlation with the number of hours worked. . .
The largest reason for a decline in long-term capability is that working harder results in an inability to
implement necessary improvements. In the plants studied, mechanics were too busy fixing problems in the
pumps to do anything else. As any car owner who ignores basic regular maintenance knows, the longer
mechanical parts are left untended, the greater the chance they will eventually fail, not to mention the
greater the eventual cost. This leads to another vicious cycle: The harder people work and the more
problems they are trying to fix (or more appropriately, the more fires they're trying to put out), the greater
the chance that problems will continue to build and grow worse over time. No doubt you've been in
situations like this. The problem quickly becomes one of having time stand still through continuous death
march releases, or fixing things.

The employees of the chemical plants turned things around by developing a realistic simulation of their
situation. The simulation was developed in such a way that it demonstrated to participants the results of
various decisions. Importantly, the simulation was not designed to teach or test skills. They recognized that
the mechanics, for example, didn't need to be taught to be better mechanics; after all, they were very adept
at their craft through all the crucial problems they had to fix on the spot. The simulation, implemented as a
game, realistically demonstrated the various important tradeoffs that can be made in a plant between
working harder and working smarter. In a chemical plant, working smarter consists of activities like
preventive maintenance, where a pump is taken offline, disassembled, examined, and lubricated on a
regularly scheduled basis. Working smarter is also taking the time to examine the entire plant's systems and
processes and continually attempting to identify problems before they occur while reducing the complexity
of the overall system in order to increase efficiency.

The results of the simulation were an eye opener for the plant's employees. The results were also
counterintuitive to many: They showed that working smarter (especially doing preventive maintenance)
consistently produced better results over the long term. The simulation also demonstrated that with any
attempt to work smarter there is an initial dip in capability caused by the need to let some problems go
unfixed while initial improvements are made as shown in Figure 1-2. This helped the employees expect the
dip in capability and have the persistence to follow through with the changes (a perfectly human response
would be to think that the dip was permanent and revert back to the old ways). Eventually, as the number of
improvements started to make a difference, capability would climb until a point where the plant entered a
virtuous cycle, where each additional investment in improvements led to further efficiencies and gains in
output with lower environmental impact, which in turn led to more time being available to make further
improvements, and so on. People were actually able to accomplish more work by working smarter than they
had before.

As a result of the simulation, the chemical plants described experienced a complete turnaround. Not only
were these plants kept open, but they also received additional work and their business grew. And some of
the changes introduced by the employees had a lasting effect, some with a return on investment of 700,000
percent! The most astonishing thing, which perhaps isn't so astonishing when you consider the rut these
companies were stuck in, is that virtually all of the changes that were required to make the turnaround were
well known to the employees but they'd never been implemented because they were always too busy!
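
As a rough illustration of the dynamics described above, the toy simulation below contrasts a team (or plant) that
spends all of its effort firefighting with one that reserves a fraction of each week for preventive, improvement work.
The coefficients and update rules are invented for this sketch; it is not the Repenning and Sterman model, only an
attempt to reproduce its qualitative shape.

# Toy model of the "work harder" vs. "work smarter" tradeoff described above.
# All coefficients are made up for illustration; this is only a sketch of the
# qualitative behavior, not the Repenning and Sterman simulation.

def simulate(weeks, improvement_effort):
    """Weekly output when a fixed fraction of effort goes to preventive work."""
    capability = 1.0   # relative output the team/plant could produce per week
    backlog = 0.0      # deferred maintenance and latent problems
    history = []
    for _ in range(weeks):
        feature_effort = 1.0 - improvement_effort
        # Only firefighting/feature effort ships output, and a growing
        # backlog of neglected problems drags productivity down.
        output = capability * feature_effort / (1.0 + backlog)
        # Neglect lets the backlog grow; improvement work pays it down
        # and compounds into slightly higher future capability.
        backlog = max(0.0, backlog + 0.08 * feature_effort - 0.30 * improvement_effort)
        capability *= 1.0 + 0.03 * improvement_effort
        history.append(output)
    return history

if __name__ == "__main__":
    harder = simulate(52, improvement_effort=0.0)   # all firefighting
    smarter = simulate(52, improvement_effort=0.3)  # 30% preventive work
    for week in (1, 4, 12, 26, 52):
        print("week %2d  harder=%.2f  smarter=%.2f" % (week, harder[week - 1], smarter[week - 1]))

Run over a simulated year, the "smarter" curve starts below the "harder" one (the initial dip), overtakes it after a
few weeks, and keeps climbing while the firefighting-only curve steadily declines, mirroring the shape of Figure 1-2.
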
Continual Improvement: The Accelerator Button

The study on capability in a chemical manufacturing plant is surprisingly relevant for software companies.
The main lesson is that in sustainable software development, you need to strive for continual improvement
while resisting the temptation to focus on features and simply working long hours to meet demands. A
software company's business is only as sound as its factory, where the factory is made up of a number of
"pumps": the software that the company produces. You may need to take some of your pumps offline
occasionally, and every few years you will realize that your factory is completely different than it was
previously because of all the changes made to the pumps. And as with real-world factories, only in extreme
circumstances, such as when a disruptive technology is on the verge of widespread adoption or a paradigm
shift is about to occur, will you need to completely replace the factory.

Each of the following questions examines some of the parallels between chemical manufacturing plants and
software development.

How many developers view writing test code, building testability into their code, enhancing a test
framework, or cleaning up their product's build system as nonessential to their work and hence something
that lower paid people should be doing?

Somehow, many developers have a mindset that non-feature work is not part of their job. This is not only
preposterous but also extremely dangerous and is akin to a mechanic at a chemical plant believing the basic
upkeep of his tools and workshop is not part of his job. If developers don't write automated tests, then
chances are the culture of the company is focused on defect detection, not defect prevention. As shown in
Chapter 5, this is the worst possible situation a software company can get itself into because it is extremely
expensive and wasteful, with many defects being found by customers (which is the most expensive scenario
of all). Likewise, if developers don't personally pay attention to the infrastructural details of their products
(such as build and configuration management systems), it is all too easy for problems to creep in that
eventually impact key performance indicators that also impact developers' productivity. Examples are build
times, code bloat, source tree organization, and improper dependencies between build units creeping into
the product.

How many managers refer to non-feature work as a tax, something inessential to product development?

If a company is in a feature war with its competition, the company's management needs to understand how
the decisions and statements they make ultimately impact the long-term success of the company. Managers
must recognize that they have a key role in leading the attitude of the organization. In this case,
management, or even in many cases software developers themselves, need to realize that the only tax on
the organization is their attitude about non-feature work. As in chemical manufacturing plants, it is
counterintuitive that ongoing investment in continual improvement will lead to greater capability than solely
concentrating on features and bug fixing.

How many teams fail to implement any improvements, such as adding to their automated tests,
improving their build environment, installing a configuration management system, or rewriting a particularly
buggy or hard-to-maintain piece of code, because they're "too busy" developing features and fixing bugs?

Just as the mechanics in the chemical engineering plants are unable to perform preventive maintenance by
taking a pump offline on a regularly scheduled basis because they're too busy fixing problems (i.e., fighting
fires), many software teams are also unable to pay attention to the "health" of the underlying software or
the development infrastructure. As described earlier in this chapter, this is a key part of the software
development death spiral. It will eventually lead to a virtual crisis with a team spending an increasing
amount of time during each release fixing defects at the expense of other work. Either the company will
eventually give up on the product or be forced into a likely rewrite of major portions of the software (or
even the complete software) at great business expense.

When developers are fixing defects or adding new features, how often do they use shortcuts that
compromise the underlying architecture because they're "under a tight deadline"?

The main danger with these shortcuts is that they're almost always commented (e.g., ugly hack introduced
before shipping version 2) but rarely fixed. These shortcuts are broken windows (see Chapter 4 for more
details) in the software and are another part of the software death spiral.

Are the people who develop the products in your company proud of the products, but not proud of how the
products were created?

If your products are absolutely brilliant but your people are burned out by the experience of individual
releases, maybe it's time for a step back. Maybe there is too much heroism required, or too little planning
takes place, or there is an ongoing requirement for long hours because the schedule is slipping. There are
many possible factors, but the underlying cause can usually be traced back to a lack of focus on continual
improvement and technical excellence.

The Genius of the AND versus the Tyranny of the OR

One way to think about continual improvement is through the genius of the AND and the tyranny of the OR
[Collins and Porras 2002]. Software organizations need to stop thinking about features and bug fixing as
being exclusive from underlying software health and infrastructure improvements. This is the tyranny of the
OR; thinking you get features OR software health. By focusing on continual improvement through paying
constant attention to sustainable practices, software companies will be able to achieve the genius of the
AND: features and bug fixing AND underlying software health. Greater capability leads to an increased ability
to introduce more features with fewer defects, and hence more time to innovate, not less. Focusing on
continual improvement is not a tax; it's an accelerator button![3]

A Sustainable Development Experience

I have been fortunate to work on at least one team that achieved sustainable development. In one case, I
worked on a software project that was released to customers over the course of several years. Over this
time, the capabilities, complexity, and size of our product increased but the number of defects in the
product stayed constant and our ability to innovate increased. And our team did not increase in size. How
did we do it?

       Our software worked every day.
       We relied heavily on automated testing to catch problems while we were working on new features.
       We had high standards of testing and code quality, and we held each other to those standards.
       We didn't overdesign our work and built only what our customers needed.
       We were uncompromising with defects, and made sure that all the known defects that were
        important to our customers were fixed in a feature before we moved on to the next one. Hence, we
        never had a defect backlog.
       We replanned our work as often as we could.
       We were always looking for ways to improve how we worked and our software.

Other teams in our company were incredulous that we could produce the amount of work we did and keep
quality so high. For example, the company had an advanced practice of frequent integration, which was vital
because the product was developed across many time zones. Because of our stability and quality we were
able to pick up and fix integration problems extremely early in a development cycle. This was vital to the
success of the company's products.

Think of Cobol when you think of sustainable development: The original developers of the Cobol language
could not conceive of programs written in Cobol that would still be in use in 1999, and yet when the year
2000 came along, all of a sudden there was a huge demand to fix all the Cobol applications. Here's a related
joke:

It is the year 2000 and a Cobol programmer has just finished verifying that the Y2K fixes he has made to a
computer system critical to the U.S. government are correct. The president is so grateful that he tells the
programmer that he can have anything he wants. After thinking about it for a while, the programmer replies
that he would like to be frozen and reawakened at some point in the future so he can experience the future
of mankind. His wish is granted.

Many years pass. When the programmer is woken up, people shake his hand and slap him on the back. He is
led on a long parade, with people cheering him as he goes to a huge mansion. The master of the universe
greets him enthusiastically. Pleased but puzzled, the programmer asks the master of the universe why he has
received such a warm reception. The master of the universe replies "Well, it's the year 2999 and you have
Cobol on your resume!"
Summary

Developing software is a complex undertaking that is performed in an environment of constant change and
uncertainty. In the Introduction, I likened this to a coral reef, not only because of the complexity of the
software ecosystem and the need for constant change, but also because of the fragility of existence. It is
very hard to build or inhabit a software ecosystem that thrives over the long term.

Very little software is written once, installed, and then never changed over the course of its lifetime. And
yet, the most prevalent development practices used in the industry treat change as an afterthought.
Competition, the changing ecosystem, and the fact that users (and society in general) are becoming
increasingly reliant on software, ensure that the software must change and evolve over time. The resulting
combination of increasing complexity, need for change, and desire to control costs is a volatile one because
very few software organizations and teams are equipped with the mindset, discipline, and practices to both
manage and respond to complexity and change.

The answer to all the stresses placed on software organizations and teams lies in sustainable development,
which is the ability to maintain an optimal pace of development indefinitely. In sustainable development,
teams are able to be proactive about changes in their ecosystem. Their ability to be proactive is enabled by
their attention to doing the work that is of the highest value to customers with high quality and reliability
and an eye toward continual improvement despite increasing complexity. These teams are in a virtuous
cycle, where the more the team is able to improve themselves and how they work together, the greater
their ability to deal with increasing complexity and change.

The next chapter describes unsustainable development and its causes. This is important to understand
before considering how to achieve sustainability.
SEMANA 02
Semana 02 | Lecturas

      Capítulos 1 y 2 del libro de texto (Pags. 1-49)
                 Cap 01: Introduction to Computers and Internet
                 Cap 02: Web Browser Basics


   Semana 02 | Cap 02 | Árbol de Ideas

      Services
            o Google
                      Search
                              PageRank
                      AdWords
                      AdSense
            o Yahoo!
                      Overture .- Search Marketing
            o MSN
            o Ask
            o Joost
            o Last.fm
            o iTunes
                      DRM
      Search
            o Vertical Search Engines
                      Travel
                              Kayak
                               Expedia
                      Real-Estate
                              Zillow
                              Trulia
                      Job
                              Monster
                              InDeed
                      Shopping
                              ShopZilla
                              MySimon
            o Location Based Search
            o Customized Search Engines
            o Search Engine Strategies
                      SEO
                              White hat
                              Black hat
                      SEM
            o Search Engine Watch/Land
      Discovery
      Link Building
            o Link Popularity
            o Reciprocal Linking
            o Link baiting (viral)
            o Natural linking
   User generated content
        o Sites
                 eBay
                  Monster
                 Content NWs
                         about.com
                         b5media
                         corante
                         deitel
                         eHow
                         gawker
                         howStuffWorks
                         lifeTips
                         9rules
                         suite101
                         webLogs
        o Collective intelligence VS wisdom of crowds
                 Wikis
                 Filtering
        o Blogs.- Democratization of media (blog-o-sphere)
                 Sites
                         Xanga
                         Live Journal
                         WordPress
                         TypePad
                         Blogger
                 Components
                         Reader comments
                         Permalinks
                         Trackbacks
                         Blogroll
                         Tagging
                                  o Clouds
                                   o Folksonomies
                                  o GeoTagging
                         RSS Feeds
                                  o PodCasting
                                  o GeoRSS
                 Terms
                         mobLogging
                         vLogging
                 Search engines
                         Technorati
                         Google Blog Search
                   Social NWs.- Metcalfe’s Law
                         Sites
                                 o Friendster
                                 o MySpace
                                 o FaceBook
                                 o LinkedIn
                                 o Xing
                                 o Second Life
                                 o Gaia
                                 o Twitter
                                 o YouTube
                                 o Digg
                                 o Flickr
                                 o BookMarking
                                          del.icio.us
                                          ma.gnolia
   SW
         o   WebOS
                   WebTop
         o   SaaS
         o   Perpetual Beta
         o   Open Source
         o   VoIP
         o   RIAs
                   JavaScript
                           AJAX
                                o Dojo
                                o jQuery
                                o script.aculo.us
                                o JSON
                   Flex
                   SilverLight
                   JavaFX
                   RoR
                   JSF
                   .net
                   Air
                   Google Gears
                   XML
                           RSS
                                o Atom
                           RDF (resource description FrameWork)
                           OWL (ontologies)
       o    Web Services
                 Sites
                         Amazon
                         Flickr
                 Technologies
                         SOAP
                         REST
        o Location-Based Services
                 GPS
                 IP
                 ZipCode
        o APIs
        o Mashups
        o Widgets/Gadgets
   Business models
        o Affiliate
                 NW
                 Program
        o Ad
                 Banner
                 Blog
                         Paid
                         Search engine
                         NW
                 Contextual
                         In-text
                  Interstitial
                 Performance-based
                 RSS
                 Exchange
        o Cost-Per
                 Action
                 Click
                 Impression
        o eCommerce
        o Lead generation
        o Premium content
         o Virtual Worlds
        o Domains
        o Competitive intelligence
        o Content NW
        o Discovery
       The Philosophy of Software Development , John Jesudason


The Philosophy of Software Development

By John Jesudason

Curtain Raiser
As in other engineering disciplines, software engineering also has some structured models for software
development. This document will provide you with a generic overview of the different software
development methodologies adopted by contemporary software firms. Before we get into the details of the
different models, we shall take a brief look at the basics that run throughout these models.

Like any other engineering product, a software product is oriented towards customers. It is either market
driven or it drives the market. Customer Satisfaction was the buzzword of the 80's. Customer Delight is
today's buzzword and Customer Ecstasy is the buzzword of the new millennium. Products that are not
customer (user) friendly have no place in the market, even if they are engineered using the best
technology. The interface of the product is as crucial as the internal technology of the product.

Market Research
A market study is made to identify potential customers' needs. This process is also known as market
research. In market research, the existing needs and the possible/potential needs present in a
segment of society are studied carefully. The market study is based on a lot of
assumptions. Assumptions are crucial factors in the inception and development of a product.
Unrealistic assumptions can cause the entire venture to nosedive. Though assumptions
are abstract, an effort should be made to develop tangible assumptions to come up with a successful product.

Research and Development
Once the market study is made, the customer's need is given to the Research and Development (R&D)
Department to conceptualize a cost-effective system that could potentially solve the customer's need better
than the competitors do. Once the conceptual system is developed and tested in a hypothetical environment,
the development team takes control of it. The development team adopts one of the software development
methodologies given below, develops the proposed system, and delivers it to the customers.

The marketing group starts selling the product to the available customers and simultaneously works on
developing a niche segment that could potentially buy the product. In addition, the marketing group passes
the feedback from the customers to the developers and the R&D group so that possible value additions can be made to
the product.

While developing a product, the company outsources the non-core activities to other companies that
specialize in those activities. This greatly accelerates the product development process. Some companies
work on tie-ups to bring out a highly mature product in a short period.

Following are the basic popular models used by many software development firms.

       System Development Life Cycle Model
       Prototyping Model
       Rapid Application Development Model
       Component Assembly Model

Let us look at them one by one in the following chapters.

System Development Life Cycle Model
This is also known as Classic Life Cycle Model (or) Linear Sequential Model (or) Waterfall Method. This has
the following activities.

1. System/Information Engineering and Modeling

2. Software Requirements Analysis

3. Systems Analysis and Design

4. Code Generation

5. Testing

6. Maintenance





System/Information Engineering and Modeling
As software is always part of a larger system (or business), work begins by establishing requirements for all system
elements and then allocating some subset of these requirements to software. This system view is essential
when software must interface with other elements such as hardware, people and other resources. The system is
the basic and very critical requirement for the existence of software in any entity, so if the system is not in
place, it should be engineered and put in place. In some cases, to extract the maximum output, the
system should be re-engineered and spiced up. Once the ideal system is engineered or tuned up, the
development team studies the software requirements for the system.

Software Requirements Analysis
This is also known as feasibility study. In this phase, the development team visits the customer and studies
their system. They investigate the need for possible software automation in the given system. By the end of
the feasibility study, the team furnishes a document that holds the different specific recommendations for
the candidate system. It also includes the personnel assignments, costs, project schedule, and target dates.
The requirements gathering process is intensified and focused specifically on software. To understand the
nature of the program(s) to be built, the system engineer ("analyst") must understand the information
domain for the software, as well as the required function, behavior, performance and interfacing. The essential
purpose of this phase is to find the need and to define the problem that needs to be solved.

System Analysis and Design
In this phase, the software's overall structure and its nuances are defined. In terms of client/server
technology, the number of tiers needed for the package architecture, the database design, the data
structure design, etc., are all defined in this phase. Analysis and design are very crucial in the whole
development cycle. Any glitch in the design phase can be very expensive to solve in later stages of
software development. Much care is taken during this phase. The logical system of the product is developed
in this phase.

Code Generation
The design must be translated into a machine-readable form. The code generation step performs this task. If
the design is performed in a detailed manner, code generation can be accomplished without much
complication. Programming tools like Compilers, Interpreters, Debuggers are used to generate the code.
Different high level programming languages like C, C++, Pascal, Java are used for coding. With respect to the
type of application, the right programming language is chosen.

Testing
Once the code is generated, program testing begins. Different testing methodologies are available to
uncover the bugs that were introduced during the previous phases. Different testing tools and
methodologies are already available. Some companies build their own testing tools that are tailor-made for
their own development operations.
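
To make the idea concrete, here is a minimal sketch of an automated unit test written with Python's built-in unittest module. The add_tax function and its tax rate are hypothetical, invented only for illustration; the article does not prescribe any particular testing tool or language.

import unittest


def add_tax(price, rate=0.16):
    # Hypothetical example function: return the price with tax applied.
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * (1 + rate), 2)


class AddTaxTests(unittest.TestCase):
    def test_typical_price(self):
        self.assertEqual(add_tax(100.0), 116.0)

    def test_zero_price(self):
        self.assertEqual(add_tax(0.0), 0.0)

    def test_negative_price_is_rejected(self):
        with self.assertRaises(ValueError):
            add_tax(-1.0)


if __name__ == "__main__":
    unittest.main()

Tests like these can be run automatically after every change, which is one way testing tools help uncover bugs introduced in earlier phases.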

Maintenance
Software will definitely undergo change once it is delivered to the customer. There are many reasons for the
change. Change could happen because of some unexpected input values into the system. In addition, the
changes in the system could directly affect the software operations. The software should be developed to
accommodate changes that could happen during the post implementation period.

Prototyping Model
This is a cyclic version of the linear model. In this model, once the requirement analysis is done and the
design for a prototype is made, the development process gets started. Once the prototype is created, it is
given to the customer for evaluation. The customer tests the package and gives his/her feedback to the
developer who refines the product according to the customer's exact expectation. After a finite number of
iterations, the final software package is given to the customer. In this methodology, the software is evolved
as a result of periodic shuttling of information between the customer and developer. This is the most
popular development model in the contemporary IT industry. Most of the successful software products have
been developed using this model - as it is very difficult (even for a whiz kid!) to comprehend all the
requirements of a customer in one shot. There are many variations of this model skewed with respect to the
project management styles of the companies. New versions of software product evolve as a result of
prototyping.





Rapid Application Development (RAD) Model
The RAD is a linear sequential software development process that emphasizes an extremely short
development cycle. The RAD model is a "high speed" adaptation of the linear sequential model in which
rapid development is achieved by using a component-based construction approach. Used primarily for
information systems applications, the RAD approach encompasses the following phases:

Business modeling
The information flow among business functions is modeled in a way that answers the following questions:
What information drives the business process?
What information is generated?
Who generates it?
Where does the information go?
Who processes it?

Data modeling
The information flow defined as part of the business modeling phase is refined into a set of data objects that
are needed to support the business. The characteristics (called attributes) of each object are identified and
the relationships between these objects are defined.
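
As an illustration, the sketch below expresses two assumed data objects (Customer and Order), their attributes, and a one-to-many relationship between them using Python dataclasses; the entity names are invented for the example and would normally come from the business model.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Order:
    order_id: int      # attribute
    amount: float      # attribute


@dataclass
class Customer:
    customer_id: int   # attribute
    name: str          # attribute
    orders: List[Order] = field(default_factory=list)  # one-to-many relationship

    def place_order(self, order: Order) -> None:
        # Expresses the Customer-to-Order relationship.
        self.orders.append(order)


alice = Customer(customer_id=1, name="Alice")
alice.place_order(Order(order_id=100, amount=59.90))
print(len(alice.orders))  # -> 1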

Process modeling
The data objects defined in the data-modeling phase are transformed to achieve the information flow
necessary to implement a business function. Processing descriptions are created for adding, modifying,
deleting, or retrieving a data object.

Application generation
RAD assumes the use of RAD tools like VB, VC++, Delphi, etc., rather than creating software using
conventional third-generation programming languages. The RAD approach works to reuse existing program
components (when possible) or create reusable components (when necessary). In all cases, automated tools
are used to facilitate construction of the software.
Testing and turnover

Since the RAD process emphasizes reuse, many of the program components have already been tested. This
minimizes the testing and development time.

Component Assembly Model
Object technologies provide the technical framework for a component-based process model for software
engineering. The object-oriented paradigm emphasizes the creation of classes that encapsulate both data
and the algorithms that are used to manipulate the data. If properly designed and implemented,
object-oriented classes are reusable across different applications and computer-based system architectures.
The Component Assembly Model leads to software reusability. The integration/assembly of already existing
software components accelerates the development process. Nowadays many component libraries are
available on the Internet. If the right components are chosen, the integration aspect is made much simpler.
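
As a small illustration of what such a reusable class can look like, the sketch below (in Python, with a MovingAverage component whose name and interface are assumed for the example) encapsulates both its data and the algorithm that manipulates that data, and could be assembled unchanged into very different applications.

from collections import deque


class MovingAverage:
    # Reusable component: keeps the last `size` samples and computes their average.

    def __init__(self, size: int):
        if size <= 0:
            raise ValueError("size must be positive")
        self._samples = deque(maxlen=size)  # encapsulated data

    def add(self, value: float) -> None:
        self._samples.append(value)

    def average(self) -> float:
        # Encapsulated algorithm operating on the encapsulated data.
        if not self._samples:
            return 0.0
        return sum(self._samples) / len(self._samples)


# The same component could be reused for CPU load, sensor readings, prices, etc.
cpu_load = MovingAverage(size=3)
for reading in (0.42, 0.55, 0.61):
    cpu_load.add(reading)
print(round(cpu_load.average(), 2))  # -> 0.53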

Conclusion
All these different models have their own advantages and disadvantages. Nevertheless, in the contemporary
commercial software development world, a fusion of all these methodologies is typically used. Timing is
very crucial in software development. If a delay happens in the development phase, the market could be
taken over by the competitor. Also if a 'bug' filled product is launched in a short period of time (quicker than
the competitors), it may affect the reputation of the company. So, there should be a tradeoff between the
development time and the quality of the product. Customers don't expect a bug free product but they
expect a user-friendly product. That results in Customer Ecstasy!
        Software development process, Wikipedia


A software development methodology or system development methodology in software engineering is a
framework that is used to structure, plan, and control the process of developing an information system.[1]




  The three basic patterns in software development methodologies.

Overview

A software development methodology refers to the framework that is used to structure, plan, and control
the process of developing an information system. A wide variety of such frameworks have evolved over the
years, each with its own recognized strengths and weaknesses. One system development methodology is
not necessarily suitable for use by all projects. Each of the available methodologies is best suited to specific
kinds of projects, based on various technical, organizational, project and team considerations.[1]

The framework of a software development methodology consists of:

        A software development philosophy, with the approach or approaches of the software
         development process
        Multiple tools, models and methods, to assist in the software development process.

These frameworks are often bound to some kind of organization, which further develops, supports the use,
and promotes the methodology. The methodology is often documented in some kind of formal
documentation.
History

One of the oldest software development tools is flowcharting, which has its roots in the 1920s. The software
development methodology didn't emerge until the 1960s. According to Elliott (2004) the Systems
development life cycle (SDLC) can be considered to be the oldest formalized methodology for building
information systems. The main idea of the SDLC has been "to pursue the development of information
systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle from
inception of the idea to delivery of the final system, to be carried out rigidly and sequentially".[2] The main
target of this methodology in the 1960s has been "to develop large scale functional business systems in an
age of large scale business conglomerates. Information systems activities revolved around heavy data
processing and number crunching routines".[2]

Specific software development methodologies
1970s

        Structured programming since 1969
        Cap Gemini SDM, originally from PANDATA, the first English translation was published in 1974. SDM
         stands for System Development Methodology

1980s

        Structured Systems Analysis and Design Methodology (SSADM) from 1980 onwards

1990s

        Object-oriented programming (OOP) has been developed since the early 1960s, and became
         the dominant programming methodology during the mid-1990s.
        Rapid application development (RAD) since 1991.
        Scrum (development), since the late 1990s
        Team software process developed by Watts Humphrey at the SEI

2000s

        Rational Unified Process (RUP) since 1998.
        Extreme Programming since 1999
        Agile Unified Process (AUP) since 2005 by Scott Ambler
        Integrated Methodology (QAIassist-IM) since 2007

Software development approaches

Every software development methodology has more or less its own approach to software development.
There is a set of more general approaches, which are developed into several specific methodologies. These
approaches are:[1]

        Waterfall: linear framework type
        Prototyping: iterative framework type
        Incremental: combination of linear and iterative framework type
        Spiral: combination of linear and iterative framework type
        Rapid Application Development (RAD): iterative framework type
        Extreme Programming
Waterfall model

The waterfall model is a sequential development process, in which development is seen as flowing steadily
downwards (like a waterfall) through the phases of requirements analysis, design, implementation, testing
(validation), integration, and maintenance. The first formal description of the waterfall model is often cited
to be an article published by Winston W. Royce[3] in 1970, although Royce did not use the term "waterfall" in
this article.

Basic principles of the waterfall model are:[1]

        Project is divided into sequential phases, with some overlap and splashback acceptable between
         phases.
        Emphasis is on planning, time schedules, target dates, budgets and implementation of an entire
         system at one time.
        Tight control is maintained over the life of the project through the use of extensive written
         documentation, as well as through formal reviews and approval/signoff by the user and
         information technology management occurring at the end of most phases before beginning the
         next phase.

Prototyping

Software prototyping is the framework of activities, during software development, of creating prototypes,
i.e., incomplete versions of the software program being developed.

Basic principles of prototyping are:[1]

        Not a standalone, complete development methodology, but rather an approach to handling
         selected portions of a larger, more traditional development methodology (i.e. Incremental, Spiral,
         or Rapid Application Development (RAD)).
        Attempts to reduce inherent project risk by breaking a project into smaller segments and providing
         more ease-of-change during the development process.
        User is involved throughout the process, which increases the likelihood of user acceptance of the
         final implementation.
        Small-scale mock-ups of the system are developed following an iterative modification process until
         the prototype evolves to meet the users’ requirements.
        While most prototypes are developed with the expectation that they will be discarded, it is possible
         in some cases to evolve from prototype to working system.
        A basic understanding of the fundamental business problem is necessary to avoid solving the wrong
         problem.

Incremental

Various methods are acceptable for combining linear and iterative systems development methodologies,
with the primary objective of each being to reduce inherent project risk by breaking a project into smaller
segments and providing more ease-of-change during the development process.

Basic principles of incremental development are:[1]

        A series of mini-Waterfalls are performed, where all phases of the Waterfall development model
         are completed for a small part of the system, before proceeding to the next increment, or
        Overall requirements are defined before proceeding to evolutionary, mini-Waterfall development
         of individual increments of the system, or
        The initial software concept, requirements analysis, and design of architecture and system core are
         defined using the Waterfall approach, followed by iterative Prototyping, which culminates in
         installation of the final prototype (i.e., working system).
Spiral




The spiral model.

The spiral model is a software development process combining elements of both design and prototyping-in-
stages, in an effort to combine advantages of top-down and bottom-up concepts. Basic principles:[1]

        Focus is on risk assessment and on minimizing project risk by breaking a project into smaller
         segments and providing more ease-of-change during the development process, as well as providing
         the opportunity to evaluate risks and weigh consideration of project continuation throughout the
         life cycle.
        "Each cycle involves a progression through the same sequence of steps, for each portion of the
         product and for each of its levels of elaboration, from an overall concept-of-operation document
          down to the coding of each individual program."[4]
        Each trip around the spiral traverses four basic quadrants: (1) determine objectives, alternatives,
          and constraints of the iteration; (2) Evaluate alternatives; Identify and resolve risks; (3) develop and
          verify deliverables from the iteration; and (4) plan the next iteration.[5]
        Begin each cycle with an identification of stakeholders and their win conditions, and end each cycle
          with review and commitment.[6]
Rapid Application Development (RAD)

Rapid Application Development (RAD) is a software development methodology, which involves iterative
development and the construction of prototypes. Rapid application development is a term originally used to
describe a software development process introduced by James Martin in 1991.

Basic principles:[1]

        Key objective is for fast development and delivery of a high quality system at a relatively low
         investment cost.
        Attempts to reduce inherent project risk by breaking a project into smaller segments and providing
         more ease-of-change during the development process.
        Aims to produce high quality systems quickly, primarily through the use of iterative Prototyping (at
         any stage of development), active user involvement, and computerized development tools. These
         tools may include Graphical User Interface (GUI) builders, Computer Aided Software Engineering
         (CASE) tools, Database Management Systems (DBMS), fourth-generation programming languages,
         code generators, and object-oriented techniques.
        Key emphasis is on fulfilling the business need, while technological or engineering excellence is of
         lesser importance.
        Project control involves prioritizing development and defining delivery deadlines or “timeboxes”. If
         the project starts to slip, emphasis is on reducing requirements to fit the timebox, not on increasing
         the deadline.
        Generally includes Joint Application Development (JAD), where users are intensely involved in
         system design, either through consensus building in structured workshops, or through
         electronically facilitated interaction.
        Active user involvement is imperative.
        Iteratively produces production software, as opposed to a throwaway prototype.
        Produces documentation necessary to facilitate future development and maintenance.
        Standard systems analysis and design techniques can be fitted into this framework.

Other software development approaches

Other method concepts are:

        Object oriented development methodologies, such as Grady Booch's Object-oriented design (OOD),
         also known as object-oriented analysis and design (OOAD). The Booch model includes six diagrams:
          class, object, state transition, interaction, module, and process.[7]
        Top-down programming: evolved in the 1970s by IBM researcher Harlan Mills (and Niklaus Wirth),
         who developed structured programming.
        Unified Process (UP) is an iterative software development methodology approach, based on UML.
         UP organizes the development of software into four phases, each consisting of one or more
         executable iterations of the software at that stage of development: Inception, Elaboration,
         Construction, and Transition. There are a number of tools and products available designed to
         facilitate UP implementation. One of the more popular versions of UP is the Rational Unified
         Process (RUP).
        Agile Software Development refers to a group of software development methodologies based on
         iterative development, where requirements and solutions evolve through collaboration between
         self-organizing cross-functional teams. The term was coined in the year 2001 when the Agile
         Manifesto was formulated.
        Integrated Methodology Software Development refers to a group of software development
         practices and deliverables that can be applied in a multitude (iterative, waterfall, spiral, agile) of
         software development environments, where requirements and solutions evolve through
         collaboration between self-organizing cross-functional teams.

Software development methodology topics
View model




The TEAF Matrix of Views and Perspectives.

A view model is a framework which provides the viewpoints on the system and its environment, to be used in
the software development process. It is a graphical representation of the underlying semantics of a view.

The purpose of viewpoints and views is to enable human engineers to comprehend very complex systems,
and to organize the elements of the problem and the solution around domains of expertise. In the
engineering of physically-intensive systems, viewpoints often correspond to capabilities and responsibilities
within the engineering organization.[8]

Most complex system specifications are so extensive that no single individual can fully comprehend all
aspects of the specifications. Furthermore, we all have different interests in a given system and different
reasons for examining the system's specifications. A business executive will ask different questions of a
system make-up than would a system implementer. The concept of viewpoints framework, therefore, is to
provide separate viewpoints into the specification of a given complex system. These viewpoints each satisfy
an audience with interest in a particular set of aspects of the system. Associated with each viewpoint is a
viewpoint language that optimizes the vocabulary and presentation for the audience of that viewpoint.

Business process and data modelling

Graphical representation of the current state of information provides a very effective means for presenting
information to both users and system developers.

        A business model illustrates the functions associated with the process being modeled and the
         organizations that perform these functions. By depicting activities and information flows, a
         foundation is created to visualize, define, understand, and validate the nature of a process.
        A data model provides the details of information to be stored, and is of primary use when the final
         product is the generation of computer software code for an application or the preparation of a
          functional specification to aid a computer software make-or-buy decision. See the figure on the
          right for an example of the interaction between business process and data models.[9]

Usually, a model is created after conducting an interview, referred to as business analysis. The interview
consists of a facilitator asking a series of questions designed to extract required information that describes a
process. The interviewer is called a facilitator to emphasize that it is the participants who provide the
information. The facilitator should have some knowledge of the process of interest, but this is not as
important as having a structured methodology by which the questions are asked of the process expert. The
methodology is important because usually a team of facilitators is collecting information across the facility
and the results of the information from all the interviewers must fit together once completed.[9]

The models are developed as defining either the current state of the process, in which case the final product
is called the "as-is" snapshot model, or a collection of ideas of what the process should contain, resulting in
a "what-can-be" model. Generation of process and data models can be used to determine if the existing
processes and information systems are sound and only need minor modifications or enhancements, or if
reengineering is required as corrective action. The creation of business models is more than a way to view or
automate your information process; the analysis can be used to fundamentally reshape the way your business or
organization conducts its operations.[9]

Computer-aided Software Engineering

Computer-Aided Software Engineering (CASE), in the field of software engineering, is the scientific application
of a set of tools and methods to software which results in high-quality, defect-free, and maintainable
software products.[10] It also refers to methods for the development of information systems together with
automated tools that can be used in the software development process.[11] The term "Computer-aided
software engineering" (CASE) can refer to the software used for the automated development of systems
software, i.e., computer code. The CASE functions include analysis, design, and programming. CASE tools
automate methods for designing, documenting, and producing structured computer code in the desired
programming language.[12]

Two key ideas of Computer-aided Software System Engineering (CASE) are:[13]

        The harboring of computer assistance in software development and/or software maintenance
         processes, and
        An engineering approach to software development and/or maintenance.

Some typical CASE tools are Configuration management tools, Data modeling tools, Model transformation
tools, Refactoring tools, Source code generation tools, and Unified Modeling Language.
Integrated development environment




Anjuta, a C and C++ IDE for the GNOME environment

An integrated development environment (IDE), also known as integrated design environment or integrated
debugging environment, is a software application that provides comprehensive facilities to computer
programmers for software development. An IDE normally consists of a:

        source code editor,
        compiler and/or interpreter,
        build automation tools, and
        debugger (usually).

IDEs are designed to maximize programmer productivity by providing tightly-knit components with similar
user interfaces. Typically an IDE is dedicated to a specific programming language, so as to provide a feature
set which most closely matches the programming paradigms of the language.
Modeling language

A modeling language is any artificial language that can be used to express information or knowledge or
systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the
meaning of components in the structure. A modeling language can be graphical or textual.[14] Graphical
modeling languages use diagram techniques with named symbols that represent concepts, lines that
connect the symbols and represent relationships, and various other graphical annotations to represent
constraints. Textual modeling languages typically use standardised keywords accompanied by parameters to
make computer-interpretable expressions.

Examples of graphical modelling languages in the field of software engineering are:

        Business Process Modeling Notation (BPMN, and the XML form BPML) is an example of a Process
         Modeling language.
        EXPRESS and EXPRESS-G (ISO 10303-11) is an international standard general-purpose data modeling
         language.
        Extended Enterprise Modeling Language (EEML) is commonly used for business process modeling
         across a number of layers.
        Flowchart is a schematic representation of an algorithm or a stepwise process.
        Fundamental Modeling Concepts (FMC) is a modeling language for software-intensive systems.
        IDEF is a family of modeling languages, the most notable of which include IDEF0 for functional
         modeling, IDEF1X for information modeling, and IDEF5 for modeling ontologies.
        LePUS3 is an object-oriented visual Design Description Language and a formal specification
         language that is suitable primarily for modelling large object-oriented (Java, C++, C#) programs and
         design patterns.
        Specification and Description Language (SDL) is a specification language targeted at the
         unambiguous specification and description of the behaviour of reactive and distributed systems.
        Unified Modeling Language (UML) is a general-purpose modeling language that is an industry
         standard for specifying software-intensive systems. UML 2.0, the current version, supports thirteen
         different diagram techniques, and has widespread tool support.

Not all modeling languages are executable, and for those that are, the use of them doesn't necessarily mean
that programmers are no longer required. On the contrary, executable modeling languages are intended to
amplify the productivity of skilled programmers, so that they can address more challenging problems, such
as parallel computing and distributed systems.
Programming paradigm

A programming paradigm is a fundamental style of computer programming, in contrast to a software
engineering methodology, which is a style of solving specific software engineering problems. Paradigms
differ in the concepts and abstractions used to represent the elements of a program (such as objects,
functions, variables, constraints...) and the steps that compose a computation (assignation, evaluation,
continuations, data flows...).

A programming language can support multiple paradigms. For example programs written in C++ or Object
Pascal can be purely procedural, or purely object-oriented, or contain elements of both paradigms. Software
designers and programmers decide how to use those paradigm elements. In object-oriented programming,
programmers can think of a program as a collection of interacting objects, while in functional programming
a program can be thought of as a sequence of stateless function evaluations. When programming computers
or systems with many processors, process-oriented programming allows programmers to think about
applications as sets of concurrent processes acting upon logically shared data structures.
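
The following small sketch contrasts two of these styles on the same task, summing squared values; Python is used here purely for illustration, since it supports both paradigms.

# Object-oriented style: state lives inside an object that callers interact with.
class SquareAccumulator:
    def __init__(self):
        self.total = 0

    def add(self, value: int) -> None:
        self.total += value * value


acc = SquareAccumulator()
for n in (1, 2, 3):
    acc.add(n)
print(acc.total)  # -> 14

# Functional style: no mutable state, just the evaluation of expressions.
from functools import reduce

total = reduce(lambda running, n: running + n * n, (1, 2, 3), 0)
print(total)  # -> 14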

Just as different groups in software engineering advocate different methodologies, different programming
languages advocate different programming paradigms. Some languages are designed to support one
particular paradigm (Smalltalk supports object-oriented programming, Haskell supports functional
programming), while other programming languages support multiple paradigms (such as Object Pascal, C++,
C#, Visual Basic, Common Lisp, Scheme, Python, Ruby and Oz).

Many programming paradigms are as well known for what techniques they forbid as for what they enable.
For instance, pure functional programming disallows the use of side-effects; structured programming
disallows the use of the goto statement. Partly for this reason, new paradigms are often regarded as
doctrinaire or overly rigid by those accustomed to earlier styles.[citation needed] Avoiding certain techniques can
make it easier to prove theorems about a program's correctness—or simply to understand its behavior.

Software framework

A software framework is a re-usable design for a software system or subsystem. A software framework may
include support programs, code libraries, a scripting language, or other software to help develop and glue
together the different components of a software project. Various parts of the framework may be exposed
through an API.

Software development process

A Software development process is a structure imposed on the development of a software product.
Synonyms include software life cycle and software process. There are several models for such processes,
each describing approaches to a variety of tasks or activities that take place during the process.

A large and growing body of software development organizations implement process methodologies. Many of
them are in the defense industry, which in the U.S. requires a rating based on 'process models' to obtain
contracts. The international standard for describing the method of selecting, implementing and monitoring
the life cycle for software is ISO 12207.

A decades-long goal has been to find repeatable, predictable processes that improve productivity and
quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply
project management techniques to writing software. Without project management, software projects can
easily be delivered late or over budget. With large numbers of software projects not meeting their
expectations in terms of functionality, cost, or delivery schedule, effective project management appears to
be lacking.
Semana 02 Examen 01

   1. A computer is a device that can perform computations and make logical decisions billions
      of times faster than human beings can
           a. True
           b. False

   2. Web 2.0 is focused on a relatively small number of companies and advertisers producing
      content for users to access [some people called it “brochure web”]
         a. True
         b. False

   3. According to our textbook, computer languages may be divided into three general types:
      Machine languages, assembly languages and high-level languages.
         a. True
         b. False

   4. Technologies such as XHTML, JavaScript, Flash, Flex, Dreamweaver and XML are used to
      build the portions of web-based applications that reside on the server side
          a. True
          b. False

   5. RIAs are being developed using technologies [such as AJAX] that have the look and feel of
      desktop software, enhancing a user’s overall experience
          a. True
          b. False

   6. Objects are essentially reusable software components that model real-world items
         a. True
         b. False

   7. Ensuring a consistent look and feel on client-side browsers is one of the great challenges
      of developing web-based applications
          a. True
          b. False

   8. Bandwidth refers to the amount of data that can be transferred through a
      communications medium in a fixed amount of time
         a. True
         b. False

   9. Education about sustainable development is the same as education for sustainable
      development.
         a. True
         b. False

   10. Developing software is a simple undertaking that is performed in a static and certain
       environment
           a. True
           b. False
1. The arithmetic and logic unit [ALU] is the “administrative” section of the computer; it
    coordinates and supervises the operation of the other sections
        a. True
        b. False


8. Education directly affects sustainability plans in the following three areas:
        a.   Implementation, decision making and quality of life



9. Sustainable development is a mindset [principles] and an accompanying set of practices
    that enable a team to achieve and maintain an optimal development pace indefinitely.
        a. True
        b. False
SEMANA 03
Semana 03 | Lecturas

        Capítulos 21 y 22
                   Cap 21: Web Servers
                   Cap 22: DBMSs
        How Web Servers Work, Marshall Brain.


Semana 03 | Lecturas | How Web Servers Work

How Web Servers Work

by Marshall Brain


Introduction to How Web Servers Work

Web servers allow you to surf the Internet.

Have you ever wondered about the mechanisms that delivered this page to you? Chances are you are sitting
at a computer right now, viewing this page in a browser. So, when you clicked on the link for this page, or
typed in its URL (uniform resource locator), what happened behind the scenes to bring this page onto your
screen?

If you've ever been curious about the process, or have ever wanted to know some of the specific
mechanisms that allow you to surf the Internet, then read on. In this article, you will learn how Web servers
bring pages into your home, school or office. Let's get started!




The Basic Process

Let's say that you are sitting at your computer, surfing the Web, and you get a call from a friend who says, "I
just read a great article! Type in this URL and check it out. It's at http://www.howstuffworks.com/web-
server.htm." So you type that URL into your browser and press return. And magically, no matter where in
the world that URL lives, the page pops up on your screen.

At the most basic level possible, here is what happened to bring that page to your screen:

Your browser formed a connection to a Web server, requested a page and received it.
On the next page, we'll dig a bit deeper.

Behind the Scenes

If you want to get into a bit more detail on the process of getting a Web page onto your computer screen,
here are the basic steps that occurred behind the scenes:

        The browser broke the URL into three parts:
              1. The protocol ("http")
              2. The server name ("www.howstuffworks.com")
              3. The file name ("web-server.htm")
        The browser communicated with a name server to translate the server name
         "www.howstuffworks.com" into an IP Address, which it uses to connect to the server machine.
        The browser then formed a connection to the server at that IP address on port 80. (We'll discuss
         ports later in this article.)
        Following the HTTP protocol, the browser sent a GET request to the server, asking for the file
         "http://www.howstuffworks.com/web-server.htm." (Note that cookies may be sent from browser
         to server with the GET request -- see How Internet Cookies Work for details.)
        The server then sent the HTML text for the Web page to the browser. (Cookies may also be sent
         from server to browser in the header for the page.)
        The browser read the HTML tags and formatted the page onto your screen.

If you've never explored this process before, that's a lot of new vocabulary. To understand this whole
process in detail, you need to learn about IP addresses, ports, protocols... The following sections will lead
you through a complete explanation.
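
To make these steps concrete, here is a minimal sketch in Python (not part of the original article) that walks through the same sequence. The URL is the article's example; real servers today may redirect or require HTTPS, so treat it purely as an illustration.

import socket
from urllib.parse import urlparse

url = "http://www.howstuffworks.com/web-server.htm"

# 1. Break the URL into its three parts: protocol, server name and file name.
parts = urlparse(url)
protocol, server_name, file_name = parts.scheme, parts.hostname, parts.path

# 2. Ask a name server to translate the server name into an IP address.
ip_address = socket.gethostbyname(server_name)

# 3. Form a connection to the server at that IP address on port 80.
with socket.create_connection((ip_address, 80)) as conn:
    # 4. Following the HTTP protocol, send a GET request for the file.
    request = ("GET " + file_name + " HTTP/1.1\r\n"
               "Host: " + server_name + "\r\n"
               "Connection: close\r\n\r\n")
    conn.sendall(request.encode("ascii"))

    # 5. The server sends back headers followed by the HTML text of the page.
    response = b""
    chunk = conn.recv(4096)
    while chunk:
        response += chunk
        chunk = conn.recv(4096)

# 6. A real browser would now read the HTML tags and format the page.
print(response.decode("iso-8859-1")[:300])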

The Internet

So what is "the Internet"? The Internet is a gigantic collection of millions of computers, all linked together on
a computer network. The network allows all of the computers to communicate with one another. A home
computer may be linked to the Internet using a phone-line modem, DSL or cable modem that talks to an
Internet service provider (ISP). A computer in a business or university will usually have a network interface
card (NIC) that directly connects it to a local area network (LAN) inside the business. The business can then
connect its LAN to an ISP using a high-speed phone line like a T1 line. A T1 line can handle approximately 1.5
million bits per second, while a normal phone line using a modem can typically handle 30,000 to 50,000 bits
per second.

ISPs then connect to larger ISPs, and the largest ISPs maintain fiber-optic "backbones" for an entire nation or
region. Backbones around the world are connected through fiber-optic lines, undersea cables or satellite
links (see An Atlas of Cyberspaces for some interesting backbone maps). In this way, every computer on the
Internet is connected to every other computer on the Internet.
Clients and Servers

In general, all of the machines on the Internet can be categorized
as two types: servers and clients. Those machines that provide
services (like Web servers or FTP servers) to other machines are
servers. And the machines that are used to connect to those
services are clients. When you connect to Yahoo! at
www.yahoo.com to read a page, Yahoo! is providing a machine
(probably a cluster of very large machines), for use on the
Internet, to service your request. Yahoo! is providing a server.
Your machine, on the other hand, is probably providing no services
to anyone else on the Internet. Therefore, it is a user machine,
also known as a client. It is possible and common for a machine to
be both a server and a client, but for our purposes here you can
think of most machines as one or the other.

A server machine may provide one or more services on the
Internet. For example, a server machine might have software
running on it that allows it to act as a Web server, an e-mail server and an FTP server. Clients that come to a
server machine do so with a specific intent, so clients direct their requests to a specific software server
running on the overall server machine. For example, if you are running a Web browser on your machine, it
will most likely want to talk to the Web server on the server machine. Your Telnet application will want to
talk to the Telnet server, your e-mail application will talk to the e-mail server, and so on...

IP Addresses

To keep all of these machines straight, each machine on the Internet is assigned a unique address called an
IP address. IP stands for Internet protocol, and these addresses are 32-bit numbers, normally expressed as
four "octets" in a "dotted decimal number." A typical IP address looks like this:

     216.27.61.137

The four numbers in an IP address are called octets because they can have values between 0 and 255, which
is 2^8 (256) possibilities per octet.
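
As a quick illustration (not from the article), the following short Python sketch shows how the four octets pack into a single 32-bit number and back again:

octets = [216, 27, 61, 137]          # each octet holds 8 bits, i.e. values 0-255

value = 0
for octet in octets:
    value = (value << 8) | octet     # shift 8 bits left, append the next octet

print(value)                         # 3625663881, a single 32-bit number

# Reverse the process: unpack the 32-bit number back into dotted decimal.
print(".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0)))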

Every machine on the Internet has a unique IP address. A server has a static IP address that does not change
very often. A home machine that is dialing up through a modem often has an IP address that is assigned by
the ISP when the machine dials in. That IP address is unique for that session -- it may be different the next
time the machine dials in. This way, an ISP only needs one IP address for each modem it supports, rather
than for each customer.

If you are working on a Windows machine, you can view a lot of the Internet information for your machine,
including your current IP address and hostname, with the command WINIPCFG.EXE (IPCONFIG.EXE for
Windows 2000/XP). On a UNIX machine, type nslookup at the command prompt, along with a machine
name, like www.howstuffworks.com -- e.g. "nslookup www.howstuffworks.com" -- to display the IP address
of the machine, and you can use the command hostname to learn the name of your machine. (For more
information on IP addresses, see IANA.)

As far as the Internet's machines are concerned, an IP address is all you need to talk to a server. For
example, in your browser, you can type the URL http://209.116.69.66 and arrive at the machine that
contains the Web server for HowStuffWorks. On some servers, the IP address alone is not sufficient, but on
most large servers it is -- keep reading for details.
Domain Names

Because most people have trouble remembering the strings of numbers that make up IP addresses, and
because IP addresses sometimes need to change, all servers on the Internet also have human-readable
names, called domain names. For example, www.howstuffworks.com is a permanent, human-readable
name. It is easier for most of us to remember www.howstuffworks.com than it is to remember
209.116.69.66.

The name www.howstuffworks.com actually has three parts:

    1.   The host name ("www")
    2.   The domain name ("howstuffworks")
    3.   The top-level domain name ("com")

Domain names within the ".com" domain are managed by the registrar called VeriSign. VeriSign also
manages ".net" domain names. Other registrars (like RegistryPro, NeuLevel and Public Interest Registry)
manage the other domains (like .pro, .biz and .org). VeriSign creates the top-level domain names and
guarantees that all names within a top-level domain are unique. VeriSign also maintains contact information
for each site and runs the "whois" database. The host name is created by the company hosting the domain.
"www" is a very common host name, but many places now either omit it or replace it with a different host
name that indicates a specific area of the site. For example, in encarta.msn.com, the domain name for
Microsoft's Encarta encyclopedia, "encarta" is designated as the host name instead of "www."

Name Servers

The whois Command

On a UNIX machine, you can use the whois command to look up information about a domain name. You can
do the same thing using the whois form at VeriSign. If you type in a domain name, like
"howstuffworks.com," it will return to you the registration information for that domain, including its IP
address.



A set of servers called domain name servers (DNS) maps the human-readable names to the IP addresses.
These servers are simple databases that map names to IP addresses, and they are distributed all over the
Internet. Most individual companies, ISPs and universities maintain small name servers to map host names
to IP addresses. There are also central name servers that use data supplied by VeriSign to map domain
names to IP addresses.

If you type the URL "http://www.howstuffworks.com/web-server.htm" into your browser, your browser
extracts the name "www.howstuffworks.com," passes it to a domain name server, and the domain name
server returns the correct IP address for www.howstuffworks.com. A number of name servers may be
involved to get the right IP address. For example, in the case of www.howstuffworks.com, the name server
for the "com" top-level domain will know the IP address for the name server that knows host names, and a
separate query to that name server, operated by the HowStuffWorks ISP, may deliver the actual IP address
for the HowStuffWorks server machine.

On a UNIX machine, you can access the same service using the nslookup command. Simply type a name like
"www.howstuffworks.com" into the command line, and the command will query the name servers and
deliver the corresponding IP address to you.
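
If you prefer code to a command line, the following Python sketch (an assumed equivalent, not part of the article) performs the same lookup nslookup does, using the system's configured name servers:

import socket

name = "www.howstuffworks.com"
print(name, "->", socket.gethostbyname(name))   # the IP address currently registered for the name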

So here it is: The Internet is made up of millions of machines, each with a unique IP address. Many of these
machines are server machines, meaning that they provide services to other machines on the Internet. You
have heard of many of these servers: e-mail servers, Web servers, FTP servers, Gopher servers and Telnet
servers, to name a few. All of these are provided by server machines.
Ports

Any server machine makes its services available to the Internet using numbered ports, one for each service
that is available on the server. For example, if a server machine is running a Web server and an FTP server,
the Web server would typically be available on port 80, and the FTP server would be available on port 21.
Clients connect to a service at a specific IP address and on a specific port.

Each of the most well-known services is available at a well-known port number. Here are some common
port numbers:

       echo 7
       daytime 13
       qotd 17 (Quote of the Day)
       ftp 21
       telnet 23
       smtp 25 (Simple Mail Transfer, meaning e-mail)
       time 37
       nameserver 53
       nicname 43 (Who Is)
       gopher 70
       finger 79
       WWW 80

If the server machine accepts connections on a port from the outside world, and if a firewall is not
protecting the port, you can connect to the port from anywhere on the Internet and use the service. Note
that there is nothing that forces, for example, a Web server to be on port 80. If you were to set up your own
machine and load Web server software on it, you could put the Web server on port 918, or any other
unused port, if you wanted to. Then, if your machine were known as xxx.yyy.com, someone on the Internet
could connect to your server with the URL http://xxx.yyy.com:918. The ":918" explicitly specifies the port
number, and would have to be included for someone to reach your server. When no port is specified, the
browser simply assumes that the server is using the well-known port 80.
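
As a small illustration (not from the article), here is how a client written in Python might name the port explicitly, mirroring the http://xxx.yyy.com:918 example; xxx.yyy.com and port 918 are the article's placeholder values, so this will only work against a machine you have actually set up that way.

import http.client

# Placeholder host and port from the example above; substitute a real machine you control.
conn = http.client.HTTPConnection("xxx.yyy.com", 918, timeout=10)
conn.request("GET", "/")                 # the same GET request, just on a non-standard port
response = conn.getresponse()
print(response.status, response.reason)
conn.close()
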
Protocols

Once a client has connected to a service on a particular port, it accesses the service using a specific protocol.
The protocol is the pre-defined way that someone who wants to use a service talks with that service. The
"someone" could be a person, but more often it is a computer program like a Web browser. Protocols are
often text, and simply describe how the client and server will have their conversation.

Perhaps the simplest protocol is the daytime protocol. If you connect to port 13 on a machine that supports
a daytime server, the server will send you its impression of the current date and time and then close the
connection. The protocol is, "If you connect to me, I will send you the date and time and then disconnect."
Most UNIX machines support this server. If you would like to try it out, you can connect to one with the
Telnet application. In UNIX, the session would look like this:

%telnet web67.ntx.net 13
Trying 216.27.61.137...
Connected to web67.ntx.net.
Escape character is '^]'.
Sun Oct 25 08:34:06 1998
Connection closed by foreign host.

On a Windows machine, you can access this server by typing "telnet web67.ntx.net 13" at the MSDOS
prompt.

In this example, web67.ntx.net is the server's UNIX machine, and 13 is the port number for the daytime
service. The Telnet application connects to port 13 (telnet naturally connects to port 23, but you can direct it
to connect to any port), then the server sends the date and time and disconnects. Most versions of Telnet
allow you to specify a port number, so you can try this using whatever version of Telnet you have available
on your machine.
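
The same session can be scripted. The sketch below (not from the article) opens a plain socket to port 13 and prints whatever the daytime server sends; web67.ntx.net is the article's example host, and many machines no longer run a public daytime service, so a connection failure is likely.

import socket

# web67.ntx.net and port 13 are the values from the Telnet session above.
with socket.create_connection(("web67.ntx.net", 13), timeout=10) as s:
    print(s.recv(1024).decode("ascii", errors="replace"))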

Most protocols are more involved than daytime and are specified in Request for Comment (RFC) documents
that are publicly available (see http://sunsite.auc.dk/RFC/ for a nice archive of all RFCs). Every Web server
on the Internet conforms to the HTTP protocol, summarized nicely in The Original HTTP as defined in 1991.
The most basic form of the protocol understood by an HTTP server involves just one command: GET. If you
connect to a server that understands the HTTP protocol and tell it to "GET filename," the server will respond
by sending you the contents of the named file and then disconnecting. Here's a typical session:

%telnet www.howstuffworks.com 80
Trying 216.27.61.137...
Connected to howstuffworks.com.
Escape character is '^]'.
GET http://www.howstuffworks.com/



 ...
Connection closed by foreign host.

In the original HTTP protocol, all you would have sent was the actual filename, such as "/" or "/web-
server.htm." The protocol was later modified to handle the sending of the complete URL. This has allowed
companies that host virtual domains, where many domains live on a single machine, to use one IP address
for all of the domains they host. It turns out that hundreds of domains are hosted on 209.116.69.66 -- the
HowStuffWorks IP address.
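
In today's HTTP/1.1, the practical descendant of that change is the Host header: the request itself names the domain it wants, so one IP address can serve many domains. The following Python sketch (an illustration, not code from the article) sends such a request by hand:

import socket

request = ("GET /web-server.htm HTTP/1.1\r\n"
           "Host: www.howstuffworks.com\r\n"    # names the domain on the shared IP address
           "Connection: close\r\n"
           "\r\n")

with socket.create_connection(("www.howstuffworks.com", 80)) as s:
    s.sendall(request.encode("ascii"))
    print(s.recv(512).decode("iso-8859-1"))     # the first part of the response headers
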
Putting It All Together

Now you know a tremendous amount about the Internet. You know that when you type a URL into a
browser, the following steps occur:

        The browser breaks the URL into three parts:
              1. The protocol ("http")
              2. The server name ("www.howstuffworks.com")
              3. The file name ("web-server.htm")
        The browser communicates with a name server to translate the server name,
         "www.howstuffworks.com," into an IP address, which it uses to connect to that server machine.
        The browser then forms a connection to the Web server at that IP address on port 80.
        Following the HTTP protocol, the browser sends a GET request to the server, asking for the file
         "http://www.howstuffworks.com/web-server.htm." (Note that cookies may be sent from browser
         to server with the GET request -- see How Internet Cookies Work for details.)
        The server sends the HTML text for the Web page to the browser. (Cookies may also be sent from
         server to browser in the header for the page.)
        The browser reads the HTML tags and formats the page onto your screen.

Extras: Security

You can see from this description that a Web server can be a pretty simple piece of software. It takes the file
name sent in with the GET command, retrieves that file and sends it down the wire to the browser. Even if
you take into account all of the code to handle the ports and port connections, you could easily create a C
program that implements a simple Web server in less than 500 lines of code. Obviously, a full-blown
enterprise-level Web server is more involved, but the basics are very simple.
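
To give a feel for how small that core really is, here is a toy static-file server sketched in Python rather than C (an illustration only; it has no security and should never face the open Internet):

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", 8080))                 # listen on port 8080 instead of the usual 80
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode("iso-8859-1")
            parts = request.split()
            # The request line looks like: GET /web-server.htm HTTP/1.0
            path = parts[1].lstrip("/") if len(parts) > 1 else ""
            try:
                with open(path or "index.html", "rb") as f:
                    body = f.read()
                conn.sendall(b"HTTP/1.0 200 OK\r\n\r\n" + body)
            except OSError:
                conn.sendall(b"HTTP/1.0 404 Not Found\r\n\r\n")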

Most servers add some level of security to the serving process. For example, if you have ever gone to a Web
page and had the browser pop up a dialog box asking for your name and password, you have encountered a
password-protected page. The server lets the owner of the page maintain a list of names and passwords for
those people who are allowed to access the page; the server lets only those people who know the proper
password see the page. More advanced servers add further security to allow an encrypted connection
between server and browser, so that sensitive information like credit card numbers can be sent on the
Internet.

That's really all there is to a Web server that delivers standard, static pages. Static pages are those that do
not change unless the creator edits the page.

Extras: Dynamic Pages

But what about the Web pages that are dynamic? For example:

        Any guest book allows you to enter a message in an HTML form, and the next time the guest book
         is viewed, the page will contain the new entry.
        The whois form at Network Solutions allows you to enter a domain name on a form, and the page
         returned is different depending on the domain name entered.
        Any search engine lets you enter keywords on an HTML form, and then it dynamically creates a
         page based on the keywords you enter.

In all of these cases, the Web server is not simply "looking up a file." It is actually processing information and
generating a page based on the specifics of the query. In almost all cases, the Web server is using something
called CGI scripts to accomplish this feat. CGI scripts are a topic unto themselves, and are described in the
HowStuffWorks article How CGI Scripting Works.
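
To give a flavour of the idea (a hypothetical sketch, not code from the article, with Python standing in for whatever scripting language the server runs), a CGI-style script is simply a program the server executes for each request; whatever it prints becomes the page, so the response can depend on the query string:

#!/usr/bin/env python3
import os
from urllib.parse import parse_qs

# The server sets QUERY_STRING from the part of the URL after "?".
params = parse_qs(os.environ.get("QUERY_STRING", ""))
keyword = params.get("q", ["(nothing)"])[0]

print("Content-Type: text/html")      # CGI header, then a blank line...
print()
print("<html><body><p>You searched for: " + keyword + "</p></body></html>")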

For more information on Web servers and related topics, check out the links on the next page.
Semana 03 | Lecturas | Web engineering

                                         Web engineering: managing the complexity of web systems development, Athula Ginige


ABSTRACT
In the last few years our knowledge about how to develop large complex web systems has grown rapidly. In this paper
we attempt to arrange this knowledge into a schema based on how our knowledge matures as we gain more experience
in developing large complex web systems. Based on this we propose a systematic approach to developing large complex
web systems. We call this body of knowledge, consisting of the technologies, methodologies and standards that enable us
to successfully develop large complex web systems, Web Engineering.

Categories and Subject Descriptors

H [Information Systems], H.3.4 [Systems and Software]

General Terms
Design

Keywords
Web Engineering, Process Model, Product Model, Web Site Design, Web Site Construction, Web Page Design, Web Page
Construction, Web System Design


1 WHAT IS WEB ENGINEERING?
In the last few years our knowledge about how to develop large complex web systems has grown rapidly. We are now
beginning to appreciate the complexities involved in developing large Web Systems. To successfully develop a large Web
System we need a team of people with wide ranging knowledge and skills. We need Graphic Designers to develop the
look and feel. We need people with library science background to organize the information, and develop navigation and
search mechanisms. We need database designers to develop the optimum way to store the information that is to be
accessed through the web system, programmers to develop the code, network security experts to look at required security
aspects, computer experts to decide on the appropriate hardware architecture for the Web system based on performance
requirements. We also need web architects who can come up with an overall architecture for the Web system that shows
how individual parts are put together to create the web system, and people who have the knowledge to plan and manage a
web development project. It is useful to organise the knowledge that we have gained in the last few years about how to
develop a complex web system into some schema. This will enable someone who wants to develop a large complex web
system to identify the existing knowledge and make optimum use of it in developing the new web system. By using proven
approaches, developers will be able to ensure that the end product meets its intended purpose and can be developed within
an agreed time and budget. Also, organising the current knowledge into a schema will highlight any gaps in our knowledge.
The need to have a well-understood
methodology to develop these large complex web systems becomes paramount as more and more organisations are
beginning to use the Web as a major business tool. Today the Web is extensively used as a major means of
communication with the external world as well as within an organisation and also as a tool to assist in carrying out
its business processes in a more effective way.

Many organizations and developers have successfully developed large, high-performance Web sites and applications, but
others have failed or are facing the possibility of major failures. A recent survey on Web-based application development
by the Cutter Consortium [1] highlighted the problems plaguing large Web-based projects:
         84% of the time delivered systems didn't meet business needs.
         53% of the time delivered systems didn't have the required functionality.
         79% of the time schedule delays plagued the projects.
         63% of the time projects exceeded the budget.

The primary causes of Web-system failures are a flawed design and development process and poor management of
development efforts. The way we address these concerns is critical to realizing the Web's full potential.

The knowledge that we gained over many centuries as to how to build bridges, skyscrapers, roads, irrigation systems,
motor cars, aeroplanes, global communication systems and, in more recent times, large software applications has now been
consolidated as Engineering disciplines. Some examples are Civil Engineering, Mechanical Engineering,
Telecommunication Engineering and Software Engineering. The American Heritage® Dictionary of the English
Language (Third Edition) defines engineering as: The application of scientific and mathematical principles to practical
ends such as the design, manufacture, and operation of efficient and economical structures, machines, processes, and
systems.

The researchers who first proposed the need for an Engineering approach for developing large complex web systems
defined Web Engineering as follows [2].

Web Engineering deals with the establishment and use of sound scientific, engineering and management principles,
disciplined and systematic approaches to the successful development, deployment and maintenance of high quality
Web-based systems and applications.
2 DO WE NEED AN ENGINEERING APPROACH TO DEVELOPING LARGE COMPLEX WEB SYSTEMS?
In a new and emerging discipline such as development of large complex Web systems, it is common to learn through
experience. Development of large-scale software also went through a similar phase in the early 70s. Lack of suitable
process models and application architectures gave rise to a software crisis [3]. Most large-scale software that was
developed either did not meet its specification, did not work properly, was over budget, or could not be delivered within
the agreed time frame. To manage the complexity of these software systems, people developed new process models,
application architectures and development methodologies. This was the birth of Software Engineering, Requirements
Engineering, Object Oriented development techniques, etc. Now that we are in the process of developing a new discipline,
"Web Engineering", we have a lot to learn from the Software Engineering experience. Some time ago an interesting debate,
"Can Internet-Based applications be Engineered?", took place [4]. Some argued that Web Engineering is very similar to
Software Engineering. There are many similarities between Software Engineering and Web Engineering; but we also have
to acknowledge the differences. Users are an integral part of a Web system. Thus, when developing any Web system it is
essential to have appropriate steps built into the development process that will cater for user related issues. Also the
information content and the functions of a Web site tend to evolve while being developed much more than software
applications. Often if a web system is developed to address a business problem, the introduction of a web system as a
solution changes the original problem. This in turn requires further changes to the web system: as users begin to realise
what can be done with a Web system, they will ask for new features and functionality. Further, the user interface of a
Web site needs to have much stronger aesthetic appeal to the intended user group compared to the traditional form based
user interface that we are accustomed to in many business software applications.

The two key attributes of Web systems that distinguish their development from traditional software development are the
growth of their requirements and the continual change in their information content. These two attributes mandate that Web
systems are easily scalable and maintainable, and thereby they impact the way we build these systems. Web systems need
to be designed and built for scalability and maintainability; these features can't be added later. Successfully building,
implementing and maintaining Web systems depend on how well we address the requirements of scalability and
maintainability, among other needs.

Furthermore, a Web system needs to meet the needs of many types of stakeholders: a diverse range of the system's users,
persons who maintain the system, the organisation that needs the system, and also those who fund the system development.
This makes the design and development of the system even more complex and difficult. In addition, development of a Web
system calls for knowledge and expertise from many different disciplines and requires a team of a diverse group of people
with expertise in different areas [5]. Thus, whatever development methodology we are going to use, it needs to have
specific activities that take into account the scalability and maintainability requirements, space for creativity that will
enhance the aesthetic appeal of the user interface, and the ability to respond to continuously changing user requirements. If
we can come up with an engineering approach that can accommodate all these, then we can engineer Web sites.


3 WHAT KNOWLEDGE CONSTITUTES WEB ENGINEERING?
There are many ways to classify the knowledge that Web Engineering should encompass. One way would be to organise
this into Technologies, Methodologies, Standards, Protocols, and Project planning and management techniques. We found
that looking at how we learn to develop Web systems gives more interesting insights into what knowledge should constitute
Web Engineering.

When we gain new knowledge we should be able to do something with it. Conversely if we need to do something new
we go and find the necessary knowledge for doing it. Thus, tracing how someone progresses from a simple web page to
making more and more complex web systems will provide a good overview of the knowledge that Web
Engineering should encompass.

After we become familiar with how to use the Web and the concept of a URL, we often start constructing our first web
page. For this we need to know about “Web Page Construction”. To construct a Web page we should know about
appropriate technologies, HTML and HTTP standards and a tool that can be used to construct the Web pages. A basic tool
would be a text editor to write the necessary HTML code. If we have access to a more sophisticated tool such as
FrontPage or Dreamweaver, then we don't need to know HTML in depth, as this knowledge is now embedded in the
tool.

Though we can now construct a web page, it will not be aesthetically pleasing, as we have still not learned how to
design a web page. Thus the next stage is to learn about “Web page design”. Now we will learn about colour schemes and
fonts, the way the eye traverses web pages to guide how we should lay out the contents of a Web page, the magical number
7±2 principle, the KISS principle, etc. Also we might get introduced to some of the legal and ethical issues at this
stage. It is very easy to copy other people's images and animations to include in your web page. We need to know what
can be copied and what we should not copy. Once we learn how to design and create web pages, the next attempt is to
develop a web site. Often the first effort is to design the Web site, then design the individual pages in the web site
and construct these pages. For this we need to learn about “Web Site Design”. Now we are beginning to learn about user
requirement analysis and specification. We need to know about the stakeholders, what their requirements are, who is
going to use this web site and for what purpose. We need to decide what information should be on the Web site, what is
the best way to structure this information so as to make it easy for the user to find the information they are looking for,
what navigation structure is required, etc. Sometimes we use "storyboards" to show the proposed structure.
After a while we find that constructing each and every page is a laborious task, especially if the application that we are
trying to develop has many similar pages, such as an electronic parts catalogue. Further, even if we succeed in creating all
the pages, maintaining them becomes difficult if not impossible. Now we start to learn about “Web Site Construction”. We learn how
we can store the catalogue information in a database and dynamically generate many similar looking catalogue pages. We
learn that many things that we knew about database design can be used in Web site construction. We will also learn that
relational databases are not the only way to store the information but there are other approaches such as XML repositories
that we can use. If this is an e-commerce-type site, we will learn about secure transactions and payment gateways.
After we gain the knowledge about web site construction we can develop fairly large and complex web systems. After a
while, the information in these web systems changes. If we have not thought about how to manage information and designed the
appropriate infrastructure to manage the information, maintaining these web systems becomes a problem. Now we learn
that design for maintainability and scalability is not something that we can add later on but has to be incorporated into the
design from the very beginning. We now have to learn about “Web System Design”.

This will include development of policies to manage information within the organisation and back-end systems to assist in
implementing these information management policies. Also we need to address overall performance issues especially if
we are expecting tens of thousands of simultaneous users. If system downtime is going to cost the organisation a lot of
money due to lost business, we need to develop web system architectures with built-in redundancies.
As you can see the way we approach how we develop a web system changes as we get more and more experience in
developing and maintaining web systems. A person with experience in developing and maintaining large web systems will
approach the development of a new web system in a very different way to someone who only knows about web site
design or web site construction. Thus the next level of maturity we will reach in terms of complex web systems
development is proper “Web Project Planning and Management”. Figure 1 shows how our knowledge about Web
system development evolves with time.

[Figure 1: the stages of Web development knowledge, from Web page construction and Web page design through Web site design and beyond]
There is plenty of literature about Web page construction, Web page design, Web Site design and also Web Site
construction. But there is very little published information about Web system design and Web project planning and
management, as people are only now beginning to appreciate the challenges of developing scalable and maintainable Web
systems. Another very important thing to remember when developing a Web system for an organisation is that at the start
of the project we cannot get a full set of specifications. As the project progresses, there will be requests for more and more
functionality to be added to the system or for more information to be put on the Web.

4 WEB SYSTEM DESIGN
Building and deploying a Web system is a multi-step process, in which many steps
influence one another and are iterative. As we discussed in the previous sections,
most Web systems are bound to continuously evolve and change, to meet the
changing/growing needs of the organisation. Also, development of a large complex
Web system requires knowledge and expertise from many different disciplines. Thus
we need a team involving a diverse group of people with expertise in different areas.
As the system itself is complex, the process of building and deploying a successful
large Web system meeting diverse (and possibly conflicting) needs becomes even
more complex and challenging.

We need a sound process for building Web systems that:
        Assists us in capturing the changing requirements and managing the complexity of the development process
        Assists in the integration of the know-how from various disciplines,
        Facilitates the communication among the various members involved in the development process (the development
         team, stakeholders and end-users), and
        Supports the continuous evolution and maintenance and management of the content.


their intended operational environment. One can develop a technically excellent Web site, but if it is not properly used, for
various reasons, it is a major failure. Often a Web system deployed in an organisation will have a front end and a back
end. The front end is for the public to access information about the organisation. We have no control over who will be visiting
the front-end web site. The back end is for employees of the organisation to manage and update the information in the
Web system. In our experience, the employees often need some training, and it is also necessary to build the various
information management tasks into their job descriptions. Based on our experience, we would like to emphasize that
introduction of a Web information or electronic-business (e-business) system in an organisation causes a paradigm shift
and can significantly impact the work and the way various business processes are carried out. To successfully manage
the impact and transition, the ultimate internal users of the system need to be retrained to enhance their understanding and
use of Web and web-related technologies and to successfully cope with the transition. Other factors, including
reengineering of business processes, organizational policy, changes to recruitment and human resources policies in an
organisation also contribute to successful deployment and use of the Web systems. Based on the experience we gained
from building and deploying many very large Web-based systems [6] and our studies, we strongly recommend incorporating
into the overall development process the seven essential steps (in order) highlighted in Table 1. The main reason
for major failures of most Web-based systems is that some of these essential steps were neglected or given only cursory
treatment. This knowledge related to Web system development needs to be incorporated into web project planning and
management.
5 WEB PROJECT PLANNING AND MANAGEMENT
Once we understand the various issues that need to be addressed when
developing a Web system we can then develop a process that will address all
these issues. We have been using a development process for developing Web
systems as shown in Figure 2.
5.1 Context Analysis
The first essential step in developing a Web system is context analysis. In this, we elicit and understand the major
objectives and corporate requirements of the system, gather information about the operational/application environment
and identify the stakeholders. Then, based on these needs, we decide on both the technical and non-technical
requirements, which can be classified into broader requirements (such as what the system should do) and specific
requirements (such as security, access control and performance). Within the broader context of how the Web system will
be used, the developers have to understand the specific needs associated with scalability, maintainability, availability and
performance of the system. For instance, if the information content and the functions offered by the system are going to
evolve considerably, we have to design the system in such a way that it is scalable. But, if the information on the Web site
changes frequently and if there is a need to keep the information current, then we need to design the system in a manner
that facilitate easy maintainability of information. Instead, if it is critical to ensure very high availability and catering for
very high peak or uncertain demand, we need to design the Web site to run on multiple servers with load balancing and
other performance enhancement mechanisms.

The features such as design for scalability, maintainability and performance need to be built into the initial system
architecture, as it is impossible or very hard to add these features if the initial architecture does not support it.
For example, let us consider an E-business Web site that gives product information. Information about a product (colour,
price and availability) may appear on many different pages and this information can change frequently. If this Web site was
designed as static HTML (hypertext mark-up language) pages, then every time the product information changes one has to
change each and every page that contains this information. But in reality, very often changes are made only in some pages
and not in all, and hence the information the system provides on different pages will become inconsistent. Instead, if the
information about a product is stored in a central database and the various web pages that contain this information are
dynamically created by extracting the relevant information from this database, then we only need to change information in
one place to keep the Web site current. Such a Web site will have a completely different architecture to a Web site that
consists of only static HTML pages. If a Web site is database-driven then we can also have a back-end system that will
allow an authorised person, who need not be skilled in Web page development, to make changes easily through a
form-based interface.
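
A minimal sketch of this idea follows (the table, product code and field names are hypothetical, not from the paper): the product details live in one database row, and each page is generated from that row at request time, so a change made once shows up everywhere the product appears.

import sqlite3

db = sqlite3.connect("catalogue.db")
db.execute("CREATE TABLE IF NOT EXISTS products "
           "(code TEXT PRIMARY KEY, colour TEXT, price REAL, available INTEGER)")
db.execute("INSERT OR REPLACE INTO products VALUES ('A100', 'red', 19.95, 1)")
db.commit()

def product_page(code):
    # One row in one place; every page that mentions the product is built from it.
    colour, price, available = db.execute(
        "SELECT colour, price, available FROM products WHERE code = ?", (code,)).fetchone()
    status = "in stock" if available else "out of stock"
    return "<h1>Product {}</h1><p>{}, ${:.2f}, {}</p>".format(code, colour, price, status)

print(product_page("A100"))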

Thus, context analysis is an important and significant first step in development of a system since we identify in this stage
the broader requirements such as purpose of the Web site as well as specific requirements such as maintainability,
scalability, performance, quality control mechanism for information that will be placed on the Web site and security. It
provides us with sufficient information to decide on the broader architecture or Product Model of the system. More
detailed analysis of stakeholder requirements is carried out later during the Web Site development phase.


5.2 Product Model
Based on the broader requirements and specific requirements, we need to develop a Product Model, which shows how
various components are linked together to meet those requirements. The product model should include the overall
physical system architecture (the network and the various servers - web servers, application servers, database servers, etc),
application architecture (a map of the various information modules and the functions available) and the software
architecture (various software and database modules required to implement the application architecture).
Appropriate system architecture is very important, especially if performance and security of the Web system are critical
factors.

For example, when developing the web site for the 1998 Olympic Winter Games in Nagano, Japan, meeting the performance
requirement was a critical factor [7]. Hence, a network and server architecture that includes redundant hardware, load
balancing, Web server acceleration, and efficient management of dynamic data was designed and implemented. If the
performance and security is not a critical issue, one can use a standard server architecture connected to the Internet,
without the need for special design effort.

The application architecture, also called the product model, shows a map of the various information and functional modules.
Information modules can provide the same information to all users or customised information to each user. Examples
of functional modules are a login page, a shopping trolley, etc., that will capture user input and process it. Figure 3 is an
example of the application architecture developed for the ABC Internet College (http://www.abccollege.com),
which provides students online personalised tutoring. The system dynamically generates learning materials based on
the student's past performance. When a student logs in, learning activities appropriate for that day are given on the student's
personalised home page. Based on their past performance, the student is directed to take the next standard test module, a
personalised revision paper or some practice questions in an area in which the student did not perform well in the past.
The system
has been successfully operational for the last four years, easily accommodating upgrades and enhancements.
Once the application architecture is designed, it needs to be mapped into a software architecture as shown in figure 4
displaying various software and database modules required to accomplish the required functionality. The specific
requirements associated with scalability, maintainability and quality control of information determine the appropriate
software architecture. Table 2 highlights the specific requirements and the means of fulfilling each requirement.
The product model will help us in deciding what processes we need to follow in order to develop the Web system and in
estimation of development time and cost.

5.3 Process Model
The implementation of the system based on the product
model calls for a set of activities, which include a
detailed analysis of requirements, design, testing and
deployment. We also need to carry out a set of activities
to address the non-technical issues identified in context
analysis. A process model specifies a set of sub-projects
or sub-processes
that need to be carried out to develop and implement the
overall system. For example the development of the
front-end web site and the back-end web site can be
considered as two sub-projects. This enables parallel
development, reducing the overall development time.
To carry out some of these sub-projects we can adopt various development models such as the waterfall model, the spiral
model or the rapid prototyping model from the Software Engineering discipline [8]. We can design appropriate process
models to suit different types of development and development issues, based on the type of application. For example, a
model incorporating iterative refinement of an initial prototype may be best suited to small-scale trial applications. A
competent Web engineer should be able to use the most appropriate model for the given problem, adapting it in a way that
takes into account the application being developed and the limitations and strengths of the model.
Large Web systems use server-side programming to dynamically create web pages from information stored in databases.
Various interactive functions are also implemented in software. Thus we need a process to develop the content structure,
contents, the screen layout and the navigation mechanism. Also we need a process to develop software that will be used to
deliver the content as well as provide various functions to assist in maintenance and quality assurance of this content.
In a large, complex Web application development, the persons who develop the content generally come from journalism,
library science, marketing or public relations backgrounds. The people who develop the screen layout come from a visual
arts and graphic design background. The software developers come from computing, software and IT backgrounds. Thus it
is important to make sure that the various processes enable these three groups to work and communicate effectively.


5.4 Sub Project Planning
The next phase is to develop a project plan for each of the sub
projects identified in the process model. These project plans
should list the tasks that need to be done, a timeline and the
resource requirements. There are well-developed techniques for
project planning in conventional engineering and we can use
these to develop these project plans. Based on the project plan,
development activities can take place. In order to successfully
plan the various sub-projects, one needs to know what tasks need
to be carried out, what type of skills the people carrying out
these tasks require, and the time
estimates. Often the development of the front-end Web site and the
back-end web site are done as two sub projects. The coupling of
the front-end web site and the back-end web site will be via the
data repository as shown in Figure 4. In order to properly plan
the development of these Web sites one should know how to
develop Web sites.


5.5 Web Site Development
Most Web site development consists of designing the web site and constructing the web site to deliver the content and the
required functionality. Early Web development activities were mainly focused on development of content, its presentation
and navigation. The information was stored in a server as a set of static HTML pages and the same information was
presented to all the users. Now, with the growing number of Web based business applications, there is a need to provide
customised information to users and also to get various information from the users and process them. Based on our
experience, we have developed and refined a two-stage approach to web site development: designing the web site
and constructing the web site (Figure 5). This approach decouples the creative design phase, often done by graphic
designers, from the construction phase, which is often done by software developers. As content developers and software
developers often come from different backgrounds, this de-coupling is very advantageous.

5.5.1 Web Site Design
The web site design process starts with a detailed analysis of requirements and
development of appropriate specification. A prototype is constructed from the
design which usually contains a set of sample pages that can be used for
evaluating the screen layout and navigation among different pages. Based on the
feedback, either the design or the specifications can get changed.

One can iterate through this process until stakeholders are happy with the screen
layout and the navigation structure.
The creative part of this process is the design. Figure 6 shows the various inputs to the
design process and the typical outcomes
from this process. The stakeholder requirements are given as the detailed specification
that was developed at the start of the development process. The designer should also
take into account non-technical aspects such as legal, moral and cultural issues that
are relevant to the environment in which this application will be used.
Also, knowledge of users' cognitive issues is important to develop a good design [9]. The designer should know how users
would perceive and comprehend information and how the fonts, colour and layout contribute towards enhancing
comprehension. Available technology will determine what is feasible. For example, use of very large graphics or
including video may not be possible due to limitations of the bandwidth. It is important for the designers to be aware of
these technology issues. The outcomes of the design process consist of information structure, information access methods,
various screen layouts or look-and-feel, and guidelines for new content development or processing legacy material if the
content is derived from a legacy database or other systems. Irrespective of how the contents are derived, we need to create
a structure to organise this content. The content structure depends on factors such as nature of the application, nature of
the information and what technology will be used to store the contents. This will also determine the granularity of
information that can be accessed. For instance, we can store the contents as HTML files or as XML files or in a database.
If we are going to store the contents using a database or using XML technology, we can sub-divide the contents into small
sub sections (such as title, sub-headings, document author, key words etc) and provide access to the contents based on
these subsections. If the decision was to store the contents as set of HTML files, we cannot provide this fine granularity in
terms of information access, as HTML is not a content markup language, but a presentation markup language. Once we
develop a structure for organising the information, we have to decide on navigation mechanisms that need to be provided
to access this content. Access to information can be provided by hyperlinks or search facility.

Based on the information structure, we need to develop sample Web pages to display each type of information. For
example, if the application is a product catalogue, we need to develop the Homepage and a sample page for each product
type. These sample pages will act as templates for creating other product pages which will be done by application
software in response to a user request when the web site is operational. We can use tools such as FrontPage or Dreamweaver
for developing these sample pages and the prototype Web site. By developing a prototype web site based on sample pages
we can test the proposed navigation mechanisms for ease of use and other features. If new content needs to be
developed, it is better to come up with guidelines to assist the content development process as part of the design process.
If content is drawn from a legacy system we need to develop a process to convert the legacy information to the required
structure and format. The output of the design process and the original specifications that were developed at the start of
the Web site development process will form the input to the next stage – the web site construction process.

5.5.2 Web Site Construction
Now, almost every Web server provides a suitable interface to communicate and to make use of external software
modules. Common Gateway Interface (CGI) was one of the early standards that was adopted for communication between
the web server and the external software modules. Today, there are various implementations of this concept in the form of
Application Program Interfaces (API) such as ISAPI used by the Microsoft IIS Web server. Except for simple systems
that are based on HTML pages, every other system makes use of external software modules to provide the interactivity
and customisation of information provided to the user. A simple example is processing the information a user submits via
a form. These software modules specifically developed for these applications are known as application software.
Use of application software enables us to develop maintainable and scalable Web sites, in addition to making the Web site
interactive. The basic principle of building a maintainable web site is not to store data that is going to change with time at
multiple locations. If a data element needs to appear on different web pages (for example name and telephone number of a
Professor that should appear in every subject he/she teaches) this information has to be extracted from a single location
where it is stored and then presented to the user. Thus, when the data changes (for example the Professor's telephone
number) it needs to be changed only in one location. We can further enhance the maintainability by developing a
back-end Web site that will enable authorised people (based on a user ID and a password) to make these changes through
a set of forms. This type of approach will enable us to have a decentralised maintenance approach where different people
can have responsibility for maintaining different information on the Web Site. To facilitate scalability, a Web system
needs to be built using a component-based architecture and the navigational links and buttons need to be dynamically
created.

Use of a component-based architecture enables us to easily add new functions or information modules, by simply adding
these functions as new components. As we emphasised earlier, it is very important to think about scalability and
maintainability issues right up front and the system needs to be designed for scalability and maintainability. These features
cannot easily be added later on. That is the reason why our methodology emphasises deciding the application software
architecture as part of the Product Model. This application software architecture is going to impact the processes that
are going to be used for developing the system, and this in turn will impact the development time and cost. The
application software architecture shown in Figure 4 is suitable for a scalable and maintainable web site. It can also support
a back-end web site if it is necessary to implement a decentralized maintenance scheme. Typical application software
architecture will consist of a set of routines or scripts that will communicate with the Web server via a Common Gateway
Interface or Application Program Interface. This application software can be written using a scripting language such as
Perl or ASP. Often these application software programs have to store and extract information from a data repository. This
data repository can be a relational or object-oriented database, or a set of files where information is structured using the
eXtensible Markup Language (XML). If the data repository is constructed using a relational database we can use ODBC
(open database connectivity) as the interface between the application software and the database.
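
The back-end side of this can be sketched in the same spirit (table and field names are hypothetical, and Python stands in for whatever scripting language is chosen): a form handler updates the single stored copy of a value, and every dynamically generated page then shows the new value.

import sqlite3
from urllib.parse import parse_qs

def handle_update(query_string, db_path="staff.db"):
    # Hypothetical form fields: name and phone, e.g. "name=A.+Smith&phone=555-0100".
    params = parse_qs(query_string)
    name = params["name"][0]
    phone = params["phone"][0]
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS staff (name TEXT PRIMARY KEY, phone TEXT)")
    # One change in one place; every page that displays this number picks it up.
    db.execute("INSERT OR REPLACE INTO staff (name, phone) VALUES (?, ?)", (name, phone))
    db.commit()
    db.close()
    return "Updated " + name

print(handle_update("name=A.+Smith&phone=555-0100"))
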
5.6 Web Site Maintenance
Once the Web site is developed and commissioned, we enter then into the maintenance phase. There are three major types
of maintenance:

         Content maintenance
         Software maintenance, and
         Hardware and network maintenance.

The decision on how the content will be maintained is taken at the context analysis stage. Thus what is required at the
maintenance phase is to implement the appropriate content maintenance procedures. The software maintenance can be
sub-divided into four categories: corrective, preventive, perfective and adaptive maintenance. Often, when the system is
in operation, various bugs in software can surface. Thus we need to implement a set of procedures to correct these bugs
and document the changes that were made. This is known as corrective maintenance. Also, we invariably come across
errors or omissions in software, especially in the business logic, before they cause a problem. Rectifying these is preventive
maintenance. Though the system may be functioning flawlessly, we may come up with a better way to implement a
function, and the process of carrying out this modification is known as perfective maintenance. Also, from time to time,
the requirements can change marginally such as how the commission or tax is calculated. Thus we need to carry out
adaptive maintenance to adapt the system for new business rules. It is useful to break the software maintenance activities
into these four categories, as this will enable us to prioritize the activities. For example, we need to immediately carry out
any corrective maintenance. Once a problem is discovered preventive maintenance needs to be carried out as soon as
possible. The other two we can schedule to carry out at a convenient time. We also need to periodically maintain the
hardware and the network, and fix the failures as and when they surface.

5.7 Project Management, Documentation, and Quality Control and Assurance

These functions are spread throughout the lifecycle of the system. There are well-established methodologies and techniques for performing these tasks, and proven techniques from systems engineering can be used, with minor changes, for the development of large and complex web sites.

6 PROSPECTS OF WEB ENGINEERING

As we improve our ability to build Web systems, the systems we need to build are likely to become more complex. The quality requirements and features of these systems may also change, with more emphasis on performance, correctness and availability, as we come to depend on Web systems in a growing number of critical applications where the consequences and impact of errors and failures could be serious. Further, as systems become larger, large teams of people with different types and levels of skill will be required, necessitating distributed collaborative development. As we try to exploit some of the as-yet unrealised potential of the Internet and the Web, there will be many new challenges and problems, and hopefully new approaches and directions will be developed to meet those challenges and solve the problems we face on our mission to build a better cyberspace. Like the Web, which is dynamic and open, Web engineering needs to evolve rapidly, adapting to change and responding to newer needs. Convincing developers of Web applications of the need for, and the benefits of, Web engineering approaches will go a long way towards reducing complexity and leading to successful development, provided those approaches are implemented thoughtfully.
Semana 03 | Lecturas | PHP+MySQL

   Build your own Database Driven Website using PHP & MySQL - Third Edition, Kevin Yank. Read only chapters 1 and 2 of this material.


PHP and MySQL have changed.

Back in 2001, when I wrote the first edition of this book, readers were astonished to discover that you could
create a site full of web pages without having to write a separate HTML file for each page. PHP stood out
from the crowd of programming languages, mainly because it was easy enough for almost anyone to learn
and free to download and install. The MySQL database, likewise, provided a simple and free solution to a
problem that, up until that point, had been solvable only by expert programmers with corporate budgets.

Back then, PHP and MySQL were special—heck, they were downright miraculous! But over the years, they
have gained plenty of fast-moving competition. In an age when anyone with a free WordPress account can
set up a full-featured blog in 30 seconds flat, it’s no longer enough for a programming language like PHP to
be easy to learn; nor is it enough for a database like MySQL to be free.

Indeed, as you sit down to read this book, you probably have ambitions that extend beyond what you can
throw together using the free point-and-click tools of the Web. You might even be thinking of building an
exciting, new point-and-click tool of your own. WordPress, after all, is built using PHP and MySQL, so why
limit your vision to anything less?

To keep up with the competition, and with the needs of more demanding projects, PHP and MySQL have
had to evolve. PHP is now a far more intricate and powerful language than it was back in 2001, and MySQL is
a vastly more complex and capable database. Learning PHP and MySQL today opens up a lot of doors that
would have remained closed to the PHP and MySQL experts of 2001.

That’s the good news. The bad news is that, in the same way that a butter knife is easier to figure out than a
Swiss Army knife (and less likely to cause self-injury!), all these dazzling new features and improvements
have indisputably made PHP and MySQL more difficult for beginners to learn.

Worse yet, PHP has completely abandoned several of the beginner-friendly features that gave it a
competitive advantage in 2001, because they turned out to be oversimplifications, or could lead
inexperienced programmers into building web sites with gaping security holes. This is a problem if you’re the
author of a beginner’s book about PHP and MySQL.

PHP and MySQL have changed, and those changes have made writing this book a lot more difficult. But they
have also made this book a lot more important. The more twisty the path, the more valuable the map, right?

In this book, I’ll provide you with a hands-on look at what’s involved in building a database driven web site
using PHP and MySQL. If your web host provides PHP and MySQL support, you’re in great shape. If not, I’ll
show you how to install them on Windows, Mac OS X, and Linux computers, so don’t sweat it.

This book is your map to the twisty path that every beginner must navigate to learn PHP and MySQL today.
Grab your favorite walking stick; let’s go hiking!
Who Should Read this Series?

This article series is aimed at intermediate and advanced web designers looking to make the leap into
server-side programming. You’ll be expected to be comfortable with simple HTML, as I’ll make use of it
without much in the way of explanation. No knowledge of Cascading Style Sheets (CSS) or JavaScript is
assumed or required, but if you do know JavaScript, you’ll find it will make learning PHP a breeze, since
these languages are quite similar.

By the end of this series, you can expect to have a grasp of what’s involved in building a database driven
web site. If you follow the examples, you’ll also learn the basics of PHP (a server-side scripting language that
gives you easy access to a database, and a lot more) and Structured Query Language (SQL—the standard
language for interacting with relational databases) as supported by MySQL, the most popular free database
engine available today. Most importantly, you’ll come away with everything you need to start on your very
own database driven site!

What's in this Series?

This series comprises the following 4 chapters. Read them in order from beginning to end to gain a complete
understanding of the subject, or skip around if you need a refresher on a particular topic.

Chapter 1: Installation

Before you can start building your database driven web site, you must first ensure that you have the right
tools for the job. In this first chapter, I’ll tell you where to obtain the two essential components you’ll need:
the PHP scripting language and the MySQL database management system. I’ll step you through the setup
procedures on Windows, Linux, and Mac OS X, and show you how to test that PHP is operational on your
web server.
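
As a minimal illustration of such a test (a sketch of my own; the file name is an assumption and the book's actual example may differ), a one-line script placed in the web server's document root confirms whether PHP is being executed:

    <?php
    // test.php -- illustrative only; the file name is an assumption. If PHP is
    // operational, the browser displays a page of PHP configuration details;
    // if it is not, this source code is served to the browser as plain text.
    phpinfo();
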

Chapter 2: Getting Started with MySQL

Although I’m sure you’ll be anxious to start building dynamic web pages, I’ll begin with an introduction to
databases in general, and the MySQL relational database management system in particular. If you have
never worked with a relational database before, this should definitely be an enlightening chapter that will
whet your appetite for what’s to come! In the process, you’ll build up a simple database to be used in later
chapters.

Chapter 3: Introducing PHP

Here’s where the fun really starts. In this chapter, I’ll introduce you to the PHP scripting language, which you
can use to build dynamic web pages that present up-to-the-moment information to your visitors. Readers
with previous programming experience will probably only need a quick skim of this chapter, as I explain the
essentials of the language from the ground up. This is a must-read chapter for beginners, however, as the
rest of this book relies heavily on the basic concepts presented here.

Chapter 4: Publishing MySQL Data on the Web

In this chapter you’ll bring together PHP and MySQL, which you’ll have seen separately in the previous
chapters, to create some of your first database driven web pages. You’ll explore the basic techniques of
using PHP to retrieve information from a database and display it on the Web in real time. I’ll also show you
how to use PHP to create web-based forms for adding new entries to, and modifying existing information in,
a MySQL database on the fly.
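
To give a flavour of the pattern the chapter describes (a hedged sketch only: the database, table, and column names below are assumptions and need not match the book's examples), a typical page connects to MySQL, runs a query, and loops over the result set:

    <?php
    // Illustrative sketch: list every row of an assumed "joke" table in an
    // assumed "ijdb" database. Host, user, password and names are placeholders.
    $link = mysqli_connect('localhost', 'webuser', 'secret', 'ijdb');
    if (!$link) {
        die('Unable to connect to the database server.');
    }

    $result = mysqli_query($link, 'SELECT joketext FROM joke');
    if (!$result) {
        die('Error fetching data: ' . mysqli_error($link));
    }

    while ($row = mysqli_fetch_assoc($result)) {
        echo '<p>' . htmlspecialchars($row['joketext']) . '</p>';
    }
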
SEMANA 04
Semana 04 | Lecturas

        Read Chapters 3, 4 and 5.
                   Chapter 03: Web 2.0
                   Chapter 04: XHTML
                   Chapter 05: CSS


Semana 04 | Lecturas | Writing for the Web

                                                                               Writing for the Web, Sun Microsystems


Abstract (Summary)

Technical communicators are often expected to adapt their writing for different audiences, purposes, and
media. Additionally, technical communicators are called on to write for a variety of media--such as printed
books and manuals, online support and help, Web sites, brochures, and information kits. Gregory discusses
the seven key arguments that are used to distinguish between writing for the Web and writing for print.



INTRODUCTION

Technical communicators are often expected to adapt their writing for different audiences, purposes, and
media. For example, the technical communication role may involve writing a variety of resources-including
manuals, instructions, help resources, internal policies, style guides, sales specifications, and promotional
material. In addition, technical communicators are called on to write for a variety of media-such as printed
books and manuals, online support and help, Web sites, brochures, information kits, and so on. Many
technical communicators can adapt their writing styles with ease to suit print, screen, online environments,
video, audio, or whatever medium the job requires.

The demands of these different writing tasks raise questions about how an author's communication purpose
and chosen communication medium might influence the requirements of writing. In particular, it raises
questions about whether the core writing strategies used by technical communicators are transferred
between different writing tasks. For example, are similar approaches to writing used when writing for
different media-particularly when we compare writing for the Web with writing for print?

According to Kilian (2001):

When you write for a Web site, you're not just slapping a poster up on a new kind of wall. The Web is a very
different medium from print on paper, and it requires a different kind of writing. (p. 8)

Kilian's comment reflects much of the literature about writing for the Web. Several authors describe writing
for the Web as being fundamentally different from print. They use print-all print-as a comparison point, and
frame their discussions about writing for the Web in terms that suggest its complete opposition to writing
for print. Although these authors may offer excellent guidelines for practicing Web writers, they do so in a
context that is deliberately separated from print writing.

In this article, I revisit these guidelines for writing for the Web. Specifically, I examine seven of the key
dimensions along which Web writing is often differentiated from print writing. I propose that many of the
guidelines being advocated for Web writing have a long history in commentary on writing for print. Through
a brief review of the literature relating to Web writing and a review of the literature from the print tradition,
I suggest that many of the underlying principles of writing apply to both media, and that comparisons made
solely on the basis of communication medium may not be very helpful to technical writers.

Because this broad approach to advice about the Web is not always helpful, I offer an alternative, genre, and conclude this article by arguing that genre-based comparisons and guidelines may be more helpful for
practicing writers than comparisons based only on medium. These comparisons can encourage writers to be
guided by their audience's needs and their communicative purpose, rather than being guided by the
medium for which they write.
Note that my focus in this article is on writing, and readers' reactions to written text. I am particularly
interested in the guidelines developed about writing for the Web. I do not address guidelines for document
design and navigation (some useful resources addressing design and navigation include Nord and Tanner
1993; Farkas and Farkas 2000; Rosenfeld and Morville 1998; Rubens and Krull 1985).

GUIDELINES THAT DEFINE WRITING FOR THE WEB

Most authors who write about the Web argue that writing for the Web is different from writing for print
(see, for example, Farkas and Farkas 2002; Garrand 2001; Holtz 2001; Kilian 2001; Nielsen 1999; Price and
Price 2002). In fact, when this literature on writing for the Web is considered broadly, there seems to be a
general determination that the newer medium, the Web, must be different from the traditional medium of
print. Seven guidelines emerge that are consistently used to define Web writing and to set it apart from
print writing. But is writing for the Web really distinct? The research on communicating in print suggests
otherwise.

1. Structure and design are concerns for Web writers

Guideline for the Web Holtz (2001) argues that one of the key issues that sets Web writing apart from print
is Web writing's focus on structure and design. Holtz suggests that print writers are concerned only with
content; other people, such as editors, designers, and printers, worry about format, artwork, and design. In
contrast, Holtz argues that, for Web writers, all of these issues are the writer's concern (p. 5). Web writers
must consider non-text elements because of the enormous impact these elements have on the effectiveness
of a Web site.

Holtz isn't alone in recognizing the importance of nontext elements in Web sites. Garrand (2001) notes that
Web writers need to be more than great wordsmiths; they also need to understand and address site
architecture and the capabilities of interactive media. Nielsen (1999) includes an extensive discussion of the
ways that site navigation and design issues can influence the usability of a site. Both Farkas and Farkas
(2002) and Rosenfeld and Morville (1998) give significant attention to Web structuring and navigation.

Similar guideline for print These authors are all pointing to the valid need for Web writers to consider the
architecture of their sites and to think about navigation and design as they write content. They are
recognizing the fundamental inseparability of text, design, and format, and acknowledging that readers
approach documents not just through the given content, but also through the form in which it is presented.
These ideas are important and valid. However, they are ideas that print writers must also consider, and
there is a long history within print literature about the inseparability of content, design, and format.

Although the roles of writer, project manager, editor, and designer may be kept separate in print projects,
several authors note that this is not ideal practice (Carr 199$; Duchastel 1982; Parker 1989; Schriver 1997;
Waller 1982; Wickliff and Bosley 1990). Practicing writers and designers acknowledge that separating their
roles often brings unsatisfactory results, but role separation continues, particularly in a consultancy setting
(Gregory 1997). Good information design in any medium is usually the result of collaboration between a
variety of individuals (Sless 1994), and moving away from the idea that the roles should be separate
acknowledges the interdependence of the various elements of a document.

Even if writers work within a linear structure where the words are written first and decisions about format
and design happen after the copy is finalized, this is not the way that readers approach texts. Readers do not
separate content and design-they experience them simultaneously. Content, format, and design work
together to create a complete package for readers. For example, the chosen format communicates
something to readers about what the text will be like and provides a constraint on the options available to
both writers and designers. It would be difficult to write a format-driven document like a brochure or
manual without considering both format and design as part of the writing process.

2. Write no more than 50% of what you would write for print

Guideline for the Web Nielsen (1999) argues that writers should write approximately 50% less when writing
for the Web than when writing for print, even when the same material is being covered (p. 101). This
guideline is echoed by a number of authors (Holtz 2001; Price and Price 2002).
This advice is based on research that suggests that reading from the screen is slower than reading from
paper. Because reading from screen is slow and unpleasant, and because people don't want to read a lot of
text from screen, Web authors should produce 50% less content to help with reading speed and to help
readers feel good about the site (Nielsen, pp. 101-102).

One problem with this guideline is the assumption that it is possible to define an ideal relationship between
the quantities of text suitable for print and for the Web. This advice is general and context free. It ignores
the individual situations that apply to each writer and presents a guideline that is, at best, overly simplistic.
Several authors, including Nielsen (1999), Holtz (2001), and Price and Price (2002), provide examples of
concise writing to illustrate their point. Their examples tend to be excellent examples of concise writing for
any medium, and they illustrate the advantages for readers imparted by a good, tough edit.

Similar guideline for print The lesson about text quantity is an important part of literature from the Plain
Language movement. Although fewer words and shorter sentences are a basic guideline for Plain Language,
the end result is not always a shorter document. In developing the "clear, straightforward expression"
(Eagleson 1990) that characterizes Plain Language, many writers find that they end up with a document that
is longer than the original version. Writing in a simple, reader-oriented way can sometimes mean writing
more words (Penman 1993).

The relevance of overall document length is also discussed in print-oriented information design literature. As
Tufte (1997) points out, it is the visual organization of information rather than the quantity of information
being conveyed that is a major determinant of successful information design. He argues that "clutter and
confusion are failures of design, not attributes of information" (p. 51).

In some cases, it may make sense to write 50% less for the Web. But, in other cases, it may make sense to
write more. The readers' information needs should drive these decisions, not arbitrary rules about
document length. Comparisons between printed materials and their related Web sites often show that Web sites offer more information than the print counterpart: more detail, more concrete examples, greater opportunities to delve deeply into the subject, and coverage of more timely issues. In addition, a Web site
is often designed and structured to appeal to multiple audiences, whereas a printed resource will be more
closely targeted.

3. Write for scannability

Guideline for the Web When people read from a screen, they are likely to skip and skim over the text.
Instead of reading the content in full, readers will pick out keywords, headings, lists, and points of interest.
Authors such as Garrand (2001, p. 18), Horton (1994, pp. 262-274), Nielsen (1999, pp. 105-106, 111), and
Price and Price (2002, pp. 113-130) offer several writing guidelines for improving scannability and supporting
these typical reading strategies, including:

* Use two or three levels of headings

* Use meaningful, information-giving headings

* Use bulleted lists

* Use highlighting and emphasis

* Put the most important material first

* Put the topic sentence at the beginning of every paragraph

Similar guideline for print All this is good advice. But the idea that readers skip and skim, and that we should therefore write for scannability, isn't new. It appears in discussions of technical writing (for example, Nord and Tanner 1993, and Redish 1993), in comments about Plain Language writing (such as Eagleson 1990), and in discussions of professional writing (for instance, Petelin and Durham 1992). It also appears in discussions about motivated readers who ask questions of texts (such as Steehouder and Jansen 1987; and Wright 1999). And this advice is reflected in much of the document design literature (for example, Felker and colleagues 1981; Kempson and Moore 1994; and Lewis and Waller 1993). The need to write for scannability applies to the Web and to many types of print, and the guidelines offered to Web writers are equally valid when writing for print.

4. The Web encourages restless reading

Guideline for the Web According to Farkas and Farkas (2002), one of the major differences between reading
from the Web and from print is that the Web encourages casual, restless reading behavior. People skim Web
sites, and will leave if they experience boredom or disappointment (pp. 220-221). Farkas and Farkas contrast
this situation with reading from print, which they describe as a medium where people will settle down for a
while. A similar point is made by Price and Price (2002), who suggest that Web audiences are more active
than print audiences. Instead of passively reading printed documents, Web audiences actively guide
conversations with the producers of Web pages. Instead of being authors, writers become participants in
conversations (p. xiii).

Farkas and Farkas (2002) suggest that the Web encourages restless reading for two reasons:

* Because of the difficulties that people have with reading off screen

* Because most sites are free and easily accessed (readers make little investment to start reading, so they
have little reluctance about getting out)

Similar guideline for print One difficulty with these comparisons is that the type of printed document being
discussed is not clear. Readers might settle down with a novel or even a weekend newspaper, but few
people settle down with a technical manual, an instruction booklet, or an information brochure. These
printed documents are characterized by extremely restless reading: readers usually want to find an answer
to a specific question, quickly. And while some readers may passively accept the content of all types of
documents, many readers will not; both reading theory and public relations/marketing theory have long
recognized the active characteristics of readers and the influences that readers have on whether authors are
successful with their writing.

For example, research examining the reading of brochures shows that readers take very little time to decide
whether something is worth the effort of reading (Gregory 2001). Readers expect documents like brochures
to be boring and irrelevant, so they skim the information quickly to decide whether there is anything worth
pursuing. Like Web sites, brochures are free and easily accessed. They are throw-away items in which
people usually have very little investment.

The restless reading described by Farkas and Farkas is also evident in brochure reading, and in the reading of
many other types of technical and professional writing. As Redish (1993) notes, both workplace readers and consumers decide how much attention to give to a document, including whether a document is worth any
attention at all. Readers continually decide whether a document is worth their time and effort.
5. Split information into coherent chunks

Guideline for the Web The importance of chunking is a general
guideline discussed in many books about writing for the Web. For example, Nielsen (1999) strongly
advocates careful chunking, suggesting that chunks should be used to separate ideas and allow a Web site to
carry different levels of information by offering short summarizing chunks with links to more detailed
information (p. 112). The ability to write in chunks is identified by Duffy, Mehlenbacher, and Palmer (1992) as
the key training requirement for writers of online information. Each chunk should focus on one topic,
allowing readers to access only the information that interests them.

Similar guideline for print For people with a background in writing, this guideline is not new. The idea of
chunking information can be traced back to Miller's research in 1956, which showed that people's short-
term memories are taxed when they must retain more than 7 ± 2 items, and that memory load is reduced
when the items are chunked (discussed in Spyridakis 2000). Although the term chunk does not appear
widely in the print literature, the idea that information should be divided into coherent sections is
frequently discussed (for example, Felker and colleagues 1981; and Redish 1993). Some authors suggest
that professional writers should allow the various sections to stand alone, so that readers can begin and end
at any chunk within the document and still make sense of what they read (such as Bernhardt 1986).

Although the concept of chunking is used in all types of writing, it is possible that Web content should be
chunked differently from print content. The media are different: the Web offers navigational capabilities not
available in print and a document size not limited by printing and distribution costs. In addition, Web writers
must deal with an awkward screen size and an array of navigational furniture. Rosenfeld and Morville (1998)
warn that writers should not map printed documents directly onto Web pages because the most suitable
chunking processes for the media are likely to be different (pp. 165-166). But the advice to write in coherent
chunks applies across a variety of media.
6. Web writers can't predict where their readers will start

Guideline for the Web Web sites exist as many separate, linked pages that can be viewed independently, and Web writers can never be completely confident about where their readers will start reading (see Holtz 2001, p. 6; and Farkas and Farkas 2002, p.
224). As a result, Holtz suggests that writers need to structure their information into independent parts that
make sense in their own context. Writers can rarely assume that readers have read other sections first.

In giving this advice, Holtz is referring to the way that readers arrive at different pages of Web sites. He's
arguing that each page should work as an independent segment because writers cannot assume that
readers have seen pages higher in the structural hierarchy. Holtz is describing the nonlinearity of the Web,
and contrasting this characteristic with print, which he sees as a linear medium.

Similar guideline for print Although novels and feature articles may be designed to be read in a linear
fashion, much print is not. Most readers use texts in a nonlinear fashion-by dipping into them, skipping
around, and backtracking (Nord and Tanner 1993). As Spyridakis (2000) notes, readers jump around in print,
looking at tables of contents, indexes, figures, tables, appendixes, footnotes, and glossaries. She suggests
that print can actually be less linear than a Web site, because the reading routes within print are less limited
than the reading routes in hyperlinked Web pages.

Dillon (1996) also challenges the idea that print is linear and therefore constraining for readers. He notes
that this is a common belief among advocates of hyperlinks and argues that comparing online text and print
on the basis of linearity does not provide a fair representation of either medium (pp. 29-30). Dillon finds
little evidence to suggest that readers are constrained by the linearity of print or that they read printed
documents in a straightforward start-to-finish manner.

Of course, one important difference between print and the Web is that print usually provides a navigational
context through its form. When holding a printed document, readers can immediately see how big the
document is, and where they are in relation to its whole. This is much less likely to be true in a Web
environment, so Web writers must provide these details for readers in another way. However, while
acknowledging these navigational differences, it is still important to note that print is not a fully linear
medium and that writers cannot confidently predict where readers will start reading.

Authors such as Bernhardt (1986) and others working in information design have long challenged the idea
that print is linear. Readers will start reading at the point that grabs their interest or appears to answer their
question. In print, as on the Web, writers find it difficult to predict where their readers will start. Readers'
reading patterns are constrained by two issues that apply across all media: their interest (or involvement) in
a particular topic or document, and the personal reading patterns that they bring to each reading situation.
7. Readers "pull" the Information they need from the Web

Guideline for the Web The Weh is often described as a user-driven, "pull" medium. Readers actively pull
from Web sites only the information that interests them, and other material is ignored. Holtz (2001) sees
this Web characteristic as another point of difference with print; in print, readers are given what writers
want to give them (p. 6). Rouet and Levonen (1996) define readers' progression through hyperlinked online
documents as being "user controlled," whereas in print, the reading sequence is directed by the author and
can be passively accepted by readers. Progression through hyperlinks requires active decisionmaking by the
reader (p. 12).

Similar guideline for print But the contrast between the Web and print is really not so clear. In both media,
writers offer information to readers and readers take what interests them. Both media simultaneously
"push" information, while readers "pull" what they want. These two perspectives are recognized in many
models of the reading process (see Hatt 1976; and Wright 1999).

Readers are no more captive in print than they are on the Web. The decision to read is always a conscious
one, and readers can decide to terminate their reading at any point due to disinterest, inability to
comprehend, boredom, lack of time, change of circumstance, or simply because they reach the end
(Goodman 1985, p. 835). Authors such as Dervin (1983) recognize that information can't be pushed onto
captive audiences. Instead, readers will access what information interests them (or answers their questions)
at the time that is most convenient to them and use the medium that they choose.

As Redish (1993) notes, technical documents are used as tools; readers scan documents to find important
information, grab that information off the page, and then act on it. This tendency to "pull" information
applies in both print and online media.

SOME LIMITATIONS OF THE RESEARCH FROM WHICH WEB GUIDELINES EMERGE

The guidelines discussed in the previous section may be useful for technical writers writing for a variety of
media. But, in the Web literature, these guidelines seem to be based on limited research. For example,
Jakob Nielsen (1999), one of the most widely quoted authors discussing Web usability, argues that Web
writing should be approached differently from print writing. He identifies five reasons for the difference:

1. Reading online is around 25 percent slower than reading print

2. The Web is a user-driven medium where users feel they have to move around

3. Each page competes with many others for attention

4. Users are never sure whether they are looking at the best page for their topic

5. Users don't have time to work hard for their information. (p. 106)

Although Nielsen offers many useful guidelines in Designing Web usability: The practice of simplicity and at
his Web site (http://www.useit.com), he offers little detail about the research that informs these five points
of difference (this point is also noted in a useful review of Nielsen's book by Racine 2002). Yet Nielsen's five
reasons are quoted by several authors and seem to offer a key basis for differentiating between print and
the Web.

The evidence for online reading being slower than print reading is discussed in Dillon's (1992) review of the
literature relating to reading from paper versus reading from the screen. Dillon reviews several studies that
collectively show that reading from screen is 20-30% slower than reading print. Rubens and Krull (1985) also
discuss studies that show that reading from the screen is slower than reading print, in part because of the
lack of character legibility on screen. Nielsen (1999) argues that this reading speed problem will be solved
over time as high-resolution monitors come into common use (p. 103). However, this difference between print and screen reading is currently widely accepted.
There is little specific research to support the other four points of difference discussed by Nielsen.
However, these points make intuitive sense for Web users and are widely accepted in the field. The question
of interest for this article is not whether these points relate to the Web; instead, we need to ask whether
these points also apply to print. In other words, rather than being key points of difference, are they points
that describe typical reading practice across both media?

If we look at specific categories of print, such as government information, technical writing, business writing, promotional materials, and community education, Nielsen's points may well apply. A solid body of literature
supports the argument that readers of print are choosy about what they read, are faced with many
documents competing for their attention, may question whether the information they read applies to them,
and adopt the principle of "least effort" as they read (Goodman 1985; Schriver 1997; Steehouder and Jansen
1987; Wright 1989, 1999).

Perhaps it's possible that Nielsen's points have particular relevance on the Web because of the Web's
physical characteristics-such as lack of context, lack of a physical form that readers can annotate, freedom of
navigation, and hyperlinking. However, the research on communicating in print suggests that successful
writers will consider these issues no matter what medium they are writing for.

Although many authors argue that Web writing is fundamentally different from print writing, it is important
to note that this approach is not universal. For example, Spyridakis (2000) draws on the history of print and reading models to develop heuristics for writing and evaluating Web pages. She notes that although there is
widespread discussion about the differences between the Web and print, "the two mediums may be more
similar than one might think." In addition, Farkas and Farkas (2000, 2002), while discussing differences based
on navigation and page structure, recognize that writing for the two media has many similarities. Horton
(1994) argues that the principles of good writing are the same across different media, but that applying
these principles may be more difficult in an online context (p. 201).

USING GENRE TO COMPARE THE WEB AND PRINT

Comparisons between print and the Web are often given at a very general level. In particular, authors rarely
specify what types of print documents and what types of Web sites are under discussion. Instead, they
suggest that the Web, as a medium or genre, can be contrasted with print, which is seen as a different
medium or genre. It often seems that these general comparisons are drawn from discussions of print that
are based on novels and newspaper feature articles, and discussions of the Web that are based on sites
promoting businesses and government policies.

Instead of comparing the Web and print at this general level, it may be more helpful for practicing writers if
authors focused on the recognizable communicative purposes of documents, that is, on genre (Swales
1990). Using genre as the basis for comparison would allow writers to focus on their rhetorical intent and on
the contexts within which their documents are used. This may provide both more practical and more rich
points of comparison than current approaches which describe differences based only on communication
medium.

A genre is a relatively stable form of communication that develops through the repeated communication
practices of a discourse community and is recognized by the members of that community (Bargiela-
Chiappini and Nickerson 1999, p. 8). It is primarily characterized by the communicative purposes that it
intends to fulfill (Bhatia 1993; Swales 1990) and is recognized because it has become standardized-with
conventionalized language and patterns of organization (Bhatia 1999).

Members of a discourse community use genres that they recognize to achieve particular purposes
(Orlikowski and Yates 1994). This means that genres become templates for social action-when readers
encounter a text and identify it as belonging to a recognizable genre, they know how to deal with that text
and what to expect from it. For example, individual experience within a work context tells us how to deal
with e-mail newsletters, wizards, or instructions.

Most definitions of genre incorporate elements of communicative purpose and common form (Orlikowski
and Yates 1994). For example, we recognize a pop-up error message both for the purpose that it fulfills and
for the accepted form that it takes.
* The communicative purpose of a genre is based in a purpose that is recognized and reinforced within the
community; it is not a purpose that can be based simply in an individual author's purpose in communicating.
Authors cannot impose a genre's communicative purpose on readers because the genre is built through the
readers' and author's common understandings of what the genre is intended to achieve. So a memo
becomes a way of communicating work information when it is enacted by its author in such a way that its
purpose is recognized by its readers.

* The accepted form of a genre can include the communication medium-for example, the work meeting
genre typically invokes the idea of face-to-face interaction in a common location. But communication
medium is only part of a genre's form; the form can include other features such as structure, acceptable
interactions, or allowable language. In addition, in many genres, the communication medium is flexible-for
example, a memo can be recognized as a memo whether it is presented on paper or via email, and a
workplace meeting can be recognized as a meeting whether it happens in a meeting room or in an online
environment.

The value of genre is that it provides authors with heuristics for developing texts and it provides readers
with a framework for reading and understanding. The rationale behind a genre establishes constraints on
the contributions that can be made-in terms of both content and form. Authors cannot break away from the
constraints of a genre without producing a text that is noticeably odd. And readers draw on their prior
knowledge of a genre to interpret a text (Bhatia 1993; Swales 1990). This means that working within a
recognizable genre makes communication more easily recognized by readers while also giving authors a
framework for their task.

The distinction that is important in the context of print writing and Web writing is to note that genre is not
simply defined by communication medium. Communication medium has a role to play and can influence
which genres are accepted (Crowston and Williams 2000). And it is possible that some uniquely Web-based
genres are emerging (such as personal home pages; see Dillon and Gushrowski 2000), while other genres
will continue to operate across many different media and environments (such as newsletters, which exist in
print and various online forms).

The research literature discussed in this article shows that there are many similarities between Web writing
and print writing, and these similarities are based on genre. Writing technical manuals for the screen shares
important similarities with writing technical manuals for print. For example, writing for either medium
requires the writer to think about how readers will shuffle between the text and the activity being
described, how readers will dip into the text to solve problems, and what background knowledge the
readers bring to the task. In the same way, writing promotional materials for the Web shares important
similarities with writing promotional materials for print.

One implication of defining genre by communication purpose is that within many Web sites, multiple genres
must be evident. Although a company's Web site may have the overall purpose of communicating with its
audiences, within that site a number of different audiences and a number of different needs must be served.
Different parts of the site may be focused on promotion, sales, research, education, and internal purposes
such as record keeping, administration, and training. By using genre as a guide in planning their writing,
writers can be alert to the need for different approaches in writing different parts of the site.

I'd like to use the Web site developed by Queensland University of Technology (QUT) to illustrate this point.
QUT's site is designed to communicate with several audiences-including existing students, prospective
students (locally and internationally), staff, regulators, business partners, and funders. Sections of the site
designed to communicate with these different audiences adopt different writing styles, particularly in terms
of their language and tone.

For example, the pages designed to promote the university to potential students, such as "About QUT and Brisbane" (http://www.qut.edu.au/services/aboutqut/), "Location and campuses" (http://www.qut.edu.au/services/aboutqut/location/), and "History" (http://www.qut.edu.au/services/aboutqut/history.jsp), use colorful language and a promotional tone to sell the university and its
location. The location is described as "one of Australia's most beautiful," while the university is described as
having a "rich past" and an "exciting future."
In these promotional pages, the headlines tend to be intriguing (for example, "Top artists, sit-ins and a
boxing ring"), while emphasizing the strengths that the university mentions in all media promotion (a
vocational education, particularly at undergraduate level, and Australia's largest provider of bachelor's
degree graduates into the workforce-these strengths appear in television advertising, displays at
promotional events, and both print and electronic promotional resources).

In contrast, pages designed to communicate with existing students about QUT's policies and procedures
adopt a simple, instructional, authoritative tone, such as the library's "Borrowing: Students" page (http://www.library.qut.edu.au/students/) and the computing services' "Getting started" page
(http://www.scg.qut.edu.au/GettingStarted/). These instructional pages assume some prior knowledge
about QUT and are written in second person, while the promotional pages assume no prior knowledge and
adopt the third person. The tone shifts again in the pages designed to explain QUT's rules and policies for
staff and students. For example, the "Manual of policies and procedures" (http://www.qut.edu.au/admin/mopp/) adopts a tone that is precise, formal, authoritative, and distant, and uses a
third person stance. The introduction to Chapter C, "Teaching and learning," begins,

This chapter contains information, policies and procedures relating to the design, development, delivery and
monitoring of academic programs. . . . This chapter has most relevance for academic staff, academic
managers and general staff who are involved in academic planning, course development and assessment.
(http://www.qut.edu.au/admin/mopp/C/C_01.html)

QUT goes to the extent of naming its site differently for its different audiences: for students and staff, the education URL http://www.qut.edu.au is promoted, while in the business community, the corporate URL http://www.qut.com is promoted.

The QUT site varies internally according to the genre being created, and the styles used reflect the
communicative purpose of that genre. The writing style used in QUT's Web-based promotional material has
much in common with the writing style used in its print-based promotional material, but it varies
significantly from the Web-based instructional material. In addition, there are strong similarities between
the Web-based and print-based instructional materials that cannot be found when other Web pages or
printed resources are compared.

A number of authors have already called for a genre-based approach to discussing Web writing. For example,
Farkas and Farkas (2002) note that Web genres are starting to emerge (p. 9), and that writing for different
genres requires different approaches. Price and Price (2002) devote a large section of their book to
discussing different generic forms. A very useful element of Price and Price's book is that they consider how
each of their writing guidelines should be adapted for different types of online writing (such as writing to
inform or writing to entertain). They recognize that their writing guidelines will be differently useful for
different types of Web documents. And authors such as Crowston and Williams (2000), Dillon and
Gushrowski (2000), Walker (2002), and Gonzalez de Cosio and Dyson (2002) reflect a growing level of
interest in the application of genres to Web-based communication.

The existing literature about genre and the Web varies in the emphasis it gives to communication medium.
Some authors encourage writers to define genre according to the medium used, with sub-genres being
created by the communicative purpose of the text (this approach would define the Web as a recognizable
genre, with promotional sites and online help as sub-genres). This approach moves away from the standard
definitions of genre discussed in more general genre theory (by authors such as Bargiela-Chiappini and Nickerson 1999; Bhatia 1993; Orlikowski and Yates 1994; Swales 1990). In general genre theory, genre is
defined according to both communicative purpose and form, while allowing sub-genres to be created by
more specific issues such as medium or format (this approach would define manuals as a genre, with paper-
based manuals and online manuals as recognizable sub-genres).
This distinction may seem to be splitting hairs. After all, most authors discussing genre recognize that it is a
fuzzy concept, with overlapping boundaries and subsets, and that genre is more helpful to use than define
(de Beaugrande and Dressler 1981; Orlikowski and Yates 1994). However, an approach based primarily on
communicative purpose and form is likely to be more useful for practicing writers than an approach based
primarily on communication medium. Considering both purpose and form encourages writers to look for
similarities and differences that occur within and between genres. Considering genre as part of the writing
process, with an emphasis on communicative purpose and form, will be helpful for practicing writers
because it

* Encourages writers to consider the needs and expectations of their audience

* Encourages writers to consider the uses to which their texts will be put

* Provides writers with a framework for thinking about their texts; in many cases, this framework is already
well established in the print environment and may only need adaptation rather than re-invention for the
Web environment

* Provides writers with an avenue for drawing on the long history of research in print writing as they
consider the most appropriate approaches to writing for the Web

The difference that I am proposing is one of focus. Instead of considering that writing genres are primarily
characterized by media (such as print vs. Web), I suggest that writers will find genre theory more helpful in
their work if they focus first on communication purpose (such as instructional writing vs. promotional
writing).

CONCLUSION

In this article, I have discussed seven key arguments that are used to distinguish between writing for the
Web and writing for print. I have argued that instead of providing a clear distinction between Web writing
and print writing, these points actually provide valuable guidelines for many styles of writing in both media.
Many of the guidelines advocated for Web writing are regularly applied to print writing and have a long
history in the print literature.

Instead of providing comparisons that are based primarily on communication medium, it may be more
helpful for practicing writers to make comparisons that are based on genre, with a focus on communicative
purpose and form. Using genre as the point of comparison will allow writers to explore both the constraints
offered by the genre they are working within and any additional constraints imposed by the communication
medium. Most importantly, a genre-based approach to writing will allow writers to consider the needs and
expectations of their audience first, well before they allow their writing to be controlled by the
communication medium through which it will be published.

It is possible that, in our enthusiasm to embrace the new online medium, we have focused more on the
differences between media than on their similarities. We are rushing to invent new independent theory,
often without considering what has come before. Clearly there are differences between print and the Web,
just as there are differences between print and television. There are also wide differences between different
forms within each medium. These differences, which are recognized through genre, may be important to
readers and need to be questioned by technical writers. But many of the fundamental writing issues that
communicators should consider appear to apply in both print and Web environments.
Semana 03 | Lecturas | Usability on the Web

                                                 Usability on the Web is not a Luxury, Jakob Nielsen and Donald A. Norman


Usability On The Web Isn't A Luxury
On the Internet, it's survival of the easiest: If customers can't find a product, they can't buy it. It's cheaper to
increase the design budget than the ad budget, and attention to usability can increase the percentage of
Web-site visitors who complete a purchase.

By Jakob Nielsen and Donald A. Norman

The Web puts user experience of the site first, purchase and payment second. On the Web, users first
experience the usability of a site and then buy something. Give users a good experience and they're apt to
turn into frequent and loyal customers. But the Web also offers low switching costs; it's easy to turn to
another supplier in the face of even a minor hiccup. Only if a site is extremely easy to use will anybody
bother staying around.

The real difference between a person's behavior on the Web and in the physical world of real stores involves
switching costs--how much effort it takes to switch from one vendor to another. In a physical store, the
costs of switching are high. The person has driven to the store, entered the building, and walked deep into
the interior. Even when faced with dwindling supplies, inattentive or rude salespeople, and lines at the
checkout counter, the purchaser is apt to stick with it. The cost of leaving, going to another store, and then
possibly encountering the same behavior is usually not worth the effort. Of course, in the physical world,
many people then never return.

On the Internet, switching costs are low. If you don't find what you want, the competition is only a mouse-
click away. Get a questionnaire pushed in your face, and not only will you probably not answer it, but you're likely to turn away annoyed and go somewhere else, never to return.

Vendors such as Amazon.com Inc. try to overcome switching costs in several ways. First, they make it easy to
find the item the potential buyer wants. Second, they make it worthwhile to return. For example, the more
items bought at Amazon.com, the better its purchase recommendations will be. Both of us have at times
purchased items from the recommendation lists, even though we didn't start off intending to buy those
items. That's a good user experience and an even better sales ratio.

Third, Amazon.com uses its affiliates list to make its visitors part of the family and lets them earn money by
recommending the site to others. And fourth, the purchase process is as easy as any on the Web, again
fighting the low psychological cost of switching with the low psychological cost of purchasing.

Studies of user behavior on the Web find a low tolerance for difficult designs or slow sites. People don't
want to wait. And they don't want to learn how to use a home page. There's no such thing as a training class
or a manual for a Web site. People have to be able to grasp the functioning of the site immediately after
scanning the home page--for a few seconds at most.

Of course, there are no rules without exceptions, and there are a few cases where a Web site is so useful
that people are willing to spend some time learning the site; extranets sometimes fall in this category. But
it's dangerous to assume that a site will be so useful that people will give it more leeway than they give the
Web's other 11 million sites. It's arrogant, as well. Most brands are not nearly as treasured as their owners
would like to believe, especially when switching costs are factored in--take Coca-Cola, one of the most
famous brands in the world. Suppose you're on an airplane and ask the flight attendant for a Coke. If you're
told "we only have Pepsi; is that OK?" the odds are pretty good you'll say yes.

Success on the Internet depends on multiplying the number of people who will visit a home page times the
proportion who actually buy anything--the percentage who become customers. It's expensive and difficult
for most companies to get people to the Web site in the first place. That's why advertising budgets are so
high--increasing the second number is a lot easier.
To double the success of a site, you must either double the visitors or double the conversion rate. Doubling
the number of visitors could require doubling the advertising budget, or more. Doubling the percentage who
purchase may require simply redesigning your Web site guided by a human-centered design process.
Considering that many sites have conversion rates on the order of 1% or 2%, it's much more cost-effective to
focus on the second number in the formula.
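
To put illustrative numbers on that arithmetic (the figures below are invented for the example, not taken from the article), the snippet compares doubling the visitor count with doubling the conversion rate; both double the number of customers, but the redesign is usually far cheaper:

    <?php
    // Illustrative arithmetic only; every figure here is an assumption.
    $visitors   = 100000;  // visitors delivered by advertising
    $conversion = 0.01;    // 1% of visitors complete a purchase

    echo $visitors * $conversion, PHP_EOL;         // baseline: 1000 customers
    echo (2 * $visitors) * $conversion, PHP_EOL;   // double the ad budget: 2000 customers
    echo $visitors * (2 * $conversion), PHP_EOL;   // double the conversion rate: 2000 customers
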

Most Web sites today are tough to use. Usability studies typically find a success rate of less than 50%. When
the average person is asked to accomplish a simple task on the average Web site, the outcome all too often
is failure.

Why is the Web so successful if the average user experience is one of failure? Mainly because a few sites do work--at least from the perspective of their users. Even though 90% of Web sites are obviously poorly
designed from this perspective, people spend only about 10% of their time on them. As soon as they
discover that the site is filled with bloated graphics and little useful information, they go elsewhere. Worse,
they're unlikely to return. If a site crashes their browser, they just don't go there again. If they can't find the
product they want, they will go elsewhere--and they're apt to stick with the site they know works.

The initial visit to the site is triggered by the advertising budget or other promotional links. When people
have a positive user experience, they are apt to return, and you get useful exposure, if not revenue, from
your ad dollar. When they have a bad user experience, they are likely never to return--a complete waste of
money. Over time, people gravitate to sites that treat them well and are easy to use.

Consider our experience with shopping for a printer (see sidebar story, "Walk-Through: A Usability
Experiment"). Hewlett-Packard and Canon Inc. failed in similar ways. At the home page, they failed to
provide links that answer what must be a common question among their visitors: I'd like to buy a printer,
but I'm not sure which product is best for me. Assuming visitors had the tenacity to find the page that
covered these products, it's close to impossible to select an appropriate printer from the Web pages
because those pages are devoid of useful information. On Canon's site, for example, the category of
"mobile" printers is the only one that makes sense. Otherwise, how would you know whether you needed a
"performance," a "value," or a "specialty" printer?

Advice such as "if you print more than 50 pages per day, select one of these," would have helped users buy.
Additional, but useless, information is offered through a "rollover" user interface: By rolling the mouse over
the images, you learn that model 6000 is "the smart choice for all your printing needs" whereas model 2000
is "the printing powerhouse you need for all your home or business projects."

Aside from the fact that the information doesn't help visitors differentiate among the products (all products
seem to be perfect), this rollover interface is too difficult to find and forces users to memorize the messages
as they move from product to product. This site works only for users who have plenty of time to spend on
aimless clicking.

Suppose a company has a good advertising budget and a well-designed site. Is this sufficient? No. Studies
show that a remarkable percentage of people visit a site, put items into shopping carts, and then leave the
site without finishing their purchases. Why would people discard their carts? After all, they've done the hard
part--they found the site and figured out which items to purchase, but left when all they had to do was pay.

There are two reasons why people discard carts. Most shoppers like to do comparisons among the
alternatives available. They'll make a tentative selection, then compare it with a few alternatives. This isn't
easy to do on many Web sites. One way is to remember to open new windows for each of the items to be
compared: Most users aren't comfortable doing this. The other way is to dump all the comparisons into the
shopping cart as the simplest way of keeping track of the alternatives. If the site doesn't make it easy for
people to compare, they will invent ways to do so.

The second reason for discarding the cart is much more fundamental: the payment process is too onerous.
Are customers asked unnecessary questions? Does the process get in the way? Is it overly difficult? Is the
Web site secure, and are customers assured that their privacy is respected (and does the site actually
respect it)? Does the site provide information about stock availability, shipping, and other extra costs before the
purchase process, or is the customer surprised midway through?
Making these final steps as pleasant and easy as possible is just as important as any of the other steps. The
customers are already at the site. They have already decided what items to purchase. If they leave at this
point, it's the site's fault--from a business point of view, this is inexcusable.

The good news is that it's not difficult to have a superior user experience. Since most sites are so bad, it
doesn't take much to stand out and be one of the easiest sites on the Internet. Get rid of the spinning logos
and boastful marketing; focus the site on people's needs in plain language in a layout that's easy to scan and
fast to download.

Providing a customer-centered organizational architecture is important, but it can be surprisingly difficult. It
means that competing product lines within a company might have to cooperate on the structure of the Web site.
Remember, customers don't care how the company is organized; they just want to know which product to
buy and what its characteristics are. Then the customer wants to be able to buy it, right then and there. If a
company doesn't wish to compete with its distributors, it should make it easy to go to consumer sites and
get to the item in question immediately.

Although designing a Web site properly is easy in principle, it takes the right staff: Professionals in user
experience understand people. They know how to do field research, design for interaction, develop rapid
prototypes, and do rapid tests. They work closely with graphic designers, Web coders, and marketing. It's
critical to have the assistance of designers who understand people, that the group has an appropriate
budget, and that it has sufficient authority to execute their recommendations.

A deep understanding of needs requires field studies of customers. Developing the perfect Web site is a
major undertaking, especially because it needs to overcome the Internet's primitive technical limitations.
Luckily, the site doesn't have to be perfect--just sufficiently better than competing sites.

How to get a better site? Observe real customers as they actually use the site. A professional user-
experience team will go to the customer's home, office, or place of employment and watch as the customer
uses the site.

This isn't at all the same as a more familiar research tool, focus groups, which reveal what customers think,
not what they do. Focus groups reveal what customers think they think, how customers think they behave,
but not what they actually believe or what they actually do. Don't trust what customers say--trust what they
do.

Usability isn't a luxury on the Internet; it's essential to survival. It's the key technique for superior customer
relationships--more than any other technology we tend to associate with customer-relationship
management on the Web.

Because switching costs are so low, attention to usability increases the percentage of those who complete a
purchase after visiting the site. It's a lot cheaper to increase the human-centered design budget than to
double the advertising budget. The Internet follows a kind of Sheer Design Darwinism: survival of the easiest.
SEMANA 05
Semana 05 | Lecturas

Semana 05 | Lecturas | IEEE

                              Software architecture: introducing IEEE Standard 1471. Maier, M.W.; Emery, D.; Hilliard, R.
Semana 05 | Lecturas | Electronic Payment Systems

         State of the Art in Electronic Payment Systems, Asokan, Janson, Steiner, Waidner.


The exchange of goods conducted face-to-face between two parties dates back to before the beginning of recorded history.
Eventually, as trade became more complicated and inconvenient, humans invented abstract representations of value.
As time passed, representations of value became more and more abstract, progressing from barter through bank notes,
payment orders, checks, credit cards, and now electronic payment systems. Traditional means of payment suffer from
various well-known security problems: Money can be counterfeited, signatures forged, and checks bounced.
Electronic means of payment retain the same drawbacks and some additional risks: Unlike paper, digital "documents" can
be copied perfectly and arbitrarily often; digital signatures can be produced by anybody who knows the secret
cryptographic key; a buyer's name can be associated with every payment, eliminating the anonymity of cash.
Thus without new security measures, widespread electronic commerce is not viable. On the other hand, properly designed
electronic payment systems can actually provide better security than traditional means of payments, in addition to
flexibility of use. This article provides an overview of electronic payment systems, focusing on issues related to security.
Pointers to more information on several payment systems described can be found at http://www.semper.org/
sirene/outsideworld/ecommerce.html.

ELECTRONIC PAYMENT MODELS

Commerce always involves a payer and a payee—who exchange money for goods or services—and at least one financial
institution—which links "bits" to "money." In most existing payment systems, the latter role is divided into two parts: an
issuer (used by the payer) and an acquirer (used by the payee). Electronic payment is implemented by a flow of money
from the payer via the issuer and acquirer to the payee. Figure 1 shows some typical flows of money in the
case of prepaid, cash-like payment systems. In these systems, a certain amount of money is taken away from the payer
(for example, by debiting the payer's bank account) before purchases are made. This amount of money can be used for
payments later. Smart-card-based electronic purses, electronic cash, and bank checks (such as certified checks) fall into
this category. Figure 2 shows some typical flows of money in the case of bank-card-based systems, which include
pay-now systems and pay-later systems. In pay-now payment systems, the payer's account is debited at the time of
payment. Automated-teller-machine (ATM) cards fall into this category. In pay-later (credit) payment systems, the
payee's bank account is credited the amount of sale before the payer's account is debited. Credit card systems fall into
this category. From a protocol point of view, pay-now and pay-later systems belong to the same class: Because a payment
is always done by sending some sort of "form" from payer to payee (whether it be a check or credit card slip or some other form),
we call these systems check-like. Both types of payment systems are direct-payment systems: A payment requires an
interaction between payer and payee. There are also indirect payment systems, in which either the payer or the payee
initiates payment without the other party involved online. Electronic funds transfer is one example of an indirect
payment system.

SECURITY REQUIREMENTS

The concrete security requirements of electronic payment systems vary, depending both on their features
and the trust assumptions placed on their operation. In general, however, electronic payment systems
must exhibit integrity, authorization, confidentiality, availability, and reliability.

Integrity and authorization

A payment system with integrity allows no money to be taken from a user without explicit authorization by that user. It
may also disallow the receipt of payment without explicit consent, to prevent occurrences of things like unsolicited
bribery. Authorization constitutes the most important relationship in a payment system. Payment can be authorized in
three ways: via out-band authorization, passwords, and signature.

Out-band authorization. In this approach, the verifying party (typically a bank) notifies the authorizing party (the
payer) of a transaction. The authorizing party is required to approve or deny the payment using a secure, out-band channel
(such as via surface mail or the phone). This is the current approach for credit cards involving mail orders and telephone
orders: Anyone who knows a user‘s credit card data can initiate transactions, and the legitimate user must check the
statement and actively complain about unauthorized transactions. If the user does not complain within a certain
time (usually 90 days), the transaction is considered "approved" by default.

Password authorization. A transaction protected by a password requires that every message from the authorizing
party include a cryptographic check value. The check value is computed using a secret known only to the authorizing and
verifying parties. This secret can be a personal identification number, a password, or any form of shared secret (defined in
the sidebar "Basic Concepts in Cryptography and Security"). In addition, shared secrets that are short—like a
six-digit PIN—are inherently susceptible to various kinds of attacks. They cannot by themselves provide a high degree of
security. They should only be used to control access to a physical token like a smart card (or a wallet) that performs the
actual authorization using secure cryptographic mechanisms, such as digital signatures.
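
As a rough sketch of such a cryptographic check value, the following computes a message authentication code over a payment order with a shared secret; the choice of HMAC-SHA-256 and the message layout are assumptions for illustration, not details of any particular system:

# Illustrative sketch; the secret, message format, and MAC algorithm are assumed.
import hmac, hashlib

shared_secret = b"secret shared by authorizing and verifying parties"
order = b"pay 19.99 EUR to merchant 1234, transaction 42"

# The authorizing party sends the order together with this check value.
check_value = hmac.new(shared_secret, order, hashlib.sha256).hexdigest()

# The verifying party recomputes the MAC and compares in constant time.
def verify(secret: bytes, message: bytes, received: str) -> bool:
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received)

print(verify(shared_secret, order, check_value))            # True
print(verify(shared_secret, b"pay 1999 EUR", check_value))  # False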

Signature authorization. In this type of transaction, the verifying party requires a digital signature of the authorizing
party. Digital signatures provide nonrepudiation of origin: Only the owner of the secret signing key can "sign" messages
(whereas everybody who knows the corresponding public verification key can verify the authenticity of signatures.)
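
A minimal sketch of signature-based authorization, using the third-party Python cryptography package (an assumed choice; any digital signature scheme with certified verification keys serves the same purpose):

# Illustrative sketch; the library choice, key size, and order format are assumptions.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The authorizing party (payer) holds the secret signing key.
signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
verification_key = signing_key.public_key()   # published with a certificate

order = b"transfer 250.00 EUR to account 987654, order 2024-0042"
signature = signing_key.sign(order, padding.PKCS1v15(), hashes.SHA256())

# The verifying party checks the signature; verify() raises InvalidSignature
# if either the order or the signature has been altered.
verification_key.verify(signature, order, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
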
Confidentiality

Some parties involved may wish confidentiality of transactions. Confidentiality in this context means the
restriction of the knowledge about various pieces of information related to a transaction: the identity of
payer/payee, purchase content, amount, and so on. Typically, the confidentiality requirement dictates that this information
be restricted only to the participants involved. Where anonymity or untraceability are desired, the requirement may be to
limit this knowledge to certain subsets of the participants only, as described later.

Availability and reliability

All parties require the ability to make or receive payments whenever necessary. Payment transactions must
be atomic: They occur entirely or not at all, but they never hang in an unknown or inconsistent state. No payer would
accept a loss of money (at least not a significant amount) due to a network or system crash. Availability and
reliability presume that the underlying networking services and all software and hardware components are sufficiently
dependable. Recovery from crash failures requires some sort of stable storage at all parties and specific resynchronization
protocols. These fault tolerance issues are not discussed here, because most payment systems do not address them
explicitly.

TECHNOLOGY OVERVIEW

Electronic payment systems must enable an honest payer to convince the payee to accept a legitimate payment and at the
same time prevent a dishonest payer from making unauthorized payments, all the while ensuring the privacy of honest
participants. The sidebar "Information Sources for Representative Payment Systems" lists some examples of payment
systems, categorized according to the technique used for authorizing a money transfer from the payer to the
payee.

Online versus offline

Offline payments involve no contact with a third party during payment—the transaction involves only the payer and
payee. The obvious problem with offline payments is that it is difficult to prevent payers from spending more money than
they actually possess. In a purely digital world, a dishonest payer can easily reset the local state of his system to a prior
state after each payment. Online payments involve an authorization server (usually as part of the issuer or acquirer) in
each payment. Online systems obviously require more communication. In general, they are considered more secure than
offline systems. Most proposed Internet payment systems are online. All proposed payment systems based on electronic
hardware, including Mondex and CAFE (Conditional Access for Europe), are offline systems. Mondex is the only system
that enables offline transferability: The payee can use the amount received to make a new payment himself, without
having to go to the bank in between. However, this seems to be a politically unpopular feature. CAFE is the only system
that provides strong payer anonymity and untraceability. Both systems offer payers an electronic wallet, preventing
fake-terminal attacks on the payer‘s PIN. CAFE also provides loss tolerance, which allows the payer to recover from coin
losses (but at the expense of some anonymity in case of loss). Mondex and CAFE are multicurrency purses capable of
handling different currencies simultaneously. All these systems can be used for Internet payments,
and there are several plans for so doing, but none is actually being used at the time of this writing. The main
technical obstacle is that they require a smart card reader attached to the payer‘s computer. Inexpensive PCMCIA smart
card readers and standardized infrared interfaces on notebook computers will solve this connectivity problem. Another
system being developed along these lines is the FSTC (Financial Services Technology Consortium) Electronic Check
Project, which uses a tamper-resistant PCMCIA card and implements a check-like payment model. Instead of tamper-
resistant hardware, offline authorization could be given via preauthorization: The payee is known to the payer in advance,
and the payment is already authorized during withdrawal, in a way similar to a certified bank check.

Trusted hardware

Offline payment systems that seek to prevent (not merely detect) double spending require tamper-resistant hardware at the
payer end. The smart card is an example. Tamper-resistant hardware may also be used at the payee end. An example is the
security modules of point-of-sale (POS) terminals. This is mandatory in the case of shared-key systems and in cases where
the payee does not forward individual transactions but the total volume of transactions. In a certain sense, tamper-resistant
hardware is a "pocket branch" of a bank and must be trusted by the issuer. Independent of the issuer's security
considerations, it is in the payer's interest to have a secure device that can be trusted to protect his secret keys and to
perform the necessary operations. Initially, this could be simply a smart card. But in the long run, it should become a
smart device of a different form factor with secure access to a minimal keyboard and display. This is often called an
electronic wallet. Without such a secure device, the payer's secrets, and hence the payer's money, are vulnerable to
anybody who can access that computer. This is obviously a problem in multiuser environments. It is also a problem even on single-user
computers that may be accessed directly or indirectly by others. A virus, for example, installed on a computer could steal
PINs and passwords as they are entered. Even when a smart card is available to store keys, a virus program may directly
ask the smart card to make a payment to an attacker‘s account. Thus for true security, trusted input/output channels
between the user and the smart card must exist.1

Cryptography

A wide variety of cryptographic techniques have been developed for user authentication, secret communication,
and nonrepudiation. They are essential tools in building secure payment systems over open networks that have little or no
physical security. There are also excellent reference works on cryptography.2-3

"Cryptofree" systems. Using no
cryptography at all means relying on out-band security: Goods ordered electronically are not delivered until a fax arrives
from the payer confirming the order. First Virtual is a cryptofree system. A user has an account and receives
a password in exchange for a credit card number, but the password is not protected as it traverses the Internet. Such a
system is vulnerable to eavesdropping. First Virtual achieves some protection by asking the payer for an acknowledgment
of each payment via email, but the actual security of the system is based on the payer‘s ability to revoke each payment
within a certain period. In other words, there is no definite authorization during payment. Until the end of this period,
the payee assumes the entire risk.

Generic payment switch. A payment switch is an online payment system that implements both the prepaid and pay-
later models, as exemplified by the OpenMarket payment switch. OpenMarket‘s architecture supports several
authentication methods, depending on the payment method chosen. The methods range from simple, unprotected PIN-
based authentication to challenge-response-based systems, in which the response is computed, typically by a smart card.
Actually, OpenMarket uses passwords and optionally two types of devices for response generation: Secure Net Key and
SecurID. User authentication therefore is based on shared-key cryptography. However, authorization is based on public-
key cryptography: the OpenMarket payment switch digitally signs an authorization message, which is forwarded
to the payee. The payment switch is completely trusted by users who use shared-key cryptography.

Shared-key cryptography. Authentication based on shared-key cryptography requires that the prover (the payer) and a
verifier (the issuer) both have a shared secret. A DES key is one example of a shared secret; a password and a PIN are
other examples. Because both sides have exactly the same secret information, shared-key cryptography does not provide
nonrepudiation. If payer and issuer disagree about a payment, there is no way to decide if the payment was initiated by
the payer or by an employee of the issuer. Authenticating a transfer order on the basis of shared keys is therefore not
appropriate if the payer bears the risk of forged payments.4 If authentication is to be done offline, each payer-payee pair
needs a shared secret. In practice this means that some sort of master key is present at each payee end, to enable the
payee to derive the payer's key. Tamper-resistant security modules in point-of-sale terminals protect the master key. Most
offline systems (Danmønt/Visa and the trial version of Mondex) and online systems (NetBill, and the 2KP variant of iKP)
use a shared secret between payer and issuer for authentication.

Public-key digital signatures. Authentication based on public-key cryptography requires that the prover have
a secret signing key and a certificate for its corresponding public signature verification key. The certificate is
issued by a well-known authority. Most systems now use RSA encryption, but there are several alternatives. Digital
signatures can provide nonrepudiation—disputes between sender and receiver can be resolved. Digital signatures should
be mandatory if the payer bears the risk of forged payments. A rather general security scheme that uses public-key
signatures is Secure Socket Layer. SSL is a socket-layer communication interface that allows two parties
to communicate securely over the Internet. It is not a payment technology per se, but has been proposed as a means to
secure payment messages. SSL does not support nonrepudiation. Complete payment systems using public-key
cryptography include e-cash, NetCash, CyberCash, the 3KP variant of iKP, and Secure Electronic Transactions
(SET). The protocol ideas themselves are much older. The use of digital signatures for both online and offline payments,
anonymous accounts with digitally signed transfer orders, and anonymous electronic cash were all introduced during the
1980s.5

Payer anonymity

Payers prefer to keep their everyday payment activities private. Certainly they do not want unrelated third
parties to observe and track their payments. Often, they prefer the payees (shops, publishers, and the like) and in some
cases even banks to be incapable of observing and tracking their payments. Some payment systems provide payer
anonymity and untraceability. Both are considered useful for cash-like payments since cash is also anonymous and
untraceable. Whereas anonymity simply means that the payer‘s identity is not used in payments, untraceability means
that, in addition, two different payments by the same payer cannot be linked. By encrypting all flows between payer and
payee, all payment systems could be made untraceable by outsiders. Payer anonymity with respect to the payee can be
achieved by using pseudonyms instead of real identities. Some electronic payment systems are designed to provide
anonymity or even untraceability with respect to the payee (iKP, for example, offers this as an option).
Currently, the only payment systems mentioned here that provide anonymity and untraceability against payee and issuer
are e-cash (online) and CAFE (offline). Both are based on a special form of public-key signatures called
blind signatures.6-7 A blind signature on some message is made in such a way that the signer does not know the exact
content of the message. DigiCash‘s e-cash, which is also based on the concept of blind signatures, is a cash-like payment
system providing high levels of anonymity and untraceability. In an e-cash system, users can withdraw e-cash
coins from a bank and use them to pay other users. Each e-cash coin has a serial number. To withdraw e-cash coins, a user
prepares a "blank coin" that has a randomly generated serial number, blinds it, and sends it to the bank. If the user is
authorized to withdraw the specified amount of e-cash, the bank signs the blind coin and returns it to the user. The user
then unblinds it to extract the signed coin. The signed coin can now be used to pay any other e-cash user. When
a payee deposits an e-cash coin, the bank records its serial number to prevent double-spending. However, because the
bank cannot see the serial number when it signs the coin, it cannot relate the deposited coin to the earlier withdrawal by
the payer. NetCash and anonymous credit cards also provide anonymity and untraceability. But they are based on
the use of trusted "mixes" that change electronic money of one representation into another representation, without
revealing the relation. Neither e-cash nor CAFE assumes the existence of such trusted third parties.
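
A toy sketch of the blind-signature withdrawal step described above, using textbook RSA with a small demonstration key (n = 3233, e = 17, d = 2753); real schemes use full-size keys and additional message encoding:

# Illustrative sketch; the toy key and serial number are made-up demonstration values.
n, e, d = 3233, 17, 2753   # bank's public (n, e) and private exponent d

serial = 1234      # randomly chosen coin serial number, serial < n
r = 7              # blinding factor chosen by the payer, coprime with n

# Payer blinds the coin and sends only the blinded value to the bank.
blinded = (serial * pow(r, e, n)) % n

# Bank signs the blinded coin without learning the serial number.
blind_signature = pow(blinded, d, n)

# Payer unblinds; the result is the bank's ordinary signature on the coin.
signature = (blind_signature * pow(r, -1, n)) % n    # pow(r, -1, n) needs Python 3.8+

assert signature == pow(serial, d, n)     # identical to signing the coin directly
assert pow(signature, e, n) == serial     # anyone can verify the coin later
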
MICROPAYMENTS

Micropayments are low-value payments (probably less than $1) that are made very quickly, like paying
for each tick of a phone call. Given these constraints, micropayment techniques must be both inexpensive
and fast. Achieving both requires certain compromises. A number of proposals assume repeated payments
(such as pay-per-view), beginning with CAFE Phone Ticks and μ-iKP, the micropayment proposal for iKP.
Both of these proposals use one-way hash functions to implement micropayments. Content servers in the global
information infrastructure will probably have to process such a large number of these low-value transactions that it will be
impractical to use computationally complex and expensive cryptographic protocols to secure them. μ-iKP,
designed with these goals in mind, is based on computationally secure one-way functions. Informally, a function
f() is one-way if it is difficult to find the value x given the value y = f(x). The value x is the preimage of y. Given such a
one-way function, the payer will randomly choose a seed value X and recursively compute:

A_0(X) = X
A_{i+1}(X) = f(A_i(X))

The values A0, ..., An1—known as coupons—will enable the payer to make n micropayments of a fixed
value v to one payee: First, the payer forwards An and v to the payee in an authenticated manner.
Authentication can be achieved by sending these values to the payee as the payload of a normal iKP payment.
The payee ensures, possibly via its bank, that An does in fact correspond to a good hash preimage chain that can be used
for subsequent micropayments. The micropayments are then carried out by revealing components of the chain An1, An2 ,
..., A0 successively to the payee. To clear the payments, the payee presents the partial chain

Ai, . . . , Aj (0 ² i j ² n)

to its bank in return for a credit of value v(ji). The overhead of the setup phase is justified only when it is followed by
several repeated micropayments. However, nonrepeated (or rarely repeated) micropayments are also a likely scenario in
the electronic marketplace: A user surfing the Web may chance upon a single page that costs $0.01. Neither the
micropayment setup overhead nor the cost of a normal payment is justified in this case. μ-iKP solves this problem with a
broker: An isolated micropayment from payer P to payee Q is carried out by P, which makes one or more micropayments
to broker B. Broker B then makes an equivalent micropayment to Q. In other words, a nonrepeating financial relationship
between P and Q is achieved by leveraging on existing relationships between B and P and between B and Q. On the other
hand, if the amount of the transaction is small, developers can assume a lower risk and so opt to reduce security (for
example, by foregoing nonrepudiation). A notable example is NetBill, which is founded on the shared-key technology
Kerberos. It implements a check-like debit-payment model. The use of shared-key technology is justified by the
performance required to process many micropayments in a short time. NetBill's developers have announced that they
will migrate to public-key technology. MiniPay, from IBM Haifa Laboratory, is an example of a
micropayment system based on public-key technology.
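
A sketch of the coupon chain described above, with SHA-256 standing in for the one-way function f (an assumed choice):

# Illustrative sketch; SHA-256 as f, the chain length, and the seed are assumptions.
import hashlib

def f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

n = 100
seed = b"random seed X chosen by the payer"      # A_0

# Payer precomputes the chain A_0 .. A_n and sends A_n (authenticated) to the payee.
chain = [seed]
for _ in range(n):
    chain.append(f(chain[-1]))
anchor = chain[n]

# Each micropayment reveals the next preimage; the payee checks it hashes
# forward to the last value it accepted (initially the anchor A_n).
last_accepted = anchor
for i in range(1, 4):                  # three micropayments of value v each
    coupon = chain[n - i]
    assert f(coupon) == last_accepted
    last_accepted = coupon

# Clearing: the payee shows A_{n-3} and A_n; the bank applies f three times
# and credits 3 * v.
assert f(f(f(last_accepted))) == anchor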




STANDARDS

The European Standardisation Organisation (CEN), as well as Europay, MasterCard, and Visa (known collectively as
EMV), are working on standards for smart-card-based electronic payment systems. A CEN standard for an Intersector
Electronic Purse already exists. There are currently no efforts to standardize an untraceable, offline payment system.
Two proposals, Visa‘s Secure Transaction Technology (STT) and MasterCard‘s Secure Electronic Payment Protocol
(SEPP), began as competing standards for credit-card-based online payment schemes. Recently SET, a proposal designed
by MasterCard, Visa, GTE, IBM, Microsoft, Netscape, SAIC, Terisa, and Verisign, has replaced these competing
standards. SET is likely to be widely adopted for credit card payments over the Internet. The first prototypes of SET
toolkits have been built. SET is a pragmatic approach that paves the way for easy, fast, secure transactions over the
Internet. It seeks to preserve the existing relationships between merchants and acquirers as well as between payers and
their bank. SET concentrates on securely communicating credit card numbers between a payer and an acquirer gateway
interfacing to the existing financial infrastructure. In our classification, SET falls under the check-like
model. The transaction is initiated with a handshake, with the merchant authenticating itself to the payer
and fixing all payment data. The payer then uses a sophisticated encryption scheme to generate a payment
slip. The goal of the encryption scheme is to protect sensitive payment information (such as the credit
card number); limit encryption to selected fields (to ease export approval); cryptographically bind the
order information to the payment message; and maximize user privacy. Next the payment slip is signed by
the payer and is sent to the merchant. The merchant sends the slip to its acquirer gateway, to authorize and
capture the payment. The acquirer checks all signatures and the slip, verifies the creditworthiness of the payer,
and sends either a positive or negative signed acknowledgment back to the merchant and payer.
Currently, discussions on SET dominate the stage of Internet payment systems, but there is a parallel
demand for international standards for electronic cash-like payment schemes and schemes for micropayments.

TODAY'S SYSTEMS

In principle, the technology exists to secure electronic payment over the Internet. It is now possible to achieve security for
all parties, including the perfect untraceability of the payer. No one system will prevail; several payment systems will
coexist. Micropayments (say, less than $1), low-value payments (say, $1 to $100), and high-value payments
have significantly different security and cost requirements. High values will be transferred using nonanonymous, online
payment systems based on public-key cryptography implementing a check-like payment model. Within the next few
years, smart-card readers will become widely available on PCs and workstations. This will enable payments of small
amounts using prepaid, offline payment systems that provide a certain degree of untraceability. Payment systems with and
without tamper-resistant hardware at the payer‘s end will coexist for some time. Ultimately, payment systems based on
smart cards and electronic wallets (having secure access to some display and keyboard, and communicating with the
buyer‘s terminal via an infrared interface) will become prevalent for two reasons: They enable mobility of users and they
clearly provide better security, allowing the payer to use untrusted terminals without endangering security.
A few almost equivalent payment systems with the same scope (in terms of the payment model and maximum
amounts) will possibly coexist. The reasons are various cultural differences in the business and payment processes,
national security considerations that might disqualify some solutions in some countries, and competition between payment
system providers.

No electronic payment system is currently deployed on a large scale. But within a few years most of us will carry smart
cards that can be used to buy things offline and in shops, as well as over the Internet. Several countries, most of them
in Europe, are introducing such smart cards, but most cannot yet be used for cross-border payments. There is little chance
that the world will ever agree on a single scheme for electronic purses in the near future.

Within the next two to three years, SET will become the predominant method for credit card purchases on the Internet. It
will be implemented initially in software only, but will later be supported by smart cards. For some time, the currently
preferred method of using SSL to encrypt payment details on their way from payer to payee will coexist with SET.
Beyond this, the future is much less clear. While it is likely that FSTC checks will be deployed within the US, the
prospects for success are not clear. It is also not clear if the FSTC design will ever be used, or indeed can be used,
internationally. Prepaid, online payment systems are becoming more and more attractive, with e-cash being the best-
known system and the only one that supports strong privacy for payers. It is difficult to predict the future of payment
systems that protect payer privacy because there are so many legal requirements and legal restrictions involved. Several
micropayment systems will be used with microservice providers, but it is not clear yet whether there will be a single
winner in the end.

Basic Concepts in Cryptography and Security
Cryptographic techniques are essential tools in securing payment protocols over open, insecure networks. Here we outline
some relevant basic concepts.

Message authentication
To authenticate a message is to prove the identity of its originator to its recipient. Authentication can be achieved by
using shared-key or public-key cryptography.

Shared-key cryptography. The prover and the verifier share a common secret. Hence this is also called symmetric
authentication. A message is authenticated by means of a cryptographic check value, which is a function of both the
message itself and the shared secret. This check value is known as the message authentication code (MAC).

Public-key cryptography. Each entity has a matching pair of keys. One, known as the signature key, is used for computing
signatures and is kept secret. The other, known as the verification key, is used to verify signatures made with the
corresponding signature key; the verification key is made public along with a certificate binding an entity's identity to its
verification key. Certificates are signed by a well-known authority whose verification key is known a priori to all
verifiers. A message is authenticated by computing a digital signature over the message using the prover's signature key.
Given a digital signature and a certificate for its verification key, a verifier can authenticate the message. Authentication
of messages using MACs does not provide nonrepudiation of origin for the message, whereas authentication using digital
signatures does.

Attacks
Electronic payment protocols can be attacked at two levels: the protocol itself or the underlying cryptosystem.

Protocol-level attacks. Protocol attacks exploit weaknesses in the design and/or implementation of the high-level
payment system. Even if the underlying cryptographic techniques are secure, their inappropriate use may open up
vulnerabilities that an attacker can exploit.

Freshness and replay. A protocol may be attacked by replaying some messages from a previous legitimate run. The
standard countermeasure is to guarantee the freshness of messages in a protocol. Freshness means that the message
provably belongs to the current context only (that is, the current payment transaction) and is not a replay of a previous
message. A nonce is a random value chosen by the verifying party and sent to the authenticating party to be included in
its reply. Because nonces are unpredictable and used in only one context, they ensure that a message cannot be reused in
later transactions. Nonces do not require synchronization of clocks between the two parties. Consequently, they are very
robust and popular in cryptographic protocol design. In general, nonces are an example of the challenge-response
technique (a sketch appears at the end of this sidebar).

Fake-terminal. Protocols that perform authentication in only one direction are susceptible to the fake-terminal attack. For
example, when a customer uses an ATM, the bank and the machine check the authenticity of the customer using a PIN.
The customer, however, cannot be sure whether the ATM is a genuine bank terminal or a fake one installed by an attacker
for gathering PINs. Using a trusted personal device, such as a smart card or electronic wallet, helps avoid this attack.

Cryptosystem attacks. Cryptosystem attacks exploit weaknesses in the underlying cryptographic building blocks used in
the payment system.

Brute force attack. The straightforward cryptosystem attack is the brute force attack of trying every possible key. The
space from which cryptographic keys are chosen is necessarily finite. If this space is not large enough, a brute force
attack becomes practical. Four-digit PIN codes have a total of 10,000 possible values in the key space. If a value X is
known to be the result of applying a deterministic transformation to the PIN, one can use this X to search the set of all
possible PINs for the correct one. In some applications one can increase the protection against brute force attacks by
randomization. Even if the key space is large, the probability distribution of keys is not necessarily uniform (especially
for user-chosen PINs, which are likely to be related to the user's birthday, phone number, and so on). It might then be
possible to mount dictionary attacks. Instead of trying every possible key as in the brute force attack, the attacker will
only try the keys in a "dictionary" of likely words, phrases, or other strings of characters.

Cryptanalysis. More sophisticated attacks, called cryptanalysis, attempt to explore weaknesses in the cryptosystem itself.
Most cryptosystems are not proven secure but rely on heuristics, experience, and careful review and are prone to errors.
Even provably secure cryptosystems are based on the intractability of a given mathematical problem (such as the
difficulty of finding graph isomorphism), which might be solvable one day.
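
A sketch of the nonce-based challenge-response exchange from the "Freshness and replay" paragraph above (HMAC-SHA-256 is an assumed choice of check function):

# Illustrative sketch; the shared key, nonce length, and message are assumptions.
import hmac, hashlib, os

shared_secret = b"key shared by the authenticating and verifying parties"

def mac(nonce: bytes, message: bytes) -> bytes:
    # The reply is bound to one transaction through the verifier's nonce.
    return hmac.new(shared_secret, nonce + message, hashlib.sha256).digest()

message = b"authorize payment of 7.00 EUR"

# Transaction 1: the verifier issues a fresh challenge, the prover answers.
nonce1 = os.urandom(16)
reply1 = mac(nonce1, message)
assert hmac.compare_digest(reply1, mac(nonce1, message))      # accepted

# Transaction 2 uses a new nonce, so replaying the old reply is rejected.
nonce2 = os.urandom(16)
assert not hmac.compare_digest(reply1, mac(nonce2, message))
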
Information Sources for Representative Payment Systems
Online Systems, Traceable
Credit-card payment system without cryptography

First Virtual
http://www.fv.com
Credit-card payment systems with cryptography

CyberCash
http://www.cybercash.com

iKP
http://www.zurich.ibm.com/Technology/Security/extern/ecommerce/iKP.html
Proposed standard

SET
http://www.mastercard.com/set/set.htm
Micropayments

NetBill
B. Cox, J.D. Tygar, and M. Sirbu, "NetBill Security and Transaction Protocol," Proc. First Usenix Electronic Commerce
Workshop, Usenix, Berkeley, Calif., July 1995, pp. 77-88.

Phone-Ticks (CAFE)
T. Pedersen, "Electronic Payments of Small Amounts," Lecture Notes in Computer Science, No. 1189, 1996, pp. 59-68.

Millicent
http://www.millicent.digital.com/

μ-iKP
http://www.zurich.ibm.com/Technology/Security/publications/1996/HSW96.ps.gz

MiniPay
http://www.ibm.net.il/ibm_il/int-lab/mpay/)
Payment switches

OpenMarket
http://www.openmarket.com/

Offline, Traceable
Electronic purses that use smart cards with shared key

Danmønt/Visa
http://www.visa.com/cgi-bin/vee/sf/cashmain.html?2+0
Electronic purses that use smart cards with public key

CLIP
http://www.europay.com/brand/clip.htm
Electronic purses; encryption unknown

Mondex
http://www.mondex.com/
Standards

CEN Intersector Electronic Purse
CEN/TC224/WG10, Intersector Electronic Purse, draft European standard, Comite European de Normalization, Brussels,
1992-1994

EMV Electronic Purse
http://www.visa.com/cgi-bin/vee/sf/chip/circuit.html
Electronic check

FSTC Electronic Check Project
http://fstc.org/projects/echeck/echeck2.html

Online Systems, Untraceable
Anonymous remailers for change
NetCash
http://gost.isi.edu/info/netcash/

Anonymous Credit Cards
S.H. Low, N.F. Maxemchuk, and S. Paul, "Anonymous Credit Cards," Proc. 2nd ACM Conf. Computer and Communication
Security, ACM Press, New York, 1994, pp. 108-117.

Offline Systems, Untraceable
Anonymous (“blind”) signatures

e-cash
http://www.digicash.com
Anonymous (“blind”) signatures

CAFE
http://www.semper.org/sirene/publ/BBCM1_94CafeEsorics.ps.gz.
Semana 05 | Lecturas | N-Tier

        Application Architecture: An N-Tier Approach.


Introduction

Developers must realize there is more to
programming than simple code. This two-part
series addresses the important issue of
application architecture using an N-tier
approach. The first part is a brief introduction
to the theoretical aspects, including the
understanding of certain basic concepts. The
second part shows how to create a flexible and
reusable application for distribution to any
number of client interfaces. Technologies used
consist of .NET Beta 2 (including C#, .NET Web
Services, symmetric encryption), Visual Basic
6, the Microsoft SOAP Toolkit V2 SP2, and
basic interoperability [ability to communicate
with each other] between Web Services in
.NET and the Microsoft SOAP Toolkit. None of
these discussions (unless otherwise indicated)
specify anything to do with the physical location of each layer. They often are on separate physical
machines, but can be isolated to a single machine.

For starters, this article uses the terms "tier" and "layer" synonymously. In the term "N-tier," "N" implies any
number, like 2-tier, or 4-tier, basically any number of distinct tiers used in your architecture.

"Tier" can be defined as "one of two or more rows, levels, or ranks arranged one above another" (see
http://www.m-w.com/cgi-bin/dictionary?Tier). So from this, we get an adapted definition of the
understanding of what N-tier means and how it relates to our application architecture: "Any number of
levels arranged above another, each serving distinct and separate tasks." To gain a better understanding of
what is meant, let's take a look at a typical N-tier model (see Figure 1.1).


Figure 1.1 A Typical N-Tier Model

The Data Tier

Since this has been deemed the Age of Information, and since all information needs to be stored, the Data
Tier described above is usually an essential part. Developing a system without a data tier is possible, but I
think for most applications the data tier should exist. So what is this layer? Basically, it is your Database
Management System (DBMS) -- SQL Server, Access, Oracle, MySql, plain text (or binary) files, whatever you
like. This tier can be as complex and comprehensive as high-end products such as SQL Server and Oracle,
which do include the things like query optimization, indexing, etc., all the way down to the simplistic plain
text files (and the engine to read and search these files). Some more well-known formats of structured, plain
text files include CSV, XML, etc. Notice how this layer is only intended to deal with the storage and retrieval
of information. It doesn't care about how you plan on manipulating or delivering this data. This also should
include your stored procedures. Do not place business logic in here, no matter how tempting.

The Presentation Logic Tier

Let's jump to the Presentation Logic Layer in Figure 1.1. You probably are familiar with this layer; it consists
of our standard ASP documents, Windows forms, etc. This is the layer that provides an interface for the end
user into your application. That is, it works with the results/output of the Business Tier to handle the
transformation into something usable and readable by the end user. It has come to my attention that most
applications have been developed for the Web with this layer talking directly to the Data Access Layer and
not even implementing the Business Tier. Sometimes the Business Layer is not kept separated from the
other two layers. Some applications are not consistent with the separation of these layers, and it's
important that they are kept separate. A lot of developers will simply throw some SQL in their ASP (using
ADO), connect to their database, get the recordset, and loop in their ASP to output the result. This is usually
a very bad idea. I will discuss why later.

The Proxy Tier and the Distributed Logic

There's also that little, obscure Proxy Tier. "Proxy" by definition is "a person [object] authorized to act for
another" (see http://www.m-w.com/cgi-bin/dictionary?Proxy). This "object," in our context, is referring to
any sort of code that is performing the actions for something else (the client). The key part of this definition
is "act for another." The Proxy Layer is "acting" on behalf of the Distributed Logic layer (or end-user's
requests) to provide access to the next tier, the Business Tier. Why would anyone ever need this? This
facilitates our need for distributed computing. Basically it comes down to you choosing some standard
method of communication between these two entities. That is, "how can the client talk to the remote
server?"

This is where we find the need for the Simple Object Access Protocol (SOAP). SOAP is a very simple method
for doing this. Without too many details, SOAP could be considered a standard (protocol) for accessing
remote objects. It provides a way in which to have two machines "talking" or "communicating" with each
other. (Common Object Request Broker Architecture [CORBA], Remote Method Invocation [RMI],
Distributed Component Object Model [DCOM], SOAP, etc., all basically serve the same function.)

The Client Interface

In this section of Figure 1.1 we notice that the end-user presentation (Windows forms, etc.) is connected
directly to the Business Tier. A good example of this would be your applications over the Local Area Network
(LAN). This is your typical, nondistributed, client-server application. Also notice that it extends over and on
top of the Distributed Logic layer. This is intended to demonstrate how you could use SOAP (or some other
type of distributed-computing messaging protocol) on the client to communicate with the server and have
those requests be transformed into something readable and usable for the end user.

The Business Tier

This is basically where the brains of your application reside; it contains things like the business rules, data
manipulation, etc. For example, if you're creating a search engine and you want to rate/weight each
matching item based on some custom criteria (say a quality rating and number of times a keyword was
found in the result), place this logic at this layer. This layer does NOT know anything about HTML, nor does it
output it. It does NOT care about ADO or SQL, and it shouldn't have any code to access the database or the
like. Those tasks are assigned to each corresponding layer above or below it.

We must gain a very basic understanding of Object-Oriented Programming (OOP) at this time. Take time to
read over http://searchwin2000.techtarget.com/sDefinition/0,,sid1_gci212681,00.html and make sure you
understand the important benefits of OOP. To clarify, let's look at another example, such as a shopping cart
application. Think in terms of basic objects. We create an object to represent each product for sale. This
Product object has the standard property getters and setters: getSize, getColor, setSize, setColor, etc. It is a
super simple implementation of any generic product. Internally, it ONLY knows how to return information
(getters) and understands how it can validate the data you pump into it (ONLY for its limited use). It is self-
contained (encapsulation). The key here is to encapsulate all the logic related to the generic product within
this object. If you ask it to "getPrice," it will return the price of the single item it represents. Also if you
instruct it to "validate" or "save," it has the brains to be able to handle this, return any errors, etc.

We can plug this Product object into another object, a "Cart" object. This cart can contain and handle many
Product objects. It also has getters and setters, but obviously on a more global scale. You can do something
like "for each product in myCart", and enumerate (loop through) each product within. (For more information
on enumeration, refer to http://www.m-w.com/cgi-bin/dictionary?enumeration.) Now, when you call
"getPrice" for the Cart object, it knows that it must enumerate each product that it has, add up the price for
each, and return that single total. When we fire the "saveCart" method, it will loop for each "product" and
call its "saveProduct" method, which will then hit the Data Access Tier objects and methods to persist itself
over to the Data Tier.

We could also take our simple Product object, and plug it into our "Sale" object. This Sale object contains all
of the items that are available for a particular sale. And the Sale object can be used for things like
representing all the items on sale at a given outlet or the like. I'm sure you are beginning to understand the
advantage of using an OOP environment.
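
A compact sketch of the Product and Cart objects described above (shown here in Python for brevity; the article itself targets VB/COM and .NET, and the class and method names below are illustrative only):

# Illustrative sketch; names, prices, and methods are made up for the example.
class Product:
    def __init__(self, name: str, price: float, color: str = "black"):
        self.name, self.price, self.color = name, price, color

    def get_price(self) -> float:
        # The product only knows about itself.
        return self.price

    def save(self) -> None:
        # Would delegate persistence to the Data Access Tier in a full system.
        print(f"saving {self.name}")

class Cart:
    def __init__(self) -> None:
        self.products: list[Product] = []

    def add(self, product: Product) -> None:
        self.products.append(product)

    def get_price(self) -> float:
        # Enumerate each contained product and total the prices.
        return sum(p.get_price() for p in self.products)

    def save_cart(self) -> None:
        for p in self.products:
            p.save()

cart = Cart()
cart.add(Product("inkjet printer", 99.0))
cart.add(Product("ink cartridge", 19.5))
print(cart.get_price())    # 118.5
cart.save_cart()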

Data Access Tier

This layer is where you will write some generic methods to interface with your data. For example, we will
write a method for creating and opening a Connection object (internal), and another for creating and using a
Command object, along with a stored procedure (with or without a return value), etc. It will also have some
specific methods, such as "saveProduct," so that when the Product object calls it with the appropriate data,
it can persist it to the Data Tier. This Data Layer, obviously, contains no data business rules or data
manipulation/transformation logic. It is merely a reusable interface to the database.
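
A minimal sketch of such a reusable data-access class, with Python's built-in sqlite3 standing in for the DBMS (the table name and columns are invented for the example):

# Illustrative sketch; the schema and connection string are assumptions.
import sqlite3

class DataAccess:
    # Thin, reusable interface to the Data Tier; no business rules live here.
    def __init__(self, connection_string: str = ":memory:") -> None:
        self._conn = sqlite3.connect(connection_string)
        # Schema created inline only to keep the sketch self-contained.
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS products (name TEXT, price REAL, color TEXT)"
        )

    def save_product(self, name: str, price: float, color: str) -> None:
        # Parameterized statement; in a full system this would invoke a
        # stored procedure kept in the Data Tier.
        self._conn.execute(
            "INSERT INTO products (name, price, color) VALUES (?, ?, ?)",
            (name, price, color),
        )
        self._conn.commit()

dal = DataAccess()
dal.save_product("inkjet printer", 99.0, "black")
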
Conclusions

In all of the systems that I have been able to dig my dirty little hands into, I have rarely ever seen both the
Business Tier and Data Access Tiers used. I mostly combine the two tiers. Allow the Business Layer to talk
directly to the Data Layer, and do not bother with the Data Access Layer. To justify this, we are all
developing on Internet time, and the last time I looked, it's still going at about 3 to 4 times faster than
normal time, which means we are expected to also work and produce at the same rate. The bottom line is
reducing the time to market. In my opinion, writing this Data Access Tier, which is simply abstracting the
Data Tier, is overkill, and ADO can be considered as this Data Access Layer. It provides us with the interface
directly. We still keep all SQL in the Data Tier (stored procedures), but no business rules should be kept here.

Of course, the more tiers you add, the more performance is affected. The client hits "Save Cart" on their
Web-enabled phone, it hits the Business Tier to call the "Cart" "saveCart," which calls the products "save,"
which goes either directly to the database or goes through the Data Access Layer and finally persists into the
database. This path does affect performance. It is up to the application architect (you) to know and
understand this, and all other factors affecting the system, and be able to make a good decision on how to
develop it at this level. This decision is usually pretty easily made, depending on the amount of work and
documentation that was produced from the analysis phase.

We all now know how to do this logically. Let's explain the why. A good example is to look at the
Presentation Logic Tier. Notice that many of its sections --the Web, the Proxy Tier, and the Client Interface --
all sit directly on top of the Business Tier. We gain the advantage of not needing to redo any code from that
Business Tier all the way to the database. Write it once, and plug into it from anywhere.

Now say you're using SQL Server and you don't want to pay Microsoft's prices anymore, and you decide to
pay Oracle's instead. So, with this approach you could easily port the Data Layer over to the new DBMS and
touch up some of the code in the Data Access Layer to use the new system. This should be a very minimal
touch-up. The whole point is to allow you to plug each layer in and out (very modular) without too many
hassles and without limiting the technology used at each tier.

Another example would be that we initially develop our entire system using VB (COM) and ASP, and now we
want to push it over to our friendly VB .NET or C#. It is just a matter of porting the code over at each layer
(phased approach), and voila, it's done. (Microsoft has given us the ability for interop between classic COM
and .NET.) We can upgrade each layer separately (with minor hurdles) on an as-needed basis.
Semana 05 | Lecturas | Smart Client

                                                                                  Client-Server / N-Tier Systems.

Client-Server / N-Tier Systems



N-Tier architectures are hot. Well, maybe not as hot as a few years ago, but still it is very important you
know about them. All web applications are N-Tier architectures. You have an application server, a large
number of clients, and a database. An N-Tier architecture is really a Client-Server architecture combined
with the Layered architecture. The reason why I combine Client-Server and N-Tier here is because they are
very much related.

A Tier is just a Layer, yet Tiers are commonly physically removed from each other. The meaning of a tier is:

One of a series of rows placed one above another: a stadium with four tiers of seats.

A Client-Server system is one in which the server performs some kind of service that is used by many clients.
The clients take the lead in the communication. The basic Client-Server architecture has 2 tiers (Client and
Server). I will basically explain the 3-tier architecture here, which is an extension to the 2-tier architecture.

The first, or presentation tier, a.k.a. the client or front-end, deals with the interaction with the user. Usually,
there can be any number of clients which can all access the server at the same time. Currently the clients are
mostly thin clients, which means they do not contain a lot of application code (in contrast to fat clients).
Clients process user input, send requests to the server, and show the results of these requests to the user. A
common client is made up of a number of dynamic HTML pages that one can access with a web browser.

The second, or application tier, a.k.a. the server, or the back-end, or middleware, processes the requests of
all clients. It is the actual web application that performs all functionality specific to the web application.
However, it does not store the persistent data itself. Whenever it needs data of any importance, it contacts
the database server.

The third, or database tier contains the database management system that manages all persistent data.
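
A minimal sketch of the three tiers as plain functions (Python and an in-memory sqlite3 database are stand-ins for illustration; a real deployment would put each tier on separate infrastructure):

# Illustrative sketch; the product data and page layout are invented for the example.
import sqlite3

# Database tier: manages all persistent data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, price REAL)")
db.execute("INSERT INTO products VALUES ('inkjet printer', 99.0)")

# Application tier: application-specific logic; no HTML, no data storage of its own.
def list_products() -> list[tuple[str, float]]:
    return db.execute("SELECT name, price FROM products").fetchall()

# Presentation tier: turns application results into something the user can read.
def render_product_page() -> str:
    rows = "".join(f"<li>{name}: {price}</li>" for name, price in list_products())
    return f"<ul>{rows}</ul>"

print(render_product_page())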




It is clear that there are multiple clients. That's what client-server computing is all about. However, in the
second and third tiers there can also be multiple instances of the same application. If this is the case, it is
because of scalability, load balancing, and redundancy: the system is important, so extra equipment that
does the same thing is added. This makes the server a very powerful system, but also
introduces synchronisation problems.

Examples

        Web applications, where the first tier is the presentation tier, the second tier is the application tier,
         and the third tier is the database tier.

Where does it come from?

With the advent of multitasking operating systems in the nineteen-sixties, it became possible to access a
single computer (the server) from different terminals (clients). The distance between the clients and the
server became bigger and the number of clients increased. At the time, the application and database tiers
were still integrated. This is called client-server computing.

With the booming of the Internet and e-commerce in the nineteen-nineties, the architecture became
important, and much time and money was invested in it. As other good architectures have shown, it is a
good idea to separate the application code from the data. This principle was applied to the client-server
architecture. Companies created application servers to ease the creation of web applications.

An N-tier architecture (with N greater than 3) is really a 3-tier architecture in which the middle tier is split up
into new tiers. The application tier is broken down into separate parts. What these parts are differs from
system to system. The following picture shows it:




When should you use it?

You don't usually need to build your own application and database server. Most application developers
either build the application specific front-end code, or the application specific back-end code. This code is
then embedded in an existing application server and uses an existing database management server.

How does it work?

The architecture is so generic it is hard to say anything concrete about it. Communication between the
different tiers often takes place via a network. Communication within a tier also is done over a (local)
network. Clients don't communicate directly to each other. Clients communicate to the application server
directly or to a broker that balances requests between separate server machines. The database layer usually
contains only one database.
SEMANA 06
Semana 06 | Lecturas

      Capítulos 6, 7, 8, 14 y 15.
           o Cap 06: JScript intro
           o Cap 07: JScript control
           o Cap 08: JScript control 2
           o Cap 14: XML & RSS
           o Cap 15: AJAX RIA
      Javascript Tutorial, W3 Schools.
Semana 06 Examen 02

   1. A web server is specialized software that responds to client requests by providing
      resources such as XHTML documents:
          a. True
          b. False

   2. According to our textbook, the basic structure of a three-tier web-based application is:
         a. Client tier, business logic tier and data tier
         b. Client tier, user interface tier and information tier
         c. User interface tier, data tier and information tier
         d. None above

   3. A __________ is a logical representation of data that allows the data to be accessed
      without consideration of its physical structure.
          a. Database
          b. DBMS
          c. Relational Database

   4. Some attributes of Web 2.0 are: involves the user, enables conversation, embraces an
      architecture of participation and harnesses collective intelligence:
          a. True
          b. False

   5. CSS allows document authors to specify the presentation of elements on a web page
      separately from the structure of the document.
          a. True
          b. False

   6. With the TYPE attribute of the SCRIPT tag, document authors specify which scripting language
      [like JavaScript] will be used in the HTML document:
           a. True
           b. False

   7. A ________ is a graphical representation of an algorithm or of a portion of an algorithm,
      and is drawn using certain special-purpose symbols:
          a. Flowchart
          b. Algorithm map
          c. Algorithm chart
          d. None above

   8. XML = XHTML Markup Language
         a. True
         b. False

   9. Is one of the two main types of documents you can use to specify XML document
      structure:
          a. XHTML
          b. HTML
          c. DTD
          d. AJAX

   10. AJAX = Asynchronous JavaScript and XML:
          a. True
          b. False
             1.   Wikis are websites that allow users to edit existing content and add new information; they are
                  prime examples of user-generated content and collective intelligence
                     a) True
                     b) False

            2.   MySpace and Facebook are two examples of social networking sites
                   a) True
                   b) False

             3.   Combine content or functionality from existing web services, websites and RSS feeds to
                  serve a new purpose. Some examples are housingmaps.com, secretprices.com and
                  chicagocrime.org
                     a) Mashups
                     b) Gadgets
                     c) LBS
                     d) None above

            4.   H1, H2…H6 are tags used to define HTML tables
                     a) True
                     b) False

            5.   Select and Option tags are used to create a drop-down list
                     a) True
                     b) False


Image as a hyperlink (the href and src values below are placeholders)
<a href="page.html">
  <img src="image.gif" alt="description">
</a>

This code shows how to create one column/one row table
<table>
   <tr>
      <td></td>
   </tr>
</table>

This code shows how to create an ordered list
<ol>
  <li></li>
  <li></li>
</ol>

    10. The most popular web servers are IIS and APACHE
            a.   True
            b.   False


    4. Some attributes of Web 2.0 are: involves the user, enables conversation, embraces an
       architecture of participation and harnesses collective intelligence:
                   a) True
                   b) False
SEMANA 07
Semana 07 | Lecturas

      Capítulos 9, 10, 11 y 19. (Capítulo 19 es OPCIONAL)
            Cap 09: JScript F(x)
            Cap 10: JScript arrays
            Cap 11: JScript objects
            Cap 19: SilverLight
      Building Enterprise Portals: Principles to Practice, Tushar Hazra.
      Smart Client: Quick Start Guide.   (Lectura OPCIONAL)
SEMANA 08
Semana 08 | Lecturas

      Capítulo 12, 13 y 18 (Capítulo 18 es OPCIONAL)
            Cap 12: DOM
            Cap 13: JScript events
            Cap 18: Flex
      Tutorial Flex. Flex Org   (Lectura OPCIONAL)
Semana 08 Examen 03

An Enterprise portal is an enterprise-wide integration of business applications on the web
    a) True
    b) False

Flex allows web programmers to create user-friendly Internet applications:
    a) True
    b) False

“Align the IT organization model to achieve strategic business goals by taking advantage of
emerging technologies” is considered a best practice in designing an enterprise portal.
    a) True
    b) False

Most companies interested in developing an enterprise portal are facing these challenges:
   a) formulation of corporate strategy
   b) define the network topology
   c) identify the technology to implement
   d) all of the above
   d) all

In JavaScript, it is the container of attributes and behaviors:

        OBJECTS


Vertical portals are websites that serve as a universal entry point to the Internet:
    a) True
    b) False


A small piece of information, often no more than a short session identifier, that the HTTP server sends to
the browser when the browser connects for the first time:

        COOKIE


Javascript functions support:
   a) Recursion
   b) Parameter passing
   c) Objects
   d) all of the above

Is a key feature of an enterprise portal
     a) Security
     b) Reliability
     c) All of the above
     d) none of the above
It is a private network that uses a public network (usually the Internet) to connect remote sites or
users together. Instead of using a dedicated, real-world connection such as a leased line, it uses
“virtual” connections routed through the Internet from the company’s private network to the
remote site of the employee

    a)   LAN
    b)   WAN
    c)   IntraNet
    d)   VPN
1. The best way to develop and maintain a large program is to construct it from small, simple pieces, or
   modules. This technique is called divide and conquer.
       a. True
       b. False

2. A recursive function is a function that calls itself, either directly or indirectly through another function:
       a. True
       b. False

3. JavaScript arrays are “static” entities in that they can change size after they are created
       a. True
       b. False

4. Two ways to pass arguments to functions [or methods] in many programming languages are pass-by-
   value and pass-by-reference
       a. True
       b. False

5. Objects are a natural way of thinking about the world and about scripts that manipulate XHTML
   documents
      a. True
      b. False

6. In object-oriented languages, the unit of programming is the function
       a. True
       b. False

7. The DOM gives you access to all the elements on a web page. Using JavaScript, you can create, modify
   and remove elements in the page dynamically
       a. True
       b. False

8. Event “bubbling” is the process whereby events fired in child elements “bubble” up to their parent
   elements. When an event is fired on an element, it is first delivered to the element’s event handler [if
   any], then to the parent element’s event handler [if any]
       a. True
       b. False

9. RIA = Rich Internet application
       a. True
       b. False

10. The first line of an MXML file declares the document to be an XML document, because MXML is a type of
    XML
        a. True
        b. False
1) Modules in JavaScript are called “functions”, and the prepackaged functions that
belong to JavaScript objects are called “methods”:


   a) True

   b) False

3) Recursion and iteration involve repetition: recursion explicitly uses a repetition
statement; iteration achieves repetition through repeated function calls.


   a) True

   b) False

4) An array is a group of memory locations that all have the same name and normally are
of the same type:


   a) True

   b) False

5) OOD = Object Oriented Development


   a) True

   b) False

6) A cookie is a piece of data that is stored on the user’s computer to maintain
information about the client during and between browser sessions:


   a) True

   b) False



8) Is the process by which events fired in child elements “bubble” up to their parent
elements:


   a) DOM

   b) OOD

   c) JSON

   d) Event Bubbling
9) Enterprise Portal [EP] is an enterprise-wide integration of business applications to the Web -
specifically devised to avail the benefits of the Internet.


    a) True

    b) False

10) The five Key Elements of Enterprise Portal Environment are content management, knowledge
management, collaboration, ERP and security:


    a) True

    b) False
SEMANA 09
Semana 09 | Lecturas

      Capítulo 25 (Lectura OPCIONAL).- ASP .net 2.0 + AJAX
       Your Next IT Strategy. Hagel and Seely Brown.


Over the past year, as the hype over e-commerce has subsided, a new chorus of promises about the potential of the Internet has been gaining volume. The singers this time are not dot-coms and their backers but rather the big providers of computer hardware, software, and services. What they’re promoting, through a flurry of advertisements, white papers, and sales pitches, is a whole new approach to corporate information systems. The approach goes by many different names – Microsoft calls it “.Net,” Oracle refers to “network services,” IBM touts “Web services,” Sun talks about an “open network environment” – but at its core is the assumption that companies will in the future buy their information technologies as services provided over the Internet rather than owning and maintaining all their own hardware and software.

No doubt, many executives are skeptical. They’ve heard outsized promises and indecipherable buzzwords before, and they’ve wasted a lot of time and money on Internet initiatives that went nowhere. This time, though, there’s an important difference. The technology providers are not making empty promises: They’re backing up their words with massive investments to help create the infrastructure needed to make the new IT approach work. As these efforts continue, over the next year or two, a steady stream of new, Internet-based services will come on-line, providing significant cost savings over traditional, internal systems and offering new opportunities for collaboration among companies. Slowly but surely, all your old assumptions about IT management will be overturned.

In this article, we will provide an executive’s guide to the new IT strategy. We will explain what the Web services architecture is, how it differs from traditional IT architecture, and why it will create substantial benefits for companies. We will also lay out a measured, practical plan for adopting the new architecture – a step-by-step approach that will pay for itself while mitigating the potential for organizational disruption. Indeed, we believe that two of the great advantages of the Web services architecture are its openness and its modularity. Companies won’t need to take high-risk, big-bang approaches to its implementation. They can focus initially on opportunities that will deliver immediate efficiency gains, incorporating new capabilities as the infrastructure becomes more robust and stable.

The New Architecture

Until now, companies have viewed their information systems as proprietary. They bought or leased their own hardware, wrote or licensed their own applications, and hired big staffs to keep everything up and running. This approach has worked, but it has not worked well. After years of piecemeal technology purchases, companies have inevitably ended up with a mishmash of disparate systems spread throughout different units. Over the last decade, in efforts to merge these “data silos,” many big companies have invested large amounts of money – hundreds of millions of dollars, in some cases – in massively complex enterprise-resource-planning systems, which offer suites of interlinked applications that draw on unified databases. The ERP systems have certainly solved some problems, but they’ve been no panacea: Most big companies still struggle with a hodgepodge of hundreds of incompatible systems. And ERP systems have also created new problems. Because they’re relatively inflexible, they tend to lock companies into rigid business processes. It becomes hard, if not impossible, to adapt quickly to changes in the marketplace, and strategic restructurings, through acquisitions, divestitures, and partnerships, become fiendishly difficult to pull off.

In effect, the companies that have installed ERP systems have replaced their fragmented unit silos with more integrated but nonetheless restrictive enterprise silos. The Web services architecture is completely different. Constructed on the Internet, it is an open rather than a proprietary architecture. Instead of building and maintaining unique internal systems, companies can rent the functionality they need – whether it’s data storage, processing power, or specific applications – from outside service providers. Without getting too technical, the Web services architecture can be thought of as comprising three layers of technology, as described in the sidebar “An Overview of Web Services.” At the foundation are software standards and communication protocols, such as XML and SOAP, that allow information to be exchanged easily among different applications. These tools provide the common languages for Web services, enabling applications to connect freely to other applications and to read electronic messages from them. The standards dramatically simplify and streamline information management: you no longer have to write customized code whenever communication with a new application is needed.
The service grid, the middle layer of the architecture, builds upon the protocols and standards. Analogous to an electrical power grid, the service grid provides a set of shared utilities – from security to third-party auditing to billing and payment – that makes it possible to carry out mission-critical business functions and transactions over the Internet. In addition, the service grid encompasses a set of utilities, also usually supplied and managed by third parties, that facilitates the transport of messages (such as routing and filtering), the identification of available services (such as directories and brokers), and the assurance of reliability and consistency (such as monitoring and conflict resolution). In short, the service grid plays two key roles: helping Web services users and providers find and connect with one another, and creating trusted environments essential for carrying out mission-critical business activities. The role of the service grid cannot be overemphasized: A robust service grid is vital to accelerating and broadening the potential impact of Web services. Without it, Web services will remain relatively marginal to the enterprise. The top layer of the architecture comprises a diverse array of application services, from credit card processing to production scheduling, that automate particular business functions. It is this top layer that, day to day, will be most visible to you, your employees, your customers, and your partners. Some application services will be proprietary to a particular company or group of companies, while others will be shared among all companies. In some cases, companies may develop their own application services and then choose to sell them on a subscription basis to other enterprises, creating new and potentially lucrative sources of revenue.
The Web services architecture has three layers.

The most fundamental layer consists of software standards (such as XML) and communication protocols
(such as SOAP and its likely extensions) that make it possible for diverse applications and organizations to do
business together electronically.
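
To make this foundation layer a little more concrete, the following sketch shows what a minimal SOAP-style XML message might look like on the wire. The service name, namespace URI and element names are hypothetical illustrations, not taken from the article.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical SOAP 1.1 request asking a remote service for the status
     of a purchase order; everything below except the standard SOAP
     envelope namespace is invented for illustration. -->
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrderStatus xmlns="http://example.com/procurement">
      <OrderId>PO-2001-0457</OrderId>
    </GetOrderStatus>
  </soap:Body>
</soap:Envelope>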

The middle layer is the service grid, through which specialized utilities provide key services and tools. Four
types of utilities operate over the service grid. Shared utilities provide services that support not only the
application services residing in the top layer but also the other utilities within the service grid. For example,
security utilities provide such services as authentication, authorization, and accounting. Performance
auditing and assessment utilities assure users of Web services that they will obtain agreed-upon levels of
performance and will be compensated for damages if performance falls below these levels. Billing and
payment utilities aggregate charges for the use of Web services and ensure prompt and full payment.
Transport management utilities include messaging services to facilitate reliable and flexible communication
among application services as well as orchestration utilities that help companies assemble sets of
application services from different providers. Resource knowledge management utilities include service
directories, brokers, and common registries and repositories that describe available application services
and determine correct ways of interacting with them. They also include specialized services for converting
data from one format to another.

Service management utilities ensure reliable provisioning of Web services. They also manage sessions and
monitor performance to ascertain conformance to service-quality specifications and service-level
agreements.

The top layer encompasses a diverse array of application services that support day-to-day business
activities and processes –everything from procurement and supply chain management to marketing
communications and sales.
To illustrate how the architecture works, let’s contrast the way a typical business activity – loan processing
by a bank – would be carried out through a traditional proprietary architecture and the Web services
architecture. Loan processing is a complex procedure requiring at least six steps (data gathering about an
applicant, validation of data, credit scoring, risk analysis and pricing, underwriting, and closing) and involving
interactions with a number of other institutions (checking an applicant’s credit rating, verifying investment
and loan balances, and so on). With a traditional IT architecture, the process is usually supported by one very complicated application maintained by an individual bank; like a Swiss Army knife, the integrated application does a lot of things, but it may not do any of them particularly well. And since the costs of
maintaining electronic connections with other institutions are high, requiring leased communication lines
and expensive software to link different systems, the necessary interactions are often handled manually
through phone calls and faxes. The process, in sum, is cumbersome, costly, and prone to errors. With the
Web services architecture, loan processing becomes much more flexible, automated, and efficient. Leased
lines are replaced with the Internet, and open standards and protocols take the place of proprietary
technologies. As a result, the bank can connect automatically with the most appropriate institution for each
transaction, speeding up the entire process and reducing the need for manual work. And rather than
maintain its own integrated loan-processing system, the bank can take a modular approach, using
specialized Web services supplied by an array of providers. It can also shift easily among providers, using one
service, say, for risk analysis of loans to restaurants and another for risk analysis of loans to hospitals. In
other words, the bank will always be able to use the best tool for the job at hand; it will no longer have to
compromise on performance to avoid the complexity of integrating proprietary applications.
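
As a rough sketch of this modular approach, the bank might send the same standards-based XML document to whichever specialized risk-analysis provider fits the loan at hand; only the endpoint changes, not the message. The document below is entirely hypothetical and is meant only to illustrate the idea.

<!-- Hypothetical risk-analysis request; because it follows a shared XML
     format, the same document could be posted to a restaurant-loan
     specialist or to a hospital-loan specialist. -->
<RiskAnalysisRequest xmlns="http://example.com/loan-processing">
  <Applicant>
    <Name>Example Restaurant Group</Name>
    <Industry>restaurant</Industry>
  </Applicant>
  <LoanAmount currency="USD">250000</LoanAmount>
  <TermMonths>60</TermMonths>
</RiskAnalysisRequest>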

Clearly, the Web services architecture offers important advantages over its predecessor. First, it represents a much more efficient way to manage information technology. By allowing companies to purchase only the functionality they need when they need it, the new architecture can substantially reduce investments in IT assets. And by shifting responsibility for maintaining systems to outside providers, it reduces the need for hiring numerous IT specialists, which itself has become a significant challenge for many companies. Using
Web services also reduces the risk that companies will end up using obsolete technologies; third-party
utilities and application providers will be required to offer the most up-to-date technologies in order to
compete. Companies will no longer find themselves stuck with outdated or mediocre applications and
hardware. The standardized, plug-and-play nature of such an architecture will also make it much easier for
companies to outsource activities and processes whenever it makes economic sense. (See the sidebar “Big
Changes for Your IT Department.”)

Second, and perhaps more important, the Web services architecture supports more flexible collaboration,
both among a company’s own units and between a company and its business partners. When traditional
information systems need to talk to each other, they do so through dedicated, point-to-point connections.
For example, when a sales-force-management application needs to send information on closed sales to a
payroll processing application for the computation of commissions, a programmer has to write a special
piece of code–a connector–to allow the two systems to communicate. The problem with such point-to-point
connections is that they are fixed and inflexible and, as they proliferate, become nightmares to manage.
With the Web services architecture, tight couplings will be replaced with loose couplings. Because
everyone will share the same standards for data description and connection protocols, applications will be
able to talk freely with other applications, without costly reprogramming. This will make it much easier for companies to shift operations and partnerships in response to market or competitive stimuli. The loose-coupling approach of Web services also makes it an attractive option within an organization. CIOs can use
the Web services architecture to more flexibly integrate the extraordinarily diverse set of applications and
databases residing within most enterprises while at the same time making these resources available to
business partners.
Until now, what’s been called e-business has for the most part been a primitive patchwork of old
technologies. Most companies that do business on the Internet have had to yoke together existing systems
with new ones to create the illusion of integration. A visitor using a corporate Web site may think it’s a
single, streamlined system, but behind the scenes, people are often manually taking information from one
application and entering it into another. Such swivel chair networks, as they have come to be known, are
inefficient, slow, and mistake ridden. Merrill Lynch, like almost all large companies, has struggled to patch
together hundreds of different applications to support its sites for customers. John McKinley, the company’s
CTO, draws an analogy to the Potemkin villages in czarist Russia, where brightly painted facades hid the
unseemly reality of run-down homes. The Web services architecture promises to solve this problem.

Taking the people out of the network, the architecture will enable connections between applications – both within and across enterprises – to be managed automatically.

First Steps to Success

The construction of the Web services architecture is still in its early stages, and years of investment and refinement will be required
before a mature, stable architecture is in place. This does not mean, however, that companies should wait
to begin the transition to a new IT strategy; even today, benefits can be gained by moving to a Web services
model for certain activities and processes. But it does mean that companies should take a pragmatic,
measured approach. Fortunately, the Web services architecture is ideally suited to such an approach:
Because it’s based on open standards and it leverages the capabilities of third parties, companies don’t
have to place big bets at the outset. They can carefully stage their investments, learning important lessons
along the way. (See the sidebar “Five Questions You Need to Ask.”) Merrill Lynch’s McKinley, for example, is
currently leading a number of initiatives designed to take advantage of Web services. One initiative is the
creation of an innovative portfolio-analysis system for use by brokers and selected customers. By using XML
to link disparate systems within Merrill Lynch as well as to integrate information from partner organizations,
the new system will tie together customer information, product information, and real-time market data in a
flexible, low-cost way. It will give the company’s brokers up-to-the-second, integrated views of all the
information they need to meet a customer’s needs at any given moment. Merrill Lynch is also using a Web
services approach to enable brokers and clients to access information and applications from a wide variety
of devices, including computers, PDAs, cell phones, and conventional phones. Both of these projects offer immediate business benefits: They provide an important competitive advantage to the company’s salespeople while delivering added value to customers. Merrill Lynch’s experience, as well as that of other early
adopters such as General Motors and Dell, offers three guidelines for other companies looking to get a
head start.

Build on your existing systems. The Web services architecture should initially be viewed as an adjunct to
your current systems. Through a process we call node enablement, you can use Web or application servers
to connect your traditional applications, one at a time, to the outside service grid, turning them, in effect,
into nodes on the Internet. Node enablement is often as simple as creating an explicit record of the
connection specifications of an application – documenting, in other words, its application programming
interfaces, or APIs – along with the application’s name, its Internet location, and procedures for connecting
with it. The existing application is left intact but is “exposed” so that it can be found and accessed by other
applications in the Web services architecture. The process of node enablement should be systematic, driven
by near-term needs but shaped by a view of longer-term opportunities.
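
The passage above describes node enablement as little more than publishing an application’s name, Internet location and connection procedures. A hedged sketch of the kind of information such a record might contain follows; it is not a real WSDL or UDDI document, and every name in it is invented.

<!-- Hypothetical registry entry exposing an existing application as a node.
     Real deployments would typically use WSDL and a registry such as UDDI;
     this sketch only illustrates the kind of information recorded. -->
<ServiceDescription xmlns="http://example.com/node-registry">
  <Name>InventoryLookup</Name>
  <Location>https://services.example.com/inventory</Location>
  <Protocol>SOAP/HTTP</Protocol>
  <Operation name="FindItem">
    <Input>ItemSearchCriteria</Input>
    <Output>ItemMatchList</Output>
  </Operation>
</ServiceDescription>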

General Motors provides a useful example of this process. Mark Hogan, the president of eGM, a business
unit created by the auto giant to oversee its consumer Internet initiatives, is a strong advocate of the Web
services architecture. Like Merrill Lynch, eGM began with fairly conventional Web sites connecting the
company with customers and dealers. Now, however, Hogan and his team have developed a road map for
using Web services to move GM to a dramatically new build-to-order manufacturing and distribution
model, which will enable the company to generate added revenue and use its assets much more efficiently.
This initiative requires the ability to communicate and collaborate electronically with more than 8,000
dealers, all with information systems of widely differing specifications and sophistication. Few of the dealers
have cutting-edge IT skills, and fewer still have the money to invest in major new applications. Given these
constraints, says Hogan, “traditional IT architectures simply aren’t up to the task.
The Web services architecture provides the only way to rapidly enhance our IT platform.” By applying node
enablement to existing applications at GM and at the dealers, new processes can be rolled out incrementally
with relatively modest investment. For the first stage in the transition, GM is focusing on using Web services
to enhance its traditional build-to-stock model, providing a broader set of options for dealers and
customers. It has provided dealers, for example, with a locate-to-order functionality – a Web services–based application that quickly finds specific car models in the inventories of other dealers. GM is also planning to roll out an order-to-delivery application, which will shorten the lead time between placing a customer order and delivering the vehicle. Such interim steps will pave the way to offering the ultimate build-to-order model, which will require the reconfiguration of manufacturing operations and a more sophisticated deployment of Web services. The payoff is expected to be enormous. GM’s long-term goal is to cut in half its $25 billion investment in inventory and working capital. Analysts at Goldman Sachs estimate
that supply chain initiatives using Web services could ultimately reduce GM’s operating cost per vehicle by
more than $1,000. Yet the staged approach to change allows GM to shift its IT architecture slowly, avoiding
disruption and focusing only on systems that will deliver real economic paybacks at each stage of
deployment. It also allows the company to temper the risk involved in moving to a new technology platform,
since GM’s efforts are tied to the evolution of the architecture.

Start at the edge. In implementing the new architecture, early adopters are concentrating their initial efforts
at the edges of their enterprises–on the applications and activities that tie their companies to customers or
to other companies. Sales and customer support are obvious examples of edge activities, as are
procurement and supply chain management. Less obviously, some traditionally internal functions can be
pushed out to the edges as a result of outsourcing. In the electronics industry, for example, many
production activities are being contracted to specialized manufacturing service providers, creating a need to
share formerly proprietary applications and data. Why is there so much focus on the edge? Because that’s
where the limitations of existing IT architectures are most apparent and onerous. Almost by definition, an
application on the edge can benefit by being shared. As a result, it suffers most from the difficulties in connecting proprietary, heterogeneous systems. As GM found, rolling out a new set of applications to its far-flung dealer network was next to impossible before the emergence of Web services. Dell Computer provides
a great example of the benefits of starting at the edge. Dell’s relationships with its suppliers of components
and other direct materials are critical to the company’s strategy. The total amount the company spends on direct materials equals as much as 70% of its revenue, so even modest savings in supply chain costs will have a big impact on the bottom line. A related and equally important concern to Dell is inventory management. In the personal computer industry, where product prices have recently been declining at 0.6% per week, excess inventory can become very costly. Recognizing the huge gains possible from more effective supply-chain management, Dell focused its early Web services initiatives in this area. It began by more closely
connecting its assembly operations with the network of outside logistics providers that operate the
distribution centers for direct materials – the vendor-managed hubs, as Dell calls them. Traditionally, the
company had to hold substantial inventory in the supply chain to ensure that products could be delivered
quickly to customers. Its goal was to fill orders in five days, yet it took suppliers an average of 45 days to fill
materials orders. To ensure it did not run short of key components, suppliers had to maintain ten-day
inventory buffers at vendor-managed hubs, and Dell had to maintain buffers of 26 to 30 hours at its own
assembly plants. In addition, every week, Dell distributed a new 52-week demand forecast to all suppliers.

Today, Dell generates a new manufacturing schedule for each of its plants every two hours, reflecting actual orders received, and publishes these schedules as a Web service via its extranet. Because the schedules are in XML format, they can be fed directly into the disparate inventory-management systems maintained by all the vendor-managed hubs. The hubs always know Dell’s precise materials requirements and can deliver the materials to a specific loading dock at a specific building, from which they are fed immediately into an assembly line. With this new approach, Dell has been able to cut the inventory buffers at its plants to just three to five hours. Explains Eric Michlowitz, the company’s director of supply chain e-business solutions, “We’ve been able to remove the stock rooms from the assembly plant, because we now pull in only materials specifically tied to customer orders. This has enabled us to add more production lines, increasing our factory utilization by one-third.”
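
The article states only that the schedules are published in XML so that every hub’s inventory-management system can read them directly. A hypothetical fragment in that spirit might look like the sketch below; the element names, part numbers and quantities are invented for illustration.

<!-- Hypothetical manufacturing-schedule fragment; all names and values
     are invented and do not reflect Dell's actual format. -->
<ManufacturingSchedule plant="Plant-1" generated="2001-09-14T10:00:00">
  <LineItem>
    <PartNumber>HD-40GB-7200</PartNumber>
    <Quantity>480</Quantity>
    <DeliverTo building="Assembly-3" dock="D-12"/>
    <NeededBy>2001-09-14T14:00:00</NeededBy>
  </LineItem>
</ManufacturingSchedule>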
Of course, such lean manufacturing approaches often just push inventory back from the manufacturer to the supplier. Dell’s goal, however, is to eliminate excess inventory throughout the supply chain. So the company is now focusing on reducing the buffers held at the hubs. These stocks could be cut substantially if supply problems could be identified earlier. If, for example, Dell knew that one supplier was having a problem fulfilling an order for a particular part, it might be able to temporarily remove from its Web store the computer model that used the part. This would, in turn, enable a reduction in the stocks of the part held at the hubs. To establish such an early warning system for its supply chain, Dell is rolling out an “event management” Web service, again using its extranet. This service automatically sends out queries on the status of orders to suppliers, whose own systems automatically send back responses. Dell expects that this system will reduce hub inventories by as much as 40% while at the same time significantly improving gross margins by better matching demand and supply.

Create a shared terminology. The move to a shared IT
architecture raises an obvious question of control: Who calls the shots? Within a single company, a CIO can
impose a set of standards governing information technology (requiring, for example, that accounts always
be represented in applications as “ACCTS”). But once a group of companies,each with different internal
systems and standards, begins to collaborate electronically, establishing clear lines of authority becomes
difficult. In some cases, one company will have the market power to impose standards on its partners, but
these situations are rare and, given the increasing complexity and fluidity of corporate partnerships, usually
unwise. Instead, shared meaning, and the trust it engenders,must develop much more or- ganically among
participants. Incremental implementation of Web services can aid this process.By starting with a few long-
standing business partners–as GM did with its dealers and Dell did with its logistics providers–companies
gain room to experiment; they can establish through trial and error a common technical language.Then, as
they learn what works and what doesn’t, they can expand the orbit of their partnerships to encompass new
companies.

Trying to engage with too many partners too fast is one of the main reasons that so many on-line market
makers have foundered: The transactions they had viewed as simple and routine actually involved many
subtle distinctions in terminology and meaning. That doesn’t mean that shared standards can’t be
established among large groups; it just can’t be done easily or overnight. Traditional distributors spend years
learning the shades of meaning used by different buyers and sellers. A produce distributor, for example, has
to build an understanding of how each of its suppliers quantifies the ripeness of an orange as well as how
each buyer evaluates ripeness. It is only then that the distributor will have the knowledge and the authority
to create a standard rating system for oranges and promote its adoption throughout the community of
buyers and sellers. XML can be a powerful tool for building shared meaning in Web-based communities, but
it’s important to understand that XML isn’t a cure-all. While XML establishes a common grammar – a
framework for sharing meaning – it establishes only very limited semantics.

The precise meanings of XML terms still need to be determined by the actual partners. For instance, a particular XML tag may refer to the price of a product, but that doesn’t tell you if it’s the net price after discounts, if it includes shipping, and so on. Subtleties of meaning have to be hashed out before business can be conducted in all its inevitable complexity. And don’t expect the meanings, once established, to stay fixed. They will evolve as partners gain experience and discover shortcomings in their shared processes.
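
A small, hypothetical illustration of the point: two partners can both emit a syntactically valid price element and still mean different things, which is why the semantics have to be agreed on explicitly (for example, through extra attributes or a shared definition).

<!-- Both fragments are well-formed XML, but "Price" means something
     different to each partner; hypothetical example only. -->

<!-- Partner A: net price after discounts, shipping excluded -->
<Price currency="USD" basis="net" includesShipping="false">98.50</Price>

<!-- Partner B: list price before discounts, shipping included -->
<Price currency="USD" basis="list" includesShipping="true">109.00</Price>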
The service grid will play an important role in helping business communities build shared meaning, since a
set of utilities will be established to facilitate the development of trading standards. In many cases, the
dominant companies within private trading networks will provide these utilities. In other cases, industry
consortia will take the lead. RosettaNet is an early example of a consortium-driven utility. It is defining and
promoting the adoption of standard XML formats to describe processes in the supply chain of the
electronics industry, enabling all participants to use the same terms to describe activities like issuing
purchase orders. Such utilities might also be provided by independent businesses that are focused solely on
developing XML or other software standards within an industry or across industries. Shared meaning will
naturally increase as the use of the Web services architecture expands. In the architecture’s current, early
stage of development, incentives for its adoption are limited because relatively few application services are
available and the functionality of the service grid is limited. In this period, early movers like Merrill Lynch,
GM, and Dell play key roles by providing their business partners with compelling reasons to use Web
services. Over time, as additional resources become accessible, the benefits of adopting this architecture
will become compelling to more and more companies. Newcomers will find it advantageous to adopt
meanings already in use in order to tap into existing applications and utilities.
A Platform for Growth

Although many of the early uses of Web services will focus on reducing costs, efficiency-driven initiatives are only the beginning. Ultimately, the greatest beneficiaries of this new technology will be companies that harness its power for revenue growth. (See the sidebar “Unbundling and Rebundling.”) The new architecture provides, for example, a platform for companies to offer their core
competencies as services to other companies. Smart businesses, in other words, won’t just consume Web
services; they’ll also sell them. That’s exactly what Citibank is doing right now. It saw that one drawback to
early on-line exchanges was the inability to handle payments: Participants would use an exchange to reach
agreement on the terms of a transaction but would then have to process payments either manually or
through specialized banking networks. Leveraging its deep skill in electronic payments, Citibank quickly
introduced CitiConnect, an XML-based payment-processing service that plugs into existing trading
applications.

Here’s how it works. A company purchasing supplies through an Internet exchange platform, such as one
offered by Commerce One, registers information about the authorization levels for specific employees and
the corporate bank accounts to be used for payment. When a purchase is made, the buyer clicks the
CitiConnect icon on the Web site. An XML message containing payment instructions is automatically
assembled, specifying the amount involved, the identity of the buyer, the identity of the supplier, the bank
from which to withdraw funds, the bank to transfer the funds to, and the timing of payment. The
message is then routed, according to predefined rules, to the appropriate specialized settlement networks.
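
Based only on the fields the article names (the amount, the buyer, the supplier, the source and destination banks, and the timing of payment), a payment-instruction message of this kind might be sketched roughly as follows. The element names and values are invented and are not CitiConnect’s actual format.

<!-- Hypothetical payment instruction carrying the fields named in the
     article; not the actual CitiConnect message format. -->
<PaymentInstruction xmlns="http://example.com/payments">
  <Amount currency="USD">125000.00</Amount>
  <Buyer id="BUYER-0042"/>
  <Supplier id="SUPP-7781"/>
  <DebitBank routing="000000001"/>   <!-- placeholder bank identifiers -->
  <CreditBank routing="000000002"/>
  <SettlementDate>2001-10-01</SettlementDate>
</PaymentInstruction>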

The benefits for buyers and sellers are compelling: Sellers cut settlement times by 20% to 40%, and both
buyers and sellers reduce settlement costs by 50% to 60%.

   Business-oriented management of web services,   Casati, Shan, Dayal and Shan: Biblioteca Digital
   Tutorial JSP
SEMANA 10
Semana 10 Examen 04

   1. According to Hagel and Brown in the article “Your Next IT Strategy”, web services
      architecture has three layers:
          a) True
          b) False

   2. In the Web Services architecture, “Service management utilities” include service
      directories, brokers and common registries and repositories that describe available
      application services and determine correct ways of interacting with them:
           a) True
           b) False

   3. Web services architecture supports more flexible collaboration, both among a company’s
      own units and between a company and its business partners.
         a) True
         b) False

   4. Management through web services refers to leveraging the Web services for managing
      heterogeneous and distributed systems, thanks to their ability to reduce heterogeneity
      through standardized interaction paradigms.
          a) True
          b) False

   5. Management of Web services and management through Web services, both refer exactly
      to the same idea:
          a) True
          b) False

   6. Management of web services can be classified based on its scope; we distinguish:
         a) Infrastructure level
         b) Application level
         c) Business level
         d) All above
7. Focuses on the web services platform; its goal is to ensure the different components of
   the platform are operating with acceptable performance:
       a) Infrastructure level
       b) Application level
       c) Business level
       d) All above

8. A Web Services Manager is characterized by these ingredients:
      a) Service Model and metric model
      b) Development and runtime environment
      c) All above
      d) None Above

9. The Web Service Manager ______ include conversation and composition information,
   typically gathered from local service repositories and from a service composition engine:
       a) Service Model
       b) Metric model
       c) Development and runtime environment
       d) None Above

10. Modularization considerably complicates report definition and maintenance
      a) True
      b) False
SEMANA 11
Semana 11 | Lecturas

      Peer-to-peer computing (P2P), Dejan Milojicic
Semana 11 | Lecturas | Semantic Web

                                                                              The Semantic Web: Meaning And SOA.


Semantics is just a fancy word for understanding what things truly mean. In distributed
IT environments, semantic interoperability enables applications to understand the precise meaning of each
piece of data that they import, acquire, retrieve and otherwise receive from elsewhere. Without a transparent
view into the semantics of externally originated content, applications cannot know how to validate, map,
transform, correlate and otherwise process that information without garbling its meaning. Semantic
interoperability is, and always has been, one of the principal tasks in real-world integration projects.
Typically, it requires sweat equity by business analysts and data architects, who must define data mappings to
ensure that meaning is not lost or misconstrued when data is transformed for use by target applications. This
can be a complex, error-prone exercise, because separate application domains often use different data
syntaxes, schemas and formats to describe semantically equivalent entities, such as a particular customer's
various records or a specific product's various descriptions. Complicating the integration process is the fact
that application domains rarely describe their semantics—in other words, the entity-relationship conceptual
models that inform their data structures— in any formal or consistent way. Furthermore, developers often find
relational data structures hard to fathom when they are trying to associate a complex set of linked tables with a
coherent, business-level conceptual model. Integration specialists often must infer semantics from
sketchy project documentation, and then create cross-application data mappings that are based on those
inferences. Most integration issues in the real world, including data mapping and semantics, are being
addressed on a project-by-project basis. In the ideal world of the Semantic Web, a consistent level of semantic
interoperability would prevail across all applications, and semantics standards would be implemented
universally, thereby accelerating, automating and tightening integration among heterogeneous environments.
This article summarizes the various efforts to achieve these semantics standards, and assesses their progress
and prospects.

What Is The Semantic Web?
At heart, the Semantic Web is a vision for how the Web should evolve to realize its full potential for
delivering a service-oriented architecture (SOA — indeed, some industry observers have taken to calling it
"Semantic SOA"). The idea has been percolating within the SOA community since the late 1990s. It has been
promoted primarily by World Wide Web inventor Tim Berners-Lee, and it continues to be developed through a formal activity of the World Wide Web Consortium (W3C), which Berners-Lee heads. Since its birth in the early 1990s, the Web (what we might today call "Web 1.0") has transformed the Internet into an open book that—through common interoperability standards such as HyperText Transfer Protocol (HTTP), HyperText Markup Language (HTML), and Extensible Markup Language (XML)—allows content everywhere to
be available, readable, searchable and comprehensible to human consumers. The recent "Web 2.0" wave has
taken that concept a step further by making it even easier for people to publish a wider range of content—such
as blogs, wikis, podcasts and mashups—on the Web. The Semantic Web initiative, which some call
"Web 3.0," brings non-human content consumers —including services, applications, bots and other automated
components—into the loop. Organizations can implement W3C-developed semantics standards—such as
Resource Description Framework (RDF) and Web Ontology Language (OWL)—to make the meaning of content unambiguously comprehensible to these components. Much of the W3C's focus is on machine-readable application and data semantics. Despite the existence of these and related standards, people vary
widely in how they interpret the scope of the Semantic Web initiative, and the market is swarming with
projects, products and tools that implement different variants of this vision. In the broadest perspective, the
Semantic Web may be understood as referring to an all-encompassing metadata, description and policy
layer that enables universal, automatic, comprehensive end-to-end interoperability across every macro or
micro entity—including data, components, applications and services—on every conceivable level.
At the most down-to-earth, though, Semantic Web principles are usually construed as the ability to associate data with controlled, application-domain-specific conceptual models known as "ontologies" (more below on
ontologies). Semantic Web approaches are being applied to integration requirements in the following areas:
• Enterprise content management (ECM)—Semantic approaches could be used to enable more powerful
discovery, indexing, search, classification, commentary and navigation across heterogeneous stores of
unstructured and semi-structured content. Semantic search—driven by concepts, not mere text strings—is
regarded by many as the potential killer application of Semantic Web technology. Indeed, many Semantic
Web vendors are primarily implementing the technology in search engines that use ontology-derived concepts to improve search accuracy and reduce spurious hits. (For more on search, see BCR, August 2007, pp. 28-31
and this issue, pp. 19-29)

• Enterprise information integration (EII)— Semantic approaches also could enable consolidated
viewing, query and update of structured data that has been retrieved from diverse sources. Indeed, most
commercial EII environments already present an abstract semantic layer that mediates access to
heterogeneous data, such as enterprise resource planning (ERP) and customer relationship management (CRM) applications, converging it all to a common presentation-side schema. A handful of these EII vendors—including BEA and Red Hat/MetaMatrix—have begun to support Semantic Web standards, primarily through third-party software plug-ins.

• Enterprise service bus (ESB)—Semantic approaches also could facilitate multilayered application, process
and service interoperability across disparate environments. To date, there has been little production
implementation of Semantic Web standards in the ESB arena, though vendors such as Telcordia Technologies
have adopted semantics, ontologies, and RDF to describe the conceptual models implemented by application
endpoints, agents and intermediary nodes within ESB-like middleware approaches such as event stream
processing (ESP). To some degree, the Semantic Web community is also loosely associated with Web 2.0
usages such as the "social bookmarking" or "folksonomy" initiatives of Del.icio.us, Digg and Reddit.
These sites provide online communities, within which users may collectively link, tag, classify, and comment
on Web content originated elsewhere (however, usually without reference to W3C specifications). While these efforts rely on end users to create informal, non-standard collections of descriptive tags for the content they
find while surfing the Web, the Semantic Web relies on professional developers to create and maintain
standards-based ontologies. Meanwhile, on the standards front, the Semantic Web vision is starting to bear
fruit, slowly but inexorably. In the past year, there has been an upsurge in industry attention to the W3C's
Semantic Web activity, due in part to the growing realization that SOA-based interoperability will
demand attention to semantics issues.

Standards for Implementing the Semantic Web
At the heart of Semantic Web environments is the notion of ontologies, which, as mentioned above, are
conceptual models comprising entity-relationship statements that have been expressed in RDF or in another knowledge representation language. RDF is an official W3C Recommendation that uses XML to define a rich
data model, syntax and vocabulary for the exchange of machine-understandable ontologies.
Within an RDF ontology, statements consist of well-defined "subjects," "predicates" and "objects." For example, in the statement "This BCR article has an author whose value is James Kobielus," the subject is "This BCR article," the predicate is "has an author," and the object is "whose value is James Kobielus." Under RDF notation, each of these "nodes" is designated with its own unique URI, and a syntactically complete statement can be created by concatenating subject, predicate, and object node URIs into a single structure called an "RDF triple." Application developers then associate, link, or map their data structures to the entity-relationship models—the ontologies—that are declared in RDF triples. Applications that are able to validate XML content against RDF ontologies will then be able to determine the precise meaning of each data element, as intended by the content's originating application. Having this degree of semantic context empowers the receiving application to process the content correctly, so as to avoid garbling or distorting its meaning.
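
As a sketch, the article's example statement can be written in RDF/XML roughly as follows. The document and person URIs and the "ex" vocabulary are placeholders invented for illustration; only the rdf namespace is standard.

<?xml version="1.0"?>
<!-- The example statement "This BCR article has an author whose value is
     James Kobielus" expressed as an RDF triple; all example.com URIs are
     invented placeholders. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.com/terms/">
  <!-- Subject: this article; predicate: ex:author; object: the person's URI -->
  <rdf:Description rdf:about="http://example.com/bcr/this-article">
    <ex:author rdf:resource="http://example.com/people/james-kobielus"/>
  </rdf:Description>
</rdf:RDF>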
RDF is the core specification in a growing range of Semantic Web standards and specifications under W3C,
including:

• OWL—This specification, which also is an official W3C Recommendation, extends RDF to support richer descriptions of resource properties, classes, relationships, equality and typing (a minimal sketch follows this list).

• SPARQL Query Language for RDF—This spec, currently a W3C Candidate Recommendation,
leverages XQuery and XPath to support queries across diverse RDF data sources.

• Gleaning Resource Descriptions from Dialects of Languages (GRDDL)—This specification,
currently a W3C Candidate Recommendation, specifies how an XML document can be marked up to declare
that it includes RDF-compatible data and also to specify links to algorithms— typically represented in
Extensible Stylesheet Language Transformations (XSLT)— for extracting this data from the document.
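
To give a feel for how OWL layers richer descriptions on top of RDF, here is a minimal, hypothetical fragment that declares an Article class, a Person class and an author property relating them. Only the rdf, rdfs and owl namespaces are standard; the example.com URIs are invented.

<?xml version="1.0"?>
<!-- Minimal, hypothetical OWL fragment: an author property whose subjects
     must be Articles and whose values must be Persons. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:owl="http://www.w3.org/2002/07/owl#">
  <owl:Class rdf:about="http://example.com/terms/Article"/>
  <owl:Class rdf:about="http://example.com/terms/Person"/>
  <owl:ObjectProperty rdf:about="http://example.com/terms/author">
    <rdfs:domain rdf:resource="http://example.com/terms/Article"/>
    <rdfs:range rdf:resource="http://example.com/terms/Person"/>
  </owl:ObjectProperty>
</rdf:RDF>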
At the very least, all Semantic Web implementations use RDF as their core ontology language, though many
also support OWL for its semantic richness, and a growing number are implementing SPARQL, GRDDL and
related W3C specifications. Profiles of some commercially available products that adhere to these are
described in "A Sampling of Semantic Web Solutions." Ontologies figure into Semantic Web environments
in the following scenarios:

• Semantic modeling—In this greenfield model for the development of application data, developers
explicitly model semantics as RDF/OWL ontologies, and/or as such related logical structures as taxonomies,
thesauri and topic maps. The ontologies are used to drive creation of structured content that instantiates the
entities, classes, relationships, attributes and properties defined in the ontologies.

• Semantic mediation—This is the typical use of Semantic Web approaches within heterogeneous
EII and other data integration environments. In this scenario, developers explicitly model semantics as
RDF/OWL ontologies, then use the ontologies to drive the creation of mappings, transformations and
aggregations among existing, structured data sets.

                       A Sampling of Semantic Web Vendors
There is a growing range of pure-play Semantic Web vendors, including the following:

• Cycorp: Headquartered in Austin, TX, Cycorp develops turnkey solutions in artificial intelligence,
knowledge representation, machine reasoning, NLP, semantic data integration, information management and
search. Its Cyc middleware combines an ontology (which has been placed in the public domain) with a
knowledge base, inference engine, natural language interfaces and semantic integration bus. The vendor
offers a no-cost license to its semantic technologies development toolkit to the research community.
In the Semantic Web arena, Cycorp is doing R&D into scenarios in which end users create lightweight local
ontologies that are subsequently elaborated, enriched and mapped to more formal global ontologies by
semantic inference engines.

• Sandpiper Software: Headquartered in Los Altos, CA, Sandpiper Software provides semantics tools,
consulting and training. Its Visual Ontology Modeler (VOM) 1.5 tool supports component-based ontology
modeling through frame-based knowledge representation. VOM, an add-in to IBM Rational Rose,
leverages UML to capture and represent knowledge unambiguously. VOM supports RDF/OWL-based
modeling of domain, interface, process and user ontologies. As a subscription service, Sandpiper also offers the Medius Ontology Library, which extends VOM's bundled ontology libraries to include application-specific ontologies plus utility ontologies for national, international and general metadata standards.

• SchemaLogic: Headquartered in Kirkland, WA, SchemaLogic provides an SOA-based business semantics
middleware suite, as well as semantics consulting and training services. The company's SchemaLogic
Enterprise Suite includes server components that gather, create, refine, reconcile and distribute ontologies,
taxonomies, tag libraries and other semantic metadata to subscribing applications over a realtime pub-sub
integration fabric. The suite includes a governance layer that supports collaborative, Web-based participation and feedback by users and subject matter experts in the creation and refinement of business semantics. Collaborative semantic governance may span organizational boundaries, with the resultant semantic artifacts capable of being propagated automatically to third-party search engines, content management applications, portals and other systems. For example, customers can use SchemaLogic Enterprise Suite to synchronize
content categories and descriptions across distributed deployments of Microsoft Office SharePoint
Servers.

• TopQuadrant: Headquartered in Alexandria, VA, TopQuadrant is a software vendor that provides an open
Java-based platform for development of Semantic Web applications. The TopBraid Suite includes tools and
components for building ontologies; developing inference rules and SPARQL-based queries; collaboratively
creating and browsing RDF-enabled content; extracting semantics from various data sources via GRDDL and
other interfaces; mediating between RDF/OWL and other formats; displaying rich model-driven user
interfaces; configuring and orchestrating semantic inference operations; and storing ontologies in third-party
RDF triple-store databases. The suite supports
• Semantic mining—In search and text mining/ analytics environments, developers use natural- language
processing (NLP) and pattern-recognition tools to extract the implicit semantics from unstructured text
sources. The extracted entities, relationships, facts, sentiments and other artifacts are used to fashion
RDF/OWL ontologies that drive the creation of indices, tags, annotations and other metadata that layer a
onsistent semantic structure across the various items within an unstructured text store. To sustain an ontology-
centric Semantic Web environment, the following functional components are necessary. These functional
components are implemented in most commercial and open sourced Semantic Web solutions. They include:

• Semantic tools—Application developers require a broad range of tools to help them work with ontologies,
taxonomies, thesauri, topic maps and other semantic constructs. Developers need tools to discover, query,
browse, analyze, visualize, model, design, edit, classify and annotate semantic constructs. They also need
tools to map among dissimilar ontologies, define transformation rules and attach descriptive tags and
metadata. Tools should support semantics development by individual developers or collaborative teams.
And semantics tools should integrate with Eclipse and other common platforms, and support visual
development in Unified Modeling Language (UML) and other modeling frameworks.

• Semantic engines—Application environments require runtime components to mediate interactions among
semantic-aware components, and also to interface with legacy systems. Runtime semantic engines should
support such functions as validating ontologies against standards; matching, mapping, transformation,
correlation and merging

Ontologies can be used to model, mediate and mine application data
browser-based access, collaborative semantic governance and ontology-based search. As noted earlier, some
Semantic Web vendors are partnering with established EII vendors to offer ontology-aware semantic-
integration layers for federated data query/update. They include:

• Modus Operandi: This vendor's Wave Semantic Data Services Layer product integrates with BEA's EII
solution—AquaLogic Data Services Platform (ALDSP)—via RDF/OWL ontologies. In so doing, it enables
semantic integration of information across diverse, dispersed corporate applications, databases and data
warehouses. It supports user-driven ad-hoc semantic search and query, relying on ontologies to reconcile
semantic conflicts among heterogeneous data. It also incorporates runtime services to crawl and index data
services, to visualize the integrated data, and to monitor data services status. Modus Operandi's ontology
development tool can be launched from within BEA WebLogic Workshop, and can also import any standard
OWL ontology developed in extemal tools. The tool deploys Wave semantic data services directly to ALDSP
running on BEA's WebLogic Server.

• Revelytix: This vendor's MatchIT integrates with the semantic data services layer in Red Hat/MetaMatrix's
EII environment. MatchIT supports automated semantic mapping to help domain experts reconcile, map and
mediate semantics across heterogeneous environments via RDF/OWL ontologies. It provides an extensible
ontology development tool that implements various sophisticated algorithms for determining semantic
equivalence. Some major data management vendors have begun to dip their toes in the Semantic Web
market through solutions of their own. These vendors include:

• Oracle: In July 2007, Oracle announced that it will incorporate Semantic Web support in the new "1 lg"
generation of its market-leading DBMS. Expected to become commercially available in the next several
months, Oracle Database 1 lg will incorporate the Semantic Web features that that vendor had previously
shipped through an optional add-on called Oracle Spatial lOg Release 2. In that previous release, which has
been on the market for two years, the vendor provides a data management platform for RDF-based
applications, supporting new object types to manage RDF data in Oracle. Based on a graph data model,
RDF triples are persisted, indexed and queried, similar to other object-relational data types. The Oracle lOg
RDF database ensures tliat application developers benefit from the scalability of the Oracle database to deploy
scalable semantic-based enterprise applications. Metatomix, Ontoprise and TopQuadrant have all announced
support for Oracle Spatial 1 Og Release 2.
• IBM: This vendor's Integrated Ontology Development Toolkit supports storage, manipulation, query and
inference of ontologies and corresponding data instances. It can be downloaded from IBM's AlphaWorks
site, and includes an ontology definition metadata model, workbench and repository. Its metamodel is a
runtime semantics library that is derived from the OMG's Ontology Definition Metamodel (ODM) and
implemented in Eclipse Modeling Framework (EMF). The Java-based workbench enables RDF/OWL
ontology building, management, visualization, parsing and serialization, plus transformation between
RDF/OWL and other data-modeling languages. The repository, Minerva, is a high-performance DBMS
optimized for OWL ontology storage, inference and query, implementing a subset of SPARQLn
of data to conform to standard ontologies; and inference-based extraction of implicit ontologies from
unstructured text sources. Semantic inference engines should support deterministic mapping across
ontologies, as well as fuzzy equivalence- matching between extracted entity-relationship models and concepts
that have been specified in formal ontologies.

• Semantic repositories—Application environments require repositories or libraries to manage ontologies
and other semantic objects, and also to maintain the rules, policies, service definitions and other metadata to
support life-cycle management of application semantics. Semantic repositories should support storage,
synchronization, caching, access, import/export, registration, archiving, backup and administration of
ontologies and the data that instantiate those ontologies. The most prevalent semantic repositories are
"RDF-triple store" databases.

U Semantic controls—Application environments require that various controls—on access, change,
versioning, auditing and so forth—be applied to ontologies (otherwise, it would be meaningless to refer to
ontologies as "controlled vocabularies"). Controls might be enforced at the repository, engine and/or tool
levels. Developers might be constrained by the corporate-standard semantic tool to only use particular
standard ontologies, which could vary depending on the type of application or project on which they're
working. To the extent that developers work in teams, the semantic- application development tool might
provide a role-based workflow to structure interactions in accordance with best practices.
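To make the modeling approach above more concrete, here is a minimal sketch, not taken from any vendor discussed in this report, of what explicitly modeling semantics as RDF/OWL and then instantiating and querying that model can look like in code. It uses Python with the rdflib library (pip install rdflib); the namespace and the Product, Supplier and hasSupplier names are hypothetical placeholders.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/catalog#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Ontology layer: classes and a property, expressed as RDF/OWL triples.
g.add((EX.Product, RDF.type, OWL.Class))
g.add((EX.Supplier, RDF.type, OWL.Class))
g.add((EX.hasSupplier, RDF.type, OWL.ObjectProperty))
g.add((EX.hasSupplier, RDFS.domain, EX.Product))
g.add((EX.hasSupplier, RDFS.range, EX.Supplier))

# Instance layer: structured content that instantiates the ontology.
g.add((EX.widget42, RDF.type, EX.Product))
g.add((EX.widget42, RDFS.label, Literal("Widget 42")))
g.add((EX.acme, RDF.type, EX.Supplier))
g.add((EX.widget42, EX.hasSupplier, EX.acme))

# A SPARQL query of the kind an RDF triple store would answer.
results = g.query("""
    PREFIX ex: <http://example.org/catalog#>
    SELECT ?product ?supplier
    WHERE { ?product a ex:Product ; ex:hasSupplier ?supplier . }
""")
for product, supplier in results:
    print(product, "is supplied by", supplier)

A semantic engine in the sense described earlier would layer inference on top of such a graph (for example, classifying instances through rdfs:subClassOf hierarchies), and a semantic repository would persist it in an RDF triple store rather than in memory.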

Implementers looking for guidance on developing Semantic Web applications should begin by reviewing the
academic and open source projects that play a substantial role in catalyzing the development of the Semantic
Web. Entities that are coordinating semantics projects include Advanced Knowledge Technologies, Digital
Enterprise Research Institute, Gnowsis, Rx4RDF and SemWebCentral. The Semantic Web community
also is spawning an expanding group of promising startups, as well as some tentative commitments by larger,
established software vendors.

Who Are The Semantic Web Solutions Vendors?
W3C-developed Semantic Web specifications—most notably, RDF and OWL—have begun to gain traction through implementation in commercial products. Startups also continue to emerge, offering ontology modeling tools, inference engines, RDF repositories and other necessary components of Semantic Web solutions. And more and more users are incorporating semantics-based approaches in their search, text analytics, ECM, EII and other mission-critical applications. As befits an embryonic market pushing a bleeding-edge technology, however, many Semantic Web vendors are consultants who are pursuing ontology-based projects in ECM, EII, ESB and other areas. In fact, many Semantic Web vendors are attempting to jump-start a self-sustaining software business from a handful of consulting jobs.

Other semantics firms also make their living primarily from consulting and from other professional services engagements. These include Articulate Software, Business Semantics, Effective Soft, Mindful Data, Pragati Synergetic Research, Semantic Arts, Semantic Light, Taxonomy Strategies and Zepheira. As noted earlier, many software vendors are seeking the low-hanging commercial fruit of semantic search. The growing list of semantic search engine vendors includes Aduna, AskMeNow, ChaCha, Cognition Technologies, Copernic, Endeca, FAST Search and Transfer, Groxis, Hakia, Intelliseek, ISYS Search Software, Jarg, Metacarta, Ontosearch, Powerset, Readware, Semaview, Siderean, Syntactica, Textdigger, Vivisimo and ZoomInfo. Most of these vendors rely heavily on NLP, pattern-matching, and text analytics to power the semantics-aware crawlers that they deploy to extract ontologies from unstructured text throughout the Web, intranets and other content collections. Just as important, pure-play Semantic Web vendors have come into their own.
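As a rough illustration of that extraction pattern, and not of any particular vendor's crawler, the sketch below substitutes a trivial regular expression for a real NLP pipeline: it pulls capitalized phrases out of free text as candidate entities and records them as RDF triples. The namespace and the CandidateEntity class are hypothetical.

import re
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/mined#")  # hypothetical namespace

text = "Acme Corp announced a partnership with Globex Inc in Austin."

g = Graph()
g.bind("ex", EX)

# Stand-in for an NLP/pattern-recognition step: treat capitalized
# multi-word runs as candidate named entities.
for match in re.finditer(r"([A-Z][a-z]+(?: [A-Z][a-z]+)+)", text):
    name = match.group(1)
    node = EX[name.replace(" ", "_")]
    g.add((node, RDF.type, EX.CandidateEntity))
    g.add((node, RDFS.label, Literal(name)))

print(g.serialize(format="turtle"))

A production semantic-mining stack would replace the regular expression with genuine entity and relationship extraction, and would map the candidates onto a curated ontology rather than a single catch-all class.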

Dozens of vendors offer flexible, sophisticated solutions that can support a wide range of semantics-aware applications in addition to search. Pure-plays in this space include Access Innovations, Axontologic, Cycorp, Fourthcodex, DATA-GRID, Franz, LinkSpace, Metatomix, Modus Operandi, Mondeca, Ontology Works, Ontopia, Ontoprise, Ontos AG, Revelytix, Sandpiper Software, SchemaLogic, Semagix, Semandex Networks, Semansys, Semantic Insights, Semantic Research, Semantra, Semtation GmbH, Teragram, Thetus, TopQuadrant, Visual Knowledge, Wordmap and XSB.
Semantic Web vendors vary widely in their functionality, development interfaces, deployment flexibility and
standards support. None of these vendors is staking its success on rapid, universal adoption of the full stack of
Semantic Web standards. Instead, they all provide tools, platforms and applications that can be deployed for
tactical, quick-payoff IT projects. They address specific business needs with their solutions while enabling
customers to integrate semantics solutions to varying degrees with their existing application and middleware
infrastructures. The sidebar, "A Sampling of Semantic Web Vendors," features snapshots of a handful of
these vendors, illustrating their diverse backgrounds, approaches and business models.

How Mature Is The Semantic Web Market?
This is a young, highly specialized niche, in which academic research projects outnumber commercial
products, and in which most products are point solutions rather than integrated features of application
platforms. Commercial progress on the Semantic Web front has been incremental, at best, with no clear
tipping point in sight. As noted above, no EII vendor has natively integrated Semantic Web specifications,
and neither Oracle nor IBM has ventured much beyond their initial tentative forays into this new arena.
One doubts that the average enterprise IT professional could name a single pure-play vendor of Semantic
Web technology. And rare is the enterprise IT organization that's looking for people with backgrounds in or
even familiarity with Semantic Web technologies. Nor have RDF, OWL and kindred W3C specifications
exactly taken the SOA world by storm. It's been eight years since RDF was ratified by W3C, and more than
three years since OWL spread its wings, but neither has achieved breakaway vendor or user adoption.
To be fair, there has been a steady rise in the number of semantics projects and start-ups, as evidenced by growing participation in the annual Semantic Technology Conference, which was held recently in San Jose, California.

And there has been a recent surge in industry attention to semantics issues, such as the announcement of a "Semantic SOA Consortium" involving Science Applications International Corporation (SAIC) and others. As noted earlier, some industry observers have even attempted to rebrand the Semantic Web as "Web 3.0," so as to create the impression that this is a new initiative and not an old effort straining to stay relevant.
Surprisingly, the SOA market sectors that one would expect to embrace the Semantic Web have largely kept
their distance. In theory, vendors of search, ECM, EII, ESB, business intelligence (BI), database management
systems (DBMS), master data management (MDM) and data quality (DQ) solutions would all benefit from
the ability to automatically harmonize divergent ontologies across heterogeneous environments. But only a
handful of vendors from these market segments have taken a visible role in the Semantic Web community,
and even these vendors seem to be taking a wait-and-see attitude to it all. One big reason for reluctance is that
the SOA world already has many established tools and approaches for semantic interoperability, such as the
traditional data modeling, mapping and mediation approaches that are included in integration middleware.
In the eyes of most integration professionals, the new W3C-developed approaches, though interesting, have
not yet demonstrated any significant advantages in development productivity, flexibility or cost. They
continue to use older-style data mapping, modeling and mediation approaches, storing their conceptual
semantic models in various proprietary formats and/or all-purpose modeling languages such as UML (which
isn't set up to describe the full semantics of data). If one of the leading indicators of any technology's commercial adoption is the extent to which Microsoft is on board, then the Semantic Web has a long way to go, and may not get to first base until early in the next decade, at the soonest. The vendor's ambitious roadmap for its SQL Server product includes no mention of the Semantic Web, ontologies, RDF or anything to that effect. Like most mainstream DBMS and middleware vendors, Microsoft is still focused strongly on traditional data modeling, mapping and mediation approaches for semantic interoperability. So far, the only mention of semantic interoperability in Microsoft's strategy is in a new development project code-named "Astoria." This project, which was announced in May at Microsoft's MIX conference, will support greater SOA-based semantic interoperability on the ADO.NET framework through a new Entity Data Model schema that implements RDF, XML and URIs. However, Microsoft has not committed to integrating Astoria with SQL Server, nor is it planning to implement any of the W3C's other Semantic Web specifications. Essentially, Astoria is Microsoft's trial balloon to see if a Semantic Web-lite architecture lights any fires in the development community.
Conclusion
Clearly, there is growing and persistent attention to semantic interoperability issues throughout the distributed computing industry. Microsoft is not the only SOA vendor pondering these issues, at least on a high architectural plane. Over the remainder of this decade, we can expect that most major SOA, EII, DBMS and BI vendors will make some strategic acquisitions in the Semantic Web community. Increasingly, leading enterprise platform, application and tool vendors will integrate ontologies, inference engines, RDF triple stores and other semantics components and interfaces into their solutions. But it may take another decade before the likes of IBM, Oracle, Microsoft, SAP and other leading enterprise software vendors fully integrate semantics into all their solutions. Until such time, we must continue to view the Semantic Web as an exciting but immature work in progress.

 Question 1        10 of 10 points

              The data objects defined in the data-modeling phase are transformed to achieve the
              information flow necessary to implement a business function.




              Selected Answer:        b) Process Modeling

 Question 2        10 of 10 points

              Sustainable development is development that meets the needs of the present
              without compromising the ability of future generations to meet their own needs:




              Selected Answer:      True

 Question 3        10 of 10 points

This principle of the Software Engineering Code of Ethics and Professional Practice
              states that software engineers shall maintain integrity and independence in their
              professional judgment:




              Selected Answer:        a) Judgment




 Question 5        10 of 10 points

This principle of the Software Engineering Code of Ethics and Professional Practice
              states that software engineers shall act consistently with the public interest:




              Selected Answer:         a) Public



                  10 of 10 points
Question 6
         Compilers allow programmers to have many activities running simultaneously.




         Selected Answer:        False

Question 7        10 of 10 points

The Software Engineering Code of Ethics and Professional Practice, intended as a
              standard for teaching and practicing software engineering, documents the ethical and
              professional obligations of software engineers




             Selected Answer:       True

Question 8        10 of 10 points

             Three logical units are:




              Selected Answer:          b) Memory unit, CPU, ALU

Question 9        10 of 10 points

Web services are Web-based programs that organizations can incorporate into
              their systems to speed Web-application development




             Selected Answer:       True

Question 10        10 of 10 points

              Some differences between education about sustainable development and education
              for sustainable development are:


               Selected Answer:          c) All Above
Name:             Parcial 1


Status :          Completed


Score:            100 out of 100 points


Time Elapsed: 0 hours, 12 minutes, and 46 seconds out of 0 hours and 15 minutes allowed.


Instructions:


Question 1         10 of 10 points

             An advantage of metal cables over fiber optics is that metal cables usually provide better
             signals:




             Selected Answer:        False

Question 2         10 of 10 points

             Fortran, Cobol and Basic aren’t examples of High Level Languages:




             Selected Answer:        False

Question 3         10 of 10 points

             Is the “natural language” of any given computer and is defined by its hardware design:




                Selected Answer:          Machine language


Question 4         10 of 10 points

             These programs were developed to execute high-level language programs directly, without
             the need of compiling them into machine language:




Selected Answer:          Interpreters


Question 5         10 of 10 points

             The two most popular web browsers are:




                Selected Answer:          Firefox 2/IE7


Question 6         10 of 10 points
The programs that translate high-level language programs into machine language are
              called:




              Selected Answer:         Compiler


Question 7        10 of 10 points

             An internet service provider (ISP) connects computers to the ARPANET:




             Selected Answer:       False

Question 8        10 of 10 points

When deciding which commercial ISP service to use, two important considerations are:




              Selected Answer:         Bandwidth and cost


Question 9        10 of 10 points

             Were developed to convert assembly language programs to machine language at
             computer speeds:




              Selected Answer:         Assemblers


Question 10        10 of 10 points

Are replacing metal cables in many computer networks due to the greater bandwidth:




               Selected Answer:         Fiber optics




   Review Assessment: Parcial 1
Name:             Parcial 1


Status :          Completed


Score:            100 out of 100 points


Time Elapsed: 0 hours, 5 minutes, and 22 seconds out of 0 hours and 15 minutes allowed.


Instructions:


Question 1         10 of 10 points

             Fortran, Cobol and Basic aren’t examples of High Level Languages:




             Selected Answer:        False

Question 2         10 of 10 points

Are replacing metal cables in many computer networks due to the greater bandwidth:




                Selected Answer:          Fiber optics


Question 3         10 of 10 points

             In a typical client/server relationship, the _________ requests that some action be
             performed and the __________ performs the action and responds:




                Selected Answer:          Client/server


Question 4         10 of 10 points

             Were developed to convert assembly language programs to machine language at
             computer speeds:




                Selected Answer:          Assemblers


Question 5         10 of 10 points

             A browser is used to view files on the Internet and the Web:




             Selected Answer:        True
Question 6        10 of 10 points

When deciding which commercial ISP service to use, two important considerations are:




              Selected Answer:          Bandwidth and cost


Question 7        10 of 10 points

Is the software that allows the user to view certain types of Internet files in an interactive
             environment:




              Selected Answer:          Web browser


Question 8        10 of 10 points

             Guide the computer through orderly sets of actions specified by the programmers:




              Selected Answer:          Computer programs


Question 9        10 of 10 points

             Metasearch engines are the tools that most frequently store information in data repositories
             called databases:




             Selected Answer:       False

Question 10        10 of 10 points

              In an anonymous FTP access service, only registered users can view and download files:




              Selected Answer:       False

Review Assessment: Examen Rápido 1 (Semana 2)
Name:             Examen Rápido 1 (Semana 2)

Status :          Completed

Score:            100 out of 100 points

Time Elapsed: 0 hours, 11 minutes, and 47 seconds out of 0 hours and 15 minutes allowed.

Instructions:

Question 1          10 of 10 points

This principle of the Software Engineering Code of Ethics and Professional Practice
              states that software engineers shall be fair to and supportive of their colleagues




                Selected Answer:      c) Colleagues

Question 2          10 of 10 points

Web services are Web-based programs that organizations can incorporate into
              their systems to speed Web-application development




             Selected Answer:      True

Question 3          10 of 10 points

             Was developed to execute high level language programs directly, without the need
             for compiling them into machine language:




                Selected Answer:      b) Interpreter program

Question 4          10 of 10 points

             Sustainable development is generally thought to have three components:
             environment, society, and _____:




                Selected Answer:      b) Economy
Question 5        10 of 10 points

             The data objects defined in the data-modeling phase are transformed to achieve the
             information flow necessary to implement a business function.




              Selected Answer:       b) Process Modeling

Question 6        10 of 10 points

             Sustainable development is development that meets the needs of the present
             without compromising the ability of future generations to meet their own needs:




             Selected Answer:    True

Question 7        10 of 10 points

             Guide the computer through orderly sets of actions specified by the programmers:




              Selected Answer:       d) Computer programs

Question 8        10 of 10 points

             The Software Engineering Code of Ethics and Professional Practice doesn’t include
             specific language about the importance of ethical behavior during the maintenance
             phase of software development.




             Selected Answer:    False

Question 9        10 of 10 points

             An _______ was developed to convert assembly-language programs to machine
             language at computer speeds:




              Selected Answer:       a) Assembler program

Question 10        10 of 10 points

A computer is a device capable of performing computations and making logical
               decisions at speeds millions, even billions, of times faster than human beings can:
                Selected Answer:     True




Name:             Examen Rápido 1 (Semana 2)


Status :          Needs Grading


Score:            100 out of 100 points


Time Elapsed: 0 hours, 20 minutes, and 44 seconds out of 0 hours and 15 minutes allowed.


Instructions:


Question 1         10 of 10 points

             Was developed to execute high level language programs directly, without the need for
             compiling them into machine language:




                Selected Answer:          b) Interpreter program


Question 2         10 of 10 points

             The _______ model is a "high speed" adaptation of the linear sequential model in which
             rapid development is achieved by using a component-based construction approach.




                Selected Answer:          c) RAD


Question 3         10 of 10 points

This principle of the Software Engineering Code of Ethics and Professional Practice states
              that software engineers shall maintain integrity and independence in their professional
              judgment:




                Selected Answer:          a) Judgment


Question 4         10 of 10 points

             This unit takes information processed by the computer and sends it to various devices to
             make the information available for use outside the computer:




                Selected Answer:          b) Output


Question 5         10 of 10 points
The Software Engineering Code of Ethics and Professional Practice, intended as a
              standard for teaching and practicing software engineering, documents the ethical and
              professional obligations of software engineers




             Selected Answer:       True

Question 6        10 of 10 points

             Following are the basic popular models used by many software development firms:




              Selected Answer:         c) All Above


Question 7        10 of 10 points

Web services are Web-based programs that organizations can incorporate into their
              systems to speed Web-application development




             Selected Answer:       True

Question 8        10 of 10 points

A computer is a device capable of performing computations and making logical decisions
              at speeds millions, even billions, of times faster than human beings can:




             Selected Answer:       True

Question 9        10 of 10 points

             Is the “natural language” of any given computer and is defined by its hardware design:




              Selected Answer:         a) Machine language


Question 10        10 of 10 points

This principle of the Software Engineering Code of Ethics and Professional Practice states
               that software engineers shall act consistently with the public interest:




               Selected Answer:            a) Public
Assessment: Examen Rápido 1 (Semana 2)




Name:             Examen Rápido 1 (Semana 2)


Status :          Completed


Score:            70 out of 100 points


Time Elapsed: 0 hours, 8 minutes, and 31 seconds out of 0 hours and 15 minutes allowed.


Instructions:


Question 1         0 of 10 points

             An _______ was developed to convert assembly-language programs to machine language
             at computer speeds:




                Selected Answer:
                                         b) Interpreter program


Question 2         0 of 10 points

             Guide the computer through orderly sets of actions specified by the programmers:




                Selected Answer:
c) Interpreter


Question 3         10 of 10 points

             The data objects defined in the data-modeling phase are transformed to achieve the
             information flow necessary to implement a business function.




                Selected Answer:
                                         b) Process Modeling


Question 4         10 of 10 points

             Sustainable development is generally thought to have three components: environment,
             society, and _____:




                Selected Answer:
                                         b) Economy
Question 5        10 of 10 points

             The _______ model is a "high speed" adaptation of the linear sequential model in which
             rapid development is achieved by using a component-based construction approach.




              Selected Answer:
                                           c) RAD


Question 6        10 of 10 points

             Sustainable software development is a mindset (principles) and an accompanying set of
             practices that enable a team to achieve and maintain an optimal development pace
             indefinitely:




             Selected Answer:           True

Question 7        10 of 10 points

             Fortran, Cobol and Basic are examples of Low Level Languages:




             Selected Answer:           False

Question 8        10 of 10 points

             Three logical units are:




              Selected Answer:
                                           b) Memory unit, CPU, ALU


Question 9        0 of 10 points

This principle of the Software Engineering Code of Ethics and Professional Practice states
              that software engineers shall be fair to and supportive of their colleagues




              Selected Answer:
                                           b) Judgment


Question 10        10 of 10 points

A computer is a device capable of performing computations and making logical decisions
               at speeds millions, even billions, of times faster than human beings can:




              Selected Answer:
                                         True




     Review Assessment: Examen Rápido 1 (Semana 2)




Name:             Examen Rápido 1 (Semana 2)

Status :          Completed

Score:            90 out of 100 points

Time Elapsed: 0 hours, 12 minutes, and 20 seconds out of 0 hours and 15 minutes allowed.

Instructions:

 Question 1         10 of 10 points

              Sustainable development is generally thought to have three components:
              environment, society, and _____:




                Selected Answer:      b) Economy

 Question 2         0 of 10 points

              An _______ was developed to convert assembly-language programs to machine
              language at computer speeds:




                Selected Answer:      d) None Above

 Question 3         10 of 10 points

This principle of the Software Engineering Code of Ethics and Professional Practice
               states that software engineers shall maintain integrity and independence in their
               professional judgment:
             Selected Answer:           a) Judgment

Question 4        10 of 10 points

The Software Engineering Code of Ethics and Professional Practice, intended as a
              standard for teaching and practicing software engineering, documents the ethical and
              professional obligations of software engineers




             Selected Answer:       True

Question 5        10 of 10 points

             Is the “natural language” of any given computer and is defined by its hardware
             design:




             Selected Answer:           a) Machine language

Question 6        10 of 10 points

             The data objects defined in the data-modeling phase are transformed to achieve the
             information flow necessary to implement a business function.




             Selected Answer:           b) Process Modeling

Question 7        10 of 10 points

             Following are the basic popular models used by many software development firms:




             Selected Answer:           c) All Above

Question 8        10 of 10 points

             Three logical units are:




             Selected Answer:           b) Memory unit, CPU, ALU

Question 9        10 of 10 points

             Guide the computer through orderly sets of actions specified by the programmers:
              Selected Answer:      d) Computer programs

Question 10       10 of 10 points

              Was developed to execute high level language programs directly, without the need
              for compiling them into machine language:




               Selected Answer:      b) Interpreter program
Name:            Examen Rápido 2


Score:           90 out of 100 points


Time Elapsed:    0 hours, 12 minutes, and 12 seconds out of 0 hours and 15 minutes allowed.


Question 1        10 of 10 points

             This function uses recursion to view all the elements on the page and output them in a
             hierarchical manner:




             Selected Answer:           Child


Question 2        10 of 10 points

             Allows you to move an element to one side of the screen:




             Selected Answer:           Floating


Question 3        10 of 10 points

             Enable a Web Page author to embed an entire CSS document in an XHTML document’s
             head section:




             Selected Answer:           Embedded Style Sheets


Question 4        10 of 10 points

This element provides XHTML with the capacity to collect information from users:




             Selected Answer:           Forms


Question 5        10 of 10 points

             The two most common HTTP request types are “get” and “retrieve”:




             Selected Answer:       False

Question 6        10 of 10 points

             A hyperlink references other sources, such as XHTML documents and images
             Selected Answer:       True

Question 7        10 of 10 points

             Some important elements of an XHTML document are




              Selected Answer:         HTML, head and body


Question 8        0 of 10 points

             The hyperlink element is useful to create links




             Selected Answer:       True

Question 9        10 of 10 points

             This element summarizes the table’s contents and is used by speech devices to make the
             table more accessible to users with visual impairments:




              Selected Answer:         Summary


Question 10        10 of 10 points

The bottom tier of an application is the application’s user interface:




              Selected Answer:       False
Name:              Examen Rápido 2


Status :           Completed


Score:             100 out of 100 points


Time Elapsed: 0 hours, 11 minutes, and 22 seconds out of 0 hours and 15 minutes allowed.


Instructions:


Question 1          10 of 10 points

             Allows you to move an element to one side of the screen:




              Selected Answer:          Floating


Question 2         10 of 10 points

Enterprise-level Web server that allows a computer to serve documents:




             Selected Answer:          IIS


Question 3         10 of 10 points

             In XHTML, a table has three sections:




             Selected Answer:          head, body and foot


Question 4         10 of 10 points

             The event model allows scripts to respond to user actions and change a page accordingly:




             Selected Answer:       True

Question 5         10 of 10 points

The bottom tier of an application is the application’s user interface:




             Selected Answer:       False

Question 6         10 of 10 points

             This function uses recursion to view all the elements on the page and output them in a
             hierarchical manner:
              Selected Answer:          Child


Question 7          10 of 10 points

              This element describes the table’s content:




              Selected Answer:          Caption


Question 8         10 of 10 points

             The onload event is used to call the JavaScript start function when document loading
             completes:




             Selected Answer:      True

Question 9         10 of 10 points

Represents the distance between the content inside an element and the element’s border:




              Selected Answer:          Padding


Question 10         10 of 10 points

              The validation service is able to validate the syntax of XHTML documents




              Selected Answer:        True
Question 1        10 of 10 points

             This technology allows document authors to specify the presentation of elements on a Web
             Page separately from the structure of the document:




             Selected Answer:          CSS


Question 2        10 of 10 points

The bottom tier of an application is the application’s user interface:




             Selected Answer:       False

Question 3        10 of 10 points

To create an image hyperlink, we must nest an “img” element in an “anchor” element




             Selected Answer:       True

Question 4        10 of 10 points

             This element describes the table’s content:




             Selected Answer:          Caption


Question 5        10 of 10 points

             The event model allows scripts to respond to user actions and change a page accordingly:




             Selected Answer:       True

Question 6        10 of 10 points

Represents the distance between the content inside an element and the element’s border:




             Selected Answer:          Padding


Question 7        10 of 10 points

             The simplest way to reference an element is by using the element’s ____ attribute:




             Selected Answer:          Id
Question 8        10 of 10 points

Enterprise-level Web server that allows a computer to serve documents:




              Selected Answer:         IIS


Question 9        10 of 10 points

             This request typically sends data to a server:




              Selected Answer:         Post


Question 10        10 of 10 points

              Web-page authors can provide a uniform look and feel to an entire Web site:




               Selected Answer:         External Style sheet
Question 1        5 of 5 points

             Requiring all application programs to lock resources in the same order is a technique for
             preventing what problem?




              Selected Answer:        Deadlock
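As an aside on the technique this question names, the following is a minimal sketch in Python, with illustrative resource names only: because every thread acquires the two locks in the same global order, neither can hold one lock while waiting for the other, so the circular wait required for deadlock cannot form.

import threading

# Two shared resources, each guarded by its own lock (names are illustrative).
locks = {"accounts": threading.Lock(), "orders": threading.Lock()}

def update_both(first, second):
    # Acquire locks in one global order (alphabetical), no matter how the
    # caller names the resources; this is the lock-ordering discipline
    # the question refers to.
    ordered = sorted((first, second))
    for name in ordered:
        locks[name].acquire()
    try:
        pass  # ... read and modify both resources here ...
    finally:
        for name in reversed(ordered):
            locks[name].release()

t1 = threading.Thread(target=update_both, args=("accounts", "orders"))
t2 = threading.Thread(target=update_both, args=("orders", "accounts"))
t1.start()
t2.start()
t1.join()
t2.join()
print("both threads finished without deadlocking")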


 Question 2        5 of 5 points

              Changes in the database structure usually involve only one application.




              Selected Answer:      False

 Question 3        0 of 5 points

              The DBA is responsible for managing changes to the database structure, but is rarely
              involved in the original design of the structure.




              Selected Answer:      True

 Question 4        5 of 5 points

              Locks that are placed assuming that a conflict will not occur are called:




               Selected Answer:         optimistic.
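For context, a minimal sketch of the idea behind optimistic locking, not tied to any particular DBMS: the row carries a version number, the update proceeds without holding a lock, and a conflict is detected only if the version changed after the read. In SQL this typically becomes an UPDATE whose WHERE clause checks the version, followed by a test of the affected-row count.

# A toy in-memory "row" with a version column (optimistic concurrency control).
row = {"id": 1, "balance": 100, "version": 7}

def optimistic_update(row, version_read, new_balance):
    """Apply the change only if no one else updated the row since it was read."""
    if row["version"] != version_read:
        return False              # conflict: caller should re-read and retry
    row["balance"] = new_balance
    row["version"] += 1           # bump the version so later writers see the change
    return True

version_read = row["version"]     # read phase: no lock is held
if optimistic_update(row, version_read, 150):
    print("update committed")
else:
    print("conflict detected, retry")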


 Question 5        5 of 5 points

              The DBA has to find a balance between the conflicting goals of maximizing availability of
              the database to users and protecting the database.




              Selected Answer:      True

 Question 6        0 of 5 points

              The situation that occurs when one user's changes to the database are lost by a second
              user's changes to the database is known as the:




               Selected Answer:         inconsistent read problem.


 Question 7        0 of 5 points

              A passive repository is preferred over an active repository because it requires less human
             intervention.




             Selected Answer:      True

Question 8        0 of 5 points

             Which of the following is not true of DBMS security features?




              Selected Answer:         Each permission pertains to one user or role and one object.


Question 9        0 of 5 points

             Database administration tasks have to be performed for single-user, personal databases.




             Selected Answer:      False

Question 10        0 of 5 points

              Concurrency control measures are taken to ensure that one user's work has absolutely no
              influence on another user's work.




              Selected Answer:       True

Question 11        0 of 5 points

              Which of the following is a common data mining technique?




               Selected Answer:           Regression analysis


Question 12        5 of 5 points

              Which of the following is a reason that operational data are difficult to read?




               Selected Answer:           Dirty data, Missing values and Nonintegrated data


Question 13        5 of 5 points

              Business Intelligence (BI) systems are information systems that help users analyze and
              use data.




              Selected Answer:       True

Question 14        5 of 5 points
              Business Intelligence (BI) reporting systems are used to filter data, sort data, group data
              and make simple calculations based on the data.




              Selected Answer:      True

Question 15        0 of 5 points

              RFM analysis is a way of analyzing and ranking customers based on online survey data.




              Selected Answer:      True

Question 16        0 of 5 points

              OLAP provides the ability to sum, count, average and perform other simple arithmetic
              operations on groups of data.




              Selected Answer:      False

Question 17        0 of 5 points

              A report generated by a reporting system is delivered to the appropriate users via a
              printed report. This system uses which of the following report modes?




              Selected Answer:          None of these.


Question 18        0 of 5 points

              The reports generated by a reporting system can be classified as ________.




              Selected Answer:          static, dynamic and fluid


Question 19        5 of 5 points

              BI reporting systems summarize the current status of business activities and compare that
              status with past events but not with predicted future activities.




              Selected Answer:      False

Question 20        5 of 5 points

              Which of the following is a supervised data mining technique?




              Selected Answer:          Regression analysis

								