Guide to the Software Engineering Body of Knowledge


          Knowledge Area Description (version 0.5) Review Form

                         Educator/Trainer Review Viewpoint

Following are the specifications given by the project’s Editorial Team to the Knowledge Area
Specialist regarding the Knowledge Area Descriptions. The questions that you as reviewers are
being asked to answer are based directly on these specifications. Please constantly keep in mind
your review viewpoint stated in the opening of each question when answering the questions.

Please note that the Knowledge Area Descriptions are still in progress and you may therefore not be
able to answer some questions because certain sections have not been completed yet. Additionally,
due to their specific nature there are no questions pertaining to sections regarding Bloom’s taxonomy
and Vincenti’s engineering knowledge categories. These sections may or may not already be
completed and will be reviewed in a separate process.

However, the requirements pertaining to the proposed breakdowns of topics and the selection of
reference material have been met for all Knowledge Area Descriptions and all questions regarding
these requirements can be answered.

There are 13 specifications, numbered 1) through 13). Each item is presented in italic typeface.
The questions you are asked to answer appear below each item. If you have additional comments
or issues, please use the last question (14) to state them.

1) Knowledge Area Specialists are expected to propose one or possibly two complementary
   breakdowns that are specific to their Knowledge Area. The topics found in all breakdowns
   within a given Knowledge Area must be identical.

   These breakdowns of topics are expected to be "reasonable", not "perfect". The Guide to the
   Software Engineering Body of Knowledge is definitely viewed as a multi-phase effort and
   many iterations within each phase as well as multiple phases will be necessary to
   continuously improve these breakdowns. At least for the Stone Man version, "soundness and
   reasonableness" are being sought after, not "perfection".

Question. As an educator/trainer, do you find that the breakdowns of topics comply with the
requirement of being sound and reasonable?


The major topics are quite workable, though I am not totally sure that quality definition
should really be a subset of the analysis area. Since there is such a variety of lists of
characteristics, such a divergence of "definitions", and such a spectrum from "zero
defects" through a variety of beliefs regarding what is "good enough," is there a chance this
definitional material (or at least some part of it, plus other topics) should be at another level
of the Body of Knowledge? I do not object strongly to it being here, but I did want to raise
the issue of whether, given the variety of possibilities, greater discussion of that variety
might be appropriate. The analysis issue would then be a matter of applying various
techniques and methods within activities that assure, validate, and verify quality as defined
for the particular organization/domain. Such definition would also apply to considerations
related to specification, design, construction, and testing of the software which appear
elsewhere in the Body of Knowledge. Again, this Knowledge Area might be the best place
assuming other Areas reference it appropriately.


2) The proposed breakdown of topics within a Knowledge Area must decompose the subset of
   the Software Engineering Body of Knowledge that is "generally accepted".

   The software engineering body of knowledge is an all-inclusive term that describes the sum of
   knowledge within the profession of software engineering. However, the Guide to the
   Software Engineering Body of Knowledge seeks to identify and describe that subset of the
   body of knowledge that is generally accepted or, in other words, the core body of knowledge.
   To better illustrate what "generally accepted knowledge" is relative to other types of
   knowledge, Figure 1 proposes a draft three-category schema for classifying knowledge.

                      Generally Accepted
                      Established traditional practices recommended by
                      many organizations

                      Specialized
                      Practices used only for certain types of software

                      Advanced and Research
                      Innovative practices tested and used only by some
                      organizations, and concepts still being developed
                      and tested in research organizations

                                      Figure 1 Categories of knowledge

   The Project Management Institute in its Guide to the Project Management Body of
   Knowledge defines "generally accepted" knowledge for project management in
   the following manner:

          '"Generally accepted" means that the knowledge and practices described are
          applicable to most projects most of the time, and that there is widespread
          consensus about their value and usefulness. "Generally accepted" does not
          mean that the knowledge and practices described are or should be applied
          uniformly on all projects; the project management team is always responsible for
          determining what is appropriate for any given project.'

   The Guide to the Project Management Body of Knowledge is now an IEEE Standard. At its
   kick-off meeting, the project's Industrial Advisory Board better defined "generally accepted"
   as knowledge to be included in the study material of a software engineering licensing exam
   that a graduate would pass after completing four years of work experience. These two
   definitions should be seen as complementary.

   Additionally, Knowledge Area Specialists are also expected to be somewhat forward-looking
   in their interpretation by taking into consideration not only what is "generally accepted"
   today but also what they expect will be "generally accepted" in a three- to five-year timeframe.

Question. As an educator/trainer, do you find that the proposed breakdowns of topics decompose the
subset of the Software Engineering Body of Knowledge for this Knowledge Area that is "generally
accepted"? As an educator/trainer, do you believe that there are topics that meet the criteria of being
"generally accepted" and are not included in the proposed breakdowns of topics? Are there
"non-generally accepted" topics included in the proposed breakdowns of topics?


Perhaps the overall Body of Knowledge material will make a point about "stake in the
ground" sources for overall definitions, characteristics, etc. I am not sure that, from an
industry perspective rather than an academic one, ISO definitions and
categories can be assumed to be "generally accepted." That is, perhaps there should be
a place in the Body of Knowledge where assumptions regarding fundamental references
(like a variety of ISO standards) are explained and the decision to go with ISO
lists/definitions is made explicit as an overall Body of Knowledge position. Perhaps this
is already done somewhere in material I have not yet read.

This may also be a matter of consistency from Knowledge Area to Knowledge Area in that,
for example, the S/W Engineering Process Area does not separate standards references
from individual-author ones and does not take a clear standards-based position within the
text of the Knowledge Area as is done in this (SQA) Area. Realizing that different
Knowledge Area Specialists are at work on different parts of the Body of Knowledge, it
would be good to establish some expected common approach to this matter, including
how appendices are used in this regard.

[See my comments on the Measurement discussion under the Additional Comments
question, however.]


3) The proposed breakdowns of topics within a Knowledge Area must not presume specific
   application domains, business needs, sizes of organizations, organizational structures,
   management philosophies, software life cycle models, software technologies or software
   development methods.

Question. As an educator/trainer, do you find that the proposed breakdowns of topics comply with
the constraints cited in this specification?


[Note: in all the response forms I have been sent, the numbering of the items reverts
back to "(1)" with this question rather than continuing with "(3)", such that the numbering ends
with "(12)" rather than "(14)." And as I cut-and-paste in my answers, the numbers change
as well. A minor point, and perhaps not worth correcting given where you are in the review
cycle for this draft of the Body of Knowledge.]

I would agree that the constraints are met in that the material is applicable across domains.


4) The proposed breakdown of topics must, as much as possible, be compatible with the various
   schools of thought within software engineering.

Question. As an educator/trainer, do you find that the proposed breakdowns of topics are compatible
with the various schools of thought in the field?


Yes, though as noted in section A., there is divergence in the details depending on what
specific author/source might be consulted. This gets back to the "stake in the ground"
decision that, I believe, needs to be stated generally for the entire Body of Knowledge so
that all Knowledge Areas apply a consistent way to decide what specific
author(s)/source(s) will be used in cases of specific content.


5) The proposed breakdown of topics within Knowledge Areas must be compatible with the
   breakdown of software engineering generally found in industry and in the software
   engineering literature and standards.

Question. As an educator/trainer, do you find that the proposed breakdowns of topics are compatible
with the breakdowns generally found in industry and in the software engineering literature and
standards?

Yes, since this Area, in particular, draws so heavily from a specific standard.


6) Knowledge Area Specialists are expected to adopt the position that even though the
   following "themes" are common across all Knowledge Areas, they are also an integral part of
   all Knowledge Areas and therefore must be incorporated into the proposed breakdown of
   topics of each Knowledge Area. These common themes are quality (in general),
   measurement, tools and standards.

Question. As an educator/trainer, do you find that the common themes of quality, measurement,
tools and standards are well integrated into the proposed breakdown of topics?


Generally, yes, though in this particular Area tools and measurement are addressed in a
relatively cursory manner, i.e., with much less detail compared to quality and standards.
This may have to do with the overlap between this Knowledge Area and the one on testing.


7) The proposed breakdowns should be at most two or three levels deep. Even though no upper
   or lower limit is imposed on the number of topics within each Knowledge Area, Knowledge
   Area Specialists are expected to propose a reasonable and manageable number of topics per
   Knowledge Area. Emphasis should also be put on the selection of the topics themselves
   rather than on their organization in an appropriate hierarchy.

Question. As an educator/trainer, do you find that the number of topics in the proposed breakdowns
is reasonable and manageable?




8) Proposed topic names must be significant enough to be meaningful even when cited outside
   the Guide to the Software Engineering Body of Knowledge.

Question. As an educator/trainer, do you find that the topic names would be meaningful if cited
outside the Guide to the Software Engineering Body of Knowledge?


Yes, subject to my caveat about general industry knowledge of the material. But then,
this is something the Body of Knowledge is being developed to address, I would think.


9) Topics need only be sufficiently described so that the reader can select the appropriate
   reference material according to his/her needs.

Question. As an educator/trainer, do you find that the topics are described in sufficient detail to
enable the reader to select reference material according to her/his needs? Could these descriptions be
briefer and still meet this requirement?


This is where a consistent way of doing this from Knowledge Area to Knowledge Area
might help since this Area chooses to make links between topics and references through
Appendices with matrices and checkmarks while the S/W Engineering Process Area, for
example, uses bibliographical references (numbers keyed to the reference list) within the
text itself. I find the latter easier to use since it makes it clear, right away, what the
connection between references and parts of the text would be. However, the matrices
are valuable, too, in that they show the level of coverage across the references/sources of
various topics. This is, in itself, useful information. I am, therefore, arguing for both text
citations and the matrices to be used in all Knowledge Areas.


10) Knowledge Area Specialists are expected to provide a text describing the rationale
   underlying the proposed breakdown(s).

Question. As an educator/trainer, do you find that the text describing the rationale of the
breakdowns is sufficient to understand the decisions made?


Yes, I thought the introductory material on this was good and provided sound basic
definitions that allow a person to know the stance being taken in the material.


11) Specific reference material must be identified for each topic. Each reference material can of
   course cover multiple topics.

Question. As an educator/trainer, do you find that adequate reference material has been selected?
Do you have any additional or alternative suggestions for reference material?


I think the material is comprehensive. I would suggest that there are some good on-line
S/W Engineering bibliographies and reference lists that should, perhaps, be cited either in
specific Knowledge Areas, or for the Body of Knowledge as a whole. I realize the caveat
that such web-based references change, etc., but the Web does seem to have a few
steady, reliable S/W Eng. reference lists that could be cited.


12) Proposed Reference Material can be book chapters, refereed journal papers, refereed
   conference papers or refereed technical or industrial reports or any other type of recognized
   artifact such as web documents. They must be generally available and must not be
   confidential in nature.

Question. As an educator/trainer, do you find that the selected reference material is generally
available and not confidential in nature?

I think this depends, in all cases, on some very local conditions regarding the actual users
of the Body of Knowledge. But, generally, I would say yes, it's all out there somewhere,
though access to it depends on one's financial or local library network resources. This is
not a problem that I think the Body of Knowledge can address other than to realize that
many hoped-for users of the Body of Knowledge may not work for major corporations,
research facilities, or academic institutions where access to reference material is relatively
easy (and inexpensive for the individual).


13) Knowledge Area Specialists are expected to identify, in a separate section, which Knowledge
    Areas of the Related Disciplines are sufficiently relevant to their assigned Software
    Engineering Knowledge Area to be expected knowledge of a graduate with four years of
    experience.

   This information will be particularly useful to, and will generate much dialogue between, the
   Guide to the Software Engineering Body of Knowledge initiative and our sister initiatives
   responsible for defining a common software engineering curriculum and standard
   performance norms for software engineers.

   The Related Disciplines are: Computer Science, Project Management, Mathematics, Systems
   Engineering, Management and Management Science, Cognitive Sciences and Human
   Factors and Computer Engineering.

   The specialists were asked to choose from lists of Knowledge Areas provided to them for each
   Related Discipline. If you, as a reviewer, wish to see these lists, they are available in the
   Proposed Baseline List of Related Disciplines, which can be downloaded under "Available
   Documents". If deemed necessary and if accompanied by a justification, Knowledge Area
   Specialists can also propose additional Related Disciplines not already included or identified
   in the Proposed Baseline List of Related Disciplines.

Question. As an educator/trainer, do you agree that the selected Related Disciplines are sufficiently
relevant to the Software Engineering Knowledge Area whose description you are reviewing to be
expected knowledge of a graduate with four years of experience? As an educator/trainer, do you
find that the Knowledge Areas identified within each selected Related Discipline meet these criteria
as well?


This is where the difference between education/training done through formal sources and
the kind of on-the-job learning done by entry-level practitioners is very clear. The Body of
Knowledge is ahead of the game for much of the software industry in that it is stating an
expectation of what a well-rounded professional education should include. However, many
organizations are clearly "getting by" with much less comprehensively trained people for
whom vendor- and platform-specific tool and technology training is deemed sufficient. Any
other knowledge individuals may possess is left up to chance, frankly.


14) Question. Do you have any additional comments?


Not sure if some of this might have fit in areas above, but here are some
section-by-section comments that I developed during my initial reading of the material
before I looked at what questions were formally being asked as part of the feedback:

1) Section I. A. 3. – Last bullet appears in a larger font size in my copy;
2) Section I. C. – States that "One wants software to be traceable to requirements insofar
     as possible (this may be thought of as a version of understandability)...." This seems
     to leave the impression that comprehension is the major goal for doing traceability,
     leaving its value in ensuring proper design and test coverage otherwise unmentioned,
     at least here.
3)   Section I. C. – The bullet list that follows the above material includes "Module Cohesion
     and Coupling", which I look upon as characteristics of architecture and design. Should
     others not be listed if this topic is to be raised?
4)   Section I. C. – I understand that the bullet list ending this section is used to provide just
     some examples of specific system characteristics that could affect the perception of
     quality, but, in general, isn't this just a matter of pointing out that any domain/industry
     will have its own requirements (as well as certain application/system types)? I don't
     think the bullet list adds much since it's not comprehensive as a list of functional
     requirements even for these examples.
5)   Section II. A. – The second-to-last sentence in the first paragraph is a point that I think
     should be made earlier since it is so key to defining SQA.
6)   Section II. C. 1. a. – The term "in a meeting" is used. I think this should be defined to
     make clear what the Body of Knowledge is assuming regarding what it takes to "have a
     meeting", since it is saying a meeting is expected to apply the techniques to be
     discussed in this section. Is it presumed that techniques which do not presume
     sit-down, face-to-face gatherings in a co-located room do not qualify here?
7)   Section II. C. 1. a. – I have often used the following review/approval categories when
     thinking about the deliverables that need to be reviewed, in terms of the level of review
     participation expected/required. That is, what level of review does each deliverable
     need, and at what point? Some may need to go through more than one level, of course.
     I think any discussion of review process/practice needs to acknowledge not only the life
     cycle point (e.g., the chart in this section), but also the level of visibility as represented
     by these categories. I always instruct/teach this point when I cover review processes and
     deliverable approvals. Where fully independent review is not feasible, it should be
     considered so that a policy exists regarding why not. The categories are: Individual
     creating the deliverable, i.e., their say-so alone is enough to move to the next stage of
   the life cycle (e.g., Unit Test); Peer must approve the deliverable/results to move
   forward; (Immediate) Management must approve to move forward, i.e., the
   management directly above the individual creating the deliverable; (Fully) Independent
   review/test of deliverable, i.e., someone (staff and/or management) not directly in the
   management chain of the individual creating the deliverable (e.g., an independent
   testing organization); Client must review/approve deliverable/results.
8) Section II. D. – If this is going to be the only measurement discussion in the Body of
   Knowledge, then I think this topic is rather under-served. There are a lot of bullet lists
   here, more than in the rest of the Area, so it appears somehow incomplete or just an
   outline of what should be here. Again, perhaps there is another place where
   something more substantial is going to be said about measurement? If so, then I
   would not be as concerned about what's here. But since there is no reference to a
   larger treatment, I think there is far too little guidance here on the subject of
   measurement. The Additional References and Standards sections (IV. B. & C.)
   have reasonable sources for further reading, but this content section seems to skim
   over the topic as well as take a "right/wrong" approach to quality measurement, i.e., it's
   a defect or problem or it is okay, when the nature of various quality characteristics is
   that there may be degrees of acceptability defined.
