Semantic Web Study Guide
Blue indicates student response. Please edit as you find better and more concise answers.


1) There are three general forms of Expertise Location. What are the three (3) and what
   are the advantages and the disadvantages of each?
      Derivation: actions and writings. Advantages: based on actual work products; the
      context of the document itself is proof of expertise. Disadvantages: references from
      papers and documents may or may not reflect the author's own area of expertise
      (e.g., a secretary or XO who prepared the document), and there are no criteria for
      evaluation.

       Declaration: bios and self-descriptions. Advantages: self-declared to promote
       themselves; resume-style content. Disadvantages: validation depends upon SME
       input, and the data ages.

       Designation: organizational descriptions such as work role and PeopleSoft
       information. Advantages: common terms; accuracy due to TPC ratings based on the
       work role. Disadvantage: work roles are often interpreted differently.

2) How can natural language processing support and benefit expertise location?
     NLP is the creation of “understanding” from human communication by extracting
     entities and appropriately tagging the objects.

       Currie: NLP supports each form of expertise location by extracting keywords
       (entity extractors) and establishing "triples" to provide context and relationships,
       determining who is writing about a particular topic and reporting that person as a
       potential expert based on writings, PeopleSoft data, or bios.

       Deanne Answer: Through text mining and profiling combined with findability via
       metadata, taxonomies, thesauri, ontologies, and folksonomies to connect or map
       options for expertise location. NLP entity extractors and fact extractors can operate
       on natural language, where roughly 80% of the data is unstructured and 20% is in a
       database.

3) Knowledge Management Systems often have difficulty being accepted and used in
   organizations. What do Metcalfe's Law, Reed's Law and the New Technology Chasm
   tell us about this problem?

       Currie: The value of a system is proportional to the number of users on the
       system/network. The more users you have on the network, and the easier it makes
       their jobs, the more likely it is that people who do not normally accept change
       quickly will move toward adopting the new KM system. Put early adopters on the
       system and modify it to suit their needs. The system that facilitates knowledge
       sharing for all users is the best one.




       Deanne answer: These laws provide techniques for evaluating the circumstances.
       The value of a system is proportional to the number of users on the system/network.
       Metcalfe's and Reed's laws are based upon a false assumption that all connections
       and groups are equally valuable.
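
A rough, illustrative sketch (not from the lecture) of why these laws matter for adoption:
under Metcalfe's Law the value of a network grows roughly with the square of the number
of users, and under Reed's Law with the number of possible subgroups, so every
additional active user disproportionately raises a KM system's perceived value. The
numbers below are only an illustration of the scaling, not a real valuation model.

# Illustration only: how estimated network "value" scales with active users
# under Metcalfe's Law (~n^2 pairwise links) and Reed's Law (~2^n subgroups).
def metcalfe_value(n: int) -> int:
    # possible pairwise connections, n*(n-1)/2
    return n * (n - 1) // 2

def reed_value(n: int) -> int:
    # possible non-trivial subgroups, 2^n minus singletons and the empty set
    return 2 ** n - n - 1

for users in (5, 10, 20, 40):
    print(f"{users:>3} users: Metcalfe ~ {metcalfe_value(users):>7,}  "
          f"Reed ~ {reed_value(users):>16,}")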

4) What are the advantages of Results Driven Incrementalism (RDI) in the development
   of knowledge management systems? How are RDI and Evo related?

       Currie: Deliver an increment, get it to the user, receive feedback, and build
       improvements into the next increments. RDI delivers specific capabilities at each
       increment and is fitted to the culture at each increment. Rarely does anyone guess
       right the first time. The advantage is that the tools get used by the users quickly,
       and improvements can be recommended and delivered at each increment. EVO
       (evolutionary) means something evolves, and by definition you don't know where
       the end is going to be; it goes in ways we cannot anticipate.

       Deanne: RDI speeds the achievement of tangible business benefits, reduces overall
       time to implement the project, and every step is concrete, so any point of failure is
       rectified immediately. Desired results must be captured. Incremental business
       process changes are easier to manage. RDI is done to fit the culture; EVO doesn't
       have an end point. Both RDI and EVO do drops/increments, but RDI focuses on
       culture versus no end in mind. In EVO, the system may never get used because it
       doesn't fit the culture.

5) How does a taxonomy-based KMS respect privacy when it mines email?

       Allow users to review their profiles; leave the controversial tags (like “drinking”)
       off the taxonomy list so that they are never sought; or, allow users to review the
       taxonomy (tags being searched).

       Currie: It can be designed to look only at specific knowledge areas, not words
       outside the key word list.

       Deanne: The KMS could profile the content of what the organization does and
       open the profile for use. Tacit profiles can be used to make the profiles more
       accurate.

6) What does designing for acceptability (acceptance), as a design consideration or
   -ility, mean as it relates to knowledge management systems? Design for
   changeability?

       Currie: These areas are all directly related. Build changeability into the system:
       agility to change it quickly and flexibility to change it easily. Choose the -ilities to
       design for, such as changeability, scalability, and acceptability.



          Deanne: Acceptability covers the analysis users will perform to decide how
          and/or whether they should use the system. Have the users involved. Analyze
          their business processes. Build the system to fit the culture. Design for
          changeability means designing the system so that it can be changed by the user;
          this is often translated to mean agility and flexibility.


7) Once an organization’s culture is understood and the rules for change are clear, what
   step would you take next in building a knowledge based organization?

         Currie: Select a “low hanging fruit” with a large “bang for the buck” and get it out
         to the users quickly to build corporate support.

          Deanne: Look for low-hanging fruit and get the biggest bang for the buck for the
          effort toward the organization's objectives.

8) Tiwana's Road Map: How does it differ from SAIC's stated KM Methodology?

         Tiwana takes a more structured approach (akin to a systems engineering
         methodology), while SAIC starts with a “pilot” implementation.

         Currie: Tiwana also recommends a complete KM strategic assessment to align
         KM goals with Corporate goals. Tiwana begins with a lengthy evaluation of
         current IT and tries to leverage that investment. SAIC quickly tries to establish
         pilots that can provide a value quickly and build corporate support.

          Deanne: Tiwana uses an SE approach and studies the organization to align with
          corporate goals. SAIC leads with a pilot or prototype and uses RDI to get to KM.

9) Describe two techniques for “jumping” “The Technology Chasm” and increasing the
   likelihood of gaining organizational acceptance for a knowledge management
   initiative.

         Currie:
    1.   Empower people to create their own communities
    2.   Put as many early adopters on a project as possible
    3.   Put metrics into the system
    4.   Phase the system changes to meet workforce tolerances
    5.   Don’t call it a KM system

         Deanne: 1. Use RDI to empower people to make it happen. 2. Find people that
         are early adopters of the approach and look for use by example and reward
         publicly.




10) If one document refers to “high” as on drugs and another refers to “high” as in height
    or altitude, what is available for distinguishing between the two when automatically
    classifying documents? How would an automatic system accomplish this?
        Meta-data (tagging) is used to distinguish between the two objects.
        (http://www.searchtools.com/info/metadata.html)
        Automatic system would take advantage of this by making inferences based upon
        the tags.

       Currie: The word would be tagged (using XML) and then a relationship would be
       built using an entity extractor or taxonomy to develop a "triple" of Subject,
       Predicate, Object that establishes context and meaning.

       Deanne: Tagging. The tags can be controlled to support the specific classification
       schema an organization is operating on. An entity extractor looks for triples to
       develop context and meaning.
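
As a purely illustrative sketch (not part of the course material), an automatic classifier
could implement this with sense-specific context keywords acting as lightweight tags; the
keyword lists below are invented for the example.

# Toy word-sense tagger: decide whether "high" means drug-related or
# height/altitude by counting sense-specific context words in the document.
DRUG_CONTEXT = {"drugs", "marijuana", "intoxicated", "substance", "overdose"}
ALTITUDE_CONTEXT = {"altitude", "mountain", "feet", "elevation", "aircraft"}

def tag_sense_of_high(text: str) -> str:
    words = set(text.lower().split())
    drug_score = len(words & DRUG_CONTEXT)
    altitude_score = len(words & ALTITUDE_CONTEXT)
    if drug_score > altitude_score:
        return "high (drug-related)"
    if altitude_score > drug_score:
        return "high (height/altitude)"
    return "high (ambiguous)"

print(tag_sense_of_high("The suspect appeared high on an unknown substance"))
print(tag_sense_of_high("The aircraft flew high above the mountain at 30000 feet"))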

11) In determining the best KMS deployment strategy for an organization, what must I
    know first and foremost?

       Currie: The CULTURE of the company and workforce.

       Deanne: The culture of the org and its desired outcomes.


12) How are Results Driven Incrementalism and Design for Flexibility/Design for
    Agility related?

    Note: Flexibility characterizes a system’s ability to be changed easily. Agility
    characterizes a system’s ability to be changed rapidly.

       Currie: Design a system that can be changed easily and quickly, evaluated by
       users, and modified to better satisfy their needs.

       Deanne: Change occurs based upon the results you are looking for. RDI packages
       change into incremental elements that support specific objectives. Flexibility looks
       for easy ways to change; agility characterizes how fast you can change.




                                       Sample Set 2




                               [Figure: The Smart Data Continuum]

1) In the figure above (The Smart Data Continuum), explain the difference between the
   two levels a) XML taxonomies and documents with mixed vocabularies and b) XML
   ontology and automated reasoning

The "XML taxonomies and documents with mixed vocabularies" level represents a point in
time when all tagging has been accomplished and is in place; many would say that we are
at that stage with regard to information available on the internet. The "XML ontology and
automated reasoning" stage represents a point in time when KM tools can conclude facts
that are not explicitly stated in the information; they accomplish this automated reasoning
through "declarative" logic. This latter stage represents the implementation of
inferencing.

Currie:
            a. Taxonomies are used to "tag" data and establish "specific" relationships
               using class-subclass categories. These are further developed by linking the
               subject to the verb and predicate as part of the context of the sentence or
               paragraph.
            b. We use ontologies to establish more "general" relationships that allow for
               automated reasoning, using "inferencing" to make relationships with other
               information and generate new facts.

Deanne: A. XML taxonomies and mixed-vocabulary documents are classifications of
data. B. An ontology is a data model of how that data is represented and used (reasoning).



2) Precision and Recall: What is the difference? What is the metric for each? Provide an
    example of precision-based search and of recall-based search.
(ref: http://www.inxight.com/pdfs/categorizer_wp.pdf)
Precision is the number of correct answers as a percentage of all answers a system
produces. Precision is a measure of how well a category definition finds only relevant
documents on a category you’re examining, even if it misses some relevant documents.

Recall is the number of correct answers actually produced as a percentage of the total
number of correct answers that can be produced. Recall is a measure of how well your
category definition finds all relevant documents on a category you’re examining, even if
it includes some irrelevant documents.

    Precision = (Number of Relevant Retrieved) / (Total Number Retrieved)

    Recall = (Number of Relevant Retrieved) / (Total Number Relevant)




As you increase precision, your recall is likely to fall, and vice versa.

Precision and Recall Example
Suppose you ask a text categorization system “What topic codes apply to my
document?” Not being a perfect system, it answers: “Sports, Europe, and World
Cup,” when the correct answer is “Police, Sports, South America, and World
Cup.” Since two out of three of the system’s answers are correct, the precision of
this answer is 2/3 or 66%. And since there are four correct answers that can be
produced, the recall score of this answer is 2/4 or 50%.
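
A short sketch (added for illustration) that computes the two metrics for exactly this
topic-code example:

# Precision and recall for the topic-code example above.
predicted = {"Sports", "Europe", "World Cup"}
correct = {"Police", "Sports", "South America", "World Cup"}

relevant_retrieved = predicted & correct               # {"Sports", "World Cup"}
precision = len(relevant_retrieved) / len(predicted)   # 2/3, about 66%
recall = len(relevant_retrieved) / len(correct)        # 2/4, or 50%

print(f"precision = {precision:.2f}, recall = {recall:.2f}")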

   Currie: Google is a "recall"-based search engine. It seeks to find popular web sites
   and documents that contain the "key word" searched. Precision search is more of an
   enterprise search capability, although "ask.com" is considered a web-based precision
   search engine. Relevance is a difficult measurement since most of these documents
   have some small measure of relevance. The metric is a percentage. The difference is
   measured by the relevance of what is retrieved.

       Precision = (# of Relevant Retrieved) / (Total # Retrieved)
       Recall = (# of Relevant Retrieved) / (Total # Relevant)

       o Precision search seeks to find the information most relevant to the actual search
         parameters.

Deanne: Same metric form for both. Precision compares the relevant documents retrieved
to everything retrieved, while recall compares the relevant documents retrieved to the
total number of relevant documents (sensitivity).

3) What role does XML play in the statement "The trend is to put the 'smarts' in the
    data, not in the applications" by Mike Daconta?
XML provides an interoperable syntactical foundation upon which solutions to the larger
issues of representing relationships and meaning can be built. [1]

Thus Daconta’s statement is to support the "Semantic Web." Semantic web is a web of
machine-processable data in which the data itself is smart. This goal pushes data mobility
and description beyond syntactic interoperability toward semantic interoperability. The
above Figure shows the progression of data along a continuum of increasing intelligence.
Four stages are shown; however, there will be more fine-grained stages as well as more
follow-on stages. The four stages in the diagram progress from data with minimal
intelligence to data embodied with enough semantic information to allow us to make
inferences about it. [2]

      Currie: XML automatically tags a word as a “person, place, or thing” and then
      develops relationships between words within a sentence. Using RDF “triples”, XML
      allows us to distinguish between the words that are spelled the same, but have
      different meanings in certain contexts. Once the data is tagged in a common and
      universal format, any application can be used to make use of the data for a variety of
      purposes.

Deanne: XML creates the bridge between the computing platforms and the structure of
the data at the semantic level. The semantic level allows the data to be processed by
other machines at a global level, which then equates to interoperability.

4) What are triples (in the context of semantics)?



[1] http://www.xml.com/pub/a/2000/11/01/semanticweb/index.html
[2] http://web-services.gov/Designing%20the%20Smart-Data%20Enterprise.doc
A Triple is a tuple (a finite sequence of objects) consisting of three elements: Subject,
Predicate, and Object. In the context of the semantic web, the triple is the basic building
block used in Resource Description Framework (RDF).

    Currie: Triples are the “subject, predicate, object” relationship contained within
    unstructured text documents. These expressions can be used to determine the
    dominant theme of an article or file and also relate or cross reference that person or
    topic to additional information available across the Web.

Deanne: Triples are heuristics/rules expressed as statements. They involve the
identification of events, facts, and actions. As a data structure, they are known as subject,
predicate, object with assigned values. Triples can build an ontology that defines a class,
its properties, and its relationships with other classes.
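
For illustration only (plain Python tuples rather than an RDF library such as rdflib), a
minimal sketch of how triples can store statements and be queried by pattern:

# Statements stored as (subject, predicate, object) triples.
triples = [
    ("Tim_Berners-Lee", "invented", "World_Wide_Web"),
    ("World_Wide_Web", "extendedBy", "Semantic_Web"),
    ("Semantic_Web", "uses", "RDF"),
    ("RDF", "encodes", "triples"),
]

def query(subject=None, predicate=None, obj=None):
    # None acts as a wildcard in the pattern.
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(subject="Semantic_Web"))   # everything stated about the Semantic Web
print(query(predicate="uses"))         # who uses what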

5) What is the vision of the Semantic Web? What role does Inference play in that
   vision?

Short Answer: The Semantic Web is an extension of the current Web better enabling computers
and people to work in cooperation. [3]

Long Answer: The Semantic Web is a vision for the future of the Web in which
information is given explicit meaning, making it easier for machines to automatically
process and integrate information available on the Web. The Semantic Web will build on
XML's ability to define customized tagging schemes [XML] and RDF's flexible approach
to representing data [RDF Concepts]. The next element required for the Semantic Web is
a web ontology language which can formally describe the semantics of classes and
properties used in web documents. In order for machines to perform useful reasoning
tasks on these documents, the language must go beyond the basic semantics of RDF
Schema [RDF Vocabulary]. [4]

What is "inference" on the Semantic Web? [5]
Broadly speaking, inference on the Semantic Web can be characterized by discovering new
relationships. As described elsewhere in this FAQ, the data is modeled as a set of (named)
relationships between resources. “Inferencing” means that automatic procedures can generate
new relationships based on the data and based on some additional information in the form of an
ontology or a set of rules. Whether the new relationships are physically added to the set of data,
or are returned at query time, is simply an implementation issue.
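
To make "generating new relationships" concrete, here is a small sketch (my own
illustration, not from the W3C FAQ) of rule-based inference over triples: a transitivity
rule for a hypothetical locatedIn predicate derives facts that were never explicitly stated.

# Toy inference: apply a transitivity rule to locatedIn triples until no new
# facts can be derived. Data and predicate names are illustrative only.
triples = {
    ("Eiffel_Tower", "locatedIn", "Paris"),
    ("Paris", "locatedIn", "France"),
    ("France", "locatedIn", "Europe"),
}

def infer_transitive(facts, predicate="locatedIn"):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(facts):
            for (c, p2, d) in list(facts):
                if p1 == p2 == predicate and b == c and (a, predicate, d) not in facts:
                    facts.add((a, predicate, d))   # a new, inferred relationship
                    changed = True
    return facts

for fact in sorted(infer_transitive(triples) - triples):
    print("inferred:", fact)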

     Currie: The "Semantic Web" is a web of machine-processable data in which the data
     itself is smart. The Semantic Web agent relies on structured sets of information and
     inference rules that allow it to "understand" the relationship between the different
     data resources. Ultimately you want the system to develop inferences to deduce
     facts through logical conclusions.

[3] http://www.w3.org/2001/sw/SW-FAQ#What1
[4] http://www.w3.org/TR/webont-req/#section-introduction
[5] http://www.w3.org/2001/sw/SW-FAQ#What5

 Deanne: Tim Berners-Lee's dream of machines capable of analyzing the data on the
 web: its content, links, and transactions between people and computers. The
 web/computers do some of the thinking for you. Inference plays a key role in that the
 knowledge base needs a valid reference, but the key is that the reference identified must
 also be valid to the user.

 6) How was the profile shown below generated? Describe its creation from a technical
    perspective. It is a component of the search system and the expertise location system
    presented in class lectures.


[Figure: Collexis search profile screenshot] This profile is generated using associations
(ontology). Collexis calls this fingerprinting (profiling).
(Ref: http://www.collexis.com/doc/Collexis%20Product%20Overview%205.0.pdf)

 The search fingerprint is a sideways bar graph generated from the thesaurus (taxonomy).
 Thus, a taxonomy is essential to create this function.

 From Lecture: These are sliders that show statistics on entities. He spoke of Term
 Frequency-Inverse Document Frequency (TF-IDF), which shows specificity based upon
 the number of occurrences of a word.

     Currie: (Collexis) This is a taxonomy-based extractor that counts the number of
     times a name/word appears in an article and creates a report on the left side. Then an
     algorithm delimiter is applied to look for specificity around that word or name. The
     particular article with the highest score appears on the right side. If you slide the
     orange dot across the bar, the list of documents on the right side changes based upon
     the specificity.

 Deanne: It used a taxonomy to search based upon a fingerprint. This is done by
 maximizing the textual overlap. They used a profile (fingerprint/thesaurus) and a vector
 space model that adds data together, looking at the direction and magnitude of the data.
 The output can then affect the input (the user moving the orange dot for more precision),
 which reduces the required training.
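
Since the lecture mentions Term Frequency-Inverse Document Frequency, a brief sketch
follows (illustrative only, not the Collexis implementation) of how TF-IDF scores the
specificity of the terms that make up such a fingerprint:

# Minimal TF-IDF: terms frequent in one document but rare across the collection
# score highest, which makes them good "fingerprint" terms for that document.
import math

docs = [
    "knowledge management system taxonomy ontology",
    "ontology inference semantic web triples",
    "taxonomy classification entity extraction",
]

def tf_idf(term, doc, corpus):
    words = doc.split()
    tf = words.count(term) / len(words)
    df = sum(1 for d in corpus if term in d.split())
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

for term in ("ontology", "inference", "knowledge"):
    print(term, round(tf_idf(term, docs[1], docs), 3))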

 7) How was the column labeled “Entities” shown below generated? Describe its creation
    from a technical perspective. It is a component of the search system and the expertise
    location system presented in class lectures.



The entities are derived from the Taxonomy and are a collection of associations that were
not specifically requested. They are suggestions.

   Currie: This utilized an "entity extractor" to identify keywords, develop "triples" to
   understand the context, and cluster the results by mining the data to understand
   people, places, and things.

Deanne: It used an entity extractor to identify facts (people, places, things) and triples.
All results are related to the input, so the user knows what they will get back. The user
then mines the data further based upon the search results.

8) How is the column labeled “Concepts” shown below generated? Describe its creation
   from a technical perspective. It is a component of the search system and the expertise
   location system presented in class lectures.



[Figure: search results screenshot with a "Concepts" column]

The concepts are associations based upon the taxonomy but are not specified in the
original search.

KEY LESSON (from lecture): putting different technologies together, beyond indexing
alone.
   Currie: This is a “taxonomy” based search that identified concepts that were not part
   of the original search, but were referred to in many of the documents, so you may
   want to search these as well.

Deanne: It used a taxonomy. They went into the search results and selected concepts
that were not part of the input but were in the search results. The concepts are close, and
you may want to look at the data in that context.


9) XML Content Servers:
How do XML Servers bring greater precision to Search?
Greater precision is created by having more keywords (meta-data) available from diverse
sources.

Currie: They operate on the text, create metadata ("data about data") and "triples", and
through that can determine differences in meaning between words that are spelled the
same but mean something totally different in certain contexts. More info!!!




Deanne: They operate off of tags, but are not limited to keywords, and they use the
metadata to help. XML servers help by integrating content from other sources, so you
can have multiple uses of the data (think of a multi-dimensional array or the O'Reilly
make-your-own-book effort). Content servers allow the data to be tailored to the user.
The tagged data lets you get more precise in your query.
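
A small sketch (illustrative only, using Python's standard library rather than a
commercial XML content server) of how tagged content supports queries that are more
precise than plain keyword matching:

# Query tagged XML content: retrieve only <finding> elements tagged with the
# topic "taxonomy", rather than every document containing that keyword.
import xml.etree.ElementTree as ET

xml_content = """
<reports>
  <report author="Smith">
    <finding topic="taxonomy">Class-subclass tagging improved recall.</finding>
    <finding topic="ontology">Inference rules produced new relationships.</finding>
  </report>
  <report author="Jones">
    <finding topic="taxonomy">A controlled vocabulary reduced ambiguity.</finding>
  </report>
</reports>
"""

root = ET.fromstring(xml_content)
for finding in root.findall(".//finding[@topic='taxonomy']"):
    print(finding.text)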

9.b.) How can XML Servers improve university textbooks, making them more specific to
the goals and objectives of the course and, at the same time, improving the bottom line of
publishers?
Using SafariU as an example, professors can create a course-specific text from many
sources while the various authors still receive royalties.

Currie: In the example of SafariU, a professor could search a variety of textbooks
published by a majority of publishing houses and extract only the information they
wanted to teach their students as part of the class. The professor would extract the
chapter, page, or paragraph, put it into the new publication, and pay only for the extracted
portion via a pre-determined royalty fee. The new publication would be printed and
made available in the bookstore within weeks.

Deanne: The XML content servers allow you to look at the content of the data (e.g.,
search all books for topic X). The professor can then select the most relevant data points
for the course he/she is teaching to get all the appropriate content for a book tailored to
their syllabus and course objectives. Content is based upon what the XML-tagged data
repository contains. The publisher can still make money because the books are not
reusable beyond one instructor's course. The author makes money by selling the content
of their book by subject/page.


10) Explain the difference between Information Extraction and Document Retrieval.
Document Retrieval is bringing back documents while Information extraction is
retrieving the actual portion of the document.

Currie: A document retrieval process identifies the subject of the document and retrieves
it for viewing. Info extraction finds the exact paragraph you want from within a given
document and, in some cases, combines paragraphs from multiple documents and builds
a new document on-the-fly.

Deanne: Information Extraction gets you a phrase from a document. Document retrieval
returns the entire document for viewing.

11) Entity Extractors work primarily by heuristics. Explain this statement.
Heuristics are rules (rules-of-thumb). Entity extraction is based upon a set of rules that
have been gathered over many years for a particular area.

   Currie: Heuristics are rules. The rules are normally established by experience over
   long periods of time. These rules tell the entity extractor exactly how to interpret the
   information. For instance, it might say that if it sees a number that begins with a “(“
   and has 10 numbers behind it, it’s a phone number.

Deanne: Heuristics are rules: if X then Y. If enough rules turn out to be true, then you
arrive at a conclusion. Generally rules are learned by experience. Rules can be very
brittle, meaning that if the expected pattern is not there, the system does nothing. Data is
grouped into canonical forms, e.g., dates, phones, people, etc. Entity extractors then use
heuristics for co-reference and the ability to match an anaphor to its referent. All of these
actions are done with a series of rules.
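
A minimal sketch (illustrative; real extractors carry far larger rule sets) of heuristic
entity extraction, including a phone-number rule like the one described above:

# Rule-based entity extraction: each regular expression encodes a heuristic,
# e.g. "an opening parenthesis followed by digits in a phone pattern is a
# phone number". Patterns are illustrative, not production-grade.
import re

RULES = {
    "PHONE": re.compile(r"\(\d{3}\)\s*\d{3}-\d{4}"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def extract_entities(text):
    entities = []
    for label, pattern in RULES.items():
        entities.extend((label, match) for match in pattern.findall(text))
    return entities

sample = "Call (703) 555-0123 before 6/3/2010 or email kms@example.com."
print(extract_entities(sample))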

12) An Entity Extractor is often used to process text prior to its placement in an XML
    Content Server. What value does the Entity Extractor perform? How has it added to
    or changed the text?

Entity Extractors figure out what objects in the world a statement is talking about
(people, place, thing, etc) while a Fact Extractor figures out what the statement is saying
about the object. (Rau, pg 397, 2nd full para)

   Currie: The Entity extractor does not change the text; it merely tags it and provides
   meaning as to what the word is (name, place, thing). The value of the entity extractor
   is its ability to find information, co-reference it to other information, and establish
   structure.


Deanne: An entity extractor reads all the text, looks for things, and tags them. The data
gets grouped into canonical forms, and it can then try to match an anaphor to its referent.
All the data gets tagged. Tagged data is wonderful for an XML content server, because
the tags can then be searched to get some insight into the data within the document. This
is of great value to a user because it makes the data representable in a structured manner
so that the search engines can rapidly respond to the search. The entity extractor doesn't
modify the text, but tags the item.

13) Label each of the figures below: Classification, Key Word Indexing, Clustering,
   Entity Extraction, Fact Extraction




[Figure: Text Technology Continuum, running from Statistical to Linguistic approaches
and from "aboutness" toward deeper meaning]
      - Keyword Index
      - Clustering
      - Classification
      - Entity Extraction
      - Fact Extraction (who did what to whom, when, where, with what instrument, etc.)

Note: Clustering is purely statistical.

14) System of Systems
 Complete the following table by describing the Applicability of each Discriminating
 Factor, that is, the meaning and application of each Discriminating Factor as it relates
 to System of Systems.

Discriminating Factor: Applicability

Stable Intermediate Form: Satisficing; trying to get several systems to work together
without full integration; try something and see how it works.
   Deanne: a repository of facts as data occurs and people do what they do. Try,
   satisfice, rest, and see how it evolves.

Policy Triage: Try something and see how it goes. Let it fail, or enhance the system if it
seems promising.

Leverage at Interfaces: With many players, use N-squared (N2) charts to see what is
occurring at the interfaces.
   Deanne: N2 diagrams; use when there are many players and functions.

                       From the Rao textbook, KM Tools & Techniques

15) APQC classifies CoP’s into four types…….

APQC classifies Communities of Practice (CoPs) into four types:
     - Helping (peer-to-peer sharing of insights)
     - Best Practice Sharing (sharing of documented verified user practices)
       - Knowledge Sharing (connecting of members)
       - Innovation (cross-boundary idea generation)
(Rao, pg 12, 1st full para)

   1. Helping – peer-to-peer sharing of insights; 2. Best Practice Sharing – sharing of
   verified user practices; 3. Knowledge Sharing – connecting members; and 4.
   Innovation – cross-boundary idea generation
Deanne: p 12

16) At Infosys, authors earn ……..
(Rao, 14) Authors earn Knowledge Currency Units (KCU) when their documents or
artifacts are accepted into the KMS. KCUs are a KM tool that satisfies three major
purposes: 1. Reward and Recognition, 2. Measuring quality of knowledge assets, and 3.
Measurement of KM benefits. Further, KCUs serve as a metric.

Deanne: KCU is an incentive program that aims to satisfy reward and recognition,
measure the quality of knowledge assets, and measure KM benefits. Page 14

17) SNA common patterns that can be identified include…….

Common patterns that can then be identified include clusters (dense groups),
connectors (individuals linking to many others), boundary spanners (individuals
connecting to other parts of an organization), information brokers (those who
connect clusters), and outliers (peripheral specialists). (Rao pg 16)

(Rao, 331) SNA = Social Network Analysis.
    Identify teams and roles of individuals
    Identify isolated teams or individuals
    Spot opportunities for connecting subgroups
    Target opportunities to improve knowledge flow
    Raise awareness of importance of informal networks



18) Mathematical concepts or metrics that can be applied to SNA’s include……

Mathematical concepts like degrees of separation between nodes, the number of
connections to/from a node, centrality of the overall network, and density of possible
connections apply here. (Rao, pg 16, 4th full para)

The number of ties and the strength of the ties reflect group membership and the
affinities in these groups. This information can then be used to address knowledge
problems (build better teams), communication problems (open more channels of
dialogue), or quality problems (increase the frequency of communication with
experts). (Rao, pg 16, 5th full para)

(Rao, 336)



      Surveys – users fill out a form
      Ethnographic Interviews – observation of users and interviews
      Electronic Activity Mapping – electronic tracking of activity and interactions
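
A brief sketch (illustrative; assumes the third-party networkx package, not part of the
Rao text) showing how a few of these SNA metrics can be computed from
electronic-activity data:

# Compute common SNA metrics on a toy communication network.
# Requires networkx (pip install networkx).
import networkx as nx

# Each edge is an observed communication link between two people.
G = nx.Graph([
    ("Ann", "Bob"), ("Ann", "Cara"), ("Bob", "Cara"),   # a dense cluster
    ("Cara", "Dev"),                                    # Cara spans a boundary
    ("Dev", "Eli"), ("Dev", "Fay"), ("Eli", "Fay"),     # a second cluster
    ("Gus", "Fay"),                                     # Gus is nearly an outlier
])

print("density:", round(nx.density(G), 2))        # density of possible connections
print("degree:", dict(G.degree()))                # connections to/from each node
print("betweenness:", {n: round(v, 2)             # high values mark brokers/connectors
                       for n, v in nx.betweenness_centrality(G).items()})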

19) …….. a product of SNA, is very meaningful to people, and can help conceptualize
    deeper organizational patterns.
(Rao, 3rd & 4th full paras on pg 16) Graphical mapping.

Mathematical concepts like degrees of separation between nodes, the number of
connections to/from a node, centrality of the overall network, and density of possible
connections apply here. Common patterns that can then be identified include
clusters (dense groups), connectors (individuals linking to many others), boundary
spanners (individuals connecting to other parts of an organization), information
brokers (those who connect clusters), and outliers (peripheral specialists).

(Rao, 331-332) Social Capital is the sum of the relationships among people, including
their shared norms and values. In KM Strategy, the social capital is the “glue” that holds
together the other capital of interest such as Human, Structural, and Customer.

   Deanne: p16? As stated above, it also provides social capital that can be used to
   address knowledge problems (build better teams), communication problems (open
   more channels of dialogue), or quality problems (increase communication).

20) Online idea management systems have been deployed …….
(Rao, 19-20) …at companies like Bristol-Myers Squibb, Cadbury-Schweppes, and
Mott’s Apples. Managing an innovation pipeline, promoting an “idea central” or
ideas marketplace, and creating the “hundred headed brain” are some creative
approaches being adopted by KM pioneers, according to Mark Turrell, CEO of
Imaginatik Research.

This concept promotes corporate innovation by increasing connectivity, integrating
technologies, and collaborative filtering. This leads to the emergence of the “global
brain.”

Deanne: p20. At companies like Bristol-Myers Squibb and Mott's Apples, these systems
promote an "idea central" that infuses KM into innovation (R&D), business strategy,
organizational models, operating structures, etc., which can lead to enterprise
"macro-innovation" and ultimately to the "global brain" idea.

21) According to Nonaka and Nishiguchi, knowledge ……
(Rao, 23) They say that knowledge should be “nurtured” rather than “managed.”

New IT platforms and tools along with human-oriented approaches can help greatly
in knowledge-sharing processes: CAD/CAM/CAE (which improve the efficiency of
product developer's inductive, deductive, and abductive reasoning processes),
simulation (to encourage experimentation), and prototyping (to refine solution
models).

22) Rao’s 8 C’s provide a framework (and a mnemonic) for a successful knowledge
    management practice……..
(Rao, Pg 34) The 8 Cs audit, which focuses on areas such as culture, cooperation, and IT
platforms.

Connectivity, Content, Community, Culture, Capacity, Cooperation, Commerce, Capital
   Deanne: p 34


23) In a nutshell, KM practice should begin with a systematic __audit__ which maps
   __communication__ and __information__ flows into a content-specific repository.
(Rao, Pg )

Deanne: What page was this from? I found similar on p 3 but not exact….


13) The knowledge “taxonomy” must fit the goals and strategies of the larger system



14) SNA common patterns and social networks (same as #17)



15) The number of ties and the strength of the ties reflect group membership and the
    strength of affinities in these groups.



24) Explain Faceted Navigation and provide examples: a Home Depot-like search where
you search for "tools", then choose between "hand tools" or "power tools", then saws,
then the type of saw. Endeca produces this kind of product.
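
A small sketch (my own illustration) of the faceted filtering pattern behind that kind of
Home Depot-style navigation; the catalog and facet names are invented:

# Faceted navigation: each facet (category, type) narrows the result set, and
# counts per facet value drive the "choose between ..." user interface.
from collections import Counter

catalog = [
    {"name": "circular saw", "category": "power tools", "type": "saw"},
    {"name": "hand saw", "category": "hand tools", "type": "saw"},
    {"name": "jigsaw", "category": "power tools", "type": "saw"},
    {"name": "claw hammer", "category": "hand tools", "type": "hammer"},
]

def facet_counts(items, facet):
    return Counter(item[facet] for item in items)

def filter_by(items, **facets):
    return [item for item in items
            if all(item.get(k) == v for k, v in facets.items())]

print(facet_counts(catalog, "category"))                  # top-level facet choices
saws = filter_by(catalog, category="power tools", type="saw")
print([item["name"] for item in saws])                    # drill down to power saws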



25)      Explain the concept of Deep Web mining: This uses a concept called “Federated”
      search which develops a dynamic search that spawns searches into other portals to
      extract exactly what you want and create a new document “on-the-fly”.



26)      Explain the Anaphora resolution concept: This is the task of linking pronouns
      later in a sentence or paragraph back to the original noun (subject) used as the topic
      of content.



What does Kotter say about the common errors made in Leading Change?
         o Not establishing a sense of urgency,
         o Not establishing a powerful guiding coalition,
         o Lacking a vision,
         o Under communicating the Vision (by a factor of 10),
         o Not removing obstacles to the new Vision,
         o Not planning for and creating short-term wins – incrementalism,
         o Declaring victory too soon,
         o Not anchoring changes in the corporate culture.


Business Case Studies:

To solve a problem, you can run an "Expertise Locator" search to find the five most
knowledgeable people and establish a CoP to let them solve the problem.

Be able to put several tools together to solve a problem. Always use at least 2 entity
extractors.

Be able to find various “triples” in a diagram.

Explain how you would "tag" something in XML: by (1) using taxonomies, (2) using
an entity extractor to tag it as a "person, place, or thing", or (3) using folksonomies to tag
something personally.


Explain the difference between taxonomies and ontologies.
Taxonomies describe specific relationships (class, subclass), synonyms. Ontologies are
much broader; this person co-authored a book with another person or works for this
company. The number of relationships is the big difference between these two terms
(Ontology establishes a much larger relationship).


Define Knowledge Management: the process employed by organizations to capture and
share experiences, expertise, and insight, promote collaboration, and provide broad access
to the organization's information assets without regard to source or structure.



