					JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 21, 1053-1075 (2005)




              A Content Management Scheme in a
          SCORM Compliant Learning Object Repository

                JUN-MING SU+, SHIAN-SHYONG TSENG*+, CHING-YAO WANG+
                 YING-CHIEH LEI+, YU-CHANG SUNG+ AND WEN-NUNG TSAI+
+Department of Computer Science
                                    National Chiao Tung University
                                          Hsinchu, 300 Taiwan
                        *Department of Information Science and Applications
                                            Asia University
                                         Taichung, 413 Taiwan
                                E-mail: {jmsu; tsaiwn}@csie.nctu.edu.tw
                     E-mail: {sstseng; cywang; gis90529; is88036}@cis.nctu.edu.tw


                With the rapid development of the Internet, e-learning systems have become more
          and more popular. To enable sharing and reusing of teaching materials across different
          e-learning systems, the Sharable Content Object Reference Model (SCORM) has become the most
          popular international standard among the existing ones. In an e-learning system, teaching
          materials are usually stored in a database, called the Learning Object Repository (LOR).
          In the LOR, a huge amount of SCORM teaching materials, including associated learning
          objects, will result in management problems in a wired/wireless environment. Recently,
          the SCORM organization has focused on devising ways to efficiently maintain, search,
          and retrieve desired learning objects in LORs for users. This effort is referred to as the
          Content Object Repository Discovery and Resolution Architecture (CORDRA).
                In this paper, we propose a management approach, called the Level-wise Content
          Management Scheme (LCMS), that can be used to efficiently maintain, search, and re-
          trieve learning contents from a SCORM compliant LOR. LCMS includes two phases: the
          Construction phase and Search phase. In the former, the content structure of SCORM
          teaching materials (Content Package) is first transformed into a tree-like structure, called
          a Content Tree (CT), to represent each piece of teaching material. Based on Content
          Trees (CTs), the proposed Level-wise Content Clustering Algorithm (LCCAlg) then cre-
          ates a multistage graph showing relationships among learning objects (LOs), e.g., a Di-
          rected Acyclic Graph (DAG), called the Level-wise Content Clustering Graph (LCCG).
          The LCCAlg determines the relationships among LOs in different teaching materials by
          clustering all of the LOs for each level from bottom to top, according to a similarity
          measure. Moreover, a maintenance strategy is employed to rebuild the LCCG if neces-
          sary by monitoring the condition of each node within the LCCG. The latter employs the
          LCCG Content Searching Algorithm (LCCG-CSAlg) to traverse the LCCG and retrieve
          desired learning content with both general and specific LOs, according to queries sent by
          users in the wire/wireless environment. Some experiments have been conducted to
          test the proposed scheme, and the results are reported here.

          Keywords: learning object repository (LOR), e-learning, SCORM, content management,
          XML




Received July 4, 2004; revised February 1, 2005; accepted March 15, 2005.
Communicated by Robert Lewis.




                                                     1053
1054        J. M. SU, S. S. TSENG, C. Y. WANG, Y. C. LEI, Y. C. SUNG AND W. N. TSAI




                                 1. INTRODUCTION

      As Internet usage has increased around the world, e-learning systems that include
online learning, employee training courses, and e-books have been accepted globally.
E-learning systems allow learners to study at any time and at any location conveniently.
However, because the teaching materials in different e-learning systems are usually de-
fined in specific data formats, the sharing of teaching materials among these systems is
difficult, making the creation of teaching materials expensive. To solve the problem of
incompatible teaching material formats, several standard formats, including SCORM [1], IMS [2],
LOM [3], AICC [4], etc., have been proposed by international organizations. By means
of these standard formats, teaching materials in different learning management systems
can be shared, reused, and recombined.
      Recently, at SCORM 2004 (aka SCORM 1.3), ADL outlined the plans for the Con-
tent Object Repository Discovery and Resolution Architecture (CORDRA), which is
designed to serve as a reference model and is motivated by an identified need for con-
textualised learning object discovery. Based upon CORDRA, learners will be able to
discover and identify relevant materials from within the context of a particular learning
activity [1, 5, 6]. This effort shows how important the efficient retrieval of desired learn-
ing contents has become for learners. Moreover, in the mobile learning environment,
retransmitting an entire document under a connection-oriented transport protocol, such as
TCP, will result in lower throughput due to the head-of-line blocking and Go-Back-N
error recovery mechanism employed in an error-sensitive environment. Accordingly, a
suitable scheme for managing learning resources and enabling the retrieval of desired
learning resources is needed in the wire/wireless environment.
      In SCORM, related learning objects can be packaged into teaching materials by
means of a content packaging scheme. The teaching materials with structure information
can be represented as a tree-like structure described in the XML language [7, 8]. In this
paper, we propose a Level-wise Content Management Scheme, called LCMS, which in-
cludes two phases: a Construction phase and a Search phase. In the former, the content
structure of SCORM teaching materials (Content Package) is first transformed into a
tree-like structure, called a Content Tree (CT), to represent each teaching material. Based
upon Content Trees (CTs), the proposed Level-wise Content Clustering Algorithm
(LCCAlg) then creates a multistage graph showing relationships among learning objects
(LOs), i.e., a Directed Acyclic Graph (DAG), called the Level-wise Content Clustering
Graph (LCCG). Moreover, a maintenance strategy is employed to rebuild the LCCG if
necessary by monitoring the condition of each node within the LCCG. The LCCG Con-
tent Searching Algorithm (LCCG-CSAlg) is employed to traverse the LCCG and retrieve
desired learning content with both general and specific LOs, according to queries sub-
mitted by users in the wire/wireless environment.
      The LCCAlg determines the relationships among learning objects (LOs) in different
teaching materials by clustering all the LOs in each level from bottom to top, according
to a similarity measure. In addition, for concept generalization, the clustering information
of the nodes in the lower level is rolled up to their parent nodes in the upper level. Then,
the nodes of the LCCG store the clustering results, which include information about the
relationships among learning objects, to make it easy to search for interesting learning
contents. Based on the LCCG and proposed LCCG-CSAlg, users can retrieve not only




general materials but also specific materials. To evaluate the performance of our scheme,
some experiments were conducted, and the results are reported here.


                                2. RELATED WORKS

    In this section, we review the SCORM standard and some related works.

2.1 SCORM (Sharable Content Object Reference Model)

      Among the existing standards for learning contents, SCORM, which was proposed
by the U.S. Department of Defense’s Advanced Distributed Learning (ADL) organiza-
tion in 1997, is currently the most popular one. The SCORM specifications are a com-
posite of several specifications developed by international standards organizations, in-
cluding the IEEE [3], IMS [2], AICC [4], and ARIADNE [9]. In a nutshell, SCORM is a
set of specifications for developing, packaging, and delivering high-quality education
and training materials whenever and wherever they are needed. SCORM-compliant
courses leverage course development investments by ensuring that compliant courses are
“RAID,” that is, reusable: they are easily modified and used with different development
tools; accessible: they can be searched and made available as needed by both learners
and content developers; interoperable: they operate across a wide variety of hardware,
operating systems, and web browsers; and durable: they do not require significant modi-
fications with new versions of system software [10].
      In SCORM, a content packaging scheme is employed to package learning objects
into standard teaching materials, as shown in Fig. 1. The content packaging scheme




   Fig. 1. SCORM content packaging scope and corresponding structure of teaching materials.




defines a teaching materials package consisting of four parts: 1) Metadata, which
describe the characteristics or attributes of the learning content; 2) Organizations,
which describe the structure of the teaching material; 3) Resources, which denote the
physical files linked to each learning object within the teaching material; and 4) the
(Sub) Manifest, which describes a teaching material consisting of itself and other
teaching materials. In Fig. 1, the organizations define the structure of the whole teaching material,
which consists of many organizations containing an arbitrary number of tags, called
items, used to denote the corresponding chapter, section, or subsection within the physi-
cal teaching material. Each item as a learning activity can be also tagged with activity
metadata, which can be used to easily discover and reuse the activity within a content
repository or similar system and provide descriptive information about the activity.
Hence, based upon the concept of learning objects and the SCORM content packaging
scheme, teaching materials can be constructed dynamically by organizing the learning
objects according to learning strategies, students’ learning aptitudes, and evaluation re-
sults. Thus, individualized teaching materials can be offered to each student, and can also
be reused, shared, and recombined.

2.2 Other Related Research

      For fast retrieval of information from structured documents, Ko et al. [11] proposed
a new index structure which integrates element-based and attribute-based structure in-
formation to represent a document. Based upon this index structure, three retrieval
methods including 1) top-down, 2) bottom-up, and 3) hybrid methods, were proposed for
fast retrieval of information from structured documents. However, although the index
structure takes element and attribute information into account, it is too complex to be
managed when there is a huge number of documents.
      Efficiently managing and transferring documents over the wireless environment has
become an important issue in recent years. Several researchers [12, 13] have noted that
retransmitting whole documents is expensive due to faulty transmission. Therefore, to
efficiently stream generalized XML documents over the wireless environment, Wong et
al. [14] proposed a fragmenting strategy, called Xstream, for the flexible management of
XML documents in the wireless environment. With the Xstream approach, the structural
characteristics of XML documents are employed to fragment XML contents into
autonomous units, called Xstream Data Units (XDUs). Therefore, an XML document can
be transferred incrementally over the wireless environment based upon the XDU. How-
ever, determining the relationships among different documents and providing desired
contents from documents have not been discussed. Moreover, the above works [11-14]
did not take the SCORM standard into account.


       3. LEVEL-WISE CONTENT MANAGEMENT SCHEME (LCMS)

     In an e-learning system, teaching materials are usually stored in a database, called
the Learning Object Repository (LOR). Because the SCORM standard has been widely
accepted and applied, compliant teaching materials have also been created and developed.
Therefore, in the LOR, a huge number of SCORM teaching materials, including associ-




ated learning objects (LOs) will result in management problems. Recently, the SCORM
organization has focused on finding ways to efficiently maintain, search, and retrieve
desired learning objects in LORs for users. In this paper, we propose a new approach,
called the Level-wise Content Management Scheme (LCMS), which can be used to effi-
ciently maintain, search, and retrieve learning contents from SCORM compliant LORs.

3.1 LCMS Processes

     As shown in Fig. 2, the LCMS is divided into a Construction Phase and a Search
Phase. In the former, a content tree is created from the SCORM content package through
the CP2CT process, and then, a multistage Directed Acyclic Graph (DAG) with rela-
tionships among LOs, called the Level-wise Content Clustering Graph (LCCG), is cre-
ated and maintained using clustering techniques. The latter traverses the LCCG by means
of the LCCG Content Searching Algorithm (LCCG-CSAlg) and retrieves desired learn-
ing contents with general and specific LOs, according to queries received from users
over the wire/wireless environment.




        Fig. 2. The flowchart of the Level-wise Content Management Scheme (LCMS).


    The Construction Phase includes the following three processes:

• Content Package to Content Tree (CP2CT) Process: this transforms the content
  structure of the SCORM teaching materials (Content Package) into a tree-like structure
  with the representative feature vector and the same depth, called a Content Tree (CT),
  to represent each teaching material.
• Level-wise Content Clustering Process: this clusters LOs according to content trees
  (CTs) and establishes the level-wise content clustering graph (LCCG) used to deter-
  mine the relationships among LOs.




• LCCG Maintenance Process: this monitors the condition of each node within the
  LCCG and rebuilds the LCCG if necessary.

       The Search Phase includes the following two processes:

• SCORM Metadata Searching: this first searches all of the desired teaching materials
  using associated SCORM metadata by addressing the related nodes as entries of
  LCCG.
• Level-wise Content Searching: this then traverses the LCCG from the entry nodes to
  retrieve more precise learning objects from the LOR and deliver them to learners.

3.2 Content Package to Content Tree (CP2CT) Process

      Because we want to determine the relationships among LOs according to the con-
tent structure of the teaching materials, the organization information in the SCORM con-
tent package is transformed into a tree-like representation with a representative feature
vector, called a Content Tree (CT). To make the clustering process efficient, the depth of
all the CTs is the same. The CT is defined below.

Definition 1  Content Tree (CT) = (N, E), where
• N = {n0, n1, …, nm};
• E = {(ni, ni+1) | 0 ≦ i < the depth of the CT}.
     In a CT, each node is called a “Content Node (CN)” and contains a feature vector,
V, which denotes the representative feature of the learning contents within this node. E
denotes the link edges from a node ni in an upper level to a node ni+1 in the next lower level.

     In this scheme, we apply the Vector Space Model (VSM) approach [15, 16] to rep-
resent the learning contents in a CN. Thus, based upon the Term Frequency - Inverse
Document Frequency (TF-IDF) weighting scheme [17-21], each CN can be represented
by an N-dimension vector <tf1 × idf1, tf2 × idf2, …, tfn × idfn>, where tfi is the frequency
of the i-th term (keyword) in the document and idfi = log(n/df(ti)) is the Inverse Document
Frequency (IDF) of the i-th term (where n is the total number of documents and df(ti) is
the number of documents that contain the term).
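The TF-IDF weighting above can be sketched in Python (a minimal illustration, not the authors' implementation; the token-list input format and the ordered term list standing in for the KeywordSet are assumptions):

```python
import math
from collections import Counter

def tfidf_vectors(docs, terms):
    """Build a TF-IDF feature vector <tf_i * idf_i, ...> per document.

    docs:  list of token lists, one per content node (assumed format).
    terms: ordered list of keywords defining the vector dimensions.
    """
    n = len(docs)
    # df(t): number of documents containing term t
    df = {t: sum(1 for d in docs if t in d) for t in terms}
    vectors = []
    for d in docs:
        tf = Counter(d)
        vectors.append([tf[t] * math.log(n / df[t]) if df[t] else 0.0
                        for t in terms])
    return vectors
```

For two documents and terms ["a", "b", "c"], a term appearing in both documents gets idf = log(2/2) = 0 and so contributes nothing to the vector, while a term unique to one document is weighted by log 2.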
     To make it easier to determine the relationships among learning objects according to
the content structure, we assume that every content tree (CT) transformed from the con-
tent package has the same tree depth. However, in many teaching materials, the depths of
the content structures vary. Therefore, in a CT, if the depth of a leaf CN is too short, the
Virtual Node (VN) will be repeatedly inserted as its child node until the difference of the
desired depth has been filled. The feature vector of every VN is the same as that of its
parent CN or VN. Also, if the depth of a leaf CN is too great, its parent CN with the de-
sired depth will merge the information of all the included child nodes into one new CN
whose feature vector is generated by averaging these included child nodes.
     Example 1 shows the process of transforming the organization information of a
SCORM content package into a Content Tree (CT) with the feature vector V and the
same depth.




Example 1: Given the SCORM content package shown on the left side of Fig. 3, we take
TF-IDF as the weighting scheme to create the feature vector V in each CN. Because the
CN “Chapter 1” is too shallow, the VN named “1.1” is inserted, and its feature vector
V21 = <3, 2, 2> is the same as V11. Moreover, the CN “3.1” is too deep, so its included
child nodes, i.e., “3.1.1” and “3.1.2,” are merged into it, and its feature vector V24 is the
average of <1, 0, 1> and (<2, 1, 0> + <0, 1, 2>)/2 after the rolling-up process. The CT
produced by the CP2CT Process is shown on the right side of Fig. 3.




Fig. 3. The corresponding content tree (CT) of the content package (CP) following the CP2CT
        process.


 Algorithm 1: Content Package to Content Tree Algorithm (CP2CTAlgo)
 Symbols Definition:
 CP: denotes the SCORM content package.
 CT: denotes the Content Tree transformed from a CP.
 CN: denotes the Content Node in the CT.
 CNleaf: denotes the leaf node CN in the CT.
 DCT: denotes the desired depth of the CT.
 DCN: denotes the depth of a CN.
 Input: SCORM content package (CP).
 Output: Content Tree (CT) with feature vector.

 Step 1: For each element <item> in CP
         1.1 Create a CN with a feature vector based on the TF-IDF weighting scheme.
         1.2 Insert it into the corresponding level in the CT.
 Step 2: For each CNleaf in CT
         If the depth of CNleaf < DCT
         Then a VN will be repeatedly inserted as its child node until the depth of CNleaf
         = DCT.




         Else If the depth of CNleaf > DCT
              Then its parent CN in depth = DCT will merge the information of all in-
              cluded child nodes and run the rolling up process to average their feature
              vectors.
 Step 3: Content Tree (CT) with feature vector
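Step 2 of CP2CTAlgo can be sketched as follows (a hypothetical Python rendering; the `[vector, children]` node representation is an assumption, and the merge averages a node's own vector with the mean of its children's vectors, as in Example 1):

```python
def normalize_depth(node, target_depth, depth=0):
    # node = [feature_vector, child_list]; leaves have an empty child list.
    vec, children = node
    if depth == target_depth and children:
        # Branch too deep: fold the child nodes into this CN by averaging
        # its vector with the mean of the children's vectors (roll-up).
        mean = [sum(c[0][i] for c in children) / len(children)
                for i in range(len(vec))]
        return [[(v + m) / 2 for v, m in zip(vec, mean)], []]
    if depth < target_depth and not children:
        # Leaf too shallow: insert a Virtual Node (VN) that copies the
        # parent's feature vector; recursion repeats this to target_depth.
        children = [[list(vec), []]]
    return [vec, [normalize_depth(c, target_depth, depth + 1) for c in children]]
```

With the vectors of Example 1, merging children <2, 1, 0> and <0, 1, 2> into a node <1, 0, 1> yields (<1, 0, 1> + <1, 1, 1>)/2 = <1, 0.5, 1>.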

3.3 Level-wise Content Clustering Process

     After the organization information of the content package is transformed into a con-
tent tree (CT), the clustering technique can be applied to determine the relationships
among the content nodes (CNs) in the CT. Thus, in this paper, we propose a Level-wise
Content Clustering Graph, called the LCCG, which can be used to store the related
information of each cluster. Based on the LCCG, the desired learning content, including
general and specific LOs, can be retrieved for users.

3.3.1 Level-wise content clustering graph (LCCG)

    The LCCG is a multistage graph with information about the relationships among
LOs; i.e., it is a Directed Acyclic Graph (DAG). Its definition is given below.

Definition 2  The Level-wise Content Clustering Graph (LCCG) = (N, E), where:
• N = {(CF0, CL0), (CF1, CL1), …, (CFm, CLm)}. Each node, called an LCC-Node, stores
  the related information of a cluster: its Cluster Feature (CF) and Child List (CL). The
  CL stores the CF values of the included child LCC-Nodes in the next stage.
• E = {(ni, ni+1) | 0 ≦ i < the depth of the LCCG}. This denotes the link edge from a node
  ni in an upper stage to a node ni+1 in the next lower stage.

     To facilitate content clustering, the number of the stages of the LCCG is equal to the
depth of the CT, and each stage handles the clustering result of these CNs in the corre-
sponding levels of different CTs. That is, the top/lowest stage of the LCCG stores the
clustering results of the root/leaf nodes in the CTs, respectively. In addition, in the LCCG,
the Cluster Feature (CF) stores the related information of a cluster. This is similar to the
Cluster Feature in the Balance Iterative Reducing and Clustering using Hierarchies
(BIRCH) [22] clustering algorithm, and it is defined as follows.

Definition 3  The Cluster Feature (CF) of a cluster is defined as a triple, CF = (N, VS,
CS), where:
• N denotes the number of content nodes (CNs) in the cluster.
• VS = V1 + V2 + … + VN denotes the sum of the feature vectors (Vi) of the CNs.
• CS = |VS / N| denotes the Euclidean length of the average feature vector, where | |
  denotes the Euclidean length of a vector. (VS / N) can be seen as the Cluster Center
  (CC) of the cluster.

       Moreover, during the content clustering process, if a content node (CN) in a content




tree (CT) with a feature vector V is inserted into the cluster CFA = (NA, VSA, CSA),
then the new CFA = (NA + 1, VSA + V, |(VSA + V) / (NA + 1)|). An example of a Cluster
Feature (CF) and Child List (CL) is given in Example 2.

Example 2: Assume that a cluster C0 is stored in the LCC-Node NA with (CFA, CLA) and
contains four CNs with feature vectors <3, 3, 2>, <3, 2, 2>, <2, 3, 2>, and <4, 4, 2>.
Then VS = <12, 12, 8>, CC = VS / 4 = <3, 3, 2>, and CS = √(9 + 9 + 4) ≈ 4.69. Thus,
CFA = (4, <12, 12, 8>, 4.69). Moreover, assume that CLA = <CF1, CF2>. A new content
node CNB with feature vector VB = <8, 3, 2> is inserted into cluster C0 in NA, and the
child nodes of CNB belong to clusters C3 and C4, respectively. Then the new
CFA = (5, <20, 15, 10>, 5.385) and CLA = <CF1, CF2, CF3, CF4>.
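The CF update rule can be transcribed directly (a Python sketch reproducing the numbers of Example 2; the tuple representation of a CF is an assumption):

```python
import math

def insert_cn(cf, vec):
    """Insert a content node with feature vector vec into a cluster whose
    Cluster Feature is cf = (N, VS, CS), following Definition 3."""
    n, vs, _ = cf
    n += 1
    vs = [a + b for a, b in zip(vs, vec)]
    cc = [v / n for v in vs]                 # cluster center VS / N
    cs = math.sqrt(sum(x * x for x in cc))   # Euclidean length |VS / N|
    return (n, vs, cs)
```

Inserting <8, 3, 2> into CFA = (4, <12, 12, 8>, 4.69) yields (5, <20, 15, 10>, √29 ≈ 5.385), matching Example 2.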

3.3.2 Level-wise content clustering algorithm (LCCAlg)

    Based on the definition of the LCCG, we propose a Level-wise Content Clustering
Algorithm, called LCCAlg, which can create the LCCG according to the CTs trans-
formed for CPs. LCCAlg includes three phases: the 1) Single Level Clustering Phase, 2)
Content Cluster Refining Phase, and 3) Concept Generalizing Phase. Fig. 4 shows a
flowchart of LCCAlg.




          Fig. 4. Flowchart of the Level-wise Content Clustering Algorithm (LCCAlg).


(1) Single Level Clustering Phase:
      In this phase, the content nodes (CNs) of the CT in each tree level can be clustered
by means of different similarity thresholds. The content clustering process starts at the
lowest level and proceeds to the top level in the CT. All the clustering results are stored
in the LCCG. In addition, during the content clustering process, the similarity measure
between two CNs is defined by the cosine function, which is the most common approach
in document clustering [23, 24]. That is, given two CNs NA and NB, the similarity
measure can be calculated as follows:

     Similarity = cosine(VA, VB) = (VA · VB) / (|VA| |VB|),

where VA and VB are the feature vectors of NA and NB, respectively. The larger the value
is, the more similar the two vectors are. For example, two CNs are most similar when the




cosine value of their feature vectors is equal to 1. The Single Level Clustering Algorithm
(SLCAlg) is shown below.


Algorithm 2: Single Level Clustering Algorithm (SLCAlg)
Symbols Definition:
CNset: the content nodes (CNs) in the same level (L) of content trees (CTs).
T: the similarity threshold for the clustering process.
Input: CNset and T.
Output: The set of LCC-Nodes storing the clustering results of CTs.

Step 1: insert a CN node n0 ∈ CNset into a cluster in the LCC-Node.
Step 2: ∀ni ∈ CNset.
        2.1 If ∃ a cluster with similarity value > T
               Then insert ni into this cluster and update the related CF and CL in the
               LCC-Node.
               Else insert ni into a new cluster stored in a new LCC-Node.
Step 3: Return the set of LCC-Nodes.
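SLCAlg can be sketched in Python as follows (a simplified illustration: clusters are kept as `[N, VS]` pairs, each CN is compared against the cluster center via the cosine measure above, and the CF/CL bookkeeping of the LCC-Nodes is omitted):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def slc(cn_vectors, threshold):
    """Single Level Clustering: put each CN into the first cluster whose
    center is within the similarity threshold, else open a new cluster."""
    clusters = []                              # each cluster is [N, VS]
    for v in cn_vectors:
        for c in clusters:
            center = [x / c[0] for x in c[1]]  # VS / N
            if cosine(center, v) > threshold:
                c[0] += 1
                c[1] = [a + b for a, b in zip(c[1], v)]
                break
        else:
            clusters.append([1, list(v)])
    return clusters
```

Because CNs are assigned incrementally in input order, the results depend on that order, which is exactly what the Content Cluster Refining Phase below corrects.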

(2) Content Cluster Refining Phase:
     Because the SLCAlg algorithm performs the clustering process by inserting content
trees (CTs) incrementally, the content clustering results are influenced by the input order
of CNs. In order to reduce the effect of the input order, the Content Cluster Refining
Phase is necessary. Given the content clustering results of SLCAlg, the Content Cluster
Refining Phase utilizes the cluster centers of the original clusters as inputs and performs
the single level clustering process again to improve the accuracy of the original clusters.
Moreover, the similarity between two clusters can be computed with the Similarity
Measure as follows:

     Similarity = cosine(CCA, CCB) = (CCA · CCB) / (|CCA| |CCB|)
                = ((VSA / NA) · (VSB / NB)) / (CSA × CSB).

      After the similarity is computed, if two clusters A and B have to be merged into a
new cluster, then the CF of the new cluster is CFnew = (NA + NB, VSA + VSB,
|(VSA + VSB) / (NA + NB)|).

(3) Concept Generalizing Phase:
     The concept generalization phase is used to make the feature vectors of CNs of in-
ternal LCC-Nodes in the LCCG more objective and representative. Thus, we propose
using a roll-up operation to compute the feature vectors of CNs by averaging the cluster
centers of the content clusters to which their included child CNs belong.
     The Level-wise Content Clustering Algorithm (LCCAlg) is given below. First, an
example illustrating the creation of a Level-wise Content Clustering Graph (LCCG) is
presented.




Example 3: As shown on the left side of Fig. 5, we assume that there are two content
trees, CTA and CTB. The content nodes CNA and CNB belong to the cluster C01, and
their included child CNs belong to C11, C12, and C13. After the Level-wise Content Clus-
tering Process, the LCCG is as shown on the right side of Fig. 5. The LCC-Node C01 in
stage S0 contains two CNs (CNA and CNB) and includes the child LCC-Nodes C11, C12,
and C13 in stage S1. Moreover, in Fig. 5, CNA with feature vector <1, 1, 2> contains two
child CNs, where A0 is in cluster C11 and A1 is in cluster C12. The cluster centers (CC) of
C11 and C12 are <3, 3, 2> and <3, 2, 4>, respectively. Then, after the roll-up operation is
performed, the new feature vector of CNA is ((<3, 3, 2> + <3, 2, 4>)/2 + <1, 1, 2>)/2
= <2, 7/4, 5/2>.




 Fig. 5. An example to illustrate the creation of a Level-wise Content Clustering Graph (LCCG).
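The roll-up computation of Example 3 can be sketched with a small helper (a hypothetical function; the paper does not prescribe this exact signature):

```python
def roll_up(own_vec, child_cluster_centers):
    """Concept-generalizing roll-up: replace a CN's feature vector by the
    average of its own vector and the mean of the cluster centers that its
    child CNs belong to."""
    k = len(child_cluster_centers)
    mean_cc = [sum(cc[i] for cc in child_cluster_centers) / k
               for i in range(len(own_vec))]
    return [(o + m) / 2 for o, m in zip(own_vec, mean_cc)]
```

For CNA of Example 3, `roll_up([1, 1, 2], [[3, 3, 2], [3, 2, 4]])` reproduces <2, 7/4, 5/2>.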


 Algorithm 3: Level-wise Content Clustering Algorithm (LCCAlg)
 Symbols Definition:
 D: is the depth of the content tree (CT).
 L0 ~ LD-1: denote the levels of the CT, descending from the top level to the lowest
 level.
 S0 ~ SD-1: denote the stages of the LCCG.
 T0 ~ TD-1: denote the similarity thresholds for clustering the CNs in levels L0 ~ LD-1,
             respectively.
 CTset: the set of content trees (CTs) with the same depth (D).
 CNset: the content nodes (CNs) in the same tree level (L).
 Input: CTset
 Output: an LCCG which holds the clustering results in every content tree level.

 Step 1: For i = LD-1 to L0, do steps 2 to 4
 Step 2: Single Level Clustering:
         2.1 CNset = the CNs ∈ CTset in Li.
         2.2 Run the Single Level Clustering Algorithm (SLCAlg) for CNset with
             threshold Ti.
 Step 3: Content Cluster Refining:




         3.1 Execute the following sub-steps (3.2-3.4) repeatedly until there is no dif-
             ference between two iterations.
         3.2 CNset = the nodes with cluster center (CC) ∈ the set of LCC-Nodes in Si.
         3.3 Run SLCAlg for CNset with threshold Ti.
         3.4 Store the resulting clusters in LCC-Nodes of the LCCG in stage Si.
 Step 4: Concept Generalizing:
         4.1: If i ≠ L0
             Then Perform the roll-up operation to compute the feature vectors of CNs
             from level Li-1
 Step 5: Output the LCCG


3.4 LCCG Maintenance Process

      As mentioned above, every SCORM Content Package (CP) is transformed into a
Content Tree (CT) with the representative feature vector to represent each teaching mate-
rial. Because the feature vector is computed based upon the Term Frequency - Inverse
Document Frequency (TF-IDF) weighting scheme [17-19], a set of all keywords, called
the KeywordSet, has to be integrated based on the activity metadata of items in the con-
tent package. However, when the learning content in the LOR is updated incrementally,
the keywords within a new SCORM content package may be only partially included, or
not included at all, in the KeywordSet, which causes their feature vectors to be inaccurate.
      Therefore, in this paper, we propose the LCCG Maintaining Algorithm (LCCG-
MAlg), which rebuilds the LCCG if necessary by monitoring the condition of each node
within the LCCG, to solve the above problem. In LCCG-MAlg, content nodes are
categorized into three types: 1) CNs, whose keywords are all in the KeywordSet;
2) Partial CNs (PCNs), some of whose keywords are in the KeywordSet; and 3) New
CNs (NCNs), whose keywords are not in the KeywordSet. During the Level-wise
Content Clustering Process, CNs and PCNs can be inserted into a suitable cluster stored
in an LCC-Node, but NCNs will be inserted into a new cluster stored in the
LCC-Nodenew. Moreover, we also define a cluster type, called a “Saturation” cluster, to
indicate that the number of PCNs is larger than that of CNs in the same cluster.
      To check when to recreate the KeywordSet and LCCG, we define the following
two rebuilding conditions:

(1) The number of clusters with the “Saturation Tag” is larger than the number of
    clusters without it.
(2) The number of new clusters stored in the LCC-Nodenew is larger than the number
    of original clusters.

      Therefore, for every stage in the LCCG, if either of the two rebuilding conditions is
satisfied, the KeywordSet and LCCG will be recreated. An example is given in Fig. 6 to
illustrate the LCCG Maintenance Process. CN A1 and CN A2 in the new CTA are inserted
into C1 in the LCC-Node and into C2 in the LCC-Nodenew, respectively. As a result of
inserting CN A1, C1 is marked with the Saturation Tag, and num(LCC-Node with Satura-
tion Tag) is larger than num(LCC-Node) in stage S1.




               Fig. 6. An example to illustrate the LCCG maintenance process.


Algorithm 4: LCCG Maintenance Algorithm
Symbols Definition:
KeywordSet: the set of keywords in the original LCCG used to create a feature vector.
CTset: the set of content trees (CTs) with the same depth D.
CN: a content node in CT whose keywords are all in the KeywordSet.
PCN: a partial content node in CT whose keywords are partially in the KeywordSet.
NCN: a new content node in CT whose keywords are not in the KeywordSet.
LCC-Nodenew: the LCC-Node that stores the NCNs.
Input: CTset
Output: A new LCCG.

Step 1: During the Content Tree Transforming Process, mark the content nodes in
         CTset as CN, PCN, or NCN.
Step 2: During the Level-wise Content Clustering Process,
        2.1 For each node in a CT
            If node = CN or PCN Then insert it into a suitable cluster stored in the
            LCC-Node.
            If node = NCN Then insert it into a cluster stored in the LCC-Nodenew.
        2.2 If num(PCN) > num(CN) in an LCC-Node
            Then mark the LCC-Node with the Saturation Tag.
Step 3: For every stage in LCCG,
        3.1 If (num(LCC-Node with Saturation Tag) > num(LCC-Node)) or
               (num(LCC-Nodenew) > num(LCC-Node))
            Then re-execute the Construction Phase in LCMS to create new
            KeywordSet and new LCCG.
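A minimal sketch of the rebuild check in Step 3 follows. The cluster record layout is assumed for illustration, and rebuilding condition 1 is read as saturated clusters outnumbering unsaturated ones:

```python
def needs_rebuild(stage_clusters, new_clusters):
    """Evaluate the two LCCG rebuilding conditions for one stage.

    stage_clusters: clusters in the stage's LCC-Nodes, each a dict like
                    {"num_cn": int, "num_pcn": int} (illustrative layout).
    new_clusters:   clusters stored in LCC-Nodenew (NCNs only).
    Returns True when either condition of Algorithm 4 holds, i.e. the
    KeywordSet and LCCG should be recreated.
    """
    saturated = sum(1 for c in stage_clusters if c["num_pcn"] > c["num_cn"])
    cond1 = saturated > len(stage_clusters) - saturated  # saturated clusters dominate
    cond2 = len(new_clusters) > len(stage_clusters)      # new clusters outnumber existing
    return cond1 or cond2
```

In the maintenance process, this check would run per stage, and a True result triggers re-execution of the Construction Phase.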


                        4. SEARCH PROCESS IN LCMS

    In this section, we describe the search process in LCMS, which includes SCORM
Metadata Searching and LCCG Content Searching, as shown on the right side of Fig. 3.
4.1 SCORM Metadata Searching

      As mentioned in section 2, SCORM compliant teaching materials include: 1)
Metadata, 2) Organizations, 3) Resources, and 4) (Sub) Manifest. Here, Metadata, which
refers to the IEEE Learning Object Metadata (LOM), describes the characteristics or
attributes of teaching materials. The LOM describes learning resources based on nine
categories of information: 1) General: general information about the learning resource;
2) LifeCycle: the history and current state of the learning resource, along with its evolu-
tion; 3) Meta-MetaData: specific information about the metadata record itself; 4) Tech-
nical: the technical requirements and characteristics of the learning resource; 5) Educa-
tional: the key educational or pedagogic characteristics of the learning resource; 6)
Rights: the intellectual property rights and conditions of use for the learning resource; 7)
Relations: the relationships between this resource and other targeted resources; 8) An-
notations: comments on the educational use of the learning resource; and 9) Classifica-
tion: classification criteria and the hierarchy of the learning resource.
      Therefore, as shown in Fig. 7, all of the desired teaching materials in the learning
object repository (LOR) can be retrieved using the associated SCORM metadata by first
addressing the related LCC-Nodes as entries of the LCCG. Then, according to the entry
LCC-Nodes, e.g., C0m, more precise learning objects (LOs) of retrieved teaching materi-
als can be further searched by means of LCCG Content Searching (described later),
based on the LCCG.




                              Fig. 7. The searching process in LCMS.
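SCORM Metadata Searching amounts to filtering teaching materials on LOM fields. The following sketch assumes a flat dictionary layout keyed by the nine LOM categories, which is an illustrative simplification of the actual LOM XML binding:

```python
def metadata_search(materials, **criteria):
    """Filter teaching materials by SCORM/LOM metadata fields.

    materials: list of dicts, e.g. {"general": {...}, "educational": {...}}.
    criteria:  LOM category -> {field: required value} constraints.
    Returns the materials whose metadata satisfy every constraint.
    """
    def matches(meta):
        return all(
            meta.get(category, {}).get(field) == value
            for category, fields in criteria.items()
            for field, value in fields.items()
        )
    return [m for m in materials if matches(m)]
```

For example, `metadata_search(materials, general={"language": "en"})` would retrieve the materials whose General.language field is English; the matched materials then serve as entry LCC-Nodes for LCCG Content Searching.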


4.2 LCCG Content Searching

     In the LCCG, every LCC-Node contains several similar content nodes (CNs) in dif-
ferent content trees (CTs) transformed from the content package of SCORM compliant
teaching materials. The content within the LCC-Nodes in a higher stage is more general
than that in a lower stage. Therefore, based upon the LCCG, users can get the desired
learning contents which contain not only general concepts but also specific concepts. The
interesting learning content can be retrieved by computing the similarity between the
cluster center (CC) stored in an LCC-Node and the query vector. If the similarity of the
LCC-Node satisfies the query threshold that users have defined, then the information
about the learning contents recorded in this LCC-Node and its child LCC-Nodes is of
interest to users. Moreover, we define the Near Similarity Criterion and use it to decide
when to stop the search process. Therefore, if the similarity between the query and an
LCC-Node in a higher stage satisfies the Near Similarity Criterion, then it is not neces-
sary to search its child LCC-Nodes, which may be too specific for users. The Near Simi-
larity Criterion is defined below.

Definition 4 Near Similarity Criterion
     Assume that the similarity threshold T for clustering is greater than the similarity
threshold S for searching. Because the similarity function is the cosine function, each
threshold can be represented as an angle. The angle of T is denoted as θT = cos−1(T), and
the angle of S is denoted as θS = cos−1(S). When the angle between the query vector and
the cluster center (CC) in an LCC-Node is less than θS − θT, we say that the LCC-Node
exhibits near similarity with the query. Near Similarity is illustrated in Fig. 8.




      Fig. 8. Near similarity according to the search threshold S and the clustering threshold T.


     In other words, the Near Similarity Criterion is satisfied when the similarity value
between the query vector and the cluster center (CC) in an LCC-Node is larger than
cos(θS − θT), so Near Similarity can be defined again in terms of the similarity thresholds
T and S:

     Near Similarity > cos(θS − θT) = cos θS cos θT + sin θS sin θT
                                    = S × T + √((1 − S²)(1 − T²)).
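The near-similarity bound can be computed directly from the two thresholds. Below is a small sketch of Definition 4; the function names are ours:

```python
import math

def near_similarity_bound(S, T):
    """cos(theta_S - theta_T) = S*T + sqrt((1 - S^2) * (1 - T^2)).

    S: search similarity threshold, T: clustering similarity threshold,
    where theta_S and theta_T are the corresponding arccos angles.
    """
    return S * T + math.sqrt((1 - S**2) * (1 - T**2))

def is_near_similar(cosine_sim, S, T):
    """An LCC-Node is near similar to the query when the cosine similarity
    between the query vector and its cluster center exceeds the bound."""
    return cosine_sim > near_similarity_bound(S, T)
```

With the thresholds used in the experiments (clustering T = 0.92, search S = 0.85), the bound is roughly 0.988, so an LCC-Node whose cluster center is at least that similar to the query need not have its child LCC-Nodes searched.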
    Based on the Near Similarity Criterion, the LCCG Content Searching Algorithm
(LCCG-CSAlg) is proposed as follows.

Algorithm 5: LCCG Content Searching Algorithm (LCCG-CSAlg)
Symbols Definition:
Q: the query vector, whose dimension is the same as that of the feature vector of a
    content node (CN).
D: the number of stages in an LCCG.
S0 ~ SD-1: the stages of an LCCG from the highest to the lowest.
ResultSet, DataSet, and NearSimilaritySet: sets of LCC-Nodes.
Input: The query vector Q, search threshold T, and destination stage SDES, where S0 ≤
        SDES ≤ SD-1.
Output: the ResultSet, which contains the set of similar clusters stored in LCC-Nodes.

Step 1: Initiate DataSet = φ and NearSimilaritySet = φ.
Step 2: For each stage Si ∈ LCCG, repeatedly execute the following steps until Si ≥
        SDES
        2.1 DataSet = DataSet ∪ LCC-Nodes in stage Si, and ResultSet = φ.
        2.2 For each Nj ∈ DataSet,
             {If Nj is near similar to Q
               Then insert Nj into the NearSimilaritySet.
               Else If (the similarity between Nj and Q) ≥ T
                    Then insert Nj into the ResultSet.}
        2.3 DataSet = ResultSet. //to search more precise LCC-Nodes in the next
                                         stage of the LCCG.
Step 3: Output the ResultSet = ResultSet ∪ NearSimilaritySet.
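Algorithm 5 can be sketched as follows. For simplicity, this version treats every LCC-Node of the next stage as a candidate rather than following parent/child links, and the cosine similarity of each node with Q is assumed to be precomputed; both are illustrative assumptions:

```python
def lccg_content_search(stages, query_sim, T, near_bound, dest_stage):
    """Sketch of the LCCG Content Searching Algorithm (LCCG-CSAlg).

    stages:     list of stages S0..SD-1, each a list of LCC-Node ids.
    query_sim:  node id -> cosine similarity with the query vector Q.
    T:          search threshold.
    near_bound: precomputed near-similarity bound cos(theta_S - theta_T).
    dest_stage: index of the destination stage S_DES.
    """
    near_similarity_set, result_set, data_set = [], [], []
    for i, stage in enumerate(stages):
        if i > dest_stage:
            break
        data_set = data_set + list(stage)   # Step 2.1: union with this stage
        result_set = []
        for n in data_set:                  # Step 2.2: classify each candidate
            if query_sim[n] > near_bound:   # near similar: stop expanding here
                near_similarity_set.append(n)
            elif query_sim[n] >= T:         # similar enough: expand next stage
                result_set.append(n)
        data_set = result_set               # Step 2.3: search more precise nodes
    return result_set + near_similarity_set # Step 3
```

A near-similar node is set aside immediately, mirroring the criterion's purpose of not descending into clusters that would only return overly specific results.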


          5. EXPERIMENT RESULTS AND IMPLEMENTATION

   To evaluate the performance of LCMS, several experiments using synthetic and real
SCORM compliant teaching materials were conducted.

5.1 Synthetic Teaching Materials and Evaluation Criterion

     First, we used synthetic teaching materials (TM) to evaluate the performance of
our proposed algorithms. All the synthetic teaching materials were generated using three
parameters: 1) V: the dimension of the feature vectors of the teaching materials; 2) D:
the depth of the content structure of the TM; and 3) B: the upper and lower bounds on
the number of sub-sections included in each section of the TM.
     In contrast to the Level-wise Content Clustering Algorithm (LCCAlg), the Single
Level Clustering Algorithm (SLCAlg) can be seen as a kind of traditional clustering al-
gorithm. To evaluate the performance of the LCCAlg, we compared it with the SLCAlg,
which uses the leaf nodes of CTs as inputs and does not apply the concept generation
process. The resulting cluster quality was evaluated using the F-measure [25], which
combines the precision and recall of the information retrieval process. The F-measure is
formulated as follows:
     F = 2 × P × R / (P + R),

where P and R are precision and recall, respectively. The range of the F-measure is [0, 1].
The higher the F-measure value, the better the clustering result.
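The F-measure above can be computed as:

```python
def f_measure(precision, recall):
    """F = 2PR / (P + R); returns 0.0 when both precision and recall are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, P = 1.0 and R = 0.5 give F ≈ 0.667, reflecting the harmonic-mean penalty on the weaker of the two scores.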

5.2 Experimental Results for Synthetic Teaching Materials

     Five hundred pieces of synthetic teaching materials with V = 15, D = 3, and B = [5,
10] were generated. The clustering thresholds of the LCCAlg and SLCAlg were both
0.92. After clustering without refinement, there were 101, 104, and 2,529 clusters gener-
ated from 500, 3,664, and 27,456 content nodes in levels L0, L1, and L2 of the content
trees (CTs), respectively. Then, 30 randomly generated queries were used to compare the
performance of the two clustering algorithms. The F-measure of each query with a
threshold of 0.85 is shown in Fig. 9. This experiment was performed on a PC with an
AMD Athlon 1.13 GHz processor and 512 MB of DDR RAM under the Windows XP
operating system.
     As shown in Fig. 9, the differences in the F-measures between the LCCAlg and
SLCAlg are small in most cases. Moreover, as shown in Fig. 10, the execution time
needed by the LCCG-CSAlg in the LCCAlg was far less than that needed by the SLCAlg.
Fig. 11 shows that clustering with clustering refinement could improve the accuracy of
the LCCG-CSAlg search results.


     Fig. 9. The F-measure of each query.
     Fig. 10. The execution time using the LCCG-CSAlg.


           Fig. 11. Comparison of SLCAlg and LCCAlg with clustering refinement.
5.3 Experiment with Real SCORM Compliant Teaching Materials

     As shown above, the LCMS scheme performed well when synthetic teaching mate-
rials were used. To evaluate the Satisfaction Degree of the search results, we also per-
formed an experiment using real SCORM compliant teaching materials. We imple-
mented a prototype LCMS system. As shown in Fig. 12 (1), users could first set
search conditions needed to retrieve the desired learning contents. The search results
along with hierarchical relationships are shown in Fig. 12 (2). Users could select a link to
display the desired learning contents, as shown in Fig. 12 (3).




                      Fig. 12. Screenshot of the prototype LCMS system.


     In this experiment, there were 100 articles on 5 specific topics: concept learning,
data mining, information retrieval, knowledge fusion, and intrusion detection. There
were 20 articles for each topic. Each article was transformed into SCORM compliant
teaching materials and then imported into the prototype LCMS system.
     In addition, 15 participants, who were graduate students studying in the Knowledge
Discovery and Engineering Lab of NCTU, used the prototype LCMS system to search for
the desired learning contents. Finally, they evaluated the performance of the LCMS sys-
tem in a questionnaire.
     The questionnaire included the following two questions: 1) accuracy degree: “Are
the learning objects those which you desired?”; 2) relevance degree: “Are the obtained
learning objects with different topics related to your query?” Based on the results shown
in Fig. 13, we can conclude that the LCMS scheme is workable and beneficial for users.



Fig. 13. The results for accuracy and relevance in the questionnaire (10 is the highest possible
         score).


                                    6. CONCLUSIONS

      In this paper, we have proposed a Level-wise Content Management Scheme, called
LCMS, which includes two phases: a Construction phase and a Search phase. To repre-
sent each teaching material, a tree-like structure, called a Content Tree (CT), is first ob-
tained from the content structure of a SCORM Content Package in the Construction
phase. According to the CTs, the Level-wise Content Clustering Algorithm (LCCAlg)
then creates a multistage graph of relationships among learning objects (LOs), called the
Level-wise Content Clustering Graph (LCCG). To incrementally update the learning
contents in the LOR, a maintenance strategy is applied to manage the LCCG. The Search
phase includes the LCCG Content Searching Algorithm (LCCG-CSAlg), which traverses
the LCCG and retrieves the desired learning contents, containing both general and spe-
cific LOs, according to queries received from users over the wired/wireless environment.
To evaluate the performance of our scheme, several experiments were conducted. The
results show that the LCMS is efficient and workable. In the near future, we will en-
hance the scalability and flexibility of the LCMS in order to provide web services based
on real SCORM teaching materials.


                                ACKNOWLEDGMENTS

     This research was partially supported by the National Science Council of the Re-
public of China under contract numbers NSC 93-2524-S-009-001 and NSC 93-2524-S-
009-002.
                                     REFERENCES

 1. Sharable Content Object Reference Model (SCORM) 2004, Advanced Distributed
    Learning, http://www.adlnet.org/.
 2. Instructional Management System (IMS) 2004, IMS Global Learning Consortium,
    http://www.imsproject.org/.
 3. IEEE Learning Technology Standards Committee (LTSC) 2004, IEEE LTSC| WG12.
    http://ltsc.ieee.org/wg12/.
 4. Aviation Industry CBT Committee (AICC) 2004, AICC − Aviation Industry CBT
    Committee, http://www.aicc.org.
 5. CETIS 2004, “ADL to make a repository SCORM,” The Centre for Educational
    Technology Interoperability Standards, http://www.cetis.ac.uk/content2/20040219153041.
 6. LSAL 2003, ‘CORDRA (Content Object Repository Discovery and Resolution/ Re-
    pository Architecture)’, Learning Systems Architecture Laboratory: Carnegie Mel-
    lon LSAL, http://www.lsal.cmu.edu/lsal/expertise/projects/cordra/.
 7. W3C (updated 9 Jun. 2004), World Wide Web Consortium, http://www.w3.org.
 8. eXtensible Markup Language (XML) (updated 26 Mar. 2004), Extensible Markup
    Language (XML). http://www.w3c.org/xml/.
 9. Alliance for Remote Instructional and Authoring and Distribution Networks for
    Europe (ARIADNE) 2004, ARIADNE: Foundation for the European Knowledge
    Pool, http://www.ariadne-eu.org.
10. E. R. Jones, 2004, Dr. Ed’s SCORM Course, http://www.scormcourse.jcasolutions.
    com/index.php.
11. S. K. Ko and Y. C. Choy, “A structured documents retrieval method supporting at-
    tribute-based structure information,” in Proceedings of the 2002 ACM Symposium
    on Applied Computing, 2002, pp. 668-674.
12. H. V. Leong, D. McLeod, A. Si, and S. M. T. Yau, “On supporting weakly-con-
    nected browsing in a mobile web environment,” in Proceedings of the 20th Interna-
    tional Conference on Distributed Computing Systems (ICDCS 2000), 2000, pp.
    538-546.
13. S. M. T. Yau, H. V. Leong, D. McLeod, and A. Si, “On multi-resolution document
    transmission in a mobile web,” ACM SIGMOD Record, Vol. 28, 1999, pp. 37-42.
14. E. Y. C. Wong, A. T. S. Chan, and H. V. Leong, “Efficient management of XML
    contents over wireless environment by Xstream,” in Proceedings of the 2004 ACM
    Symposium on Applied Computing, 2004, pp. 1122-1127.
15. V. V. Raghavan and S. K. M. Wong, “A critical analysis of vector space model in
    information retrieval,” Journal of the American Society for Information Science, Vol.
    37, 1986, pp. 279-287.
16. D. R. Cutting, D. R. Karger, J. O. Pedersen, and J. W. Tukey, “Scatter/gather: a
    cluster-based approach to browsing large document collections,” in Proceedings of
    the 15th Annual International ACM SIGIR Conference on Research and Develop-
    ment in Information Retrieval, 1992, pp. 318-329.
17. G. Salton and M. J. McGill, Introduction to Modern Information Retrieval, McGraw-
    Hill, New York, 1983.
18. H. Avancini, A. Lavelli, B. Magnini, F. Sebastiani, and R. Zanoli, “Expanding
    domain-specific lexicons by term categorization,” in Proceedings of ACM Sympo-
    sium on Applied Computing, 2003, pp. 793-797.
19. F. Debole and F. Sebastiani, “Supervised term weighting for automated text catego-
    rization,” in Proceedings of ACM Symposium on Applied Computing, 2003, pp.
    784-788.
20. C. Y. Wang, Y. C. Lei, P. C. Cheng, and S. S. Tseng, “A level-wise clustering algo-
    rithm on structured documents,” in Proceedings of NCS2003, Taiwan, 2003, pp.
    2213-2220.
21. Y. C. Lei, “A level-wise clustering algorithm on structured documents,” Master The-
    sis, Department of Computer Information Science, National Chiao Tung University,
    Taiwan, 2003.
22. T. Zhang, R. Ramakrishnan, and M. Livny, “BIRCH: an efficient data clustering
    method for very large databases,” in Proceedings of ACM-SIGMOD International
    Conference on Management of Data, 1996, pp. 103-114.
23. F. Sebastiani, “Machine learning in automated text categorization,” ACM Computing
    Surveys, Vol. 34, 2002, pp. 1-47.
24. W. C. Wong and A. Fu, “Incremental document clustering for web page classifica-
    tion,” in Proceedings of IEEE International Conference on Information Society in
    the 21st Century: Emerging Technologies and New Challenges (IS 2000), 2000.
25. B. Larsen and C. Aone, “Fast and effective text mining using linear-time document
    clustering,” in Proceedings of the 5th ACM SIGKDD International Conference on
    Knowledge Discovery and Data Mining, 1999, pp. 16-22.


                            Jun-Ming Su (蘇俊銘) was born in Kaohsiung, Taiwan, on
                       February 18, 1974. He graduated with a B.S. degree from the
                       Department of Information Engineering and Computer Science,
                      Feng Chia University, Taiwan in 1997. He received the M.S. de-
                      gree from the Institute of Computer Science, National Chung
                      Hsing University, Taiwan in 1999. Currently, he is a Ph.D. stu-
                      dent at National Chiao Tung University, Taiwan. His current re-
                      search interests include intelligent tutoring system, knowledge
                      engineering, expert systems, and data mining, etc.



                           Shian-Shyong Tseng (曾憲雄) received his Ph.D. degree in
                      Computer Engineering from the National Chiao Tung University
                      in 1984. Since August 1983, he has been on the faculty of the
                      Department of Computer and Information Science at National
                      Chiao Tung University, and is currently a Professor there. From
                       1988 to 1992, he was the Director of the Computer Center at
                       National Chiao Tung University. From 1991 to 1992 and 1996 to
                       1998, he acted as the Chairman of the Department of Computer and
Information Science. From 1992 to 1996, he was the Director of the Computer Center at
Ministry of Education and the Chairman of Taiwan Academic Network (TANet) man-
agement committee. In December 1999, he founded Taiwan Network Information Center
(TWNIC) and is now the Chairman of the board of directors of TWNIC. Since 2002, he
has been the President of the SIP/ENUM Forum Taiwan. In July 2003, he organized the
committee of the Taiwan Internet Content Rating Foundation and is now its Chair. Cur-
rently, he is also the Dean of the College of Computer Science at Asia University. His
current research interests include parallel processing, expert systems, computer algo-
rithms, and Internet-based applications.


                             Ching-Yao Wang (王慶堯) received a B.S. degree in In-
                        formation Management from Chang Jung University in 1998 and
                        an M.S. degree in Information Engineering from I-Shou Univer-
                        sity in 2000. He is currently a Ph.D. student in the Institute of
                        Computer and Information Science at National Chiao Tung Uni-
                        versity. His current research interests include data mining, data
                        warehousing, expert systems, artificial intelligence, soft comput-
                        ing, and Internet applications.




                             Ying-Chieh Lei (雷穎傑) was born in Taipei, Taiwan, on
                        September 1, 1979. He graduated with B.S. and M.S. degrees
                        from the Department of Computer and Information Science,
                        National Chiao Tung University, Taiwan in 2001 and 2003,
                        respectively. Currently, he is a research assistant in Computer &
                        Communications Research Laboratories, Industrial Technology
                        Research Institute (ITRI), Taiwan. His current research interests
                        include e-learning, data mining, etc.




                             Yu-Chang Sung (宋昱璋) was born in Taipei, Taiwan, on
                        May 16, 1981. He graduated with a B.S. degree from the Depart-
                        ment of Computer and Information Science, National Chiao Tung
                        University, Taiwan in 2003. Currently, he is a Master student at
                        National Chiao Tung University, Taiwan. His current research
                        interests include intelligent tutoring system, knowledge engi-
                        neering, expert systems, and data mining, etc.
      Wen-Nung Tsai (蔡文能) received his B.S. degree from
 National Chiao Tung University in 1977, and his M.S. degree in
 Computer Science from National Chiao Tung University in 1979.
 He was in the Ph.D. program in Computer Science at Northwest-
 ern University between 1987 and 1990. He is now an Associate
 Professor in the Department of Computer Science and Informa-
 tion Engineering. His current research interests include mobile
 computing, distributed computing, network security, operating
 systems, and distance learning.