FINAL PROJECT

                    INTRODUCTION: PROJECT DESCRIPTION

E-learning, as defined in Clark and Mayer's e-Learning and the Science of Instruction, is
instruction delivered on a computer by way of CD-ROM, Internet, or intranet that is designed
to support individual learning or organizational performance goals. (Clark & Mayer, p. 7)
They also describe eLearning by the following features: content relevant to the learning
objective; instructional methods, such as examples and practice, that help learning; and
media elements, such as words and pictures, that deliver the content and methods. It may be
led by an instructor or designed for self-paced individual study, and it builds new knowledge
and skills linked to individual learning goals or to improved organizational
performance. (Clark & Mayer, p. 10) Their definition indicates that the goal of eLearning,
whether asynchronous or synchronous, is to build job-transferable knowledge and skills
linked to organizational performance, or to help individuals achieve personal learning goals,
with the emphasis on instructional programs that are built or purchased for workforce
learning. Three instructional methods unique to eLearning are (1) integration of
collaboration with self-study, (2) dynamic adjustment of instruction based on learning, and
(3) use of simulations and games.

While there is an array of topics that can be taught via eLearning, the five types of content
that may be delivered via eLearning are facts, concepts, processes, procedures, and
strategic principles. Facts are specific and unique data, for example, the history of an
organization. Concepts are categories that include multiple examples, for instance, calculus
problems. Processes are flows of events or activities, for example, how to log into a
computer system. Procedures are tasks performed with step-by-step actions, for example,
how to complete a sales report. Lastly, strategic principles are tasks performed by adapting
guidelines, for example, how to close a sale. (Clark & Mayer, p. 15)

There is no true answer to determine whether eLearning is necessarily better than
classroom learning. From all the media comparison research, it has been determined that it
is not the delivery medium, but rather the instructional methods that cause learning.
According to Clark and Mayer, instructional methods support the learning of the content.
Instructional methods include techniques such as examples, practice exercises and
feedback. They define media elements as the audio and visual techniques used to present
words and illustrations that include text, narration, music, still graphics, photographs and
animations. One of their fundamental tenets is that, to be effective, instructional methods
and the media elements that deliver them must guide learners to effectively process and
assimilate new knowledge and skills. (Clark & Mayer, p. 16)

There are three metaphors of learning that learning psychologists have developed that
summarize how learning happens in any instructional setting. The first is called the
response-strengthening view of learning. In this metaphor the learner is a passive recipient
of rewards or punishments, and the teacher is a dispenser of rewards (which serve to
strengthen a response) and punishments (which serve to weaken a response). This type of
learning is also referred to as a directive instructional architecture. A typical instructional
method is to present simple questions to learners, and when they respond tell them
whether they are right or wrong. The main criticism of this metaphor is that it is
incomplete, as it does not explain meaningful learning. The second metaphor is called the
information-acquisition view of learning, in which the learner's job is to receive
information and the instructor's job is to present it. Also referred to as a receptive
instructional architecture, or the empty-vessel/sponge view of learning, it too is incomplete
in that it conflicts with what is known about how people learn.

The last metaphor, and the one this project draws on throughout, is the knowledge-
construction view. According to the knowledge-construction view, people are not passive
recipients of information, but rather are active sense makers. They engage in active
cognitive processing during learning, including attending to relevant information, mentally
organizing it into a coherent structure, and integrating it with what they already know.
(Clark & Mayer, pp. 33-34)

The challenge for the learner is to carry out these processes within the constraints of
several limits on how much processing can occur in each channel at one time. There are
three kinds of demands on cognitive processing capacity:
         i. Extraneous processing, which is cognitive processing that does not support
            the instructional objective and is created by poor instructional layout;
        ii. Essential processing, which is cognitive processing aimed at mentally
            representing the core material and is created by the inherent complexity of
            the material; and
       iii. Generative processing, which is cognitive processing aimed at deeper
            understanding of the core material and is created by the motivation of the
            learner to make sense of the material.
The challenge for instructional designers is to create learning environments that
minimize extraneous cognitive processing, manage essential processing, and foster
generative processing. (Clark & Mayer, pp. 35-37)

This project is meant to be read in conjunction with the PROJECT PowerPoint Presentation.
The remaining parts of this paper will demonstrate the following topics:

I) Multimedia Theories & Principles based on cognitive processing
     a) Presentations, including PowerPoint
     b) Multimedia and Contiguity Principles
     c) Modality and Redundancy Principles
     d) Coherence and Personalization Principles
II) Design Justification
     a) Communication Tools & Web 2.0
     b) Learning Management Systems
     c) Future Media in eLearning
III) Final Reflections on eLearning
                MULTIMEDIA THEORIES & PRINCIPLES

                        ABSORB ACTIVITY: PRESENTATIONS
Absorb activities inform and inspire. These activities enable motivated learners to obtain
crucial, up-to-date information they need to do their jobs or to further their learning. In
absorb activities learners read, listen and watch. Absorb activities may sound passive, but
they can be an active component of learning.

Of the three types of activities (absorb, do and connect), absorb activities are the closest to
pure information. Absorb activities usually consist of information and the actions learners
take to extract and comprehend knowledge from that information. In absorb activities, the
learner may be physically passive yet mentally active: perceiving, processing, consolidating,
and judging the information.

Because absorb activities provide information efficiently, they are ideal when learners need
a little information. They are especially helpful when just updating current knowledge and
skills. Learners who understand the fundamentals of a field can increase their knowledge
by absorbing new details that elaborate a theory, concept, or principle. (William Horton,
E-Learning by Design, pp. 47-48)

In absorb activities it is the content that is in control. The learner absorbs some of the
knowledge offered by the content. (Horton pg. 48) There are several types of absorb
activities that have established themselves in conventional education. They include:

       Presentations
       Storytelling
       Readings
       Field Trips

Presentations supply information in a clear, well-organized, logical sequence. Whether
experienced live, via recording, or online, presentations let students learn by watching and
listening. Presentations convey information and demonstrate procedures and behavior in a
straightforward flow of experiences.

Presentations allow the designer to control the sequence of learning experiences. (Horton,
p. 49) Their primary pathway is linear, with the designer controlling the order of learning
experiences; in recorded presentations, learners can control the pace of the presentation.

There are several familiar models of presentations that include:
    Slide shows
    Physical demonstrations
    Software demonstrations
    Informational films
    Dramas
    Discussions
    Podcasts
A slide show, such as the one to be read in conjunction with this report, includes
informative graphics and just enough text to convey the main point. A good slide show
makes each point on a single slide. Many use a recorded voice to narrate the slides.
Take for instance the slide below. In a presentation absorb activity, learners should be able
to extract and comprehend knowledge. This particular presentation makes it impossible for
learners to perceive, process, consolidate, consider, or judge information. The content
should be in control; however, because of the amount of text, only two words describe
this presentation: INFORMATION OVERLOAD. This particular slide is too "wordy,"
cramming too much information into one slide. In addition, the color scheme and fonts are
so bland that the audience is sure to lose interest.




Now take for instance the next slide. Using the same concept, the main points were pulled
from the narrative and condensed into two main points. Based on the Redundancy
Principle (which will be discussed later), the recorded voice-over normally would not
repeat the narrative of the slide word-for-word. In spite of this, this particular slide was
created with the audience in mind. The audience consists of non-native English speakers
who need extra reinforcement in both auditory and text modes. In this case, the redundant
text provides a supplemental learning mode.
                       VIOLATES: MULTIMEDIA PRINCIPLE
According to Clark & Mayer, cognitive theory and research evidence recommend that
eLearning courses include words and graphics, rather than words alone. The term
"multimedia presentation" refers to any presentation that contains both words and
graphics. By encouraging learners to engage in active learning, mentally representing the
material in words and in pictures, multimedia presentations also allow for a mental
connection between the pictorial and verbal representations.

While decorative graphics serve to decorate the page without enhancing the message of the
lesson, I believe the slide below is attempting to be representational.




  http://www.presentationzen.com/presentationzen/2009/08/10-ways-to-use-images-poorly.html

Trying to illustrate the appearance of an object, in this case a handshake in front of the
globe, this representation misses the mark if it is referencing the fertility rate in Japan.
Even if the topic were "international partnership," the image is still a cliché. The facts
and concept are unclear. Learning is facilitated when the graphics and text work together to
communicate the instructional message; pairing irrelevant graphics with text does not
foster deeper cognitive processing in the learner.
                        VIOLATES CONTIGUITY PRINCIPLE I
            SEPARATION OF TEXT & GRAPHICS ON SCROLLING SCREEN

The psychological advantage of integrating text and graphics results from a reduced need to
search for which parts of a graphic correspond to which words, which allows the learner to
devote limited cognitive resources to understanding the material. Focusing on the idea that
on-screen words should be placed near the parts of the on-screen graphic to which they
refer, Contiguity Principle I involves the need to coordinate printed words and graphics.
Clark and Mayer's principle recommends that corresponding graphics and printed words be
placed near each other on the screen (that is, contiguous in space).

In these particular slides, Contiguity Principle I is violated in several ways.




    http://www.loyola.edu/edudept/PowerfulPowerPoint/ExamplesFromRealPeople.html

Consider how the on-screen text is integrated with the on-screen graphic. In the case of the
first slide, the printed words (the actual description) of the dulcimer are nowhere near the
on-screen graphic of the instrument. Supposedly you must click on the word "description"
for the slide to advance, or scroll down to the next slide where the description is given. This
is a direct violation of the principle, which indicates that printed words should be placed
next to the part of the graphic to which they refer.

One alternative for the dilemma of having too much text to fit on the screen is to have
the described action or state appear as a small pop-up message when the mouse touches
the corresponding portion of the graphic. The "mouse-over" or "rollover" is transient:
the text box disappears when the cursor moves to a different location on the screen.
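
To make this alternative concrete, the following is a minimal sketch, written in TypeScript
for a web-based lesson, of how such a rollover might behave. The element IDs, the
description text, and the assumption of an absolutely positioned tooltip element are
hypothetical illustrations, not details of the slides discussed here.

    // Minimal rollover sketch: show a small text box while the cursor is over
    // a hotspot on a graphic, and hide it again when the cursor moves away.
    // Assumes the page contains the two elements below and that the tooltip
    // is styled with "position: absolute". IDs and text are hypothetical.
    const hotspot = document.getElementById("dulcimer-body")!;
    const tooltip = document.getElementById("tooltip")!;

    hotspot.addEventListener("mouseover", (event: MouseEvent) => {
      tooltip.textContent = "The body of the dulcimer amplifies the strings.";
      // Place the description next to the part of the graphic it refers to,
      // as Contiguity Principle I recommends.
      tooltip.style.left = `${event.pageX + 10}px`;
      tooltip.style.top = `${event.pageY + 10}px`;
      tooltip.style.display = "block";
    });

    hotspot.addEventListener("mouseout", () => {
      // The transient text box disappears once the cursor leaves the hotspot.
      tooltip.style.display = "none";
    });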

Other violations that appear in this presentation also include:

The description text box does not link to the description on the next page when clicked.
Instead it unexpectedly plays a tune, diverting the learner's limited cognitive resources
away from understanding the message. The learner experiences a heavier load on working
memory, leaving less capacity for deep learning. In addition, without any instruction, the
learner must click the graphic of the dulcimer in order to advance to the next slide, where
the description is finally given. The direction to complete the exercise is completely
missing from the application screen.
                      VIOLATES: CONTIGUITY PRINCIPLE I
           SEPARATION OF FEEDBACK FROM QUESTION OR RESPONSE

According to Clark & Mayer, another violation of Contiguity Principle I occurs when
feedback is placed on a screen separate from the question or from the learner's answer.
This usually requires the learner to look back and forth between the question and the
feedback, adding to the cognitive load of learning.




    http://www.loyola.edu/edudept/PowerfulPowerPoint/ExamplesFromRealPeople.html

While the first screen does a good job of presenting examples, it violates the rule altogether,
as the answer to the question "How many sides and angles does a pentagon have?" is never
given; the learner therefore does not receive immediate feedback, increasing the cognitive
load. The revised slide uses the same examples, but the last animation provides the correct
answer, allowing a quick and easy comparison and thus reducing the learner's cognitive load.
                            THE MODALITY PRINCIPLE

Based on cognitive theory and research evidence about human brain function, it is known
that the human brain can only process information at a certain rate. According to the
cognitive theory of learning, people have separate information processing channels for
visual/pictorial processing and for auditory/verbal processing. (Clark & Mayer, p. 105) One
might expect these competing sources of information to overwhelm or overload the learner;
however, Baddeley & Hitch (1974) suggest that working memory has two somewhat
independent subcomponents that tend to work in parallel, thus allowing the simultaneous
processing of information received visually and via auditory means. With that in mind, it
has been shown that a combination of auditory and visual stimuli has the greatest effect on
the learner's retention of the presented information.

Explaining the modality effect from an information processing/cognitive load perspective,
Richard Mayer and his associates have shown that learners are better able to transfer their
learning given multimodal instruction. Therefore, based on research evidence presented in
Clark & Mayer's e-Learning and the Science of Instruction, it is recommended that the
designer put words in spoken form rather than printed form whenever the graphic is the
focus of the words and both are presented simultaneously. It was repeatedly found that
students given multimedia instruction with animation and narration were significantly
better at applying what they had learned than students who received mono-media
(visual-only) instruction.
(http://www.bookrags.com/wiki/Multimedia_learning)

The rationale for their recommendation is that learners may experience an overload of their
visual/pictorial channel when they must simultaneously process graphics and the printed
words that refer to them. Thus, even though the information is presented, learners may not
be able to adequately attend to all of it because their visual channels become overloaded.
Being able to attend to relevant words and pictures is a crucial first step in learning, and
eLearning courses should be designed to minimize the chances of overloading learners'
visual/pictorial channel.

This overload can be reduced on the visual channel by presenting the verbal explanation as
speech. The verbal material enters the cognitive system through the ears and is processed
in the auditory/verbal channel. At the same time, the graphics enter the cognitive system
through the eyes and are processed in the visual/pictorial channel. In this way neither
channel is overloaded, but both words and pictures are processed. Therefore, when visuals
are relatively complex, using audio allows the learner to focus on the visual while listening
to the explanation. So in essence, the modality effect is that people learn more deeply
from multimedia lessons when words explaining concurrent graphics are presented as
speech rather than as on-screen text. (Clark & Mayer, pp. 106-113)
          http://edtech2.boisestate.edu/brownj/edtech513/et513_modred.html

For example, in the PowerPoint slide above, a graphic is given, as well as detailed on-screen
text explaining the graphic. However, based on the Modality Principle, students are more
likely to learn when verbal information is presented in an auditory manner, such as speech,
rather than visually as on-screen text. Modality allows the designer to verbally illustrate
what the graphic is showing instead of overloading the visual channel with text.




          http://edtech2.boisestate.edu/brownj/edtech513/et513_modred.html

As seen in the revised slide above, in order to align it with the Modality Principle and to
prevent overload of the visual channel, one alternative is to remove the unnecessary text
from the screen and narrate the content in a summative or explanative manner, thus
allowing the audience to use both sensory memory channels (visual and auditory). This
revision meets the overall goal of the modality principle by reducing the cognitive load in
the learner's visual/pictorial channel and off-loading some of the cognitive processing onto
the auditory/verbal channel. (Clark & Mayer, p. 113)
                          THE REDUNDANCY PRINCIPLE
The use of redundant on-screen text grew out of the belief held by some multimedia
developers that people are either visual or auditory learners. Therefore, in order to
accommodate both learning styles, these developers began to combine redundant text with
identical audio narration to describe on-screen graphics. This technique is considered
superfluous because the printed text (on-screen text) is redundant with the spoken text
(narration or audio).

While learners engaged in multimedia instruction do utilize two channels (pictorial and
verbal) for processing information, as explained in the previous section, these channels
have a limited capacity that can be overloaded quite easily when faced with excessive
information such as redundant text.
(http://www.edweb.sdsu.edu/eet/articles/redundant.htm) The psychological advantage of
presenting words in audio alone is that you avoid overloading the visual channel of
working memory. Clark and Mayer developed the Redundancy Principle to help eLearning
designers understand the adverse effects of duplicate on-screen text on the learning
process.

According to Clark and Mayer’s cognitive theory of multimedia learning (a) all people have
separate channels for processing verbal and pictorial material, (b) each channel is limited in
the amount of processing that can take place at one time, and (c) learners actively attempt
to build pictorial and verbal models from the presented material and build connections
between them. Based on these assumptions, adding redundant on-screen text to a
multimedia presentation could overload the visual channel. In addition, learners may waste
cognitive resources trying to compare the printed words with the spoken words as they are
presented. This wasted cognitive processing, known as extraneous cognitive processing,
means that if learners use their cognitive capacity to reconcile printed and spoken text,
they cannot use it to make sense of the presentation. (Clark & Mayer, pp. 121-123)




 http://www.authorstream.com/Presentation/bradloripender-157821-Pender-513Mod-Redund-Mini-Project-2-Applying-Modality-Redundancy-Principle-Learning-Example-1-Special-Situations-Adding-Education-ppt-powerpoint/

For example, the slide above presents on-screen text, a graphic, and word-for-word
narration of the text displayed on screen. By including redundant text in this multimedia
presentation, visual focus is split as the learner is forced to process multiple pieces of
information through the visual channel. This split focus, leading to what is called the split-
attention effect, not only overloads the visual component of working memory but hinders
the learner's ability to process the information efficiently, build appropriate mental models,
and ultimately make the conceptual connections intended by the instructional media.
(http://www.edweb.sdsu.edu/eet/articles/redundant.htm) Because of the limited capacity
of the human information processing system, it is better to present less material
(the graphic with just corresponding narration) than more material (the graphic with
corresponding narration and printed on-screen text).

               EXCEPTIONS TO THE REDUNDANCY PRINCIPLE
There are, however, special situations that will not overload the learner's visual
information processing system, such as when:

      There is a complete absence of any on-screen pictorial information;
      The learner has plenty of time to process the multimedia elements (text and
       graphics) or when the information is presented sequentially; or
      The relative linguistic complexity of the audio might be difficult for the learner to
       understand, as with foreign language learning or certain learning disabilities.




Take for instance the slide presented above. There is a visual, on-screen text, and an
accompanying simultaneous narration. However, since this slide was developed knowing
that some in the audience have a limited grasp of the English language, using multiple
modes of presentation can be helpful, especially because the spoken material may be hard
to process. Printing the unfamiliar definition on the screen may actually reduce cognitive
processing because the learner does not need to grapple with decoding the spoken words.
(Clark & Mayer, pp. 125-127) In this case, the redundant text provides a supplemental
learning mode.
                           THE COHERENCE PRINCIPLE:
              How using gratuitous visuals, text, and sounds can hurt learning.

The Coherence Principle is important because it is commonly violated, is straightforward to
apply, and can have a strong impact on learning. (Clark & Mayer; p. 133)

   First version of the Coherence Principle: Avoid e-Lessons with Extraneous Audio

Clark & Mayer recommend that designers avoid eLearning courseware that includes
extraneous sounds in the form of background music or environmental sounds because
background music and sounds may overload working memory. They are most dangerous in
situations in which the learner may experience heavy cognitive load, for example, when the
material is unfamiliar, when the material is presented at a rapid rate, or when the rate of
presentation is not under learner control. It is recommended to avoid extraneous sounds,
especially in situations in which the learner is likely to experience heavy cognitive
processing demands, as the addition of extra sounds in the form of music is likely to depress
learning. (pp. 136-137)

The theoretical rationale against adding music and sounds to multimedia presentations is
based on the cognitive theory of multimedia learning, which assumes that working memory
capacity is highly limited. Background sounds can overload and disrupt the cognitive
system because the narration and the extraneous sounds must compete for limited cognitive
resources in the auditory channel. The cognitive theory of multimedia learning predicts
that students will learn more deeply from multimedia presentations that do not contain
extraneous sounds and music than from multimedia presentations that do; therefore, the
theory indicates that adding background music does not improve learning and in fact
substantially hurts it. (p. 138) Research has found that when students received both
background music and environmental sounds, their retention and transfer performance
was much worse than when students received neither. (p. 139)

   The second version of the Coherence Principle: Avoid e-Lessons with Extraneous Graphics

Clark & Mayer also recommend avoiding extraneous pictures, as they interfere with
the learner's attempts to make sense of the presented materials; extraneous graphics can
be distracting and disruptive of the learning process. Research has shown that when
pictures are used only to decorate the page or screen, they are not likely to improve
learning. (p. 141) These types of pictures can interfere with the process of sense-making
because learners have a limited cognitive capacity for processing incoming material.

There are three ways extraneous pictures can interfere with learning:
           1. Distraction: By guiding the learner’s limited attention away from the
               relevant material and toward the irrelevant material;
           2. Disruption: By preventing the learner from building appropriate links
               among pieces of relevant material because pieces of irrelevant material are
               in the way; and
           3. Seduction: By priming inappropriate existing knowledge, which is then used
               to organize the incoming content. (p. 142)
Consequently, the cognitive theory of multimedia learning predicts that students will learn
more deeply from multimedia presentations that do not contain extraneous photos,
illustrations, or video. Take for instance the example below. While the extraneous image of
Bon Jovi on the "Slippery When Wet" road sign is interesting, research has shown that
adding such interesting but irrelevant graphics can result in less learning from a multimedia
presentation in several ways:




        http://mycacpro.com/filesforjordan/GlobalDev/webroot/prisonCulture/resources/CoherencePrinciple.pdf

            o   It tends to hurt student performance on subsequent transfer tests. (p. 144)
            o   Adding illustrations to scientific text hurts learning, particularly for students
                who have difficulty processing information.
            o   Low-ability students are more easily overloaded by extraneous material;
                they spend more time looking at irrelevant illustrations than high-ability
                students do, indicating that graphics can be particularly distracting for
                learners with low ability.

Another important implication of the coherence principle is that illustrations should not be
embellished to make them look more realistic. Simple line drawings can be more effective
than detailed color drawings or photos. Butcher’s 2006 study showed that students who
had learned with text and simple illustrations performed better than those who had learned
with text and detailed drawings. Students who studied text and simple illustrations made
more integration inferences than did students who studied text and complex illustrations,
showing that a simplified visual actually promotes more mental processing by learners,
who fill in the visual gaps to understand the meaning of the diagram. (p. 145)

    The third version of the Coherence Principle: Avoid e-Lessons with Extraneous Words

Lastly, Clark & Mayer recommend that designers avoid adding extraneous words to lessons
because doing so results in poorer learning. Designers should stick to basic and concise
descriptions of the content, as this helps implement the modality principle effectively. (p. 146)

As with extraneous sounds and graphics, adding words for interest, elaboration, or
technical depth can interfere with the learning process. (p. 148) Take for example the slide
below. The addition of irrelevant material does not help learning; in fact, it can even hurt it.
The designer should have asked whether the additional verbiage was really necessary to
achieve the instructional objectives. Since it does not help, it should have been omitted.
              http://www.ntc.blm.gov/satnet/powerpoint/images/05_5_visual_a.jpg

However, the challenge for instructional professionals is to stimulate interest without
adding extraneous material that distracts from the cognitive objective of the lesson. (Clark &
Mayer; p. 151) As designers we need to make a distinction between entertainment and
learning. This is not to say that an effective eLearning course should not be interesting.
Mayer discusses prior distinctions between cognitive interest and emotional interest.
Cognitive interest stems from materials that promote understanding of the content
presented, in other words, from materials that optimize learning. Emotional interest
comes from the addition of extraneous materials that have been shown to depress learning.
Our goal should be to promote cognitive interest and avoid emotional interest in situations
that require cognitive learning processes.
(http://faculty.washington.edu/farkas/TC510/ClarkMultimediaPrinciples(Mayer).pdf)

In summary, the coherence principle essentially tells us that "less is more" when learning is
the primary goal. It suggests that visuals or text not essential to the instructional
explanation should be avoided. It also suggests that designers should not add music to
instructional segments. Lastly, it suggests that lean text that gets to the point is better than
lengthy, elaborated text.
(http://faculty.washington.edu/farkas/TC510/ClarkMultimediaPrinciples(Mayer).pdf)
                       THE PERSONALIZATION PRINCIPLE:
      How the use of a conversational tone and pedagogical agents can increase learning.

Experiments summarized by Byron Reeves and Clifford Nass in their book, The Media
Equation, showed that people responded to computers following social conventions that
apply when responding to other people. They discovered that deeply ingrained conventions
of social interaction tend to exert themselves unconsciously in human-computer
interactions. These findings prompted a series of experiments that show that learning is
better when the learner is socially engaged in a lesson either via conversational language or
by an informal learning agent.
(http://www.widged.com/wiki/doku.php?id=en:academe:education:e-learning:principles:personalisation)

The first version of the Personalization Principle dictates that designers use an informal,
conversational style to introduce the lesson, as it resembles human-to-human conversation.
Research has shown that rather than using the passive voice, a designer should use a
second-person active voice. (Clark & Mayer; p. 160) The reasoning is that using a
conversational style in a multimedia presentation conveys to learners the idea that they
should work hard to understand what their conversational partner (i.e., the course narrator)
is saying to them. Expressing information in conversational style can be a way to prime
appropriate cognitive processing in the learner.

According to the cognitive theories of multimedia communication, instruction containing
social cues, such as conversational style, activates a sense of social presence in the learner.
(p. 162) The feeling of social presence, in turn, causes the learner to engage in deeper
cognitive processing during learning, which results in a better learning outcome. In essence,
it was found that students who learned with personalized text (using the words "you" and
"yours") subsequently performed better on transfer tests than students who learned with
formal text. (p. 163)

Along with using the second-person active voice, research by Reeves and Nass indicated
several other ways to promote personalization. For instance, a designer can promote
personalization through voice quality. These findings, referred to as the voice principle,
state that people learn better from narration with a human voice than from narration with
a machine voice. There is also some preliminary evidence that people learn better from a
human voice with a standard accent rather than a foreign accent. Lastly, there is some
evidence that both men and women prefer to learn from female voices for female-
stereotyped subjects such as human relationships, and from male voices for male-
stereotyped subjects such as technology.

A related implication of the Personalization Principle is that on-screen agents should be
polite. (p. 166) Research has determined that student learning is influenced not only by
what an on-screen agent says but also by how it is said. Using polite conversational language
maximizes the benefits of social presence in learning; students who learned with a polite
agent performed better than those who learned with a direct agent. (p. 167)

However, these results should not be taken to mean that personalization is always a useful
idea, as there are cases in which it can be overdone. In applying the Personalization
Principle, it is always useful to consider the audience and the cognitive consequences of the
script. (p. 165) Therefore, the challenge for instructional professionals is to avoid overusing
conversational style to the point that it becomes distracting to the learner. (p. 163)

The second version of the Personalization Principle involves using an effective on-screen
coach to promote learning. On-screen coaches, otherwise known as pedagogical agents, are
characters who help guide the learning process during an eLearning episode. Used to
provide instruction rather than for entertainment purposes, agents can be represented
visually as cartoon-like characters or virtual reality avatars; they can be represented
verbally through machine-simulated voices, human recorded voices or as artificial
characters using animation and computer-generated voices. (p. 168) While some computer
scientists are working to make agents very realistic, a series of studies has found that the
appearance of the agent makes little difference in learning; a cartoon or a human works just
as well. However, congruent with the Modality and Personalization Principles, learning is
better when the agent's words are presented in audio rather than in text, in a conversational
style rather than a formal style, and with human-like rather than machine-like articulation.
(http://www.widged.com/wiki/doku.php?id=en:academe:education:e-learning:principles:personalisation
and Clark & Mayer; p. 172)

Considered “giving a voice to the text,” the last personalization principle involves making
the author visible in order to promote learning. (p. 173) Applied in both synchronous and
asynchronous forms of eLearning, the main motivation for using a visible author style is to
promote learner motivation. Research suggests that when authors are visible, students
might see the author as “a personal guide through an otherwise difficult situation.”
Consistent with Mayer's extension of the cognitive theory of multimedia learning, the visible
author technique can help prime a sense of social presence in the learner, a feeling of being
in a conversation with the author, thereby encouraging the learner to engage in deeper
cognitive processing during learning and leading to better learning outcomes. (p. 176)

Based on the above explanations, the first example below shows the use of formal language
without an on-screen agent. Next to it is the revision, using a conversational style, informal
language, and an appropriate, visible on-screen agent that gives a voice to the text.




In conclusion, we know that learning is based on the engagement of the learner with the
content of the instruction. Even though learners know that computers are inanimate, the
use of conversational language, either directly in the program or via an agent, seems to
stimulate deeply ingrained, unconscious social conventions that lead to deeper learning.
(http://www.widged.com/wiki/doku.php?id=en:academe:education:e-learning:principles:personalisation)
                              COMMUNICATION TOOLS
ELearning is instruction that is delivered in an electronic format. ELearning may be
synchronous, meaning that it must be taken at a specific time with an instructor and/or
other learners, or it may be asynchronous, meaning that it is a self-directed learning event
that learners can take at their convenience.
(http://www.leanforward.com/elearning_development/elearning_reference.html)
Synchronous training occurs at a specific time, when students and teachers converge online
for a learning session led by an instructor. Asynchronous training, on the other hand, uses
materials that are made available through the Internet and are ready to be accessed and
used by students at any time.

Both synchronous and asynchronous ELearning offer the following advantages:

      Scalability – ELearning enables organizations to train large populations, quickly
       and easily.
      Accessibility – ELearning enables users to be geographically separate from the
       instructor. This often empowers more users to access the training while also
       providing more timely access to the training.
      Reduced travel costs – ELearning enables users to learn via their desktops,
       eliminating the need for students and/or instructors to travel.
      Simplified training documentation – in many cases, ELearning provides for
       automatic documentation of training activity.
       (http://www.leanforward.com/elearning_development/elearning_reference.html)

As mentioned previously, synchronous learning occurs in real time via an instructor-led
online learning event in which all participants are logged on at the same time and
communicate directly with each other. In this virtual classroom setting, the instructor
maintains control of the class, with the ability to "call on" participants.
(http://www.uen.org/core/edtech/glossary.shtml#S)

Some common forms of Synchronous Learning include:

      Audio and Video Conferencing
      Chat
      Shared Whiteboard
      Application Sharing
      Instant Messaging

The main advantage of asynchronous learning is that content delivery is convenient for the
learner, dictated by the learner's preferred pacing and learning needs; it does not require
the presence of a teacher. The challenge for this type of learning is that the content must be
engaging and provide high-quality information. It should be complete and interesting so
that it can stand alone to let students master what they need to learn.
(http://www.articlesfactory.com/articles/education/asynchronous-elearning-solutions-and-rapid-elearning.html)
Common examples of “different time, different place” eLearning tools include:

     Discussion boards
     Blogs
     Social Networking Sites
     E-mail
(http://www.profhacker.com/2010/01/11/tools-for-synchronous-and-asynchronous-classroom-discussion/)
                                          WEB 2.0
Web 2.0 refers to a trend in web design and technology that facilitates the publishing and
sharing of information among Internet users.
(http://www.cch.com.au/DocLibrary/cch_professionals_web20_whitepaper_final.pdf) The
technical definition of the term Web 2.0 emerged from publisher Tim O’Reilly in 2004: “The
business revolution in the computer industry caused by the move to the Internet as
platform.” Web 2.0 is now more often used to describe a new generation of web-based
services that allow people to interact, collaborate and share information.
(http://www.information-age.com/channels/information-management/features/650221/web-20-in-business.thtml)

Web 2.0 encourages the development of a participatory culture, where users contribute
content back to the web rather than merely consuming it. Traditionally, websites consisted
of static pages for commerce and the one-way delivery of information. Now applications
such as blogs and social networks enable users to contribute and share information in ways
that did not even exist a few years ago. Web 2.0 sites such as Wikipedia, MySpace and
Facebook are now household names, with many students acknowledging the use of these
tools in their personal and academic lives.
(http://www.cch.com.au/DocLibrary/cch_professionals_web20_whitepaper_final.pdf)

The following describes some of the most common uses of Web 2.0 tools and defines their
functions:

WIKIS: In its simplest form, a wiki is a collection of pages that can be easily created, edited
and re-edited by any member of that wiki’s community. The community may be students
working on a particular project, or it may be the entire online world as is the case with
Wikipedia. Unlike intranets and websites, publishing to a wiki is very simple and it allows
linking between pages that is difficult in standard office documents, leading many
educational institutions to adopt it as a collaborative tool.

Debate continues to rage over the accuracy of material to be found on public access wikis
such as Wikipedia. The anonymity of authors and the lack of a formal editorial or review
process mean that it is easy to publish inaccurate information. However, it is just as quick
and easy for any reader to correct this information or view older versions. It is not unusual
for articles on Wikipedia to be edited and re-edited until a range of contributors are
satisfied with the final output – a form of peer-review in itself. Ultimately, the type and
validity of information that appears on a wiki is dependent upon the community that
maintains it. One would expect a different quality of material to be published on an
institution’s internal wiki than would be found on Wikipedia or a public access legal wiki
that is maintained by volunteers.
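
As a rough illustration of the edit/re-edit cycle and the version history described above,
here is a minimal sketch, in TypeScript, of a wiki page modeled as a stack of revisions. The
types, fields, and sample edits are hypothetical and do not reflect any real wiki engine's
schema.

    // Minimal sketch of a wiki page as an append-only list of revisions:
    // any member of the community can add a revision, and any reader can
    // view the current text or the older versions. Illustrative only.
    interface Revision {
      author: string;    // may be anonymous on public-access wikis
      timestamp: Date;
      content: string;
    }

    class WikiPage {
      private revisions: Revision[] = [];

      edit(author: string, content: string): void {
        // Every edit is kept, so inaccurate information is easy to revert.
        this.revisions.push({ author, timestamp: new Date(), content });
      }

      current(): string {
        return this.revisions[this.revisions.length - 1]?.content ?? "";
      }

      history(): readonly Revision[] {
        return this.revisions; // readers can inspect older versions
      }
    }

    const page = new WikiPage();
    page.edit("alice", "The dulcimer has three strings.");
    page.edit("bob", "The dulcimer typically has three or four strings.");
    console.log(page.current()); // the latest, community-corrected text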

BLOGS: Blogs, short for web-logs, are easily updated web pages where an individual or
group of people can keep an ongoing journal of news, ideas and opinions.
Students can add comments to a blog post and bloggers will also often write about each
other’s posts. In the blogging world (or “blogosphere”), reputation is built by the number of
people who read or write about a particular blog.

Many blogs are written by authors who have established themselves as thought-leaders in
traditional publishing and media. New bloggers who contribute ideas or information and
enter into meaningful dialogue with existing thought-leaders will be recommended on to a
greater range of readers. The rapid publishing rate and the ability to track comments often
lead to a level of dialogue and debate that cannot be matched by the traditional discourse
of journals and other publishing.

SOCIAL NETWORKS: A social network consists of software or a website that allows people
with common interests to develop a virtual or online community. Essentially it takes the
process of networking in a recreational or professional context and translates it to an online
environment.

The primary function of a social network is to provide information about an individual, such
as their interests, accomplishments, and activities. This information is primarily provided
by the individual and as such will reflect the image they wish to project to the world. This
level of centralized functionality has the potential to replace a range of other messaging
technologies, particularly email.

RSS: RSS is a group of formats that enables a website to notify users of new or altered
content such as a blog post, a news article or a change to a wiki page. Rather than simply
appearing on the original website, this content can be pushed out to an individual’s RSS
Feed Reader (similar to an email inbox) or to a specified location on another website. RSS
differs from the other Web 2.0 tools because it is a tool for delivering content rather than
for generating it.

RSS feeds are available on a range of websites with the most common ones being blogs,
traditional media sites, and wikis. Many corporate websites are adopting RSS as a method of
providing information and company news to customers without cluttering email inboxes.
Readers must choose to subscribe, which means that unlike email inboxes, RSS readers will
not receive spam.
(http://www.cch.com.au/DocLibrary/cch_professionals_web20_whitepaper_final.pdf)
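
To make RSS's delivery model concrete, here is a minimal sketch, in TypeScript for a
browser, of how a simple feed reader might pull the items a site announces. The feed URL
is a placeholder, error handling is omitted, and real feeds may also require CORS access;
this is an illustration under those assumptions, not a production reader.

    // Minimal browser-based sketch of an RSS reader: fetch the feed XML,
    // parse it, and list the items (new or changed content) it announces.
    async function readFeed(url: string): Promise<void> {
      const response = await fetch(url);
      const xml = await response.text();

      // RSS is plain XML, so the browser's DOMParser can handle it.
      const doc = new DOMParser().parseFromString(xml, "application/xml");

      for (const item of Array.from(doc.querySelectorAll("item"))) {
        const title = item.querySelector("title")?.textContent ?? "(untitled)";
        const link = item.querySelector("link")?.textContent ?? "";
        console.log(`${title}: ${link}`);
      }
    }

    readFeed("https://example.com/blog/rss.xml"); // placeholder feed URL
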
                       LEARNING MANAGEMENT SYSTEMS

A learning management system (LMS) is a set of software tools for delivering, tracking
and managing online training and education. LMS options range from systems for managing
training records to more flexible software for distributing courses over the Internet and
offering features for online authoring. In some instances, corporate training departments
purchase an LMS to automate their record-keeping, as well as to allow registration of
employees for classroom and online interactive courses. Key features may include student
self-service, self-registration, instructor-led training, skill group management, user
notifications and deadlines, manager hierarchies, wait-list management and, of course,
actual serving of the training material. Also common in an LMS is an automated testing
facility which records answers, grades tests and keeps all data for later reporting and
analysis. Optional LMS features may include a built-in authoring tool, chat boards and
discussion boards. (http://nationaltrainingsoftware.com/lmsdefinition.html)
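
As an illustration of the automated testing facility described above, the following is a
minimal sketch, in TypeScript, of a component that records answers, grades a test against
an answer key, and keeps the results for later reporting. The types and the simple
exact-match grading rule are assumptions for illustration, not any particular LMS's design.

    // Minimal sketch of an LMS-style testing facility: record each learner's
    // answers, grade them against a key, and keep results for reporting.
    interface TestResult {
      learner: string;
      answers: string[];
      score: number;   // fraction of answers matching the key
      takenAt: Date;
    }

    class TestingFacility {
      private results: TestResult[] = [];

      constructor(private answerKey: string[]) {}

      grade(learner: string, answers: string[]): TestResult {
        const correct = answers.filter((a, i) => a === this.answerKey[i]).length;
        const result: TestResult = {
          learner,
          answers,
          score: correct / this.answerKey.length,
          takenAt: new Date(),
        };
        this.results.push(result); // kept for later reporting and analysis
        return result;
      }

      report(): readonly TestResult[] {
        return this.results;
      }
    }

    const quiz = new TestingFacility(["b", "d", "a"]);
    console.log(quiz.grade("pat", ["b", "c", "a"]).score); // 2 of 3 correct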

A learning content management system (LCMS) is a related technology to the learning
management system in that it is focused on the development, management and publishing
of the content that will typically be delivered via an LMS. An LCMS is a multi-user
environment where developers may create, store, reuse, manage, and deliver digital
learning content from a central object repository. The LMS cannot create and manipulate
courses; it cannot reuse the content of one course to build another. The LCMS, however, can
create, manage and deliver not only training modules but also manage and edit all the
individual pieces that make up a catalog of training. LCMS applications allow users to create,
import, manage, search for and reuse small units or "chunks" of digital learning content and
assets, commonly referred to as learning objects. These assets may include media files
developed in other authoring tools, assessment items, simulations, text, graphics or any
other object that makes up the content within the course being created. An LCMS manages
the process of creating, editing, storing and delivering e-learning content.
(http://en.wikipedia.org/wiki/Learning_management_system)
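
To illustrate the "learning objects" idea, here is a minimal sketch, in TypeScript, of a
central object repository from which small reusable chunks of content could be searched
and reused to assemble different courses. The object kinds, tags, and URIs are hypothetical.

    // Minimal sketch of an LCMS-style central repository: reusable "learning
    // objects" are stored once, then searched by tag and reused across courses.
    interface LearningObject {
      id: string;
      kind: "media" | "assessment" | "simulation" | "text" | "graphic";
      tags: string[];
      uri: string;   // where the underlying asset lives
    }

    class ObjectRepository {
      private objects = new Map<string, LearningObject>();

      store(obj: LearningObject): void {
        this.objects.set(obj.id, obj);
      }

      // Search for reusable chunks by tag, e.g. when building a new course.
      search(tag: string): LearningObject[] {
        return Array.from(this.objects.values()).filter((o) =>
          o.tags.includes(tag)
        );
      }
    }

    const repo = new ObjectRepository();
    repo.store({
      id: "sales-report-demo",
      kind: "media",
      tags: ["procedures", "sales"],
      uri: "objects/sales-report-demo.mp4",   // hypothetical asset
    });
    console.log(repo.search("sales").map((o) => o.id)); // ["sales-report-demo"]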

Every organization's requirements for a learning management system differ. Once the
decision has been made to implement an LMS, the next step is to analyze the organization
and project’s needs. There are many learning management systems to choose from with a
wide variety of functionality. The question is: What functionality does the organization or
project really need? Creating use cases is an excellent way to determine what functionality
the program must have versus what is merely nice to have. Once the needs have been
analyzed, a better decision can be made regarding which LMS will meet those needs.
(http://www.theacademy.com/lmsselection.aspx)

There are quite a few LMS barriers that need to be considered, including:
    Cost
    IT Support
    Customization integration with other systems
    Legacy system integration
    Mind set to move learning online
    Problems with vendor
    Support from management
    Flexibility for future requirements
     Support from stakeholders
     Clear business goals
     Security
     Support from learners
     Tool/vendor selection
     Administration
     Problems with third-party consultant
     Compliance
(http://elearningtech.blogspot.com/2007/09/lms-satisfaction-features-and-barriers.html)

The purpose of the previous class assignment was to compare three learning management
systems as they relate to multimedia support. The comparison was done in the context of
transferring a traditional corporate training class to an online format. As a result, not every
feature was explored, only those relevant and practical to the course.

Blackboard has become one of the most widely used commercial LMSs in the world. Over
2,000 colleges and universities, 1,000 K-12 school districts, and 2,700 corporate and
government organizations utilize Blackboard for traditional and distance education
purposes. The selection of Blackboard seemed natural, especially because of its large
corporate market share. Additionally, a free 60-day trial allowed a full inspection of the
features and the partial implementation of the training course.

However, two other LMSs were considered: Moodle and PostNuke. Both were highly
recommended, but PostNuke was selected because of its larger number of modules and its
availability on the Distance-Educator.com web site. Additionally, PostNuke is more of a
learning content management system, which can be tailored specifically to educational and
multimedia uses.

The selection of components for the comparison was based upon an informal needs
assessment. The following items were determined to be essential to the development of an
online training class at the corporate level:
  ◦     Basic administration and configuration of the course
  ◦     Customizing the “look and feel” of the site
  ◦     Adding students
  ◦     Publishing announcements or news
  ◦     Providing downloadable documents
  ◦     Posting content
  ◦     Setting up and posting to a forum
  ◦     General support and documentation

Two additional elements were considered, but because of limited access to PostNuke, the
required modules could not be uploaded. These components were a calendar and quizzes.
Both of these features are available on Blackboard, but were not examined for this
comparison. (http://www.guhsd.net/mcdowell/et650/index.html)
                          FUTURE MEDIA OF ELEARNING

The annual Horizon Report describes the continuing work of the NMC’s Horizon Project, a
research-oriented effort that seeks to identify and describe emerging technologies likely to
have considerable impact on teaching, learning, and creative expression within higher
education.

The short list for 2010 includes the following technologies and the estimated time frame
for their adoption:

Time-to-Adoption Horizon: One Year or Less
    Mobile Computing
    Open Content
    Aggregators
    Cloud Computing
Time-to-Adoption Horizon: Two to Three Years
    Location-Based Services
    Electronic Books
    Simple Augmented Reality
    The Semantic Web
Time-to-Adoption Horizon: Four to Five Years
    Gesture-Based Computing
    Data Visualization & Analytics
    Wireless Power
    3D Video

Below I have selected one technology from each of the timeframes to discuss in detail,
along with how the emergence of each technology will affect learning in the future. I chose
these three technologies in particular because they are technologies that I currently use in
my everyday life. From using my mobile phone to go online, to downloading and reading
ebooks, to using gesture-based computing while playing my Wii, these current and
emerging technologies are part of the world in which we live.

MOBILE COMPUTING: The mobile market today has nearly 4 billion subscribers, three-
fourths of whom live in developing countries. Over a billion new phones are produced each
year, and the fastest-growing sales segment belongs to smart phones — which means that a
massive and increasing number of people all over the world now own and use a computer
that fits in their hand. Third-party applications for all kinds of tasks can now be developed
once and ported to a variety of mobile platforms, increasing their availability. It is these
applications that are making mobiles such an indispensable part of our lives. Tools for
study, productivity, task management, and more have become integrated into a single
device that we grab along with car keys and wallet. More and more, online applications have
a mobile counterpart; Blackboard's mobile app, for instance, gives students access to their
course materials, discussions, assignments, and grades. Other mobile and handheld devices,
such as netbooks, smartbooks, ebook readers, and email readers are also commonly carried.
It is easier than ever before to remain connected anytime and anywhere.
(http://horizon.wiki.nmc.org/2010+Mobile+Computing)

ELECTRONIC BOOKS: Electronic books are now accessible via a wide variety of readers,
from dedicated reader platforms like the Kindle to applications designed for mobile phones,
and are enjoying wide consumer adoption. As screen technology has become more
sophisticated, the experience of reading electronic materials has become more comfortable,
and the popularity of e-books has increased. Electronic books can be a portable and cost-
effective alternative to buying printed books, although most platforms lack features to
support advanced reading and editing tasks such as annotation, collaboration, real-time
updates, and content remixing. Electronic books have entered the mainstream in the
consumer world and are beginning to make inroads on campuses. The potential for
education includes the obvious advantages of lowering costs and making it easier to carry
the information contained in several heavy textbooks, but electronic books and readers are
also raising questions about the textbook and publishing industries that may have deeper
implications in academia. (http://horizon.wiki.nmc.org/2010+Electronic+Books)

GESTURE BASED COMPUTING: Devices that can accept multiple simultaneous inputs (like
using two fingers on the Apple iPhone or the Microsoft Surface to zoom in or out) and
gesture-based inputs like those used on the Nintendo Wii have begun to change the way we
interact with computers. We are seeing a gradual shift towards interfaces that adapt to—or
are built for—humans and human gestures. The idea that natural, comfortable movements
can be used to control computers is opening the way to a host of input devices that look and
feel very different from the keyboard and mouse. Gesture-based computing allows users to
engage in virtual activities with motion and movement similar to what they would use in
the real world. Content is manipulated intuitively, making it much easier to interact with,
particularly for the very young or for those with poor motor control. The intuitive feel of
gesture-based computing is leading to new kinds of teaching or training simulations that
look, feel, and operate almost exactly like their real-world counterparts. Larger multi-touch
displays support collaborative work, allowing multiple users to interact with content
simultaneously, unlike a single-user mouse. (http://horizon.wiki.nmc.org/2010+Gesture-
Based+Computing)
                                  FINAL REFLECTIONS
During the earlier lessons, I came to realize that not all presentations are good
presentations. I was surprised to see how many presentations and websites are out there
without any structure. By that I mean they are poorly organized, use too many words, and
have poor color schemes. I had a difficult time following and even understanding some of
them. It was quite interesting applying the theories learned in the lessons to those
presentations and seeing how terribly wrong they were.

Throughout the course I also enjoyed creating a poor presentation of my own and then
having to apply what I learned to revise it. This allowed me to experiment with PowerPoint
and re-familiarize myself with the program. While I did have some difficulty with some of
the features (i.e., using custom animations and syncing up my narrations), when I finally
figured them out I felt like I had taught myself something valuable; therefore, I am sure I
will be more inclined to use these features more extensively, and with ease, in future
assignments.

In Lesson 3, Multimedia and Contiguity Principles, I was introduced to John Medina’s Brain
Rule #10: “Vision trumps all other senses.” Basically, we humans find pictures and graphics
visually stimulating; we remember about 65% of information presented with a graphic three
days later, versus only about 10% when we merely hear the same information over the same
period. He indicated that while recognition doubles for a picture compared to text, if and
when you need to use text, be sure to utilize ‘concrete text’ rather than ‘abstract text,’
since concrete text is more effective at eliciting visual cues. A rule of thumb is to
remember that humans have three times better recall for visual information and six times
better recall for combined visual and oral information.

Continuing on the theme of invoking as many senses as possible to improve comprehension
and recall, Medina discusses the McGurk Effect. An audiovisual illusion, the McGurk Effect
demonstrates that accurate perception of information can involve the participation of more
than one sensory system. This multimodal perception uses all of the senses working together
to perceive the world; the brain therefore encodes and remembers more when the senses work
together. This insight leads to Medina’s Brain Rule #9, “Stimulate more of the senses,” and
his theory of the Learning Link, which states that multisensory environments do better than
uni-sensory environments. Combining audio and visual stimuli leads to greater and longer-
lasting recall. In addition, combining sight and touch in haptic learning leads to the
active learning that occurs when the learning experience is participatory or, in other
words, when it involves doing the real thing versus reading about doing it.

I discovered that multisensory information is encoded more elaborately, and information
that is more elaborately encoded is better remembered. Research has shown that adding just
one additional sense during the learning process doubled the number of creative solutions
on problem-solving tests. However, that is not to say that one cannot overdo it. The
Redundancy Principle states that presenting the same material simultaneously through
animation, narration and on-screen text splits the learner’s attention: the on-screen text
merely repeats the narration and therefore becomes redundant to the learner.

One way to prevent this from happening is to remember the Coherence Principle. This
theory states that seductive details that may seem to engage and excite the learner may in
fact impede overall learning. I discovered that extraneous sounds, graphics and words
distract attention from relevant material and disrupt cognitive connections. Therefore, any
audio, graphics or words must be relevant and cognitively interesting, not just emotionally
interesting or arousing.

However, a designer still wants to create a motivating learning experience and by adhering
to the Personalization Principle, it is possible to stimulate the learner by presenting
information in a more informal style. This will in turn activate a social response in the
learner and may increase active cognitive processing and learning.

While working on the Communication Tools assignment, I realized that I have personally
used each of the mentioned forms of synchronous learning (i.e.: audio and video
conferencing, chat, shared whiteboards, application sharing and instant messaging) in both
my educational and professional experiences. I have used audio and video conferencing in
which computers connected over the Internet carried the audio via IP audio conferencing or
Voice-over-IP. I have also used video conferencing in which participants needed computers
with digital cameras and/or special video conferencing devices connected over the Internet
or phone lines. I use chat tools on an everyday basis, exchanging typed comments with
several people at once. I have used shared whiteboards within virtual software packages in
my educational endeavors; the whiteboard allowed a group of us to type comments, draw,
highlight and point to information. Professionally, I have used application sharing to
share documents and other information. Lastly, instant messaging is another tool I use
daily; it lets me communicate with others and keep a list of frequent contacts that shows
who is online, offline or available to chat.
(http://www.e-learningconsulting.com/consulting/what/synchronous.html)

Aside from email and social networking sites, such as Facebook and Twitter, which I use on
a daily basis for personal and professional reasons, I have also used asynchronous tools
such as discussion boards and blogs, primarily in my educational undertakings. Integrated
into my online learning environments, discussion boards have produced incredibly rich
conversations. Though I have had limited experience with blogs as a student, when I did use
them they pushed me to discuss topics with other students and to write for a wider
audience. Their open nature allowed for communication between me and students in other
classes who were studying the same topics.

The following are some examples of how I can use asynchronous online communication
tools to enhance the learning environment, as well as advantages of using these particular
tools:
      •   Asynchronous online tools will allow students to collaborate at any time, day or
          night, at times suited to their schedules. They can also participate in
          discussions when they are inspired (i.e. not just during scheduled hours).

      •   Online resources can be shared quickly and accurately. Most online programs make
          URLs pasted into the text of a message clickable; if the complete URL is included
          in a message to a group, any member of the group can click on it and access the
          resource. (A minimal sketch of this auto-linking appears after this list.)

      •   Communications will extend well beyond the physical limits of the classroom.
          Students from all over can join to discuss topics of common interest without
          regard to differences in time zones.

      •   Students in need will be identified by their participation (or lack of
          participation), and personalized attention can be given to them.
          (http://pixel.fhda.edu/%7Eheidi/ONE/clo_tutorial/L2/lesson2.html)

      •   There will be less disruption to workflow, as students will be able to access the
          training when it is most convenient for them rather than when it is convenient
          for the instructor.

      •   There will also be a reduction in overall training time, as asynchronous
          eLearning has been shown to reduce the amount of time required to bring a user up
          to the desired level of competency by 50-80% over traditional classroom
          instruction.
          (http://www.leanforward.com/elearning_development/elearning_reference.html)
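
As a rough illustration of the auto-linking mentioned in the second point above, here is a
minimal sketch in Python. The regex and the linkify function are simplified stand-ins of my
own, not the actual code of any particular discussion platform:

    import re

    # Naive pattern for absolute URLs; real platforms use more robust parsers.
    URL_PATTERN = re.compile(r'(https?://[^\s<>"]+)')

    def linkify(message_text):
        # Wrap each complete URL in an HTML anchor so it renders as a clickable link.
        return URL_PATTERN.sub(r'<a href="\1">\1</a>', message_text)

    print(linkify("The reading is posted at http://example.com/lesson2 for review."))
    # -> The reading is posted at
    #    <a href="http://example.com/lesson2">http://example.com/lesson2</a> for review.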

After this learning experience, I will also add Web 2.0 technology such as social
networking sites, wikis and blogs to my cache of eLearning tools. Social networking sites
such as Ning, Facebook and Twitter can play important roles in my asynchronous
communications strategy. Facebook or Ning pages for my class will be the destination for
up-to-date information about the course, with my students having to friend me (or even one
another). Twitter and Twitter lists will also be useful sites of asynchronous discussion,
although not in the threaded format that one is used to seeing in a discussion board
setting. Wikis will be utilized to summarize lessons, as a collaboration tool for notes, to
introduce and explain projects, to introduce and disseminate important concepts, and to
assess individual projects. Blogs will be used for classroom management, overall
collaboration, discussions and student portfolios. In addition, I will use documentation
and presentation tools to create, host and/or share PDFs, ebooks and/or presentations, as
well as image, audio and video tools to create, edit and/or host images, podcasts and video.
(http://c4lpt.co.uk/Directory/)

While I have extensively used Web 2.0 tools in my personal life, I have limited experience
using these tools educationally. However, as of late, my instructor has required the use of
Ning for my eLearning class. Ning is an online platform for people to create their own social
networks. Some of the features of Ning include:
     •   Being able to define your own profile questions for incoming members, who can
         customize their profile pages with their own design and choice of widgets and
         profile applications.
     •   A real-time, dynamic activity feed of everything happening across the Ning
         Network, including status updates from members.
     •   Being able to pull in one or more RSS feeds from a blog, Web site or news source
         for an ongoing stream of information into the Ning Network. All features for
         public Ning Networks are also available automatically via RSS (see the sketch
         after this list).
     •   Being able to let members see who's online and chat in real time using the
         persistent chat feature across the bottom of the Ning Network, or pop it out into
         its own window. (http://about.ning.com/product/index.php)
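
Since public Ning Networks exposed their content as RSS, a feed could also be consumed
programmatically. Here is a minimal sketch using the third-party Python library feedparser
(pip install feedparser); the feed URL is a placeholder, not a real Ning address:

    import feedparser  # third-party: pip install feedparser

    # Placeholder URL standing in for a public network's activity feed.
    FEED_URL = "http://example.com/activity.rss"

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries[:5]:
        # Well-formed feed entries carry at least a title and a link.
        print(entry.get("title", "(untitled)"), "->", entry.get("link", ""))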

I would like to conclude by discussing the future of three new technologies that I believe
will impact media projects that I am currently working on and those I will work on in the
future. I chose these three technologies because while they are considered technology of the
future, I see them working and moving forward today. They include:
New Ways to Collaborate: In 2008, most of the attention when it came to collaboration
focused on Web 2.0 technologies such as social networks, wikis and Twitter. Plus, we saw
the rise of services designed to adapt these technologies for business and educational uses.
But 2009 saw some radical new twists on the idea of collaboration. Browser maker Opera
released a new technology called Opera Unite, which was essentially a Web server inside a
browser. From a functionality standpoint, Unite is an intriguing idea: every Web user can
connect with and serve data to others without the need for external servers and cloud-based
systems. I currently use Opera Unite to log onto Rio Salado's web sites because it
works better with my Mac. 2009 also saw the introduction of Google Wave, probably one of
the most misunderstood technology releases of the year. While many focused on the initial
beta of Wave and its focus on collaboration and task management, the truly interesting
aspect of Wave is its potential to be a platform for open and constant development of
systems for collaboration and content delivery.
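
Opera Unite services themselves were written in JavaScript and ran inside the browser; as a
rough stand-in for the underlying idea (serving content straight from a personal machine,
with no external host), here is a minimal sketch using Python's standard http.server module:

    # Serve the current directory over HTTP directly from this machine.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
    print("Sharing this folder at http://localhost:8080/")
    server.serve_forever()

Anyone who can reach the machine's address could then browse its shared files directly,
which is the peer-serving model Unite packaged in a friendlier form.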

Mobile Operating Systems: Not long ago, mobile operating systems were seen as weak,
inflexible and closed systems. For developers, they were difficult to build for and put up
nearly impossible hurdles to getting applications to users. Users often found them
unfriendly, with limited options for customization. The iPhone bucked these expectations by
providing an excellent operating system and a (more) open forum for creating and delivering
applications. Then in 2009 we saw the rise of Android-based
phones and new systems such as Palm WebOS, which have shown that mobile operating
systems can be dynamic, flexible and more open to application developers.

Search Engines Compete Anew: Google's dominance of Web search saw a few major new
challenges in 2009. Wolfram Alpha, though not a traditional Web search engine in that it
searches a closed database of information, offered an interesting look at a search engine
designed to provide actual answers to questions rather than just provide lists of results. But
the biggest (and probably most surprising) challenge came from Microsoft's Bing, which
made inroads against Google and offered a much more attractive and interactive search
engine. While Google had long championed a basic, simple interface as the preferred design
for search, Bing showed that there is a place for attractive, interactive and dynamic
search interfaces.
(http://etech.eweek.com/content/search/emerging_technology_in_2009_an_engine_for_gro
wth.html)

Throughout this course I have had the chance to discover and utilize a collection of
multimedia that I have created and revised over the semester. Each lesson added a piece to
what I learned the previous week. From discovering trends and views in eLearning to
understanding and practicing principles and theories of multimedia to ascertaining new
perspectives regarding the future of eLearning, this course was quite informative and will
most definitely serve as a basis for my future work in eLearning Instructional Design.
