Children's Expression of Emotional Meaning in Music Through Expressive Movement
R. Thomas Boone (Assumption College, Brandeis University) and Joseph G. Cunningham (Brandeis University)

Abstract
Recent research has demonstrated that preschool children can decode emotional meaning in expressive body movements; to date, however, no research has considered preschool children's ability to encode emotional meaning in gestures and expressive movement. The current study investigated 4-year-old (N=23) and 5-year-old (N=24) children's ability to encode the emotional meaning of an accompanying music segment by moving a teddy bear, using previously modeled gestures and expressive movement, to indicate one of four target emotions (happiness, sadness, anger, or fear). Adult judges visually categorized the silent videotaped gestural and expressive movement performances of children of both ages with greater-than-chance accuracy. In addition, accuracy in categorizing the expressed emotion varied as a function of child age and emotion. A subsequent cue analysis revealed that children as young as 4 years old systematically varied their gestural movements with respect to force, rotation, shifts in movement pattern, tempo, and upward movement in the process of emotional communication. The theoretical significance of such encoding ability is discussed with respect to children's nonverbal skills and the communication of emotion.

Introduction
Children's Emotional Understanding of Music:
- The intrinsic structural specification hypothesis argues that higher-order invariants of musical structure intrinsically specify some forms of emotional meaning in music (Clynes, 1982; Cunningham & Sterling, 1988; Scherer & Oshinsky, 1977).
- In a developmental process, children should be able to identify emotional meaning in music as soon as they are able to perceive these higher-order structural invariants (Cunningham & Sterling, 1988).
- Children as young as four years of age are capable of reliably identifying discrete categories of emotional meaning in music at levels greater than chance (Cunningham & Sterling, 1988; Cunningham & Leviton, 1991; Gentile, Pick, Flom, & Campos, 1994).
- Some gender differences in children's ability to identify emotional meaning in music have been reported; females have been more accurate in identifying sad and fearful segments (Cunningham & Sterling, 1988).

Emotional Understanding of Expressive Body Movements/Dance:
- Anecdotal evidence has suggested that infants and children respond to music through movement (Moog, 1976).
- Infants' movements were more lively and rhythmic in response to a lively and rhythmic segment than to a slow music segment (Trehub, 1990).
- Children as young as 4 years of age can nonverbally identify discrete emotional meaning in expressive body movements at above-chance levels; children as young as 5 years of age show an increased ability to identify such emotional meaning and utilize specific body movements to make emotion attributions (Boone & Cunningham, 1998).

Goals of Phase 1
To determine the ability of preschoolers to encode emotional meaning through the medium of expressive movement/dance.

Hypotheses
- Children will encode the emotional meaning of the music segments via their videotaped dance/expressive movement performances.
- Five-year-old children will portray emotional meaning more accurately than four-year-old children.
- Females may portray the sad and fearful emotion categories more accurately than males.

Method
Subjects
Preschoolers:
Twenty-three 4-year-olds (M=46.7 mos) and twenty-four 5-year-olds (M=59.7 mos). Both age groups had approximately equal numbers of males and females.

Adults:
Sixty-six raters (M=18.0 yrs), balanced for gender, were drawn from an introductory psychology class and a college-level summer program for high school students.

Design
A 2 (Age) X 2 (Gender) X 4 (Emotional Category) design was employed, with age and gender as between-subjects factors and emotional category as a within-subjects factor.

Stimuli
Twelve music segments, pre-rated as belonging to one of four emotional categories (happy, sad, angry, or fearful), were used. Four segments, one from each emotion category, were used as modeling segments. The remaining eight segments, two from each emotion category, were used as test segments. Each segment was approximately 20 to 30 seconds long. Table 1 below provides the pilot information obtained on all 12 segments. Two identical neutral-faced teddy bears were utilized as play partners to the preschoolers.


Table 1
Percentage of Interrater Agreement Among Adult Pilot Subjects for the Primary Emotion in Each Music Segment

Modeling Segments
Composition / Composer                                   Emotion     Percentage Agreement
Rumanian Rhapsody, Opus 11 / Enesco                      Happiness   100%
Ase's Death, Peer Gynt Suite No. 1, Opus 46 / Grieg      Sadness     100%
Theme to Lifeforce / Mancini                             Anger       55.9%
Surprise Attack,                                         Fear        76.5%

Test Segments
Composition / Composer                                   Emotion     Percentage Agreement
Concerto in D, Opus 35 / Tschaikovsky                    Sadness     100%
The Humorous Song / Lyadov                               Happiness   94.1%
The Rite of Spring / Stravinsky                          Fear        85.3%
The Red Poppy, Dance of the Russian Sailors / Gliere     Anger       76.5%
Winter Games / Foster                                    Happiness   100%
Anvil of Crom, Conan / Poledouris                        Anger       88.2%
Venus / Holst                                            Sadness     100%
The Walls Converge, Star Wars / Williams                 Fear        76.5%

Procedure
Preschoolers:
- The task was structured as a game. Subjects were brought to a quiet area where the equipment was set up and told the following: "We are now going to play a game that involves music and dancing. This bear, Fuzzy," (holds up his/her bear) "likes to dance to music. Your bear, Furry, does too." (Experimenter hands the second bear to the subject.) "Let me show you how to play this game." (Turns on music; experimenter starts to dance. Note: the four modeling segments were approximately twice as long as the test segments to allow time to model the dance behavior.) "Can you hear that music? This music has feeling. Can you hear the feeling in the music? I can also dance with that same feeling. Can you see the feeling in the way that I'm dancing? Can you dance that way?" (The experimenter then allows the subject to dance on his or her own; if necessary, the experimenter replayed the segment until the subject showed appropriate imitative behavior.)
- (As each new modeling segment is introduced:) "Uh-oh, the music has changed. Did you hear that? This music has a different feeling. I'm going to have to change the way I'm dancing; see how I'm dancing. I'm dancing with this new feeling. Can you hear that feeling in the music? Can you see that feeling in the way that I'm dancing? Can you dance that way?"
- (To introduce the testing task:) "You know what? Furry wants to dance these next eight pieces of music with just you. Can you do that? Good. Listen to the music." (The subject then danced alone with the second bear. Between segments, the experimenter prompted the child to dance but gave no evaluative feedback; occasionally the experimenter would state that the bear was having fun. After the fourth testing segment, the second bear would ask to dance alone with the child.)

Adults:
- Videotapes of the preschoolers dancing/expressively moving the bears were created, remixed, and shown to adult subjects. Each videotaped segment was shown for the duration of the musical accompaniment and lasted approximately 20 to 30 secs.
- Adult subjects rated the videotaped segments of the preschoolers' dance performances, viewing them in groups of five for a period of one hour. Approximately 20 of the subjects watched all six videotapes; the remaining subjects watched only a single taped session. (Results showed no differences between subjects who watched all six tapes and subjects who watched only a single tape.)
- Videotaped segments were presented in groups of four, all from the same subject, representing one of each of the four target emotions. Each group of four segments was shown without interruption, then repeated with 10-second pauses to allow the raters to evaluate each segment.
- Raters were asked to categorize each segment into one of the four emotions and to rate how intensely the emotion was being expressed on a seven-point Likert scale. Raters were told that each of the four target emotions was represented within each grouping of four performances; however, they were also informed that the children may not have accurately depicted the emotion and that they should answer freely with whichever emotion they felt was being expressed in each distinct performance. Thus, raters were provided with some information to allow discriminant categorization, but were also free to answer with any emotion for a given performance.

Results
Accuracy Analysis
- Accuracy was measured as how often adult raters identified the target emotion in each videotaped performance. There were 16 raters for any given segment; thus, correct identifications for each segment could range from 0 to 16.
- A criterion method was used to identify the segments in which each child accurately portrayed the target emotion. With an expected (chance) frequency of correct identification by adult judges of 4 of 16 (25%), a total of 8 (50%) or more of the 16 judges categorizing a performance as matching the target emotion yields a chi-square value of 5.25, with an associated probability of .022†. Given the low statistical sensitivity of the chi-square statistic, this criterion is actually a relatively conservative method for assessing accuracy (a worked sketch of the computation appears after the footnote below).
† Although judges were presented with an equal number of happy, sad, angry, and fearful performances, their
responses showed a stronger preference for happiness (33%) and sadness (28%) than for anger (23%) and fear (16%). To adjust for this bias, these percentages were used to estimate the expected (chance) frequency of correct identification when calculating the adjusted chi-square criteria for each emotion.
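As a worked illustration of the criterion above, here is a minimal goodness-of-fit sketch, assuming Python with scipy; this is not the authors' code, and the poster does not name its software.

```python
# A minimal sketch of the chi-square criterion (Python/scipy assumed;
# an illustration, not the authors' original computation).
from scipy.stats import chisquare

# 8 of 16 judges choose the target emotion vs. a chance expectation of 4 of 16:
observed = [8, 8]    # [chose the target emotion, chose another emotion]
expected = [4, 12]   # chance: 25% of 16 judges correct, 75% incorrect
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
# Gives ~5.33, p ~ .021 -- close to the reported 5.25, p = .022; the small
# gap likely reflects rounding or a slightly different correction.

# Bias-adjusted expected counts per emotion, using the judges' response rates
# reported in the footnote:
for emotion, rate in {"happiness": 0.33, "sadness": 0.28,
                      "anger": 0.23, "fear": 0.16}.items():
    print(f"{emotion}: expected correct by chance = {rate * 16:.2f} of 16")
```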

- Both 4-year-olds (M=5.35) and 5-year-olds (M=6.40) performed above chance (M=4.0), t(22)=4.44, p<.001 and t(23)=6.52, p<.001, respectively, as depicted in Figure 1.
Figure 1. Accuracy Scores - Main Effect for Child Age. [Bar chart: Mean Accuracy Score (0-16) by Child Age Group; 4-year-olds = 5.35, 5-year-olds = 6.40; chance performance = 4.00; F(1,43)=4.72, p=.035.]

The data were subjected to a 2 (Child Age) X 2 (Child Sex) X 4 (Emotion) analysis of variance with repeated measures on the last factor, which revealed significant main effects for Age, F(1,43)=4.72, p=.035, depicted in Figure 1, and Emotion, F(3,129)=21.59, p<.001, depicted in Figure 2.
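A hypothetical re-creation of the above-chance tests follows, assuming Python with scipy; the per-child scores are simulated stand-ins, since the poster reports only group means.

```python
# Sketch of the one-sample tests against chance (Python/scipy assumed).
# The per-child accuracy scores below are SIMULATED for illustration only.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
CHANCE = 4.0  # expected correct identifications out of 16 judges

acc4 = rng.normal(loc=5.35, scale=1.5, size=23)  # 23 four-year-olds
acc5 = rng.normal(loc=6.40, scale=1.8, size=24)  # 24 five-year-olds

for label, scores in [("4-year-olds", acc4), ("5-year-olds", acc5)]:
    t, p = ttest_1samp(scores, popmean=CHANCE)  # group mean vs. chance
    print(f"{label}: M = {scores.mean():.2f}, "
          f"t({len(scores) - 1}) = {t:.2f}, p = {p:.4f}")

# For the 2 (Age) x 4 (Emotion) portion of the mixed ANOVA, pingouin's
# mixed_anova (one between- and one within-subject factor) is one option:
#   import pingouin as pg
#   pg.mixed_anova(data=long_df, dv="accuracy", within="emotion",
#                  subject="child", between="age")
```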
Figure 2. Accuracy Scores - Main Effect for Emotion Category. [Bar chart: Mean Accuracy Score (0-16) by Emotion Category; Happiness = 7.35, Sadness = 7.69, Anger = 4.96, Fear = 3.56; chance performance = 4.00; F(3,129)=21.59, p<.001.]

Goals of Phase 2
To determine if preschool children systematically vary their expressive movement performances, changing the spatiotemporal cues of their movements depending upon which emotion they are attempting to encode.

Hypotheses
- Children will systematically vary the spatiotemporal cues of their expressive movement performances as a function of the target emotion category.
- Five-year-old children will show more variation than four-year-old children in their encoding of the spatiotemporal cues.
- There may be some gender differences in the manner in which children systematically vary their expressive movement performances.

Method
Coders
Ten coders assessed six targeted behavioral cues used by the child actors when enacting the expressive movement patterns to music. Each coder evaluated all 376 segments for the specified behavioral cue.

Design
A 2 (Age) X 2 (Gender) X 4 (Emotional Category) design was employed, with age and gender as between-subjects factors and emotional category as a within-subjects factor.

Results
Cue Analysis
- Each cue was analyzed in a separate 2 (Age Group of Child) X 2 (Sex of Child) X 4 (Target Emotion Category) analysis of variance with repeated measures, across all 376 expressive movement performances.
- There was no main effect of Age Group of Child for any of the cues.
- There were main effects of Target Emotion Category for Force, F(3,129)=28.10, p<.001; Rotation, F(3,129)=7.21, p<.001; Shifts in Movement Pattern, F(3,129)=13.69, p<.001; Tempo, F(3,129)=38.36, p<.001; and Upward Movement, F(3,129)=20.55, p<.001; but not for Facial Affect.
- There were significantly greater amounts of Rotation and faster Tempo in the happy and angry performances than in the sad and fearful performances.
- There were significantly greater amounts of Force and Upward Movement in the happy and angry performances than in the fearful performances. Sad performances had significantly lower amounts of Force and Upward Movement than the fearful performances.

Table 2. Mean Cue Ratings as a Function of Target Emotion and Sex of Child

Cue                 Happiness   Sadness   Anger    Fear
Facial Affect       4.72        4.54      4.57     4.51
  Males             4.79        4.65      4.50     4.42
  Females           4.66        4.44      4.64     4.58
Force               3.51a       2.34c     3.57a    3.03b
  Males             3.82        2.46      3.95     3.29
  Females           3.21        2.23      3.22     2.77
Rotation            3.20a       2.54b     3.14a    2.65b
  Males             3.48        2.55      3.12     2.58
  Females           2.94        2.53      3.17     2.72

a,b,c Lettered superscripts mark means that are significantly different from one another for a given cue.
* Marks a gender difference within an emotion category for a specific cue.
Values on the cue rows are the means for the entire sample (bolded in the original).

Cue Analysis (Cont.)
- There were significantly lower amounts of Shifts in Movement Pattern for the sad performances compared to the angry, happy, and fearful performances.
- There were significant Sex of Child X Target Emotion Category interactions for Tempo, F(3,129)=3.24, p=.024, and Upward Movement, F(3,129)=4.02, p=.009. While both boys and girls showed greater activity for the happy and angry performances than for the sad and fearful performances, boys showed greater relative activity than girls in the happy and angry performances.
- A Discriminant Function Analysis yielded a single significant function utilizing the cue ratings for force, rotation, shifts in movement pattern, and tempo; upward movement loaded on a third, nonsignificant function. High values of this function predicted a group categorization of happiness, moderately high values anger, moderately low values fear, and very low values sadness (a sketch of such an analysis follows).
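Below is a hedged sketch of this kind of discriminant analysis, assuming Python with scikit-learn (the poster does not name its software); the cue ratings are synthetic values centered on the Table 2 means, not the study's actual data.

```python
# Illustrative discriminant function analysis over the movement cues
# (scikit-learn assumed; data are SYNTHETIC, loosely matching Table 2 means).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
emotions = np.repeat(["happiness", "sadness", "anger", "fear"], 94)  # 376 total

# Cue order: force, rotation, shifts in movement pattern, tempo, upward mvmt.
means = {"happiness": [3.51, 3.20, 2.98, 3.44, 2.96],
         "sadness":   [2.34, 2.54, 2.11, 2.30, 2.04],
         "anger":     [3.57, 3.14, 3.10, 3.30, 3.11],
         "fear":      [3.03, 2.65, 2.69, 2.62, 2.51]}
X = np.vstack([rng.normal(means[e], 0.8) for e in emotions])  # noisy ratings

lda = LinearDiscriminantAnalysis()  # at most 3 discriminant functions for 4 groups
pred = lda.fit(X, emotions).predict(X)

# Row-normalizing this matrix gives per-emotion classification percentages
# comparable to Table 3 (which tabulates predicted category by actual category):
print(confusion_matrix(emotions, pred, labels=list(means)))
```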

Table 2 (Cont.). Mean Cue Ratings as a Function of Target Emotion and Sex of Child

Cue                      Happiness   Sadness   Anger    Fear
Shifts in Mvmt Pattern   2.98a       2.11b     3.10a    2.69a
  Males                  3.21        2.24      3.13     2.82
  Females                2.77        2.00      3.07     2.55
Tempo                    3.44a       2.30b     3.30a    2.62b
  Males                  3.78*       2.37      3.58*    2.62
  Females                3.10        2.23      3.04     2.61
Upward Mvmt              2.96a       2.04c     3.11a    2.51b
  Males                  3.35*       2.25      3.84*    2.84
  Females                2.58        1.84      2.42     2.19

a,b,c Lettered superscripts mark means that are significantly different from one another for a given cue.
* Marks a gender difference within an emotion category for a specific cue.
Values on the cue rows are the means for the entire sample (bolded in the original).

Table 3. Classification Results of Discriminant Function Analysis and Adult Judges
Cell values are the percentage of performances in each actual emotion category assigned to each predicted category; adult judges' percentages are in parentheses.

Predicted (Judged)            Actual Emotion Category
Emotion Category       Happiness    Sadness      Anger        Fear
Happiness              40% (42%)    9% (20%)     28% (28%)    17% (29%)
Sadness                20% (21%)    67% (44%)    24% (25%)    40% (24%)
Anger                  24% (23%)    13% (17%)    32% (28%)    17% (27%)
Fear                   17% (15%)    12% (18%)    16% (19%)    27% (20%)

Discussion
- These findings demonstrate that children as young as 4 and 5 years old are able to encode emotional meaning through expressive movement, at approximately the same age at which they begin to decode emotional meaning in this and other nonverbal media.
- These findings provide support for the use of nonverbal response formats when studying the early recognition of emotional meaning.
- These findings suggest a strong intermodal connection between music, movement, and emotional meaning: the children were able to reliably use expressive movement to portray emotional meaning in music.
- These findings suggest that a more thorough investigation of children's ability to use such cues to decode and encode emotional meaning across all nonverbal media is warranted.
- Aside from some minor differences, there was no evidence of a dramatic gender difference in the ability to encode emotional meaning via expressive body movement.

References
Boone, R. T., & Cunningham, J. G. (1998). Children's decoding of emotion in expressive body movement: The development of cue attunement. Developmental Psychology, 34, 1007-1016.

Clynes, M. (Ed.). (1982). Music, mind, and brain: The neuropsychology of music. New York, NY: Plenum.

Cunningham, J. G., & Sterling, R. S. (1988). Developmental change in the understanding of affective meaning in music. Motivation and Emotion, 12, 399-413.

Cunningham, J. G., & Leviton, J. M. (1991). Preschoolers' understanding of emotional meaning in music. Poster presented at the meetings of the Society for Research in Child Development, Seattle, WA.

Gentile, D. A., Pick, A. D., Flom, R. A., & Campos, J. J. (1994). Adults' and preschoolers' perception of emotional meaning in music. Poster presented at the 13th Biennial Conference on Human Development, Pittsburgh, PA.

Moog, H. (1976). The development of musical experience in children of preschool age. Psychology of Music, 4, 38-45.

Scherer, K. R., & Oshinsky, J. S. (1977). Cue utilization in emotion attribution from auditory stimuli. Motivation and Emotion, 1, 331-346.

Trehub, S. (1990). Infant movement responses to music. Poster presented at the International Conference on Infant Studies, Montreal.

