# matlab

Table of Contents

Executive Summary

I. Introduction

a) Purpose

b) Audience

c) Methodology

II. Testing

a) Initial Pilot Test

b) Quantitative

1) First Round

2) Second Round

c) Qualitative

1) First Round

2) Second Round

III. Analysis and Recommendations

IV. Conclusion

V. Appendices

Appendix B: Survey

INTRODUCTION

Purpose

The purpose of this manual is to introduce students to basic programming in MATLAB. Every
engineering student is required to learn a programming language, and any programming
experience will help students prepare for a career in engineering. Furthermore, instructors may
assume that students know how to use software such as MATLAB even when it is not stated as a
prerequisite or taught in class. This manual should provide users with a solid introduction to the
basics of MATLAB programming.

The manual covers the operations and syntax of MATLAB and simple logical statements used in
programming. After completing the manual, users should have enough knowledge of
programming in MATLAB to create a simple program of their own.

Audience

The primary audience for this manual is UF engineering students who have not taken a course in
computer programming or who have little programming experience. The secondary audience is
anyone with a general interest in computer programming using MATLAB. The information
included in this manual may be too simplistic for experienced programmers.

Methodology

Iterative testing was used to evaluate the word choice and clarity of the manual, and the usefulness
of the debugging section. Testing the manual involved three major stages: a preliminary pilot
test followed by two rounds of iterative testing. We conducted the pilot test to determine the
difficulty level of the manual, and adjustments were then made to produce a testable instruction
manual. The revised manual was used to test both individual testers and pairs of testers (co-discovery).
A total of 10 individuals and 4 co-discovery pairs were tested.

Working through the manual, the tester(s) were asked to create two programs called “Hello World” and
“Equation of a Line”. After each tester or co-discovery team finished the manual, they were
given a simple problem: write a MATLAB program that uses the quadratic formula to
solve an equation. Testers were encouraged to consult the manual during the quadratic equation test.
We estimated the completion time for both the manual and the quadratic equation problem to be
approximately 30 to 45 minutes.
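The report does not reproduce the testers' code, but a minimal MATLAB sketch of the kind of quadratic-formula program the test called for (variable names and coefficient values are ours, not from the test) might look like:

```matlab
% Solve a*x^2 + b*x + c = 0 using the quadratic formula.
% Coefficients below are illustrative only.
a = 1;
b = -3;
c = 2;

d = b^2 - 4*a*c;              % discriminant
x1 = (-b + sqrt(d)) / (2*a);
x2 = (-b - sqrt(d)) / (2*a);

disp(x1)                      % displays 2
disp(x2)                      % displays 1
```

A program of roughly this size and shape is consistent with the 30-to-45-minute completion estimate given above.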

Both qualitative and quantitative data were recorded for each test. Quantitative data on each
tester’s mistakes was obtained from the saved program files and examined to determine the success
rate. The number of logical errors and syntax errors was recorded for each program the tester
completed. Success was defined by two criteria: 1) the tester(s) made no logical programming
errors (organization errors), and 2) the tester(s) made three or fewer syntax errors (“typos”). We
expected to achieve a mean success rate of 70% across all test subjects.

Qualitative data was collected during the test by encouraging the individual testers to think aloud,
and by recording (via webcam) the dialogue between partners during the co-discovery sessions.
Testers were also asked to fill out a short survey giving their opinion on the
clarity of the manual. After analyzing the qualitative and quantitative data from the first
round of testing, a debugging section was added to the manual. The manual, with the added
debugging section, was then tested on individuals and co-discovery pairs. Data from both rounds
of testing was compared to determine the usefulness of the debugging section.

TESTING

Our testing was done in three phases. The first was a pilot phase to weed out major
problems without wasting time or test subjects. The second phase was conducted
with the major problems corrected but no debugging instruction. Finally, the third phase was run
with a debugging section added to the manual, to see whether test subjects could correct their
mistakes themselves.

We broke the data down by which program the errors were made in, what kind of errors they
were, and how many were corrected. It was expected that everyone would make a few syntax
errors, but we expected subjects to produce a final product with no logical errors. Simply put, we
were not concerned about minor typos, but the subjects needed to have the logical flow of the
program down. Even experienced programmers occasionally forget a semicolon or parenthesis,
so we set the benchmark for a successful test at three or fewer syntax errors and zero logical
errors.
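To illustrate the distinction (the example is ours, not drawn from the test programs): a syntax error stops MATLAB from running the line at all, while a logical error lets the program run but produces the wrong behavior.

```matlab
% Syntax error: the unbalanced parenthesis below would stop MATLAB
% from running the script, so it is shown commented out.
% y = 2*(x + 1;

% Logical error: the script runs, but the condition is inverted,
% so a passing score of 85 prints 'fail'.
score = 85;
if score < 60
    disp('pass')    % logically wrong branch
else
    disp('fail')
end
```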

Pilot Test

The original manual was tested by two people. Both testers were upper level engineering
students with some programming background, but no experience in MATLAB. Both subjects
made five or more errors on the line program, eight or more on the test program, and neither was
able to correct enough errors to pass the test. Neither test subject completed the manual
successfully.

Based on these results, we decided to revise the initial manual. We found that the subjects were not
given enough explanation or clarification of some syntax uses. The original graphics were also
inadequate, showing programs written in styles not explained within the manual. Most
pronounced of all, users were confused by the introduction of two logical structures, the
“if-then” statement and the “for loop.” The “for loop” was therefore removed from the manual, and
the instructions were rewritten around the “if-then” alone. Since the “High or Low” program demonstrated
“for loops,” it was replaced with “Grade Book,” which contains only “if-then” statements. All
graphics were updated accordingly. At this point, we proceeded to full-scale testing.
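The manual's actual "Grade Book" code is not reproduced in this report; a hypothetical sketch of a program in that style, built only from "if-then" statements, could be:

```matlab
% Hypothetical "Grade Book"-style program using only if-then
% statements (the manual's real code is not shown in this report).
grade = 87;

if grade >= 90
    disp('A')
end
if grade >= 80 && grade < 90
    disp('B')           % displays B for grade = 87
end
if grade < 80
    disp('C or below')
end
```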

Quantitative

The purpose of our quantitative testing was to isolate any common problems that occurred
across the test subjects. We felt that the best way to judge how well a subject performed was to
measure the accuracy of their program. This meant tracking how many errors were made, both
logical and syntax.

First Stage

During this stage four single testers and two co-discovery groups were tested. Out of the testers,
only two individuals and one co-discovery team were able to successfully complete the trial. The
results of the first stage of testing are shown below in Table 1 and Figure 1.

Table 1: First Testing Round

| Subjects | # | Syntax Errors | Corrected Syntax Errors | Logical Errors | Corrected Logical Errors | Net Syntax | Net Logical | Pass/Fail |
|---|---|---|---|---|---|---|---|---|
| Single Testers | 1 | 4 | 1 | 1 | 1 | 3 | 0 | P |
| | 2 | 3 | 2 | 1 | 0 | 1 | 1 | F |
| | 3 | 4 | 2 | 1 | 0 | 2 | 1 | F |
| | 4 | 2 | 2 | 0 | 0 | 0 | 0 | P |
| Co-discovery | 5 | 4 | 3 | 1 | 1 | 1 | 0 | P |
| | 6 | 4 | 1 | 2 | 0 | 3 | 2 | F |

This yields a 50% success rate. The mean number of syntax errors was 3.50 and the mean
number of corrected syntax errors was 1.83, so the testers corrected about 53% of their syntax
errors. The mean number of logical errors was one, of which the subjects corrected only 33%.
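The round-one summary statistics follow directly from the per-subject counts in Table 1, transcribed below:

```matlab
% Per-subject counts from Table 1 (subjects 1-6).
syntax    = [4 3 4 2 4 4];    % syntax errors
corrected = [1 2 2 2 3 1];    % corrected syntax errors
logical   = [1 1 1 0 1 2];    % logical errors
logFixed  = [1 0 0 0 1 0];    % corrected logical errors

mean(syntax)                        % 3.50 mean syntax errors
mean(corrected)                     % ~1.83 mean corrected syntax errors
100 * sum(corrected)/sum(syntax)    % ~52.4%, reported as 53%
100 * sum(logFixed)/sum(logical)    % ~33.3% of logical errors corrected
```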

[Figure: bar chart “First Testing Round” — syntax errors (y-axis, 0 to 4.5) per tester (x-axis, 1 to 6); series: Errors, Corrected Errors.]

Figure 1) The graph shows the syntax errors along with the number of corrected errors
for the test subjects during the first round of testing.

Second Stage

For the second stage, a debugging section was added to the manual just after the line program. It
was thought that students would be able to write the line program on their own from the
instructions, utilize the debugging section to correct their mistakes, and be in good shape for
writing and correcting a program of their own.
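As an illustration of the kind of material such a debugging section covers (the example is ours; the manual's actual debugging content is not reproduced in this report), a typical first-program mistake is an omitted operator, which MATLAB reports as a parse error on the offending line:

```matlab
% A missing multiplication operator is a common first-program mistake.
% As written below (uncommented), MATLAB would stop with an error
% pointing at this line; inserting '*' fixes it.
% y = 2x + 1;

x = 3;
y = 2*x + 1;    % corrected line
disp(y)         % displays 7
```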

Table 2: Second Testing Round

| Subjects | # | Syntax Errors | Corrected Syntax Errors | Logical Errors | Corrected Logical Errors | Net Syntax | Net Logical | Pass/Fail |
|---|---|---|---|---|---|---|---|---|
| Single Testers | 1 | 1 | 1 | 1 | 1 | 0 | 0 | P |
| | 2 | 1 | 0 | 1 | 0 | 1 | 1 | F |
| | 3 | 5 | 5 | 2 | 1 | 0 | 1 | F |
| | 4 | 2 | 2 | 1 | 1 | 0 | 0 | P |
| | 5 | 3 | 2 | 0 | 0 | 1 | 0 | P |
| | 6 | 4 | 2 | 1 | 1 | 2 | 0 | P |
| Co-discovery | 7 | 3 | 2 | 1 | 1 | 1 | 0 | P |
| | 8 | 4 | 4 | 1 | 1 | 0 | 0 | P |

In the second round of testing, the success rate climbed from 50% to 75%. The mean number of
syntax errors was 2.88 and the mean number of corrected syntax errors was 2.25, so the testers
corrected about 78% of their syntax errors. The mean number of logical errors was again one, of
which the subjects corrected 75%.

[Figure: bar chart “Second Testing Round” — syntax errors (y-axis, 0 to 6) per tester (x-axis, 1 to 8); series: Errors, Corrected Errors.]

Figure 2) The graph shows the syntax errors along with the number of corrected errors
for the test subjects during the second round of testing. This graph demonstrates that the
subjects in round 2 were able to correct more of their syntax errors than the subjects in round 1.

Qualitative

Qualitative data came primarily from a short survey, given after each test, that asked testers
to give their opinion on the content of the manual. Co-discovery recordings were also
analyzed to gather qualitative data.

We wanted to find out if testers were able to follow the organization of the manual and
understand the visuals of the example programs. Also, we wanted to know if the wording of the
specific steps was confusing to the users. The results of our survey are given below:

First round

- Most users found that the manual contained a sufficient amount of instruction to complete the tasks.

- Some users were confused about writing multiple statements in an if-then structure.

- Several test subjects found that specific tasks were not well pronounced.

- Most users felt that the instructions should be placed before the figures.

- Some test subjects were confused about the syntax of logical conditions.

- Most test subjects felt that the visuals were very helpful in completing the tasks.

- Users did not know how to fix errors when their program failed to run.

Second round

- Users gave positive feedback about the debugging section.

- Users found the organization and content to be easily understandable.

- Users found the instructions to be sufficient.

- Most users felt they could create a program in MATLAB.

ANALYSIS AND RECOMMENDATIONS

Our tests have shown this to be a working manual with a few minor problems. Our original goal
was a 70% success rate on the test program, which we surpassed (75%) in the final
round. We saw a strong correlation between adding the debugging section and subjects being
able to complete a working program on their own.

Subjects often made the same mistakes in their tests, and the test programs looked very similar
from group to group. Often the subject(s) would leave off a semicolon at the end of lines that
required one, or place statements inside or outside the “if” statement incorrectly. Given a little
time, and the debugging section for phase-three testers, most errors were corrected. Based on
this, we are adding more emphasis on these two points in the manual to improve the final
release’s success rate.

The 70% success rate was our original goal, but error rates varied significantly between groups,
so a larger test group’s numbers may fall. To ensure that the manual maintains at least the
minimum success rate, we made a few changes for the final draft. The most significant is the
added emphasis on the problem areas described above. Second, our tables, graphics, and pages
were not numbered during testing; groups reported this as confusing and frustrating, and it has
been corrected. Finally, we decided to move the debugging section up, before the line program.
Testers told us that they sometimes had to simply copy the code for the line program without
knowing exactly what it meant. We feel that if readers see the debugging section first, they will
know why the code reads as it does and will be better prepared to write and correct a program of
their own.

CONCLUSION

After testing was complete, several changes were made to the original MATLAB instruction
manual. In the survey, many users commented that some steps were particularly confusing or
poorly worded. Changes were made to the word choice and clarity of certain steps in the manual
according to the suggestions in the surveys taken by the testers.

The debugging section was also added to the manual to help users fix common
programming errors, and its addition produced a significant improvement in the
success rate among both individual and co-discovery testers. We encourage all
programmers, whatever their skill level, to check each line of their program for common syntax
errors.

Regardless of whether the testers used the manual with or without the debugging section, the co-discovery
teams had a higher success rate than the individual testers, and their completion times
were generally shorter. Working in teams will most likely improve users’ understanding of the
manual and of the programming process.
