The Difference Success Makes
A Study of Division I-A Intercollegiate Football Programs and The Effects on Their Institutions
John Valentine
Advisor: Professor David Garman
Economics of Higher Education
January 2006
The impetus behind a college's foray into the pricey world of NCAA Division I-A football is evident: more success, more national exposure, and more applicants. The administration and trustees of many schools regard the creation and continuation of a football program as an extension of their marketing or public relations departments. The higher a school's football program is ranked, the more games it plays against top-tier schools on national television. There is potentially no better advertising than having prospective students watch a school win a bowl or championship game on CBS. Substantial research asserts that a national championship team will inherit an applicant pool the following year with major increases in numbers and depth. The enormous investment required by a Division I-A football program, however, could cause schools with perennial sub-.500 records to question that investment. Each college must build and maintain a football stadium with a capacity of 50,000 or greater. To remain competitive, colleges must spend millions of dollars each year recruiting top regional and national talent, and the recruiting and salaries of top-level coaches are another major drain on the athletic budget. Because a program generates revenue only while it remains competitive, it is easy to understand why schools with losing programs have a difficult time moving their teams up the college football rankings. Some questions surrounding Division I-A intercollegiate football and its effects on the college admission process have been largely unanswered by the current literature. Various secondary questions arise and will be answered in this research paper with the intent of determining the net impact of college football programs on their schools.
This research paper may lead perennially underperforming schools to question the existence of their football programs. In order to reach a conclusion on this larger question, it is important to analyze smaller components of the effects of college football rankings on admissions at their host institutions. The first question is whether the number of applications for a particular academic year is inversely correlated with the school's final football ranking in the previous and current years. There is strong evidence to suggest that a national championship or a bowl victory will produce an increase in the number of applications the following year, but is there a strong correlation with the final computer rankings of that year? Specifically, did schools that rose from the bottom quarter of the rankings to the second quarter experience an increase in applications? Similarly, did a school that consistently ranked near the mean experience a significant drop in applications if its football ranking dropped to the bottom quarter? It is evident that championship winners will draw more applicants in the next admissions cycle, but does this result hold across all schools, and for equal increases and decreases in rankings? An OLS regression can measure the extent to which a great season impacts applications over multiple future application periods. Will a great season followed by a mediocre season drop the team back into normalcy? The second important question is whether a football team's success increases the percentage of applicants from the national and/or international pool. This question matters because it determines how increases or decreases in national exposure affect the demographics of applicants from each region of the country. Will the percentage of out-of-state students increase the year following a school's football success? Or is success correlated more strongly with a better showing from the in-state applicant pool? It is possible that a school's prominence will create unity within its home state and thus a stronger rise in in-state applicants. If this were true, would it be more advantageous to use the money normally earmarked for football for other forms of marketing? The last question is whether the accepted-student yield will increase or decrease following success on the football field. If a better football ranking is strongly correlated with more applicants, is it possible to determine the full value of the prospective students who decide to apply because of the success of the school's football program? I believe that measuring the change in yield is essential to predicting the value of these additional applicants. If a college's yield decreases in the years following football success, it can be concluded that these extra applicants contribute less to the school's long-term prestige, thus decreasing the overall value of the success. This test will help determine the extent to which the additional applicants are serious about attending a school that has recently experienced a solid football season. Is the student valuable to the school only in the percent-of-applicants-accepted statistic (college search phase), or does the yield also improve with football performance (college choice phase)? Each Division I-A football program plays each year with the hopes and dreams of winning the elusive national title. Administrations envision lucrative television contracts, massive new alumni support, and increased ticket revenues. This end result is attainable for less than five percent of competing programs each year. Furthermore, many of the institutions that win a championship in a
given year have been successful in the past. It is difficult to break this barrier of success, so is it profitable and advantageous for a second-tier or lower school to spend millions on the slim hope of achieving greatness?
Many economic researchers have attempted to answer the question of athletic influence at the university level, but each published paper has taken a very different angle on the concept. In "Intercollegiate Athletics and Student Choice," Toma and Cross studied Division I football and basketball and their impact on students' choice of undergraduate institution. Using Peterson's Guide to Four-Year Colleges, the researchers took data from championship-winning colleges and studied application increases and decreases for the three years following a championship season. They also sought to determine whether schools of the same caliber experienced similar increases and decreases in applicants. Lastly, they attempted to determine whether the change in applications was sustainable (Toma and Cross 635). For football, Toma and Cross found that "of the 16 schools that won or shared championships in college football, 14 showed some increase in the number of applications received for the first freshman class following the championship, 7 enjoyed an increase of 10% or more, and 2 schools had an increase of 20% or more. Similarly, over 3 years, 14 of 16 championship institutions showed an increase in applications, and 13 of these 14 schools experienced an increase of 7% or more" (Toma and Cross 639). They concluded that winning a championship is directly correlated with significant gains in applicants for the next year and possibly for five future years.
This paper is noteworthy because it established a significant link between the epitome of athletic success (winning championships) and future applications to the institution, a major institutional goal. Toma and Cross laid out questions of geographical diversity and corresponding changes in yield as applications increased, to guide future research intended to build upon their results. I will take their analysis of championship football schools further by applying the concept to the entire Division I-A football league. Few teams can realize a championship season, so it is beneficial to study data and trends over the entire league using computer rankings. In a 1993 paper, Irvin Tucker and Louis Amato compared Division I football and basketball national rankings with annual changes in SAT scores. Using a sample from 1980-1989, they concluded that a higher ranking translates into higher SAT scores for future matriculating classes. This study validates the idea that athletic success has an impact on university prestige. However, "the effect was small. A school whose football program finished in the top twenty for each of the 10 years in the sample would expect to attract a freshman class with 3 percent higher SAT scores than a school whose program never finished in the top 20" (Frank 17). Only approximately one-sixth of the football schools are able to finish in the top 20. What becomes of the five-sixths of NCAA Division I-A schools that fail to break into the top 20? If a school has not broken into the top 20 in the past ten seasons, is it worthwhile devoting such a large amount of resources to a losing squad? My research will attempt to shed light on these questions. In Robert Frank's paper, "Challenging the Myth: A Review of the Links Among College Athletic Success, Student Quality, and Donations," college football is framed as a winner-take-all marketplace. In Frank's model, each football school is competing for a
scarce number of quality football players and coaches. The biggest payouts go to the schools that place within the top 10 teams in the entire league. The rest of the league receives compensation that does not equal the year's investment. Frank details the current state of college football programs, in which schools continually spend more in order to secure premier coaches and recruiting tools; as peer institutions ratchet up spending, the levels reached continually rise. As a result, Frank estimates that most schools would at best break even over the course of ten years. However, Frank points out ways in which universities deviate from the controlled environment of economic models: "For one, it (the model) assumes that when a university assesses its prospects, it is accurate in its estimate of the probability that its own program will be among the winners. Yet there is abundant evidence that potential contestants are notoriously optimistic in their estimates of how well they are likely to perform relative to others" (Frank 8). Frank found that if there is any indirect benefit to the university from college football, the effects are very small. This paper is important because it takes a unique perspective on the profit and university-enhancement motives of colleges participating in Division I-A football. While there is no readily available source of data on the profit-and-loss sheets of college athletic departments, Frank's points are valid. Because only a little over one hundred million dollars is paid out, with the majority going to a few elite teams, it is highly probable that many teams are losing money in this venture. Is it to the advantage of an institution to move its football team into Division I-A, or to keep competing in Division I-A after ten years of losing seasons? Because Frank
does not entertain the question directly, more analysis of the indirect benefits is necessary before making an assertion. In an April 2004 paper titled "The Impact of Athletic Performance on Undergraduate College Choice," Roshni Jain measured the impact of athletic success at the NCAA Division I and Division III levels on the SAT scores of selected colleges. Using variables including matriculation rate, winning percentages, SAT scores, and playoff performances for four sports at each school, Jain was able to estimate the effects of athletic performance on SAT scores, number of applicants, and matriculation rate. To complete the data table, Jain used Peterson's Four Year College Guides from 1994-2004 and NCAA record books. Jain's analysis yielded the conclusion that athletic success in men's football, baseball, basketball, and soccer does affect applicant quantity and quality. The results illustrated that the greatest impact was felt at the Division III schools; there was no significant impact on applications to Division I schools from the success of their football teams. This study is important because it shows that each sport has a different effect on its school's applicant draw. While most of these researchers have analyzed in some form the changes in future applicant pools resulting from athletic success, no one has isolated the entire Division I-A football league and used the combination of applicants, computer rankings, yield, and percentage of students in the top ten percent of their high school class to determine the value of football in student choice.
III. Problem Framing

In order to create models with which to arrive at conclusions for the questions listed in the introduction, it is essential to describe various elements of the data set.
Which Schools? The colleges listed in the data set are all universities that fielded a Division I-A football team in any year from 1995-2002. This large sample enables conclusions to be drawn for the entire league, as opposed to specific conferences or ranked subsets of schools. There were a total of 119 schools that fielded a team during those years.
How to Measure Athletic Success? In order to measure athletic success, the final computer rankings for all 119 teams were collected. The Associated Press, Coaches, and ESPN/USA Today polls all factor in personal opinion. The Congrove computer rankings, by contrast, statistically weigh win-loss percentages along with each school's strength of schedule. I chose this method because an 11-3 record at Boise State cannot be compared to an 11-3 record at Georgia: the two teams play in different conferences composed of opponents of different caliber. While the non-conference schedule is more characteristic of a team's true caliber, the computer rankings provide a standardized format with a large degree of accuracy. Computer rankings also lend themselves readily to OLS regression.
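The intuition behind schedule-adjusted rankings can be illustrated with a toy rating. This is not the Congrove formula, which is not published; the function name, the 0.4 schedule weight, and the opponent win percentages below are purely illustrative assumptions.

```python
# Toy illustration of a schedule-adjusted rating: blend a team's own
# win percentage with its opponents' average win percentage, so that
# two identical 11-3 records earn different ratings when one was
# achieved against tougher opposition. The sos_weight is arbitrary.
def rating(wins: int, losses: int, opp_win_pct: float, sos_weight: float = 0.4) -> float:
    win_pct = wins / (wins + losses)
    return (1 - sos_weight) * win_pct + sos_weight * opp_win_pct

# Two hypothetical 11-3 teams with different strengths of schedule:
weak_schedule = rating(11, 3, opp_win_pct=0.45)
strong_schedule = rating(11, 3, opp_win_pct=0.62)
# The team that faced stronger opponents rates higher.
```

The point is only that identical records need not produce identical ratings once schedule strength enters the formula.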
What Time Period? The time period selected for the data set was 1995-2002. All data was gleaned from the Peterson's Four Year College Guides from 1997-2004. Data accuracy and the lag between years were important considerations in data collection and analysis. For the matriculating class of 1995, the computer rankings were taken from the 1994 list, because prospective applicants would have been reacting to the success or failure of football teams in the fall of 1994. Since the Peterson's Guide has a two-year reporting lag, the 1997 Peterson's Guide supplies the admission statistics for the matriculating class of 1995. Students may also be influenced by the football results from the year before that, so the lagged rank year is 1993. This alignment was applied consistently throughout the data set.
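The lag alignment described above can be sketched as a small helper. This is a sketch only; the function name and dictionary keys are my own, not part of the data set.

```python
def source_years(matriculating_class: int) -> dict:
    """Map a matriculating class year to the data sources that inform it.

    Applicants react to the football season played the fall before they
    apply, and Peterson's Guide reports admission statistics with a
    two-year lag.
    """
    return {
        "petersons_guide": matriculating_class + 2,  # two-year reporting lag
        "rank_year": matriculating_class - 1,        # season influencing applications
        "rank_lag_year": matriculating_class - 2,    # season the year before that
    }

# The class of 1995 draws on the 1997 Peterson's Guide and the
# 1994 (rank) and 1993 (lagged rank) final football rankings.
years = source_years(1995)
```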
IV. The Economic and Statistical Models

The first question to be analyzed is the relationship between the final Congrove Computer Rankings and the number of applications a school receives for the next year's matriculating class. To determine the correlation, an OLS regression is executed in the form Yi = β0 + β1X1 + β2X2 + β3X3 + u. Here, X1 represents the total number of applications received across all Division I-A football schools in each year, X2 represents the final football computer ranking influencing that year's application process, and X3 is an indicator variable created to minimize the impact of fields with no data, since some schools did not provide Peterson's data for a period of time. Although this regression provides data on aggregate numbers of applicants, it does not capture the percent changes needed to accurately answer the question posed. To provide further insight, I included regressions of applications on the final rankings from the previous year, and of applications on rank and lagged rank with the sample split at a threshold number of applications in order to separate small and large universities. Also, year-to-year changes in applications are regressed on changes in ranking to determine the extent to which college football rankings affect undergraduate applications. I hypothesize that as the rankings of football teams improve (i.e., the rank number falls), the number of applications will rise. I predict that the change is even more dramatic for the top ten schools in the country. The second question concerns the extent to which football success or failure affects the national scope of applicants. I believe that athletic success will broaden the national scope of the next year's applicant pool as a result of the notoriety and national media attention gained from competitive outcomes. The variables used in this regression are the percent of students who are residents of the state in which their college is located and the year-to-year change in rank. The last question looks at the role of college football rankings in changing the yield at Division I-A football universities across the country. This OLS regression includes the percent of admitted students who accept the offer to attend a given institution and the year-to-year change in that school's rank. I conjecture that a successful college football program will attract students to apply because of the national media attention, but when it comes to attending the institution, I believe many of those accepted will eventually choose a school that better suits their style and needs. It
is extremely difficult to pinpoint the exact reasons for students choosing individual colleges. There could be students who become interested in a school through the football media attention only to find out that the school is an ideal fit, but the chances of that happening seem insignificant.
V. Variables

Teams (team): In the first column I listed the 119 colleges that fielded a Division I-A football team in any year from 1995-2002. In order to track changes in applications and rankings, each college is listed once for each year from 1995-2002. Colleges that did not field a team in particular years were given a zero in each column for the absent year rather than being eliminated completely.

Year (year): The year variable was placed to the right of the team name to keep the data organized and easy to understand.

Applications (apps): The number of applications from males and females for a specific year. A couple of schools listed the exact same number of applicants for consecutive years, which leads me to believe that the data may be marginally incorrect.

Percent change in applications (percentchapps): This variable measures the year-to-year change in applications. The data for 1995 is absent because of the unavailability of a 1994 Peterson's Guide.

Percent accepted (acc): This variable measures the percent of prospective students who were accepted in each given year. The Peterson's Guides for the years 1998-2002 listed the numbers applied and accepted, so division was applied to obtain the percent accepted statistic.

Percent enrolled (enr): This variable measures the percent of accepted students who enrolled at each university. Again, some guidebooks listed only the numbers of students accepted and enrolled. The percentages are measured to the second decimal place.

Percentage of students living in state (perinstate): This variable measures the percent of students currently attending a specific university who live in the same state where they attend school. The guidebook for the year 2000 omits this statistic. Public universities tend to have a greater portion of their students from in state because of the tuition reductions. This statistic is measured across the whole student body, not just the entering class.

Percent of students in the top ten percent of their class (toptenper): This variable measures the percentage of incoming freshmen who maintained a rank in the top ten percent of their high school class. The Peterson's Guides for the years 2000-2002 did not contain this information.

Final college football ranking (rank): A variable that measures the final college computer ranking. As the rank number falls, the performance of the team is rated higher.

Change in ranking (chrank): This variable measures the nominal change in rank from year to year for all Division I-A football schools.

Percent change in rank (percentchrank): This variable represents the percent change in rank from year to year.

Lagged rank (ranklag): This variable represents the final college football standings when the matriculating class was in its junior year of high school.
Dkrank, dkranklag, dkenr, dkacc, dkchrank, dktoptenper: Each of these variables is included to minimize the effect of missing information on the regressions.

Large (large): This variable denotes schools with above-average application numbers.

Lrank (lrank): This variable is large*rank. It enables the F-test to be performed.

Chneg (chneg): This variable identifies schools whose ranks have improved over the past year.

Negrank (negrank): This variable is chrank*chneg. It allows a test of the differences in application changes between schools that improved their rank and schools that lost ground in the rankings.
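One way these variables could be constructed from the panel is sketched below with pandas. The column names follow the paper; the four rows are hypothetical, shaped like the Boise State 1997-1998 example discussed in the results (a rank jump from 96th to 69th alongside a 43% application jump).

```python
import pandas as pd

df = pd.DataFrame({
    "team": ["Texas", "Texas", "Boise State", "Boise State"],
    "year": [1997, 1998, 1997, 1998],
    "apps": [16000, 17600, 4000, 5720],   # hypothetical application counts
    "rank": [25, 22, 96, 69],             # final computer rankings
})

df = df.sort_values(["team", "year"])
# Year-to-year change in rank and percent change in applications, per team.
df["chrank"] = df.groupby("team")["rank"].diff()
df["percentchapps"] = df.groupby("team")["apps"].pct_change() * 100
# Lagged rank: the season when the matriculating class was a HS junior.
df["ranklag"] = df.groupby("team")["rank"].shift(2)

# Indicator and interaction variables used in the split-sample tests.
df["large"] = (df["apps"] > df["apps"].mean()).astype(int)
df["lrank"] = df["large"] * df["rank"]        # interaction for the F-test
df["chneg"] = (df["chrank"] < 0).astype(int)  # 1 if rank improved
df["negrank"] = df["chrank"] * df["chneg"]    # improvement interaction
```

With the full 119-school, 1995-2002 panel in place of these four rows, the resulting columns feed directly into the regressions of Section IV.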
VI. Empirical Results and Conclusions

1.) The first question discussed in the introduction concerned changes in applications. There is a presumption that exposure to the national media during and after a successful season will create a significant rise in applications. To help determine the overall net benefit of fielding an NCAA Division I-A football program, the mean change in applications was calculated. On average, the change in applications for the Division I-A universities as a whole is 3.5% per year. This is a startling figure. Data retrieved from the Chronicle of Higher Education shows that college enrollments have been growing in the low single digits over the past five years and will continue to grow about five percent over the next five years. Extrapolating from the past data, Division I-A football schools will grow on average 18% in the next five years. This evidence provides the groundwork for a compelling case for university leaders to bring or keep a Division I-A football team at
their school. Also, Tucker and Amato's 3% effect for the top 20 ranked football programs appears, in terms of applications, to extend to the entire division. Those numbers are indicative of a general division-wide trend, but when the percent change in applications is regressed on the change in numerical rank, more conclusions can be reached. Regressed over the entire data set, a positive change of 1 in rank (i.e., a worse finish) produces an extremely small decrease in the number of applications received per university. While the regression shows only a small degree of significance, there appears to be greater sensitivity to rank changes at schools that receive fewer than 5,000 applications. This result is straightforward and logical: if 2,000 prospective students decide to apply to a school because it won the Rose Bowl and improved its rank, the percent change in applications will be greater at universities with a smaller applicant base. Greater significance is observed when the number of applications is regressed on rank. For every one-place increase in rank there is a 47-person drop in applications. This leads me to conclude that the most successful Division I-A college football teams are those with the largest student populations, and that schools performing better on the football field receive more applications on average. In order to provide concrete examples beyond OLS regressions, I will look at the rank and application trends of Florida State, Texas, Boise State, New Mexico State, and Nebraska. Florida State is arguably the most successful team of the past decade in college football, having maintained a great record in each of the last ten seasons. However, they have not finished a season in the top five since 2000. The school showed an average increase in applications of 12% from 1996-1999 (seasons of success).
The matriculating class of 2000 showed a gigantic 30% jump in applications following the championship year in 1999. Since then, Florida State has been bleeding applicants. In 2002 alone, the number of applications to the school dropped 27%, even though the team finished 20th in the nation. The truth remains that the chances of a football school winning a championship many years in a row are slim. As the Florida State example shows, a return to normalcy after a championship season is evident. The net long-term benefit in applications to Florida State is roughly zero. Year-over-year rank improvements pay off only if the level of ranking can be sustained over the long run. The University of Texas is a prime example of sustained improvement. From 1993 to 2000, Texas maintained a final ranking in the 20s. From 2001 onward, Texas consistently reached the top 10. The college's application growth has been marked by consistent yearly rises of 10-17%. Texas' rise to the top of the college football world is the ideal model for budding football programs everywhere. It is one of a select group of schools that have been able to maintain success over long periods without relapsing into mediocrity. Boise State is a small university that began its quest for football glory in 1996. With the debut of the football program, applications increased 14%. From 1997 to 1998, the team's ranking jumped from 96th to 69th and applications jumped 43%. This development is evidence that an increase in applications is not necessarily contingent upon playing in a bowl game or winning a national championship. In the years following that precipitous rise, through 2002, application numbers decreased slightly. I can postulate, without current statistics, that Boise State's applications have increased significantly since 2002 with their rise
to the top 10 in the final computer rankings. It seems that a rise from the doldrums of the league warrants an increase in applications similar to a rise from mediocrity to the top 10. New Mexico State was a small Division I-A school whose football program consistently performed near the bottom of the computer rankings. New Mexico State is unique in that it frequently jockeyed between the doldrums of the league and a mediocre final ranking. For example, between 1998 and 2003, the program bounced between rankings in the mid-90s and the mid-60s. Its yearly change in applications reflected this inconsistency, with applications increasing and decreasing by as much as 20% at a time. Apparently the alternating failures were nullifying the alternating successes. The University of Nebraska is a football school that has performed consistently at a high level for the past decade. From 1994 to 2001, Nebraska did not finish outside the top 15 college football programs in the country. While the team remained in the national headlines for its stellar play, application numbers were largely unchanged. This is because Nebraska had already reached the highest threshold of notoriety: since the number of college football fans does not vary significantly, the school was already at its market penetration potential. My analysis has led me to the conclusion that success and failure on the football field have direct and lasting effects on the applicant pools of colleges across the country. As alluded to earlier, the ideal is a football program characterized by sustainable success. The attainment of a "top 10" caliber performance in one year does not guarantee a lasting application boost unless the team can repeat the performance year over year. The pivotal season is the year following the rise from a lower tier to a higher one. The greatest investment in recruiting and spending should come the year following a breakout season. Sustainable success is the overarching goal of each university.
2.) The second question dealt with the effect of ranking changes on the percent of students coming from the college's home state. I initially hypothesized that the national exposure gained from a bowl appearance or national championship victory would decrease the percentage of in-state students at the university. An OLS regression of the percent of in-state students on the percentage change in rank revealed that as rankings improved, the percent of in-state students declined. This positive correlation is exactly what the hypothesis predicted. Unfortunately, the OLS results suggest that this estimate may not be statistically significant. One interesting regression that did provide a t-statistic greater than 2 was the percentage of in-state students regressed on the change in rank from the previous year if
the school's rank was less than 10, and another that restricted the sample to ranks greater than 75. The regression restricted to rankings of less than 10 showed that for every one-place increase in rank, the percent of in-state students increases by 0.5% across Division I-A football schools. This correlation turns negative when the rank of the schools is greater than 75. This evidence may indicate that schools with poor football programs do not draw strong interest from in-state students; as the program improves, in-state students flock to the college at a higher rate than out-of-state students. This outcome was particularly surprising. 3.) The final point of analysis tackles the change in yield resulting from success or failure on the football field. I hypothesized that a school experiencing recent success will receive an increase in applications but not in yield, because national football headlines serve a student more in the college search phase than in the decision phase. Students who applied to a school before its football success had more knowledge of the school and its academic and social environment and decided to apply for those reasons. Prospective applicants who applied to the university mainly because of its football success would probably be less informed about the academics and environment of the university and, upon acceptance, be turned away after an open house visit or further research. The regression of enrollment percentages on rank detailed a positive correlation between a worsening rank and a higher yield. Although the t-statistic of |1.3| weakens the value of the regression, the correlation met my expectations. The value of an acceptance by a student who did not factor in football success or failure is greater than the value of an acceptance by a student who used football rankings as a factor for applying.
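The split-sample approach above can be sketched as follows. The data here is simulated with the signs the paper reports (a positive in-state effect for top-10 programs, a negative one below rank 75), so the fitted numbers are illustrative only, and the column names simply follow the Variables section.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400  # hypothetical school-year observations

df = pd.DataFrame({
    "rank": rng.integers(1, 120, n),       # final ranking, 1..119
    "chrank": rng.normal(0, 15, n),        # change in rank (+ = worse)
})
# Simulated in-state percentage: +0.5 per rank increase for elite
# programs, negative below rank 75, flat in between (assumed effects).
df["perinstate"] = np.where(
    df["rank"] < 10, 60 + 0.5 * df["chrank"],
    np.where(df["rank"] > 75, 60 - 0.3 * df["chrank"], 60),
) + rng.normal(0, 2, n)

# Run the same regression on each subsample, as in the text.
slopes = {}
for label, subset in [("rank < 10", df[df["rank"] < 10]),
                      ("rank > 75", df[df["rank"] > 75])]:
    fit = smf.ols("perinstate ~ chrank", data=subset).fit()
    slopes[label] = fit.params["chrank"]
    print(label, round(slopes[label], 2))
```

The sign flip between the two subsamples is the pattern the paper describes: elite programs gain in-state share as rank worsens, while weak programs lose it.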
In conclusion, I have presented strong evidence that NCAA Division I-A final football rankings are strongly correlated with the makeup of a university's applicant pool. For the value of a football program to outweigh its costs, there must be an element of consistent and sustainable success in the final football rankings. The University of Texas model for building a football program has proven the most successful. An increase in applications resulting from a good ranking in one season is only as valuable as the school's performance in the long run.
VII. Future Research

To build on my research, I would suggest running the regressions separately for each individual conference to reveal possible inconsistencies. It would also be very interesting to track changes in university football budgets and compare them to changes in rank.
VIII. Acknowledgements I would like to thank Professor David Garman of the Tufts University Economics Department for his guidance and advice through the research and execution of this work.
Works Cited

"College Football Poll, Best of the 90s." Congrove Computer Rankings. 6 February 2006 <http://www.collegefootballpoll.com/historical_composite_rank.html>.

"Enrollment and Student Aid by the Numbers: Applications in 2003." The Chronicle of Higher Education 30 April 2004, sec. B17.

Frank, Robert H. "Challenging the Myth: A Review of the Links Among College Athletic Success, Student Quality, and Donations." Knight Foundation Commission on Intercollegiate Athletics (2004).

Jain, Roshni. The Impact of Athletic Performance on Undergraduate College Choice. University of Pennsylvania, April 2004.

Toma, J. Douglas, and Michael E. Cross. "Intercollegiate Athletics and Student Choice: Exploring the Impact of Championship Seasons on Undergraduate Applications." Research in Higher Education 39.6 (1998): 633-661.