Appendix O



Report on Grade Distribution


Implementation:  Upon approval by the President and development of procedures when applicable

Introduction and Rationale

The central question related to grade distribution is how well the grades that students earn in a given course reflect the learning that has taken place and how well students have met the learning goals for the course. We should also concern ourselves with how our university policies and procedures shape student and faculty attitudes toward grades and learning.

According to many researchers and scholars, students’ mean Grade Point Average (GPA) has been steadily increasing over the past several decades at universities and colleges across the country. According to Rojstaczer (2016) and others, prior to the Vietnam War, the most common grade on college campuses was a C. By the early seventies, the average GPA at a college campus had risen to 2.9. Beginning in the early eighties, grades began to rise again, but at a much smaller, almost indiscernible pace. By the mid-1990s, an A was the most common grade at an average four-year college campus. According to Rojstaczer’s (2016) review of grades from 400 schools, by 2015 the average college student had a 3.15 GPA. Review of the biannual grade distribution report published by Penn State’s Faculty Senate indicates that, in the aggregate, average GPAs rose from 3.07 in 2005 to 3.12 in 2015.

There is debate in the literature about what factors are driving this increase in the mean GPA. Some authors suggest that the steady increase reflects “grade inflation”; in other words, these scholars question whether students are receiving grades commensurate with their ability. Kuh and Hu (1999), for example, cite several factors that may influence mean GPA, including students enrolling in majors where average grades are higher, the consumer orientation of students and their families, university policies that allow students to avoid the negative impact of a low grade, and changing student demographics that may influence persistence.

Others, however, question whether “grade inflation” is the appropriate term for the average increase in the GPAs of college students. Pattison, Grodsky, and Muller (2013) assert that increases in the mean GPA do not necessarily reflect grade inflation at work. Pattison et al. build upon a chapter by Adelman (2008) that debates whether grade inflation is occurring and highlights the importance of examining the distribution of data and using representative, transcript-based data. Pattison and her colleagues establish that focusing on changes in measures of central tendency, such as the mean, is misleading. They argue that what matters most is the “signaling power” of GPA, that is, the ability of grades to provide important information both to and about students. Their findings indicate that the signaling power of grades has not diminished: GPA remained associated with educational plans, persistence to degree, occupational prestige, and long-term earnings.

Boretz (2004) argues that attributing the average increase in GPA to grade inflation amounts to a “harsh judgment on the quality of student learning in higher education” (p. 42). She cites such factors as high grade expectations, increases in faculty development programs, a mastery approach to learning, changes in grading policies, the lack of clarity about how teaching evaluations are used in personnel decisions, and an increase in a variety of student services as possible reasons for rising average GPAs. Boretz strongly advocates campus-specific approaches rather than a focus on national trends.

We recommend taking Boretz’s suggestion one step further. Given the breadth and scope of Penn State, this committee agrees that it is not useful to look at these data in the aggregate. Rather, we believe these data are far more useful at the department/division and college/campus levels. In particular, we suggest that units work closely with the Office of Planning and Assessment to determine appropriate means of assessing learning outcomes across the curriculum. In addition, it would be helpful to develop best practice guidelines for performance reviews that include evaluations of teaching, including providing department heads with relevant data to aid such reviews. With respect to the role of grades in performance reviews, it is important to note that authors such as Millet (2016) caution against using grades as the sole metric to evaluate faculty members, as this may have unintended consequences; for example, instructors may, in an attempt to improve their grading reliability scores, use students’ GPAs to assign grades in a course. As Millet stresses, grading reliability is strongly influenced by variance in students’ GPAs, and such data should be incorporated into any interpretation of the influence of leniency on grading reliability.

In examining grade distributions at Penn State, this committee also identified a combination of institutional policies and student practices that contribute to a culture of GPA protection. We strongly believe that there are steps Penn State can take to counter this culture, including reviewing “entrance to major” standards and other university policies and procedures. As students face increasingly high entrance-to-major GPA thresholds and competition for internships, entrance to graduate and professional schools, and career placement, they may be selecting courses based on perceived grade outcomes, rather than taking more challenging courses in which they are interested, in order to protect their GPA. In addition, students have more opportunities to withdraw from courses in which they are doing poorly in order to protect their GPA.

Finally, as the university invests additional resources in areas related to student success, we should expect (and welcome) students receiving higher grades. This support ranges from faculty development in pedagogy, to more clearly defined learning outcomes in courses, to changes in teaching methods to include models such as mastery, to the engagement of students in more project- and group-based work in the classroom, to the expansion of student support services across both academic and student life areas. All of these efforts clearly support our university goals related to access and retention as we proactively advise and support students who come from a range of backgrounds and levels of preparation. This represents a significant change in culture: from seeing the university as a place to “weed out” under-performing students to one in which we believe that every student who is admitted to Penn State belongs here and has the potential to succeed.

Recommendations


  1. Analysis and evaluation of grade distribution should take place at the departmental/division level. We recommend that dashboards be created that provide department/division heads data on grade distribution in courses in their unit. In addition, we recommend that best practice guidelines be developed to assist unit heads in both analyzing these data and in using these data to assist with pedagogical, curricular, and performance review discussions.

Department/division heads bring critical knowledge to an analysis of grade distribution patterns, including an understanding of the pedagogy used in a course, the composition of the cohort of students earning high grades, the size of the course, and so on. For example, in small seminar courses taught using a mastery model, we would expect a high percentage of students to earn ‘A’ and ‘B’ grades, regardless of the level of the course. Likewise, in courses that involve substantial teamwork, we expect stronger students to lift the learning, and hence the grades, of weaker students in any given group; this is one of the purposes of group work (Yamarik, 2010). If a course is composed of students majoring in that discipline, we again might expect higher grades given the interest level of the students (Main & Ost, 2014). Unit heads are also in the best position to examine other patterns of grade distribution, such as variation across sections of a given course or differences between major courses and general education courses.

Knowledge such as this, combined with streamlined access to grade distribution data, gives department/division heads the tools to have conversations about the learning that is occurring and how this learning relates to the goals of the course. This shifts the conversation from an examination of grades themselves to one about whether students have earned their grades by learning the material and meeting the goals of the course. It is for these reasons that grade distribution is best understood at the local level.

  2. We recommend that the annual report produced by the Committee on Undergraduate Education on grade distribution be discontinued. As currently produced, the report is not particularly useful in gauging learning outcomes among our students.

Given what we believe is the necessity of analyzing and evaluating grade distribution at the local level, the committee questions the utility of an annual grade distribution report for the whole university.

  3. Examine university policies and procedures and external requirements that may lead to a culture of GPA protection.

The number of controlled-entry majors at the university has increased over the last decade, and the cumulative GPA required to enter many of these majors is typically in the 3.2-3.5 range. Some majors also carry state requirements, such as the 3.0 GPA needed in Education to be certified as a teacher. Further, as discussed earlier in this report, students face increased competition for internships, entrance to graduate and professional schools, and career placement, and therefore may avoid challenging courses in which they are interested in order to protect their GPA.

Given increased enrollments over the last ten years, conversations are starting to take place about better ways to balance the number of students in various majors against available departmental/college resources. While no definite changes are on the table at this point, emerging discussions recognize that our current Entrance to Major process, which relies on a student’s cumulative GPA, does not provide a good mechanism for accurately managing enrollments in departments with limited instructional capacity. To counter the culture of GPA protection, active steps to change institutional policies would be beneficial.

References


Adelman, C. (2008). Undergraduate grades: A more complex story than inflation. In L.H. Hunt (Ed.), Grade inflation: Academic standards in higher education (pp. 13-44). Albany: State University of New York Press.

Boretz, E. (2004). Grade inflation and the myth of student consumerism. College Teaching, 52(2), 42-46.

Kuh, G.D. & Hu, S. (1999). Unraveling the complexity of the increase in college grades from the mid-1980s to the mid-1990s. Educational Evaluation and Policy Analysis, 21(3), 297-320.

Main, J.B. & Ost, B. (2014). The impact of letter grades on student effort, course selection, and major choice: A regression-discontinuity analysis. Journal of Economic Education, 45(1), 1-10.

Millet, I. (2016). The relationship between grading leniency and grading reliability. Studies in Higher Education.

Pattison, E., Grodsky, E. & Muller, C. (2013). Is the sky falling? Grade inflation and the signaling power of grades. Educational Researcher, 42, 259-265.

Rojstaczer, S. (2016). Grade inflation at American colleges and universities. Last modified March 29, 2016. Accessed June 30, 2016.

Yamarik, S. (2010). Does cooperative learning improve student learning outcomes? Journal of Economic Education, 38(3), 259-277.

Committee Membership


  • Andrew J. Ahr
  • Barbara A. Barr
  • Jesse Barlow
  • Paul Bartell
  • Kathy Bieschke
  • Gretchen Casper
  • Richard Duschl
  • David Eggebeen, Vice Chair
  • Joyce Furfaro
  • Yvonne M. Gaudelius
  • Sammy Geisinger
  • David Han
  • Clare Kelly
  • Patricia Koch
  • Teresa Lang
  • Karen Pollack
  • Janina M. Safran
  • Ann Schmiedekamp
  • David R. Smith
  • Samia Suliman
  • Mary Beth Williams
  • Matthew Wilson, Chair
  • Richard Young