
    Feature

    Approaches to Biology Teaching and Learning

Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners

    Deborah Allen* and Kimberly Tanner

*Department of Biological Sciences, University of Delaware, Newark, DE 19716; and Department of Biology, San Francisco State University, San Francisco, CA 94132

    INTRODUCTION

Introduction of new teaching strategies often expands the expectations for student learning, creating a parallel need to redefine how we collect the evidence that assures both us and our students that these expectations are in fact being met. The default assessment strategy of the typical large, introductory, college-level science course, the multiple-choice (fixed response) exam, when used to best advantage can provide feedback about what students know and recall about key concepts. Leaving aside the difficulty inherent in designing a multiple-choice exam that captures deeper understandings of course material, its limitations become particularly notable when learning objectives include what students are able to do as well as know as the result of time spent in a course. If we want students to build their skill at conducting guided laboratory investigations, developing reasoned arguments, or communicating their ideas, other means of assessment such as papers, demonstrations (the practical exam), other demonstrations of problem solving, model building, debates, or oral presentations, to name a few, must be enlisted to serve as benchmarks of progress and/or in the assignment of grades.

What happens, however, when students are novices at responding to these performance prompts when they are used in the context of science learning, and faculty are novices at communicating to students what their expectations for a high-level performance are? The more familiar terrain of the multiple-choice exam can lull both students and instructors into a false sense of security about the clarity and objectivity of the evaluation criteria (Wiggins, 1989) and make these other types of assessment strategies seem subjective and unreliable (and sometimes downright unfair) by comparison. In a worst-case scenario, the use of alternatives to the conventional exam to assess student learning can lead students to feel that there is an implicit or hidden curriculum: the private curriculum that seems to exist only in the mind's eye of a course instructor.

Use of rubrics provides one way to address these issues. Rubrics not only can be designed to formulate standards for levels of accomplishment and used to guide and improve performance, but they can also be used to make these standards clear and explicit to students. Although the use of rubrics has become common practice in the K-12 setting (Luft, 1999), the good news for those instructors who find the idea attractive is that more and more examples of the use of rubrics are being noted at the college and university level, with a variety of applications (Ebert-May, undated; Ebert-May et al., 1997; Wright and Boggs, 2002; Moni et al., 2005; Porter, 2005; Lynd-Balta, 2006).

    WHAT IS A RUBRIC?

Although definitions for the word rubric abound, for the purposes of this feature article we use the word to denote a type of matrix that provides scaled levels of achievement or understanding for a set of criteria or dimensions of quality for a given type of performance, for example, a paper, an oral presentation, or use of teamwork skills. In this type of rubric, the scaled levels of achievement (gradations of quality) are indexed to a desired or appropriate standard (e.g., to the performance of an expert or to the highest level of accomplishment evidenced by a particular cohort of students). The descriptions of the possible levels of attainment for each of the criteria or dimensions of performance are described fully enough to make them useful for judgment of, or reflection on, progress toward valued objectives (Huba and Freed, 2000).
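Representing a rubric in code can make this matrix structure concrete. The following minimal sketch (Python; the criterion names and cell descriptions are invented for illustration, not taken from the article's tables) stores each criterion as a row and each scaled level of achievement as a column:

```python
# A rubric as a matrix: rows are criteria (dimensions of quality),
# columns are scaled levels of achievement indexed to a standard.
# All names and descriptions below are hypothetical examples.
presentation_rubric = {
    "organization": {
        3: "Ideas follow a logical sequence that the audience can follow easily.",
        2: "The sequence is mostly logical, with occasional unexplained jumps.",
        1: "No discernible plan; the audience cannot follow the thread.",
    },
    "use of communication aids": {
        3: "Visual aids reinforce key points without crowding them out.",
        2: "Visual aids are relevant but dense or hard to read.",
        1: "Visual aids are absent, unreadable, or unrelated to the talk.",
    },
}

# Looking up one cell of the grid: the standard a learner must meet
# to earn the highest level on the "organization" criterion.
print(presentation_rubric["organization"][3])
```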

A good way to think about what distinguishes a rubric from an explanation of an assignment is to compare it with a more common practice. When communicating to students our expectations for writing a lab report, for example, we often start with a list of the qualities of an excellent report to guide their efforts toward successful completion; we may have drawn on our knowledge of how scientists report their findings in peer-reviewed journals to develop the list. This checklist of criteria is easily turned into a scoring sheet (to return with the evaluated assignment) by the addition of checkboxes for indicating either a yes-no decision about whether each criterion has been met or the extent to which it has been met.

DOI: 10.1187/cbe.06-06-0168. Address correspondence to: Deborah Allen ([email protected]).

CBE-Life Sciences Education, Vol. 5, 197-203, Fall 2006

© 2006 by The American Society for Cell Biology


Such a checklist in fact has a number of fundamental features in common with a rubric (Bresciani et al., 2004), and it is a good starting point for beginning to construct a rubric. Figure 1 gives an example of such a scoring checklist that could be used to judge a high school student poster competition.

However, what is referred to as a full rubric is distinguished from the scoring checklist by its more extensive definition and description of the criteria or dimensions of quality that characterize each level of accomplishment. Table 1 provides one example of a full rubric (of the analytical type, as defined below) that was developed from the checklist in Figure 1. This example uses the typical grid format in which the performance criteria or dimensions of quality are listed in the rows, and the successive cells across the three columns describe a specific level of performance for each criterion. The full rubric in Table 1, in contrast to the checklist that only indicates whether a criterion exists (Figure 1), makes it far clearer to a student presenter what the instructor is looking for when evaluating student work.

    DESIGNING A RUBRIC

A more challenging aspect of using a rubric can be finding a rubric to use that provides a close enough match to a particular assignment with a specific set of content and process objectives. This challenge is particularly true of so-called analytical rubrics. Analytical rubrics use discrete criteria to set forth more than one measure of the levels of an accomplishment for a particular task, as distinguished from holistic rubrics, which provide more general, uncategorized (lumped together) descriptions of overall dimensions of quality for different levels of mastery. Many users of analytical rubrics often resort to developing their own rubric to have the best match between an assignment and its objectives for a particular course.

As an example, examine the two rubrics presented in Tables 2 and 3, in which Table 2 shows a holistic rubric and Table 3 shows an analytical rubric. These two versions of a rubric were developed to evaluate student essay responses to a particular assessment prompt. In this case the prompt is a challenge in which students are to respond to the statement, "Plants get their food from the soil. What about this statement do you agree with? What about this statement do you disagree with? Support your position with as much detail as possible." This assessment prompt can serve as both a preassessment, to establish what ideas students bring to the teaching unit, and as a postassessment in conjunction with the study of photosynthesis. As such, the rubric is designed to evaluate student understanding of the process of photosynthesis, the role of soil in plant growth, and the nature of food for plants. The maximum score using either the holistic or the analytical rubric would be 10, with 2 points possible for each of five criteria. The holistic rubric outlines five criteria by which student responses are evaluated, puts a 3-point scale on each of these criteria, and holistically describes what a 0-, 1-, or 2-point answer would contain. However, this holistic rubric stops short of defining in detail the specific concepts that would qualify an answer for 0, 1, or 2 points on each criterion's scale. The analytical rubric shown in Table 3 does define these concepts for each criterion, and it is in fact a fuller development of the holistic rubric shown in Table 2. As mentioned, the development of an analytical rubric is challenging in that it pushes the instructor to define specifically the language and depth of knowledge that students need to demonstrate competency, and it is an attempt to make discrete what is fundamentally a fuzzy, continuous distribution of ways an individual could construct a response. As such, informal analysis of student responses can often play a large role in shaping and revising an analytical rubric, because student answers may hold conceptions and misconceptions that have not been anticipated by the instructor.
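To make the arithmetic of this scoring scheme explicit, here is a minimal sketch (Python) of totaling one essay's score under either rubric; the per-criterion points assigned below are invented for illustration:

```python
# The five criteria from Tables 2 and 3, each worth 0, 1, or 2 points,
# for a maximum total score of 10.
CRITERIA = [
    "food is carbon-rich molecules (sugars, starches)",
    "food is an energy source for living things",
    "photosynthesis converts water and CO2 into sugars",
    "the purpose of photosynthesis is food production",
    "soil provides things other than food",
]

def total_score(points):
    """Sum per-criterion points (each 0, 1, or 2) into a score out of 10."""
    assert len(points) == len(CRITERIA)
    assert all(p in (0, 1, 2) for p in points)
    return sum(points)

# A hypothetical response: full credit on three concepts, partial credit
# on one, none on one, giving 2 + 2 + 2 + 1 + 0 = 7 out of 10.
print(total_score([2, 2, 2, 1, 0]))  # -> 7
```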

The various approaches to constructing rubrics can, in a sense, also be characterized as holistic or analytical. Those who offer recommendations about how to build rubrics often approach the task from the perspective of describing the essential features of rubrics (Huba and Freed, 2000; Arter and McTighe, 2001), or by outlining a discrete series of steps to follow one by one (Moskal, 2000; Mertler, 2002; Bresciani et al., 2004; MacKenzie, 2004). Regardless of the recommended approach, there is general agreement that a rubric designer must approach the task with a clear idea of the desired student learning outcomes (Luft, 1999) and, perhaps more importantly, with a clear picture of what meeting each outcome looks like (Luft, 1999; Bresciani et al., 2004). If this picture remains fuzzy, perhaps the outcome is not observable or measurable and thus not rubric-worthy.

Reflection on one's particular answers to two critical questions, "What do I want students to know and be able to do?" and "How will I know when they know it and can do it well?", is not only essential to beginning construction of a rubric but also can help confirm the choice of a particular assessment task as being the best way to collect evidence about how the outcomes have been met.

    Figure 1. An example of a scoring checklist that could be used to judge a high school student poster competition.


Table 1. A full analytical rubric for assessing student poster presentations, developed from the scoring checklist (simple rubric) in Figure 1. For each criterion (weighting in parentheses), the three columns of the original grid describe the high (3), medium (2), and low (1) levels of achievement.

Scientific approach (1.0)

Purpose
High (3): The research question or problem is well-defined and connected to prior knowledge in the chosen area of study.
Medium (2): The question or problem is defined adequately, but may lack a clear rationale or purpose that stems from prior knowledge.
Low (1): The study shows evidence of focus within a given topical area, but the search for new knowledge does not seem to be guided by an overlying question; there may be little or no stated connection to prior knowledge.

Design
High (3): The experimental design is appropriate to the problem; it is efficient, workable, and repeatable. If appropriate to the design, important variables are identified and contrasted to the standard conditions. The design allows for a sufficient number of comparisons of variables and of tests to provide meaningful data.
Medium (2): The design is appropriate to the question or problem, but may fail to identify an important variable or to account for all important aspects of standard conditions. Or, it may lack enough comparisons or tests to obtain data that have a clear meaning.
Low (1): There may be some evidence of an experimental design, but it may be inappropriate or not used well. The design may fail to account for an important variable or a major aspect of standard conditions. Another experimenter would have difficulty repeating the experiment.

Data
High (3): The data are analyzed and expressed in an accurate way. Statistical analysis of data is present.
Medium (2): Most data are analyzed thoroughly and presented accurately, but with minor flaws. There may be no evident use of statistical analysis.
Low (1): The analysis and presentation may be inaccurate or incomplete.

Conclusions
High (3): Inferences and conclusions are in all cases connected to, and are consistent with, the study findings.
Medium (2): Draws conclusions that are supported by the data, but may not directly connect the conclusions to the relevant evidence.
Low (1): Reported inferences and conclusions are not supported by the data.

Presentation (0.75)

Overall Appearance
High (3): The poster is visually appealing; it draws the viewer in for a closer look.
Medium (2): The poster is visually appealing, but may contain too much information, and use font sizes that are difficult to read.
Low (1): The poster consists of text only or may appear to have been hastily assembled.

Layout
High (3): The material is laid out in a clear and consistent way. The flow of ideas is concise and cohesive.
Medium (2): The material is organized in an appropriate way, but in places may lack clarity or consistency. There may be extraneous material.
Low (1): There may be little evidence of a cohesive plan for the layout and design of the poster. Ideas seem jumbled and/or disconnected.

Figures and Tables
High (3): The figures and/or tables are appropriately chosen and well-organized; data trends are illuminated.
Medium (2): General trends in the data are readily seen from the figures and tables; in some cases, tables and figures may provide redundant information, or raw data.
Low (1): Data may be represented inaccurately or in an inappropriate format.

Writing
High (3): There are no grammatical or spelling errors that detract from readability; use of technical terminology is appropriate.
Medium (2): Minor errors in grammar and spelling may be present, but they do not detract from the overall readability; there may be one or two misuses of technical language.
Low (1): The readability may be seriously limited by poor grammar, spelling, or word usage.

Fielding of Questions
High (3): Responses to the judges' questions exhibit sound knowledge of the study and the underlying science concepts; the presenter exhibits poise, good grace, and enthusiasm.
Medium (2): Responses to the judges' questions show familiarity with the study design and conduct, but may lack clear or accurate connections to basic science concepts. The presenter exhibits enthusiasm, but shows signs of discomfort with some of the questions.
Low (1): The presenter may show difficulty in responding to questions, or responses may lack insight.

Originality (0.5)

Creative Expression
High (3): The research topic and design are new to the presenter; the answers to the research question posed by the presenter do not represent readily researched, common knowledge. The project is unlike any other in this or prior years' competitions.
Medium (2): The research topic and design are new to the experimenter, but the research findings may overlap in significant ways with readily researched, common knowledge. The project may be in the same general topical area as projects from prior years' competitions, but has an original design.
Low (1): The presenter has evidenced some original expression in the design of the study and the poster presentation, but the basic ideas have appeared in a science fair project book or in other projects in the competition.

Acknowledgment of Assistance
High (3): Major aspects of the project were carried out by the presenter without assistance; if assistance was needed to learn a new technique or for the provision of materials, the assistance is acknowledged.
Medium (2): Major aspects of the project require methods and/or equipment that is not standard for a high school science lab setting, but this use of other resources and assistance may not be acknowledged.
Low (1): The presenter was involved in the design of the study, but major aspects were carried out by another individual; this assistance may or may not be acknowledged.


A first step in designing a rubric, the development of a list of qualities that the learner should demonstrate proficiency in by completing an assessment task, naturally flows from this prior rumination on outcomes and on ways of collecting evidence that students have met the outcome goal. A good way to get started with compiling this list is to view existing rubrics for a similar task, even if the rubric was designed for younger or older learners or for different subject areas. For example, if one sets out to develop a rubric for a class presentation, it is helpful to review the criteria used in a rubric for oral communication in a graduate program (organization, style, use of communication aids, depth and accuracy of content, use of language, personal appearance, responsiveness to audience; Huba and Freed, 2000) to stimulate reflection on and analysis of what criteria (dimensions of quality) align with one's own desired learning outcomes. There is technically no limit to the number of criteria that can be included in a rubric, other than presumptions about the learner's ability to digest and thus make use of the information that is provided. In the example in Table 1, only three criteria were used, as judged appropriate for the desired outcomes of the high school poster competition.

After this list of criteria is honed and pruned, the dimensions of quality and proficiency will need to be separately described (as in Table 1), and not just listed. The extent and nature of this commentary depends upon the type of rubric, analytical or holistic. Expanding the criteria in this way is inherently difficult, because it requires a thorough familiarity with both the elements comprising the highest standard of performance for the chosen task and the range of capabilities of learners at a particular developmental level. A good way to get started is to think about how the attributes of a truly superb performance could be characterized in each of the important dimensions: the level of work that is desired for students to aspire to. Common advice (Moskal, 2000) is to avoid use of words that connote value judgments in these commentaries, such as "creative" or "good" (as in "use of scientific terminology/language is good"). These terms are essentially so general as to be valueless in terms of their ability to guide a learner to emulate specific standards for a task, and although it is admittedly difficult, they need to be defined in a rubric. Again, perusal of existing examples is a good way to get started with writing the full descriptions of criteria. Fortunately, there are a number of data banks that can be searched for rubric templates of virtually all types (Chicago Public Schools, 2000; Arter and McTighe, 2001; Schrock, 2006; Advanced Learning Technologies, 2006; University of Wisconsin-Stout, 2006).

The final step toward filling in the grid of the rubric is to benchmark the remaining levels of mastery or gradations of quality. There are a number of descriptors that are conventionally used to denote the levels of mastery in addition to the conventional excellent-to-poor scale (with or without accompanying symbols for letter grades); several of the more common examples are listed below:

Scale 1: Exemplary, Proficient, Acceptable, Unacceptable
Scale 2: Substantially Developed, Mostly Developed, Developed, Underdeveloped
Scale 3: Distinguished, Proficient, Apprentice, Novice
Scale 4: Exemplary, Accomplished, Developing, Beginning

In this case, unlike the number of criteria, there might be a natural limit to how many levels of mastery need this expanded commentary.

Table 2. Holistic rubric for responses to the challenge statement, "Plants get their food from the soil." Each of the five criteria is scored on the same 3-point scale:

2 points: Demonstrates complete understanding of the concept with no misconceptions.
1 point: Addresses the concept, but in an incomplete way and/or with one or more misconceptions.
0 points: Does not address the concept in the answer.

Criteria (each of the form "Demonstrates an understanding that . . ."):
1. Food can be thought of as carbon-rich molecules including sugars and starches.
2. Food is a source of energy for living things.
3. Photosynthesis is a specific process that converts water and carbon dioxide into sugars.
4. The purpose of photosynthesis is the production of food by plants.
5. Soil may provide things other than food that plants need.


Although it is common to have multiple levels of mastery, as in the examples above, some educators (Bresciani et al., 2004) feel strongly that it is not possible for individuals to make operational sense out of more than three levels of mastery (in essence, a "there, somewhat there, not there yet" scale). As expected, the final steps in having a usable rubric are to ask both students and colleagues to provide feedback on the first draft, particularly with respect to the clarity and gradations of the descriptions of criteria for each level of accomplishment, and to try out the rubric using past examples of student work.

Huba and Freed (2000) offer the interesting recommendation that the descriptions for each level of performance provide a real-world connection by stating the implications for accomplishment at that level. This description of the consequences could be included in a criterion called "professionalism." For example, in a rubric for writing a lab report, at the highest level of mastery the rubric could state, "this report of your study would persuade your peers of the validity of your findings and would be publishable in a peer-reviewed journal." Acknowledging this recommendation in the construction of a rubric might help to steer students toward the perception that the rubric represents the standards of a profession, and away from the perception that a rubric is just another way to give a particular teacher what he or she wants (Andrade and Du, 2005).

As a further aid for beginning instructors, a number of Web sites, both commercial and open access, have tools for online construction of rubrics from templates, for example, Rubistar (Advanced Learning Technologies, 2006) and TeAch-nology (TeAch-nology, undated). These tools allow the would-be rubrician to select from among the various types of rubrics, criteria, and rating scales (levels of mastery). Once these choices are made, editable descriptions fall into place in the proper cells of the rubric grid. The rubrics are stored in the site databases, but typically they can be downloaded using conventional word processing or spreadsheet software. Further editing can result in a rubric uniquely suitable for your teaching/learning goals.

ANALYZING AND REPORTING INFORMATION GATHERED FROM A RUBRIC

Whether used with students to set learning goals, as scoring devices for grading purposes, to give formative feedback to students about their progress toward important course outcomes, or for assessment of curricular and course innovations, rubrics allow for both quantitative and qualitative analysis of student performance.

Table 3. An analytical rubric for responses to the challenge statement, "Plants get their food from the soil." Each criterion is of the form "Demonstrates an understanding that . . ."

1. Food can be thought of as carbon-rich molecules including sugars and starches.
2 points: Defines food as sugars, carbon skeletons, or starches or glucose. Must go beyond use of the word "food."
1 point: Attempts to define food and examples of food, but does not include sugars, carbon skeletons, or starches.
0 points: Does not address what could be meant by food, or only talks about plants eating or absorbing dirt.

2. Food is a source of energy for living things.
2 points: Describes food as an energy source and discusses how living things use food.
1 point: Discusses how living things may use food, but does not associate food with energy.
0 points: Does not address the role of food.

3. Photosynthesis is a specific process that converts water and carbon dioxide into sugars.
2 points: Discusses photosynthesis in detail, including a description of the reactants (water and carbon dioxide), their conversion with energy from sunlight to form glucose/sugars, and the production of oxygen.
1 point: Partially discusses the process of photosynthesis and may mention a subset of the reactants and products, but does not demonstrate understanding of photosynthesis as a process.
0 points: Does not address the process of photosynthesis. May say that plants need water and sunlight.

4. The purpose of photosynthesis is the production of food by plants.
2 points: Discusses the purpose of photosynthesis as the making of food and/or sugar and/or glucose by plants.
1 point: Associates photosynthesis with plants, but does not discuss photosynthesis as the making of food and/or sugar and/or glucose and/or starch.
0 points: Does not address the purpose of photosynthesis.

5. Soil may provide things other than food that plants need.
2 points: Discusses at least two appropriate roles for soil for some plants. Possible roles include the importance of minerals (N, P, K), water, and structural support from the soil.
1 point: Discusses at least one appropriate role for soil. Possible roles include the importance of minerals (N, P, K), vitamins, water, and structural support from the soil.
0 points: Does not address an appropriate role for soil. The use of the word "nutrient" without further elaboration is insufficient for credit.


Qualitative analysis can yield narrative accounts of where students in general fell in the cells of the rubric, along with interpretations, conclusions, and recommendations related to student learning and development. For quantitative analysis, the various levels of mastery can be assigned different numerical scores to yield quantitative rankings, as has been done for the sample rubric in Table 1. If desired, the criteria can be given different scoring weightings (again, as in the poster presentation rubric in Table 1) if they are not considered to have equal priority as outcomes for a particular purpose. The total scores given to each example of student work on the basis of the rubric can be converted to a grading scale, and overall performance of the class can be analyzed for each of the criteria competencies.
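As a sketch of how such weighted scoring might be computed (assuming, since the article does not spell out the conversion, that each criterion's level score is multiplied by its Table 1 weighting and the total is then expressed as a percentage of the best possible total), consider:

```python
# Weightings from Table 1: Scientific approach (1.0), Presentation (0.75),
# Originality (0.5); achievement levels are High = 3, Medium = 2, Low = 1.
# How weighted totals map onto a grading scale is an assumption here,
# made only to illustrate the mechanics.
WEIGHTS = {"scientific approach": 1.0, "presentation": 0.75, "originality": 0.5}

def weighted_total(levels):
    """levels maps each criterion to an achievement level (3, 2, or 1)."""
    return sum(WEIGHTS[criterion] * level for criterion, level in levels.items())

best_possible = weighted_total({c: 3 for c in WEIGHTS})  # 6.75
student = weighted_total(
    {"scientific approach": 3, "presentation": 2, "originality": 2}
)  # 5.5
print(f"{student}/{best_possible} = {100 * student / best_possible:.0f}%")  # 81%
```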

Multiple-choice exams have the advantage that they can be computer or machine scored, allowing for analysis and storage of more specific information about different content understandings (particularly misconceptions) for each item, and for large numbers of students. The standard rubric-referenced assessment is not designed to easily provide this type of analysis about specific details of content understanding; for the types of tasks for which rubrics are designed, content understanding is typically displayed by some form of narrative, free-choice expression. To try to capture both the benefits of the free-choice narrative and generate an in-depth analysis of students' content understanding, particularly for large numbers of students, a special type of rubric, called the double-digit rubric, is typically used. A large-scale example of the use of this type of scoring rubric is given by the Trends in International Mathematics and Science Study (1999). In this study, double-digit rubrics were used to code and analyze student responses to short essay prompts.

To better understand how and why these rubrics are constructed and used, refer to the example provided in Figure 2. This double-digit rubric was used to score and analyze student responses to an essay prompt about ecosystems that was accompanied by the standard sun-tree-bird diagram (a drawing of the sun, a tree, and other plants; various primary and secondary consumers; and some not well-identifiable decomposers, with interconnecting arrows that could be interpreted as energy flow or cycling of matter). A brief narrative summarizing the big ideas that could be included in a complete response, along with a sample response that captures many of these big ideas, accompanies the actual rubric. The rubric itself specifies major categories of student responses, from complete to various levels of incompleteness. Each level is assigned one of the first digits of the scoring code, which could actually correspond to a conventional point total awarded for a particular response. In the example in Figure 2, a complete response is awarded a maximum number of 4 points, and the levels of partially complete answers, successively lower points. Here, the incomplete and no-response categories are assigned first digits of 7 and 9, respectively, rather than 0, for clarity in coding; they can be converted to zeroes for averaging and reporting of scores.

The second digit is assigned to types of student responses in each category, including the common approaches and misconceptions. For example, code 31 under the first partial-response category denotes a student response that talks about energy flow and matter cycling, but does not mention loss of energy from the system in the form of heat. The sample double-digit rubric in Figure 2 shows the code numbers that were assigned after a first pass through a relatively small number of sample responses. Additional codes were later assigned as more responses were reviewed and the full variety of student responses was revealed. In both cases, the second digit of 9 was reserved for a general description that could be assigned to a response that might be unique to one or only a few students but nevertheless belonged in a particular category.

Figure 2. A double-digit rubric used to score and analyze student responses to an essay prompt about ecosystems.


When refined by several assessments of student work by a number of reviewers, this type of rubric can provide a means for a very specific quantitative and qualitative understanding, analysis, and reporting of the trends in student understanding of important concepts. A high number of 31 scores, for example, could provide a major clue about deficiencies in past instruction and thus goals for future efforts. However, this type of analysis remains expensive, in that scores must be assigned and entered into a database, rather than the simple collection of student responses possible with a multiple-choice test.
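A brief sketch of how such codes might be tallied and averaged follows (Python). The decoding rule mirrors the description above: the first digit is the completeness category (4 for complete, down through the partial levels, with 7 and 9 converted to zero for averaging), and the second digit records the response type within that category; the class codes below are invented for illustration.

```python
from collections import Counter

def points(code):
    """Convert a double-digit code to points: the first digit, with the
    incomplete (7x) and no-response (9x) categories counted as zero."""
    first = code // 10
    return 0 if first in (7, 9) else first

# Hypothetical codes assigned to a small class's responses.
codes = [31, 31, 40, 21, 79, 31, 90]

print(Counter(codes).most_common())  # e.g., code 31 appears three times
mean = sum(points(c) for c in codes) / len(codes)
print(f"mean score: {mean:.2f}")  # -> 2.14
# A high frequency of code 31 (energy flow and matter cycling discussed,
# but heat loss omitted) would point to a specific gap in past instruction.
```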

    WHY USE RUBRICS?

When used as teaching tools, rubrics not only make the instructor's standards and resulting grading explicit, but they can give students a clear sense of what the expectations are for a high level of performance on a given assignment, and how they can be met. This use of rubrics can be most important when the students are novices with respect to a particular task or type of expression (Bresciani et al., 2004).

From the instructor's perspective, although the time expended in developing a rubric can be considerable, once rubrics are in place they can streamline the grading process. The more specific the rubric, the less the requirement for spontaneous written feedback for each piece of student work, the type that is usually used to explain and justify the grade. Although provided with fewer written comments that are individualized for their work, students nevertheless receive informative feedback. When information from rubrics is analyzed, a detailed record of students' progress toward meeting desired outcomes can be monitored and then provided to students so that they may also chart their own progress and improvement. With team-taught courses or multiple sections of the same course, rubrics can be used to make faculty standards explicit to one another, and to calibrate subsequent expectations. Good rubrics can be critically important when student work in a large class is being graded by teaching assistants.

Finally, by their very nature, rubrics encourage reflective practice on the part of both students and teachers. In particular, the act of developing a rubric, whether or not it is subsequently used, instigates a powerful consideration of one's values and expectations for student learning, and the extent to which these expectations are reflected in actual classroom practices. If rubrics are used in the context of students' peer review of their own work or that of others, or if students are involved in the process of developing the rubric, these processes can spur the development of their ability to become self-directed and help them develop insight into how they and others learn (Luft, 1999).

ACKNOWLEDGMENTS

We gratefully acknowledge the contribution of Richard Donham (Mathematics and Science Education Resource Center, University of Delaware) for development of the double-digit rubric in Figure 2.

REFERENCES

Advanced Learning Technologies, University of Kansas (2006). Rubistar. http://rubistar.4teachers.org/index.php (accessed 28 May 2006).

Andrade, H., and Du, Y. (2005). Student perspectives on rubric-referenced assessment. Pract. Assess. Res. Eval. 10. http://pareonline.net/pdf/v10n3.pdf (accessed 18 May 2006).

Arter, J. A., and McTighe, J. (2001). Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance, Thousand Oaks, CA: Corwin Press.

Bresciani, M. J., Zelna, C. L., and Anderson, J. A. (2004). Criteria and rubrics. In: Assessing Student Learning and Development: A Handbook for Practitioners, Washington, DC: National Association of Student Personnel Administrators, 29-37.

Chicago Public Schools (2000). The Rubric Bank. http://intranet.cps.k12.il.us/Assessments/Ideas_and_Rubrics/Rubric_Bank/rubric_bank.html (accessed 18 May 2006).

Ebert-May, D., Brewer, C., and Allred, S. (1997). Innovation in large lectures: teaching for active learning. Bioscience 47, 601-607.

Ebert-May, D. (undated). Scoring Rubrics. Field-tested Learning Assessment Guide. http://www.wcer.wisc.edu/archive/cl1/flag/cat/catframe.htm (accessed 18 May 2006).

Huba, M. E., and Freed, J. E. (2000). Using rubrics to provide feedback to students. In: Learner-Centered Assessment on College Campuses, Boston: Allyn and Bacon, 151-200.

Luft, J. A. (1999). Rubrics: design and use in science teacher education. J. Sci. Teach. Educ. 10, 107-121.

Lynd-Balta, E. (2006). Using literature and innovative assessments to ignite interest and cultivate critical thinking skills in an undergraduate neuroscience course. CBE Life Sci. Educ. 5, 167-174.

MacKenzie, W. (2004). Constructing a rubric. In: NETS•S Curriculum Series: Social Studies Units for Grades 9-12, Washington, DC: International Society for Technology in Education, 24-30.

Mertler, C. A. (2002). Designing scoring rubrics for your classroom. In: Understanding Scoring Rubrics: A Guide for Teachers, ed. C. Boston, University of Maryland, College Park, MD: ERIC Clearinghouse on Assessment and Evaluation, 72-81.

Moni, R., Beswick, W., and Moni, K. B. (2005). Using student feedback to construct an assessment rubric for a concept map in physiology. Adv. Physiol. Educ. 29, 197-203.

Moskal, B. M. (2000). Scoring Rubrics Part II: How? ERIC/AE Digest, ERIC Clearinghouse on Assessment and Evaluation. ERIC Identifier #ED446111. http://www.eric.ed.gov (accessed 21 April 2006).

Porter, J. R. (2005). Information literacy in biology education: an example from an advanced cell biology course. Cell Biol. Educ. 4, 335-343.

Schrock, K. (2006). Kathy Schrock's Guide for Educators. http://school.discovery.com/schrockguide/assess.html#rubrics (accessed 5 June 2006).

TeAch-nology, Inc. (undated). TeAch-nology. http://teach-nology.com/web_tools/rubrics (accessed 7 June 2006).

Trends in International Mathematics and Science Study (1999). Science Benchmarking Report, 8th Grade, Appendix A: TIMSS Design and Procedures. http://timss.bc.edu/timss1999b/sciencebench_report/t99bscience_A.html (accessed 9 June 2006).

University of Wisconsin-Stout (2006). Teacher Created Rubrics for Assessment. http://www.uwstout.edu/soe/profdev/rubrics.shtml (accessed 7 June 2006).

Wiggins, G. (1989). A true test: toward more authentic and equitable assessment. Phi Delta Kappan 70, 703-713.

Wright, R., and Boggs, J. (2002). Learning cell biology as a team: a project-based approach to upper-division cell biology. Cell Biol. Educ. 1, 145-153.
