Assessment Tools Based On Bloom's Taxonomy Of Educational Objectives

Conference

1999 Annual Conference

Location

Charlotte, North Carolina

Publication Date

June 20, 1999

Start Date

June 20, 1999

End Date

June 23, 1999

ISSN

2153-5965

Page Count

11

Page Numbers

4.100.1 - 4.100.11

DOI

10.18260/1-2--7785

Permanent URL

https://peer.asee.org/7785

Paper Authors

Nanette Veilleux

Abstract
NOTE: The first page of text has been automatically extracted and included below in lieu of an abstract.

Session 2530

Assessment Tools based on Bloom’s Taxonomy of Educational Objectives

Nanette Veilleux, Boston University

Abstract

Fair and useful assessment of student abilities is often a difficult task. Ideally, evaluation instruments should assess how well the student has understood material directly presented (knowledge and comprehension), how well the student can apply this information to a new problem (application), how well a student can distinguish and relate the component parts of a topic or argument (analysis), and how well a student can extend his/her learning to new areas (synthesis and design). These four skills represent different, progressive levels of understanding that fall along an abridged version of the hierarchy outlined in Bloom’s Taxonomy of Educational Objectives [1].

This paper describes a method of designing in-class exams and take-home projects for a freshman computer science course. Here, the design of the test questions and project requirements makes explicit use of this abridged version of Bloom’s Taxonomy of Educational Objectives. The in-class tests described in this work evaluate the depth of a student’s understanding by incorporating a planned variety of questions, ranging from those easily answered by a student who has understood the basic lectures and reading to problems requiring novel application of basic tools. Based on the degree of difficulty of the questions answered, students are graded according to a deterministic criterion (as opposed to, e.g., scaling based on class averages). The take-home projects also employ a deterministic criterion that indicates precisely what is expected of the student at each of three performance levels: passing (C), good (B), and excellent (A). An important feature of this design is that the instructions for the lower-level work are more detailed and require less student innovation, whereas the instructions for A-level projects offer less direction.
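To make the grading scheme concrete, here is a minimal Python sketch (an illustration only, not the paper’s actual rubric) of a deterministic, difficulty-based criterion: each question tier is mapped to a letter grade in advance, so a student’s grade is fixed by the deepest tier answered correctly rather than by classmates’ performance. The tier names and grade assignments below are assumptions for demonstration.

# Abridged Bloom tiers, ordered from lowest to highest.
TIERS = ["knowledge/comprehension", "application", "analysis", "synthesis/design"]

# Hypothetical deterministic rubric: grades are assigned a priori per tier.
RUBRIC = {
    "knowledge/comprehension": "C",  # basic lectures and reading understood
    "application": "B",              # basic tools applied to new problems
    "analysis": "A",                 # component parts distinguished and related
    "synthesis/design": "A",         # learning extended to new areas
}

def deterministic_grade(correct_tiers):
    """Return the letter grade for the deepest tier answered correctly."""
    if not correct_tiers:
        return "F"
    deepest = max(correct_tiers, key=TIERS.index)
    return RUBRIC[deepest]

print(deterministic_grade(["knowledge/comprehension", "application"]))  # prints B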

I. Introduction

Developing useful and fair evaluation instruments such as tests, projects, and papers is often cited in informal discussions among faculty as the most difficult task in teaching. At the pre-college level, tests are traditionally graded according to a strict numerical mapping from percentage correct to a letter grade. In college, it is more typical to convert the percentage correct to a letter grade based on its distance from the mean, a practice loosely (and usually incorrectly) referred to as grading on a bell-shaped curve. While this serves to provide a distribution of grades by normalizing performance to the context of the immediate class rather than assigning an a priori mapping (e.g., 87 is a B), the method is empirically as well as statistically suspect. In practice, “grading on a curve” is often based on a small sample size, and the variance of the sample is disregarded. Even if the statistical analysis is correct, normalizing for all bias factors
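To illustrate the contrast (a sketch, not taken from the paper; every cutoff below is an assumption), an a priori mapping and a curve based on distance from the class mean can be compared in a few lines of Python:

from statistics import mean, stdev

def fixed_scale(score):
    """A priori percentage-to-letter mapping (e.g., 87 is a B)."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return letter
    return "F"

def curved_grade(score, class_scores):
    """Grade by distance from the class mean, in standard deviations."""
    z = (score - mean(class_scores)) / stdev(class_scores)
    for cutoff, letter in [(1.0, "A"), (0.0, "B"), (-1.0, "C"), (-2.0, "D")]:
        if z >= cutoff:
            return letter
    return "F"

scores = [55, 60, 62, 65, 70]    # a weak class and a small sample
print(fixed_scale(70))           # prints C under the fixed mapping
print(curved_grade(70, scores))  # prints A: top of a small, weak class

With only five scores, the estimated mean and standard deviation are unreliable, which is exactly the small-sample objection raised above: the same 70 earns a C on the fixed scale but an A on the curve.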

Veilleux, N. (1999, June), Assessment Tools Based On Bloom's Taxonomy Of Educational Objectives Paper presented at 1999 Annual Conference, Charlotte, North Carolina. 10.18260/1-2--7785

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 1999 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015