Student Learning Assessment Measures
Assessing Student Learning
Best practice requires the assessment of student learning to incorporate multiple methods of assessing student performance. Therefore, in developing the framework for program-level student learning assessment in its outcomes assessment plan, the academic business unit must identify at least two different measures of student learning that will be used to assess the intended learning outcomes for each program to be included in the accreditation review. While the academic business unit may use indirect measures of student learning, at least one assessment measure in each program must be a direct measure. Direct measures require students to demonstrate their knowledge, skills, and competencies through actual work or performance; indirect measures rely on perceptions of learning, such as self-reports from students or alumni. Examples of each type of measure are provided below.
Examples of Direct Measures of Student Learning:
- End-of-Program Comprehensive Examinations (either standardized, normed, national exams or locally-developed exams; written or oral)
- Score Gains (indicating the ‘value added’ to students’ learning experiences)
- Capstone Projects (evaluating content knowledge/skills/competencies at the end of a program, e.g., case studies, business plans, research papers, etc.)
- Simulations (e.g., Capsim, GloBus)
- Portfolios of Student Work
- Internships or Other Professionally-Related Field Experiences
- Other Performance-Based Projects or Experiences
Note: All direct learning assessment measures must actually assess the intended student learning outcomes that they are designed to measure (i.e., they must contain required student performance components or tasks that are directly related to each of the intended learning outcomes assessed by the measures). In the case of objective-type comprehensive examinations that are being used as direct learning assessment measures, the exams must contain subsets of questions that are directly and explicitly aligned with, related to, or mapped to each of the intended learning outcomes that the exams are designed to measure.
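The question-to-outcome mapping described above can be sketched in code. This is a minimal illustrative example, not part of any required assessment system; the outcome names, question numbers, and student responses are hypothetical assumptions.

```python
# Hypothetical sketch: mapping objective-exam questions to intended learning
# outcomes and computing per-outcome scores. Outcome labels and question
# numbers below are illustrative, not from any actual exam.

# Each intended learning outcome is mapped to the subset of exam questions
# that is explicitly aligned with it.
OUTCOME_QUESTION_MAP = {
    "ILO-1 (accounting concepts)": [1, 2, 3, 4],
    "ILO-2 (quantitative analysis)": [5, 6, 7],
    "ILO-3 (business ethics)": [8, 9, 10],
}

def per_outcome_scores(correct_questions):
    """Return the percentage of questions answered correctly for each
    intended learning outcome, given the set of question numbers one
    student answered correctly."""
    scores = {}
    for outcome, questions in OUTCOME_QUESTION_MAP.items():
        n_correct = sum(1 for q in questions if q in correct_questions)
        scores[outcome] = 100.0 * n_correct / len(questions)
    return scores

# Example: a student who answered questions 1-4 and 8 correctly.
scores = per_outcome_scores({1, 2, 3, 4, 8})
```

Because each outcome has its own question subset, the result documents the extent of learning on each intended outcome separately rather than as a single aggregated exam score.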
Examples of Indirect Measures of Student Learning:
- Student Satisfaction Surveys
- Course Evaluations
- Exit Surveys
- Alumni Surveys
- Self-Evaluations of Field Experiences or Other Performance-Based Projects or Tasks
- Exit Interviews
- Focus Groups
Note: In order for these instruments to be used as indirect measures of student learning, they must include items, questions, or components that are directly tied back, mapped, or related to each of the intended student learning outcomes in the program that the instruments are designed to assess.
Examples of survey and evaluation forms that can be used as both indirect measures of student learning and operational assessment tools are provided below.
Evaluation Rubrics and Performance Objectives for Learning Assessment
In addition to learning assessment measures, the academic business unit’s outcomes assessment plan must also identify appropriate evaluation rubrics and performance objectives for those measures.
In assessing the extent to which students are achieving the intended learning outcomes in their programs of study, the criteria for evaluating their performance on the learning assessment measures must be clear and explicit. In other words, faculty and other evaluator judgments regarding student work and performance must be captured as clearly as possible in explicit language. This is accomplished by developing and using evaluation rubrics for all direct measures of student learning (except in the case of multiple-choice, objective-type comprehensive examinations).
Evaluation rubrics articulate in writing the criteria and performance standards that faculty and other evaluators use in assessing student work. They translate evaluator judgments of student performance into ratings on a scale and allow different evaluators to assess student work in reasonably similar ways.
In order for rubrics to be used for the purpose of program-level assessment, they must contain evaluation criteria that are related to the programmatic intended student learning outcomes that will be assessed by the rubrics’ associated learning assessment measures. These criteria must be aligned with, mapped to, or identical to the programmatic intended student learning outcomes. In this way, the evaluators of student performance on a particular assessment instrument will be able to document directly the extent of student learning on each intended outcome.
A guide that identifies the components of an evaluation rubric that can be used for assigning a grade or mark to a particular assignment or task and for program assessment is provided below:
For more information on evaluation rubrics and program-level assessment, see Designing Evaluation Rubrics.
Performance Objectives for Student Learning
Once intended student learning outcomes have been articulated, learning assessment measures have been determined, and evaluation rubrics have been developed, the academic business unit must specify the performance objectives associated with the assessment measures. Performance objectives for student learning can be defined as follows:
Whereas intended learning outcomes are expressed in terms of the specific knowledge, skills, abilities, and competencies that students are expected to acquire, performance objectives are the concrete quantitative targets for the assessment instruments used to measure the extent of achievement of those outcomes.
Performance Objectives for Direct Measures of Student Learning
Except in the case of objective-type comprehensive examinations, the performance objectives for direct measures of student learning must be expressed in terms of desired performance ratings/targets on each of the evaluation criteria in the measures’ associated rubrics that are related to the intended student learning outcomes that the measures are designed to assess.
In the case of objective-type comprehensive examinations, the exams must contain subsets of questions that are tied back, mapped, or related to each of the intended learning outcomes that the exams are designed to measure. The performance objectives for the exams must then be expressed in terms of desired performance levels on the subsets of exam questions associated with each of those intended learning outcomes.
Overall grades, percentage scores, or marks on a direct learning assessment instrument cannot be used as performance objectives for that instrument (i.e., they cannot be used as direct measures of student learning), because they are too highly aggregated to measure outcome-specific learning. In addition, they may include non-learning components (e.g., attendance, class participation).
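The contrast between an aggregated exam score and outcome-specific performance objectives can be sketched as follows. The outcome labels, class averages, and the 70% target are illustrative assumptions, not prescribed values.

```python
# Hypothetical sketch: why an overall exam score cannot serve as a
# performance objective for outcome-specific learning. The outcome names,
# class averages, and 70% target below are illustrative assumptions.

# Class-average proportion correct on each question in the subset mapped
# to each intended learning outcome.
OUTCOME_SUBSETS = {
    "ILO-1": [0.95, 0.90, 0.92],
    "ILO-2": [0.55, 0.50, 0.60],
}
TARGET = 0.70  # desired performance level on each outcome's question subset

def subset_results(subsets, target):
    """For each outcome, report whether the average score on its question
    subset meets the performance objective."""
    return {o: sum(qs) / len(qs) >= target for o, qs in subsets.items()}

# The overall exam average exceeds the target and masks the weak outcome...
all_scores = [s for qs in OUTCOME_SUBSETS.values() for s in qs]
overall = sum(all_scores) / len(all_scores)

# ...while the per-outcome results reveal that ILO-2 falls short.
results = subset_results(OUTCOME_SUBSETS, TARGET)
```

In this hypothetical data, the overall average sits above the 70% target even though students are well below target on ILO-2, which is precisely the outcome-specific information an aggregated score conceals.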
Performance Objectives for Indirect Measures of Student Learning
The performance objectives for indirect measures of student learning must be expressed in terms of desired results on each of the items, questions, or components in the instruments that are related to the intended student learning outcomes that the measures are designed to assess.
The performance objectives for these instruments cannot be expressed in terms of items, questions, or components in the instruments relating to student satisfaction with instructors, teaching, courses, programs, etc., i.e., these satisfaction-related items, questions, or components are not measures of student learning.