Chapter 6: Building an effective learning environment
6.8 Assessment of learning
‘I was struck by the way assessment always came at the end, not only in the unit of work but also in teachers’ planning… Assessment was almost an afterthought… Teachers… are being caught between competing purposes of… assessment and are often confused and frustrated by the difficulties that they experience as they try to reconcile the demands.’

Earl (2003)
6.8.1 Learner assessment in a digital age
Because assessment is a huge topic, it is important to be clear that the purpose of this section is:
(a) to look at one of the components that constitute an effective and comprehensive learning environment; and
(b) to examine briefly the extent to which assessment is or should be changing in a digital age.
Assessment is also discussed at various points throughout the book.
However, assessment requires a section on its own. Probably nothing drives the behaviour of students more than how they will be assessed. Not all students are instrumental in their learning, but given the competing pressures on students’ time in a digital age, most ‘successful’ learners focus on what will be examined and how they can most effectively meet the assessment requirements (which for most students means in as little time as possible). Therefore decisions about methods of assessment will in most contexts be fundamental to building an effective learning environment.
6.8.2 The purpose of assessment
There are many different reasons for assessing learners. It is important to be clear about the purpose of the assessment, because it is unlikely that one single assessment instrument will meet all assessment needs. Here are some reasons (you can probably think of many more):
- to improve and extend students’ learning;
- to assess students’ knowledge and competence in terms of desired learning goals or outcomes;
- to provide the teacher/instructor with feedback on the effectiveness of their teaching and how it might be improved;
- to provide information for employers about what the student knows and/or can do;
- to filter students for further study, jobs or professional advancement;
- for institutional accountability and/or financial purposes.
I leave it to you to decide the order of importance of these reasons for creating an effective learning environment.
6.8.3 Methods of assessment
The form the assessment takes, as well as the purpose, will be influenced by the instructors’ or examiners’ underlying epistemology: what they believe constitutes knowledge, and therefore how students need to demonstrate their knowledge. The form of assessment should also be influenced by the knowledge and skills that students need in a digital age, which means focusing as much on assessing skills as on assessing knowledge of content. Thus continuous or formative assessment will be as important a consideration as summative or ‘end-of-course’ assessment.
There is a wide range of possible assessment methods. I have selected just a few to illustrate how technology can change the way we assess learners in ways that are relevant to a digital age:
1. No assessment
A question to be considered is whether there is a need for assessment of learning in the first place. There may be contexts, such as a community of practice, where learning is informal, and the learners themselves decide what they wish to learn and whether they are satisfied with what they have learned. In other cases, learners may not want or need to be formally evaluated or graded, but do want or need feedback on how they are doing with their learning: ‘Do I really understand this?’ or ‘How am I doing compared to other learners?’
However, even in these contexts, some informal methods of assessment by experts, specialists or more experienced participants could help other participants extend their learning by providing feedback and indicating the level of competence or understanding that a participant has achieved or has yet to accomplish. Lastly, students themselves can extend their learning by participating in both self-assessment and peer assessment, preferably with guidance and monitoring from a more knowledgeable or skilled instructor.
2. Computer-based multiple-choice tests
This method is good for testing ‘objective’ knowledge of facts, ideas, principles, laws, and quantitative procedures in mathematics, science and engineering, and is cost-effective for these purposes. This form of testing, though, tends to be limited for assessing higher-level intellectual skills, such as complex problem-solving, creativity and evaluation, and is therefore less likely to be useful for developing or assessing many of the skills needed in a digital age.
3. Written essays or short answers
This method is good for assessing comprehension and some of the more advanced intellectual skills, such as critical thinking, but it is labour intensive, open to subjectivity, and not good for assessing practical skills.
Experiments are taking place with automated essay marking, using developments in artificial intelligence, but so far automated essay marking still struggles to identify valid semantic meaning, especially at a higher education level. For more discussion of automated essay marking, see Chapter 9.4.
4. Peer assessment
This is a very large and specialised topic, which I touched on in Chapter 5, Section 4.6.2. There are three main advantages of peer assessment:
- if conducted properly, it can be of excellent pedagogical benefit to student learning, as it requires students to think critically about what they have learned in order to judge other students’ work. It enables them to see other students’ perspectives on the concepts and ideas, thus widening and deepening their understanding;
- it enables learner support to be scaled up, allowing instructors to handle larger numbers of students;
- it develops a core 21st century skill of peer evaluation that will be critical when working in a digital society.
However, if not done properly, peer assessment can have disastrous consequences. I am not a specialist in this area, but I have used peer assessment in online learning, although only at a graduate level. These are some of the lessons I learned:
- There must be an intrinsic benefit to students doing the assessment. They must see how this will be useful to their own learning.
- The instructor must give clear criteria or rubrics for assessment, preferably with examples of good or poor answers.
- Students should be rewarded either with marks or praise by the instructor for excellent peer reviews.
- Students must know that the instructor will not only monitor the peer assessments but will also take responsibility for final decisions on student-awarded grades or marks, and will over-rule poor assessments by students.
- Don’t put all your eggs in one basket. It is wise to have a parallel or independent method of assessment, such as multiple-choice tests or having half the total course assessment done in more traditional ways.
Thus there are best practices that must be followed. Anyone intending to use peer assessment should prepare properly by looking carefully into the literature; Macdonald (2015) and Topping (2018) offer guides for teachers. For an example of the successful use of peer assessment at a post-secondary level, see Peer Evaluation as a Learning and Assessment Strategy at the School of Business at Simon Fraser University.
5. Project work
Project work encourages the development of authentic skills that require understanding of content, knowledge management, problem-solving, collaborative learning, evaluation, creativity and practical outcomes. Designing valid and practical project work needs a high level of skill and imagination from the instructor, and the assessment process can be labour-intensive, but project work is one of the best ways to assess the high-level skills needed in a digital age.
‘Assessing student project work’ by Melinda Kolk on The Creative Educator web site provides excellent guidelines on assessing student project work. Although intended for K-12 teachers, it is also very appropriate for post-secondary educators.
6. e-Portfolios (an online compendium of student work)
E-portfolios enable self-assessment through reflection, knowledge management, the recording and evaluation of learning activities such as teaching or nursing practice, and the recording of an individual’s contribution to project work (as an example, see the use of e-portfolios in Visual Arts and Built Environment at the University of Windsor). E-portfolios are usually self-managed by the learner, but can be made available or adapted for formal assessment purposes or job interviews.
7. Simulations, educational games (usually online) and virtual worlds
These enable the practice and evaluation of skills, such as:
- complex and real-time decision-making;
- operation of (simulated or remote) complex equipment;
- the development of safety procedures and awareness;
- risk-taking and decision-making in a safe environment;
- activities that require a combination of manual and cognitive skills (see the training of Canadian Border Service officers at Loyalist College, Ontario).
Simulations and serious or educational games (discussed more extensively in Chapter 9.2) are currently expensive to develop, but are cost-effective with multiple use, where they replace the use of extremely expensive equipment, where operational activities cannot be halted for training purposes, or where they are available as open educational resources. Because students’ actions and decision-making are recorded, authentic assessment is embedded in the process.
8. Use of online proctoring
Online proctoring uses cameras and software to monitor the online performance of students when taking examinations away from a school or campus. These tools are offered by commercial companies such as ProctorU or Proctorio. Their use became particularly popular during the Covid-19 pandemic when students had to study from home. For an overview, see Kimmons and Veletsianos, 2021.
There are various ways these tools work, but most involve software that tracks students’ online activity – such as using a search platform during the exam – and are often combined with a camera or cameras that record a student’s activity during the online exam.
Online proctoring has received a lot of criticism for being intrusive and violating student privacy (see, for instance, Balash et al., 2021). However, the need for online proctoring can be avoided by using an assessment strategy other than tests or even essay writing. Students’ online work through a learning management system is automatically recorded and can be reviewed over time by an instructor, making continuous assessment possible. Students can be asked to create electronic portfolios of their work to demonstrate the application of their learning in real-world contexts through video and other forms of recording. Nevertheless, online proctoring remains another possible tool for monitoring more traditional exams.
6.8.4 In conclusion
Nothing is likely to drive student learning more than the method of assessment. At the same time, assessment methods are rapidly changing and are likely to continue to change. Some of these assessment methods are both formative, helping students to develop and increase their competence and knowledge, and summative, assessing knowledge and skill levels at the end of a course or program. In a digital age, assessment and teaching will become even more closely integrated and continuous. There is an increasing range of digitally based tools that can enrich the quality and range of student assessment. Therefore the choice of assessment methods, and their relevance to the other components, are vital elements of any effective learning environment.
Balash, D. et al. (2021) ‘Examining the Examiners: Students’ Privacy and Security Perceptions of Online Proctoring Services’ Proceedings of the Seventeenth USENIX Symposium on Usable Privacy and Security, August 9
Earl, L. (2003) Assessment as Learning Thousand Oaks CA: Corwin Press
Kimmons, R. and Veletsianos, G. (2021) ‘Proctoring Software in Higher Ed: Prevalence and Patterns’ EDUCAUSE Review, February 23
Macdonald, B. (2015) Peer Assessment that Works: A Guide for Teachers Lanham MD: Rowman and Littlefield
Topping, K. (2018) Using Peer Assessment to Inspire Reflection and Learning London UK: Routledge
Activity 6.8 What assessments work in a digital age?
- Are there other methods of assessment relevant to a digital age that I should have included?
- There is still a heavy reliance on computer-based multiple-choice tests in much teaching, mainly for cost reasons. However, although there are exceptions, I would argue that in general these really don’t assess the high-level conceptual skills needed in a digital age. Do you agree?
- Are there other methods that are equally as economical, particularly in terms of instructor time, that are more suitable for assessment in a digital age? For instance, do you think automated essay grading is a viable alternative?
- Would it be helpful to think about assessment right at the start of course planning, rather than at the end? Is this feasible?
- In Scenario D, ‘Developing historical thinking‘, did the instructor use assessment to help develop and assess the skills needed in a digital age in an effective manner? If so, how and if not, why not?
For my comments on this activity, click on the podcast below: