The last of the key ‘fundamentals’ of quality teaching and learning in a digital age is evaluation and innovation: assessing what has been done, and then looking at ways to improve on it (for a more in-depth discussion of the issues involved in evaluating online learning, see Gunawardena et al., 2000).
13.11.1 Course evaluation
13.11.1.1 Why evaluation is important
There are several reasons why evaluation matters. For tenure and promotion, it is important to be able to provide evidence that your teaching has been successful. New tools and new approaches to teaching are constantly becoming available; they provide the opportunity to experiment a little to see if the results are better, and if we do that, we need to evaluate the impact of using a new tool or course design. It’s what professionals do. But the main reason is that teaching is like golf: we strive for perfection but can never achieve it. It is always possible to improve, and one of the best ways of doing that is through a systematic analysis of past experience.
13.11.1.2 What to evaluate: summative
In Step 1, I defined quality very narrowly:
teaching methods that successfully help learners develop the knowledge and skills they will require in a digital age.
It will be clear from reading this book that I believe that to achieve these goals, it will be necessary to redesign most courses and programs. So it will be important to know whether these redesigned courses are more effective than the ‘old’ courses. One way of evaluating the new courses is to compare them with the older versions, for instance:
- completion rates will be at least as good if not better for the new version of the course(s)
- grades or measures of learning will be at least as good if not better for the new version.
These first two criteria are relatively easily measured in quantitative terms. We should be aiming for completion rates of at least 85 per cent, meaning that of every 100 students who start the course, at least 85 complete it by passing the end-of-course assessment. (Unfortunately, many current courses fail to achieve this rate, but if we value good teaching, we should be trying to bring as many students as possible to the set standard.)
The second criterion is to compare the grades. We would expect at least as many As and Bs in our new version as in the old classroom version, while maintaining the same (hopefully high) standards or higher.
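As a rough sketch of how these first two criteria might be checked, the following uses invented enrolment and grade figures (the numbers are illustrative only, not from any real course):

```python
from collections import Counter

# Hypothetical data for illustration: grades of students who passed or
# failed, out of 100 who started each version of the course.
old = {"started": 100, "grades": Counter(A=20, B=30, C=28, F=5)}  # 78 passed
new = {"started": 100, "grades": Counter(A=24, B=33, C=27, F=3)}  # 84 passed

def completion_rate(course):
    """Percentage of starters who passed the end-of-course assessment."""
    passed = sum(n for grade, n in course["grades"].items() if grade != "F")
    return 100 * passed / course["started"]

for label, course in [("old", old), ("new", new)]:
    g = course["grades"]
    print(f"{label}: completion {completion_rate(course):.0f}%, "
          f"As and Bs: {g['A'] + g['B']}")
```

In this invented example the new version meets both criteria: completion rises from 78 to 84 per cent, and the number of As and Bs rises from 50 to 57, assuming the grading standard itself has been held constant.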
However, to be valid, the evaluation would also need to define the knowledge and skills within a course that meet the needs of a digital age, then measure how effectively the teaching developed them. Thus a third criterion would be:
- the new design(s) will lead to new and different learning outcomes that are more relevant to the needs of a digital age.
This third criterion is more difficult, because it suggests a change in the intended learning goals for courses or programs. This might include assessing students’ communication skills with new media, or their ability to find, evaluate, analyze and apply information appropriately within the subject domain (knowledge management), which have not previously been (adequately) assessed in the classroom version. This requires a qualitative judgement as to which learning goals are most important, and this may require endorsement or support from a departmental curriculum committee or even an external accreditation body.
With a new design, and new learning outcomes, it may be difficult to reach these standards immediately, but over two or three years it should be possible.
13.11.1.3 What to evaluate: formative
However, even if we measure the course by these three criteria, we will not necessarily know what worked and what didn’t in the course. We need to look more closely at factors that may have influenced students’ ability to learn. We have laid out in steps 1-8 some of these factors. Some of the questions for which you may want answers are as follows:
- Were the learning outcomes or goals clear to students?
- What learning outcomes did most students struggle with?
- Was the teaching material clear and well structured?
- Were the learning materials and tools students needed easily accessible and available 24 x 7?
- What topics generated good discussion and what didn’t?
- Did students draw appropriately on the course materials in their discussion forums or assignments?
- Did students find their own appropriate sources and use them well in discussions, assignments and other student activities?
- Which student activities worked well, and which badly? Why?
- Of the supplied learning materials, what did students make most and least use of?
- Did the assignments adequately assess the knowledge and skills the course was aiming to teach?
- Were the students overloaded with work?
- Was it too much work for me as an instructor?
- If so, what could I do to better manage my workload (or the students’) without losing quality?
- How satisfied were the students with the course?
- How satisfied am I with the course?
I will now suggest some ways these questions can be answered without creating a huge amount of extra work.
13.11.1.4 How to evaluate factors contributing to or inhibiting learning
There is a range of resources you can draw on to do this. Indeed, there are more resources for evaluating online learning than for evaluating traditional face-to-face courses, because online learning leaves a traceable digital trail of evidence:
- individual student participation rates in online activities, such as self-assessment questions, discussion forums, podcasts;
- qualitative analysis of the discussion forums, for instance the quality and range of comments, indicating the level or depth of engagement or thinking;
- student e-portfolios, assignments and exam answers;
- student questionnaires;
- focus groups;
- student grades.
However, before starting, it is useful to draw up a list of questions as in the previous section, and then look at which sources are most likely to provide answers to those questions.
At the end of a course, I tend to look at the student grades and identify which students did well and which struggled. This depends of course on the number of students in a class; in a large class I might sample by grade. I then go back to the beginning of the course and track their online participation as far as possible (learning analytics make this much easier, although it can also be done manually if a learning management system is used). I find that some factors are student-specific (e.g. a gregarious student who communicates with everyone) and some are course-specific, for example, related to learning goals or the way I have explained or presented content. This qualitative approach will often suggest changes to the content, or to the way I interact with students, for the next version of the course. I may, for instance, decide that next time I will manage more carefully students who ‘hog’ the conversation.
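The first step of this approach, relating participation to final grades, can be sketched in a few lines. The records and field names below are invented for illustration; a real LMS export would have its own format:

```python
import statistics
from collections import defaultdict

# Hypothetical per-student records, as they might come from an LMS export.
# Field names and values are invented for illustration only.
records = [
    {"student": "s01", "grade": "A", "forum_posts": 42},
    {"student": "s02", "grade": "A", "forum_posts": 35},
    {"student": "s03", "grade": "C", "forum_posts": 11},
    {"student": "s04", "grade": "F", "forum_posts": 2},
    {"student": "s05", "grade": "F", "forum_posts": 4},
]

# Group a participation measure by final grade, to see whether students
# who struggled also participated less.
by_grade = defaultdict(list)
for r in records:
    by_grade[r["grade"]].append(r["forum_posts"])

for grade in sorted(by_grade):
    print(grade, statistics.mean(by_grade[grade]))
```

A pattern like this (in the invented data, A students average roughly ten times the forum posts of failing students) would prompt the closer qualitative look described above: is low participation a cause of struggling, a symptom, or a student-specific trait?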
Many institutions have a ‘standard’ student reporting system at the end of each course. These are often useless for evaluating courses with an online component, for two reasons. First, the questions asked need to be adapted to the mode of delivery, but because such questionnaires are used for cross-course comparisons, the people who manage them are often reluctant to allow a different version for online teaching. Second, because these questionnaires are usually completed voluntarily by students after the course has ended, response rates are notoriously low (often less than 20 per cent). Results from such low response rates are usually worthless, or at best highly misleading: students who dropped out of the course won’t in most cases even receive the questionnaire, so responses tend to be heavily biased towards successful students, when it is the students who struggled or dropped out that you most need to hear from.
I find small focus groups work better than student questionnaires, and for these I prefer either face-to-face meetings or synchronous tools such as Zoom. I will deliberately approach seven or eight specific students covering the full range of achievement, from drop-out to A, and conduct a one-hour discussion around specific questions about the course. If a selected student does not want to participate, I try to find another in the same category. If you can find the time, two or three such focus groups will provide more reliable feedback than just one.
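Selecting invitees across the full range of achievement is essentially stratified sampling. A minimal sketch, using an invented roster and invented outcome bands, might look like this:

```python
import random

# Hypothetical roster of (student id, final outcome), for illustration only.
roster = [("s01", "A"), ("s02", "A"), ("s03", "B"), ("s04", "B"),
          ("s05", "C"), ("s06", "C"), ("s07", "D"), ("s08", "drop-out"),
          ("s09", "drop-out"), ("s10", "B")]

def pick_focus_group(roster, per_band=2, seed=0):
    """Draw a small sample covering every outcome band, drop-outs included."""
    rng = random.Random(seed)
    bands = {}
    for student, outcome in roster:
        bands.setdefault(outcome, []).append(student)
    sample = []
    for students in bands.values():
        sample.extend(rng.sample(students, min(per_band, len(students))))
    return sample

print(pick_focus_group(roster))
```

The point of sampling per band rather than at random from the whole roster is the one made above: a simple random draw from a class dominated by passing students would rarely include the drop-outs whose feedback matters most.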
Usually I spend quite a bit of time at the end of the first presentation of a redesigned course evaluating it and making changes in the next version, usually working with a trusted instructional designer. After that I concentrate mainly on ensuring completion rates and grades are at the standard I have aimed for.
What I am more likely to do in the third or subsequent offerings is to look at ways to improve the course that are the result of new external factors, such as new software (for instance, an e-portfolio package), or new processes (for instance, student-generated content, using mobile phones or cameras, collecting project-related data). This keeps the course ‘fresh’ and interesting. However, I usually limit myself to one substantive change, partly for workload reasons but also because this way it is easier to measure the impact of the change.
It is indeed an exciting time to be an instructor. In particular, the constant evolution of mobile phone apps, new instructor-focused ‘lightweight’ LMSs such as Instructure/Canvas, open educational resources, new hardware such as VR headsets, MOOCs, and emerging technologies such as serious games, virtual and augmented reality and artificial intelligence all offer a wide variety of opportunities for innovation and experiment. These can either be integrated within the existing LMS and existing course structure, or designs can be more radical. Chapters 3 to 5 discuss a wide range of possible designs.
However, it is important to remember that the aim is to enable students to learn effectively. We do have enough knowledge and experience to be able to design ‘safe’, effective learning around standard LMSs. New is not always better. Thus for instructors starting in online learning, I would urge caution. Follow the experienced route, then gradually add and evaluate new tools and new approaches to learning as you become more experienced.
Lastly, if you do make an interesting innovation in your course, make sure you properly evaluate it as suggested above, then share these findings with colleagues and help them either include the innovation within their own course, or help them make the innovation even better through their own modifications. That way we can all learn from each other.
Gunawardena, C., Lowe, C. and Carabajal, K. (2000) ‘Evaluating Online Learning: Models and Methods’, in D. Willis et al. (eds.) Proceedings of Society for Information Technology & Teacher Education International Conference 2000, San Diego, CA
Activity 13.11 Evaluating your course or program
1. Design and conduct an evaluation of your course using the questions in section 13.11.1.3 and the data and methods suggested in section 13.11.1.4. What changes, if any, will you make as a result?
There is no feedback provided for this activity.