(Adapted and edited from "Evaluation of E-Learning Courses" by Jara, M., Mohamad, F. and Cranmer, S. (2008), WLE Centre Occasional Paper 4, and Dale, V. (2014) "UCL E-Learning Evaluation Toolkit", with additional materials.)
The purpose of this wiki/resource is not to provide a detailed look at course evaluation methodologies, but to give an overview of, and lead-in to, some of the things that you should be thinking about with regard to course evaluation.
The main aim of any course should be to provide students (undergraduate and postgraduate, fee-paying and non-fee-paying) with the best possible teaching and learning experience, as they are the course's best ambassadors. Evaluation of the student experience should therefore be core to any course. It provides feedback on the course's strengths and weaknesses, as well as providing information to senior stakeholders about the course and its place in the wider context.
The evaluation of a course should, however, consider the collection of feedback from all stakeholders involved in the design and running of the course. This should include, in addition to students, the collection of data from tutors, administrators and technical support staff. You may also want to include in your evaluation the opinions of other individuals less closely related to your course.
Evaluations typically measure learner reactions to the course/resource (e.g. what did they like/dislike? How confident do they feel as a result of using it?). One way of framing this is Kirkpatrick's (1976) popular hierarchy of evaluation. Other measures may look at whether learning has taken place, whether the learners are able to put the knowledge acquired into practice, and, at the highest level, whether it has a demonstrable impact on practice or employability.
Figure 1: Levels of evaluation by Kirkpatrick (1976)
However, Kirkpatrick's model has some fundamental limitations, and these imply potential risks for evaluation clients and stakeholders (Bates, 2004). First, Bates argues, the model focuses on outcome data collected after the training, and therefore suggests that pre-course measures of learning or job performance are not essential for determining the effectiveness of the course. Secondly, it assumes that each higher level is more informative than the lower levels, although there is no suitable basis for this assumption in practice. Thirdly, the model implies that far fewer variables are of concern than is actually the case, and therefore does not account for the complex network of factors involved in the training process. The model remains popular because it defines training evaluation in a systematic way and treats results in the workplace as the most valuable information, making it a good fit for training professionals with a competitive, profit orientation.
Evaluation is a form of research, and while there are numerous research designs, you will find that the ones used for evaluation fall into four broad categories.
Post-intervention research design: The most commonly used design for evaluation, which uses a post-test to obtain feedback after the course.
Pre-post research design: A frequently used design in which you take a set of measures before the course and then again at the end of it. The disadvantage is that the longer the gap between the pre and post measures, the harder it is to attribute any changes to what you are hoping to evaluate.
Test-control group research design: Comparison of a test group (e.g. those who have taken the course) against a comparable group who have not had the intervention (e.g. the course) or have done something different. Ideally, the control group should be a group of individuals who have not experienced any intervention.
Longitudinal research design: Good for evaluating the long-term impact of an intervention. Learners are evaluated at predefined points in time before, during and after the intervention, using the same measures each time.
Questionnaire
Survey tool that may incorporate different types of questions (e.g. yes/no, multiple choice, Likert-type scale and open questions). Usually largely quantitative.
Interview
Structured, semi-structured or unstructured conversation with a single participant about their views and experiences.
Focus group
Typically a semi-structured conversation with a small group of participants. Here, the participants are encouraged to reflect on each other’s contributions.
Group interview
Similar to a focus group, except that the interviewer is in sole control of the conversation; typically, each participant will be invited to provide a response to each question rather than engage in group discussion as they would in a focus group.
Observation
An ethnographic research method in which the researcher observes the participants. The researcher may be overt (visible to the group as an outsider) or covert (an insider within the group).
System logs
System logs (within Moodle, for example) will provide information about which students accessed which resources, when they accessed them and for how long. This method, when applied to large data sets, forms the basis of learning analytics, where big data may be used to predict students’ performance.
Document analysis
Content analysis may be performed on a document or collection of documents produced by stakeholders. These may include personal memos or narratives, institutional or departmental strategies, course documentation, and may exist in a variety of formats (text, images, video).
Performance data
Learners’ performance data can be correlated with specific interventions to determine whether the intervention has had a significantly positive effect on performance.
Social network analysis
Analysis and visualisation of online network connections in social media.
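Several of the methods above, system logs and learning analytics in particular, ultimately reduce to counting and summarising log records. As a minimal sketch, assuming a simple comma-separated log of date, student and resource (the format and all values are invented for illustration; real VLE logs are richer):

```python
# Sketch: counting resource accesses per student from hypothetical VLE logs.
# The log format (date,student,resource) is an assumption for illustration.
from collections import Counter

log_lines = [
    "2024-01-10,alice,week1_notes",
    "2024-01-10,bob,week1_notes",
    "2024-01-11,alice,quiz1",
    "2024-01-12,alice,week2_notes",
]

# Count how many resource accesses each student has made.
accesses_per_student = Counter(line.split(",")[1] for line in log_lines)
print(accesses_per_student.most_common())
```

At scale, the same kind of aggregation (per student, per resource, per week) is the starting point for the learning-analytics dashboards mentioned above.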
While traditionally with face-to-face courses the collection of feedback from staff (tutors, administrators and support staff) has commonly been through team meetings, it is not always feasible to do this for e-learning courses, where staff may be geographically very distant from each other, and some staff might be part-time, further limiting their ability to attend any meetings. One strategy, taken by Swinglehurst (2006), was to arrange for tutors on an online course to meet once a month and analyse specific teaching episodes described and presented by one of the tutors. The course teams found these structured meetings valuable: they allowed them to analyse their teaching practice, learn from others' experiences and practices, and agree on changes and improvements.
Evaluation of student feedback should be an integral part of the activities of the course, and include collection of feedback during the run of the course as well as at the end of it. A simple but effective approach developed by Daly et al. (2006) was to include evaluation activities as part of the course design, encouraging students at pre-determined moments to reflect on their learning experience and how the design/materials/activities had been supportive (or not).
This was implemented by posing a question to the students to prompt their feedback, online through discussion forums or in face-to-face groups, depending on what formats the course delivery allowed. The question needed to be carefully designed to be sufficiently open that it allows students to express their particular concerns and issues. Examples of such questions can be found in App 4. An alternative is the use of online learning diaries that run through the course, in which students are encouraged through brief questions to post their thoughts on the learning process and how the course has supported them.
The main benefits of obtaining feedback from students during the course are the possibility of identifying the issues students are having difficulties with while they are actually experiencing them, and the opportunity to explore students' experiences of the course.
Many courses use the simple but effective strategy of using an end of course questionnaire to get feedback on a wide range of aspects of the course. Because such questionnaires constitute part of the internal quality assurance mechanisms of most higher education systems it is possible to find a wide range of options regarding questionnaires, questions, modes of application, etc. However, research suggests that the effectiveness of the student questionnaire is highly affected by the online features of the course (Jara, 2007).
Aspects to carefully consider to overcome potential difficulties when building an end-of-course questionnaire are:
Closed/open questions, number of questions, relevance, topics covered and language used. There is no 'one' best way as it depends on what you aim to evaluate and what your student body is like.
Mode of application:
Depending on the course modality (fully online/blended) you should consider the most efficient way to collect feedback from students, e.g. an online or a paper-based questionnaire. Each has its benefits and limitations.
Although questionnaires are the most common strategy for collection of student feedback, other strategies such as focus groups are also very effective and easy to implement, particularly for blended courses where face-to-face sessions are planned.
There are a number of ways in which feedback can be collected from students and tutors, both face-to-face and online, such as focus groups, questionnaires, team meetings and online discussion spaces. In addition, e-learning courses usually offer the possibility of collecting data from the computer logs.
Basic statistics such as last login date, number of messages sent by users, areas of content and discussion boards/forums visited by users are examples of the ongoing monitoring that tutors could easily carry out within a VLE.
These statistics do not provide indications of the quality of the student/tutor participation or of a satisfactory online experience. They are however a very useful tool for monitoring online presence, to obtain an overall picture of the ongoing activity, as well as to detect problems that users may be experiencing in accessing/participating in the online environment.
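One concrete use of such monitoring statistics is flagging students who may be struggling to access or participate in the online environment. A minimal sketch, with invented names, data and an arbitrary threshold:

```python
# Sketch: flagging students who may be struggling to access the course.
# Names, figures and the 7-day threshold are invented for illustration.
days_since_last_login = {"alice": 1, "bob": 14, "carol": 3, "dan": 21}
INACTIVE_THRESHOLD_DAYS = 7

# Students whose last login is older than the threshold, sorted by name.
at_risk = sorted(name for name, days in days_since_last_login.items()
                 if days > INACTIVE_THRESHOLD_DAYS)
print(at_risk)  # students to follow up with
```

As the surrounding text notes, a long absence from the VLE does not by itself indicate a poor learning experience; a flag like this is a prompt for a human follow-up, not a judgement of quality.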
Evaluating e-learning requires all aspects of the course and its components to be reviewed with the aim of identifying strengths and weaknesses, and methods of improvement. Rather than over-concentrating on specific aspects of the course, the literature suggests approaching evaluation holistically, including the learning and teaching processes and the specific e-learning aspects, such as the technology and its support (CAP, 2006).
There is a wide range of aspects that could be included in an evaluation of e-learning and these depend on the context and on the objectives and audience of the evaluation (CAP, 2006).
There are a number of relevant issues that should be considered.
There are also different evaluation questions which arise at different points in the life cycle of the course. These are covered in the Appendix: Evaluation of Online Courses.
Bates, R. (2004) "A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence." Evaluation and Program Planning, 27, 341-347.
Daly, C., Pachler, N., Pickering, J. and Bezemer, J. (2006) "A study of e-learners’ experiences in the mixed-mode professional degree programme, the Master of Teaching." Project Report: Executive Summary. Available at: http://www.cde.london.ac.uk/support/awards/file3272.pdf (last accessed 23 April 2008).
Jara, M. (2007) "Assuring and Enhancing the Quality of Online Courses: Exploring Internal Mechanisms in Higher Education Institutions in England." Unpublished PhD Thesis. UCL Institute of Education, University of London, London.
Kirkpatrick, D. L. (1976) "Evaluation of training." In R. L. Craig (Ed.), Training and Development Handbook: A Guide to Human Resource Development. New York: McGraw Hill.
Swinglehurst, D. (2006) "Peer Observation of Teaching in the Online Environment: an action research approach." Available at: http://www.cde.london.ac.uk/support/awards/file3281.pdf (last accessed 23 April 2008).