This page contains a summary of the questions and answers discussed at UCL's Assessment and Feedback Special Interest Group mythbusting panel, 24th June 2015, 2.00-3.30pm. Participants posed questions in advance using Moodle Hot Questions, and these were considered by the panel members.
The Assessment Regulations are being rewritten to give more support for staff proposing new forms of assessment including, for example, capstone group work projects.
The Assessment Regulations require summative assessment worth 20% or more to be second marked.
The Registry has in the past required students to submit independent work - this is no longer the case.
It is fine for a high proportion of a module assessment to be group work as long as this is part of a balanced assessment diet across the programme.
Given the importance of group work in the workplace, some element of assessed group work may be desirable.
Staff have a choice between allocating a common mark to all group members, individual marks recognising individual contributions, or a combination of the two; the important thing is to make the marking scheme transparent in advance and make the case for it to the students.
The key thing is to discuss with the external examiner what to do. Perhaps this will be audio recording and slides which the external can sample. Perhaps there will be two people assessing at once, depending on departmental capacity.
See UCL's Marking Policy - the point of principle is that no final module mark should be based on the opinion of one person, but this allows for sampling rather than full second marking.
Double blind second marking for everything is very onerous; one approach is to double blind second mark a sample and discuss those as a standardisation procedure (and a chance to evaluate the marking criteria).
Local practices may not have foundation in the UCL regulations - if you want to negotiate a change, refer to those as well as the disciplinary norms.
Well designed MCQs can be a valid form of assessment in some circumstances; there are no regulations for or against multiple choice examinations.
Validity means that the assessment measures whether students have learned what they were intended to learn. Good MCQ design captures the right level of understanding, problem-solving, &c.
Whether or not there is need for second marking depends on the MCQs; if there is an objectively correct answer which has been validated in advance there is no need for second marking.
For paper exams, UCL has an optical mark reading service called UCL OMR. However, this is expensive to resource and tedious for the people administering it.
Moodle offers a wide range of question types and is used in some UCL modules in invigilated circumstances.
In the world of work students will be constantly judged; peer assessment encourages good behaviours and promotes good skills and sophisticated judgement.
It's doubtful that students would accept 100% of the module mark being based on peer assessment; we are still discussing appeal processes and student eligibility to mark in a standardised equitable way.
It is crucial to make the case to the students about the value of peer marking.
Students need to be guided, developed and supported; formative peer marking is invaluable to developing students’ understanding of the criteria.
Second marking has a role in validating the marking.
At the heart of the debate is the question, who is the examiner? The examiner is the person or people who take ownership of the overall process.
There is no hard and fast rule; we need to think flexibly about it.
The aggregate mark might be a straightforward split between peer and academic mark, but it might be a collaborative, negotiated mark - the endpoint of a dialogue between academics as module leaders and peer markers.
Does the student have the right to do this? If staff want to do this, do they need to seek explicit consent, or can we require students to do this up front before they begin the programme or module?
Our contract with students states that students' work is their own intellectual property, i.e. it is owned by students not UCL; therefore students' explicit consent needs to be given and they need to be cited.
This is a matter of respect: we need to think of students as potential partners in academic endeavour, and seeking consent respects that relationship.
If sharing work is an expectation of participation in the learning community, then ensure the students are aware before they commit.
However, it is doubtful that this could be a legally upholdable or ethical condition of acceptance onto the programme or module. Compelling students would not constitute true consent, both because it requires students to sign away their IP rights before the IP has been created, and because of the power imbalance between the student and the institution imposing the requirement.
Consent can be negotiated within the learning community, at the time of producing the materials in question.
Moodle Glossary allows individuals or groups to author definitions of words - it seems tailor-made for this purpose.
The TESTA audit tool is helpful here; there are strong indications that UCL is generally over-assessing students.
Managing the assessment packages in Portico reveals extreme over-assessment in some programmes.
Summative assessment can squeeze out opportunities for formative assessment.
There are variations in local practice, but since assessment is perceived to exert so much influence on students' attention, efforts, and esteem of the course, reducing the amount of assessment cannot be unilateral - it needs to be a multilateral decision across the modules of a programme.
One reason given for over-assessment is that we don't really trust assessment, and - as with any experiment we don't trust - we seek to replicate the results by repeating it. But Tansy Jessop (keynote at the UCL Teaching & Learning Conference) experimented with removing individual pieces of assessment and found that 117 could be reduced to around 20 before the reductions began to make a difference to the grade.
There won't be regulation about this, since although there is plenty of evidence to support reducing assessment, the measures will be discipline-specific.
Still to discuss: can we have assessment norms (i.e. not rules)? Can they be based on student time, as a proportion of the overall time they have available?
As practice and engagement with the module there may be a strong case for making formative assessment compulsory, in which case it could be a condition of completion.
However, converting summative assessment to formative-only does not in itself address the problem of over-assessment.
In assessing e.g. portfolios flexibility is fine but the research flags up the importance of clarity about how the work should be structured, the components, how they should fit together and what each is worth.
This is probably a case of marker time rather than amount of material; we may want to ask students to direct markers' attention to particular areas.
In some cases assessment may be competency-based; this is fine within a varied assessment diet.
We need to be clear about our rationale for any assessment, including why it needs to be summative.
Some forms of assessment can be cumulative over an extended period of time, giving students plenty of opportunities to demonstrate competence.
There are precedents at UCL e.g. group project reports in Engineering, but those specify who did what in the report.
As part of a balanced assessment diet this is fine.
Moodle is a safe place to manage assessment records.
ISD has a project to ease the transfer of student information from Moodle to Portico.
See the UCL Retention Schedule and if you have any questions, the Records Managers are happy to help.
There are circumstances in which this could work.
We need to think about the qualities of a supervisor. It may be that students doing their first piece of serious, high-stakes research should be supported by a more experienced academic whom they look up to. Perceptions are important.
Postgrads and postdocs may need a trajectory of involvement in supervision; they need to be suitably trained, aware and sensitised to good practice in this area. If we think that simple allocation to a job role inevitably leads to competence in that role, we're living in fantasy land.
Postgrads and postdocs involved in assessment should undertake an Arena Programme.
There's a further question about who is the examiner and whether postdocs can be examiners - the regulations will be clarified after thinking that through.
If the work is for a credit-bearing assessment and we want students to develop it further after it has been assessed e.g. to respond to feedback, can they do this?
There is a period of work which stops at a certain point after which assessment happens; there does need to be a definitive record but this can be a snapshot stored separately freeing up the original work for continuing development.
Discuss this with Digi Ed.