I've been asked to be part of the Observation of Learning and Teaching team at my workplace, alongside acting as mentor for some new teachers. I love watching other teachers strut (or fail to strut) their stuff in class - it's always fascinating to see different group dynamics, and an opportunity to pick up ideas for teaching that I may never have considered, or indeed be reminded of things that I've forgotten or not used in a while. Last year's observations for my DELTA were great, although I (probably fortunately) didn't see any car crash jobs. A shame in a way, as a lesson Where All Your Shit Goes Tits Up makes for excruciatingly fun viewing, as well as giving a salutary example inasmuch as a) things can go belly up for anyone and b) things going wrong are often a better teacher of technique than things going right. My favourite buggered-up lesson involved a teacher losing the plot, getting sidetracked, then eventually pretending to be a seagull before perching on a table, squawking.
Anyway, being an observer inevitably involves paperwork, and feedback. Oh, lovely feedback: that moment when someone smiles at you as you both sit down in a room lit by fluorescent strip lighting, smiles even more brightly and says, 'So, how did YOU think your lesson went?', then you say what you thought, and hope that you aren't going to get a Shit Sandwich* back.
A lot of these feedback sessions involve someone saying, 'on a scale of one to ten, how would you rate that?', or something similar. B.O.R.I.N.G.
I was thinking about this the other day, and started wondering whether you could scale things in a different way. After all, getting teachers to be imaginative can only be a good thing, right?
So why not ask them something like this?
'On a scale of Monday morning to Saturday night, how good was that lesson?'
When you start to think about it, this makes a lot of sense, in a leftfield kind of way. An observed lesson should, in theory at least, be a typical lesson - how the teacher delivers the class on average, rather than the manically prepared, coiffed, perfumed, shaven, dolled-up confection of a lesson that is the norm when you know that someone's coming in to take notes. I'd say, on this scale, anywhere between Thursday at 11.30 a.m. and Friday at 7 p.m. (with another bump on Saturday between 1 and 6.30 p.m.) means it's a good lesson. Anything before Thursday would be dull or below par; anything after Friday at 9.30 p.m. would be loud and overblown; and Saturday at 3 a.m. would be a lesson that's hoiking kebabs into the gutter.
Anyway, I'm going to give this a try, and devise some similar scales for marking student essays.
Probably something along the lines of
'On a scale of Beaker from the Muppets to Animal, how incomprehensible is this essay?'
*Shit Sandwich: the act of an appraiser giving you feedback where they start and finish with positive comments, filling the bit in between with a steaming pile of criticism, invective and bile, as in:
'Well, I liked your class file. Unfortunately, your lesson went down worse than a drunken whore, and you are, quite frankly, a stain upon the entire teaching profession. However, I was glad to see that you have a nice tie on.'
Friday, 2 April 2010
Mind the Gap.
Here's a question for you - what do the following all have in common?
- A cryptic crossword puzzle
- A cloze (aka gap-fill) exercise
- A sudoku puzzle
- A grammar exercise where you have to write the correct form of the verb
- An algebra exercise
- One of those team-building things where you have to work out how to cross a river using a piece of string and two dead dogs or something
- A reading task asking you to identify words and phrases in context that mean the same as a given set of synonyms
So why am I bringing it up?
The point is this: do the ones about language actually test a knowledge of language, or do they in fact only test an ability to solve a problem? In other words, it strikes me that many of the tasks in student workbooks are not real tests of language knowledge whatsoever, but exercises in learning skills.
Let me give you an example task.
Here are three rules.
If a sequence of numbers is 3 digits and ends in 9, follow it with 12.
If a sequence of numbers is 3 digits and ends in 7, change the 7 to 9 and add 12.
If a sequence of numbers is 4 or more digits, it must be preceded by 21.
And here are some sequences:
- 329
- 5437
- 777
- 919
- A427
- 3424245539
Now change the rules to those describing how to make comparative adjectives in English.
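To make the parallel concrete, here's a minimal sketch in Python (my own illustration - the post contains no code) that applies the three number rules to the sample sequences above. The exact output format and the handling of 'A427' are assumptions on my part; the point is simply that running the rules requires no knowledge of English whatsoever.

```python
# A sketch of the number task above, with the comparative-adjective rules it
# mirrors noted alongside. Output format and the treatment of non-numeric
# input (e.g. 'A427') are assumptions, not part of the original task.

def apply_rules(seq: str) -> str:
    if not seq.isdigit():
        return "no rule applies"          # 'A427' contains a letter
    if len(seq) == 3 and seq.endswith("9"):
        return seq + " 12"                # cf. short adjective + '-er'
    if len(seq) == 3 and seq.endswith("7"):
        return seq[:-1] + "9 12"          # cf. '-y' changed to '-ier'
    if len(seq) >= 4:
        return "21 " + seq                # cf. 'more' + adjective
    return "no rule applies"

for s in ["329", "5437", "777", "919", "A427", "3424245539"]:
    print(f"{s} -> {apply_rules(s)}")
```

Anyone who can follow those few lines of branching can 'do' the exercise, which is exactly the gap between problem-solving skill and language knowledge this post is pointing at.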
The simple fact is that many of the exercises we do with our students do not in fact test their understanding of language, but their cognitive and problem-solving capacities. Someone who can understand a logic problem, as long as the problem and a model solution are clearly presented, should be able to solve any given issue. Now, while it may be useful for someone to comprehend a given set of rules, it does not necessarily follow that that person is capable of using the language in a way that is comprehensible, simply because languages have a nasty habit of not following their own laws. This is why, whenever we do level testing, we should always look at a suite of abilities rather than rely on the good old grammar test before deciding a language level. It also explains, by the way, why EFL students tend to score higher than ESOL students on formalised language tests, which are generally problem-solving tasks.
Labels: cognitive tasks, EFL, English, ESOL, Learning, marking, problem solving
Tuesday, 15 July 2008
SATs, contractors and doing it on the cheap
Off the topic of ELT, but still educationally related - the SATs (Standard Attainment Tests), sat by 11- and 14-year-olds in the UK. There's been a bit of a kerfuffle, to put it mildly, over the delay in releasing the results this year. There's been more disgruntlement at schools, where teachers have been sending marked papers back because of inaccuracies in the scoring. So, what's going on?
The papers have been sent off for marking to a company contracted to do the job, presumably the cheapest one available.
Its name?
ETS Europe.
The marking of the SATs has been subcontracted to a company not based in the UK. In other words, British students' work is being checked and marked for accuracy somewhere other than the UK.
I can't begin to describe how shocking I find this. It's wrong on so many levels. Now, if it was just a case of a multiple choice paper being fed into a computer, I could accept that. If it was just checking the result of a maths paper, where you can only have a correct or a wrong answer, I could just about accept that too. But it seems (and honestly, I couldn't be more glad if I were wrong on this point) that the entire lot is being sent off. OK, you can make a point about objectivity in marking: the examiner will have a set number of descriptors against which he/she will check the submitted work, and based on that assign a mark. However, I can see so many ways that marking will be inaccurate.
Just a few examples:
Orthography. The way that UK kids are taught to write is significantly different from the way it is done in other EU countries. This is a fairly neat example: [handwriting sample], and it's written by an adult! Imagine an 11-year-old's handwriting being deciphered by an examiner.
Cultural and social milieu. Taken out of context, how can anything relating to a culture or society be accurately interpreted, let alone assigned a score in an exam?
Examiner's L1/L2 competency. No matter how good the examiner's English may be, they will be marking and interpreting at one remove - that is, they will have to decode the information, recode it into their L1, interpret it according to two sets of cultural and possibly sociolinguistic filters, then assign a mark and re-encode it into English. There is no way this can be done entirely fairly, as any examiner in this situation will apply affective filters in the process.
All I can say is that it's WRONG. Totally bloody WRONG.