As a rule I'm too busy to blog regularly, but a few people have asked about
the assessment levels on LearningComputing.co.uk, so this is a
quick summary of where they came from and how I implement them.
Why the change?
When the curriculum changed a couple of years ago I decided I
wasn't happy with lots of the alternative level descriptions being put forward.
Most of them seemed very content heavy and focused on specific knowledge. To my
mind level descriptors like this can read like a league table of facts. In one
example I found, “Knows what a router is” appeared at Level 4 while “Knows what a
relational database is” appeared at Level 6. Why? Is it easier to know that a
router is used to connect two networks together than to know that a relational
database has multiple tables linked by common fields? Is one piece of
information more important or more useful than the other? I think the answer to
all these questions is no, in which case why should a student receive more
credit for learning one fact than the other?
Where do the levels come from?
My solution to this problem was to go back and look at Bloom's
Digital Taxonomy. In that taxonomy both of the “Knows” statements given
earlier would sit at the base, in the “Remembering” section,
which is clearly labelled a “Lower Order Thinking Skill”.
Importantly, Bloom's
Digital Taxonomy allows for a fairly sensible comparison of a student’s
performance across very different topics. This in turn makes it much easier for
students to demonstrate progress across the year. For example, a student who
“Knows the names of hardware e.g. hubs, routers, switches” can be said to be at
an equivalent level to a student who “Knows what a relational database is”, but
behind a student who can “Explain the need for a router” or a student who can
“Explain the need for a relational database”.
The levels on LearningComputing.co.uk are all
related to Bloom's Digital Taxonomy but with the key words changed to Define
(Remember), Explain (Understand), Apply, Link (Analyse), Innovate (Create). I
changed the key words partly to reflect common exam questions (define &
explain), partly because I like the idea of innovation being the highest order
skill and partly because the acronym (DEAL!) is easy for students to remember.
Applying the levels
For every unit a student takes they will get two levels: one
for a piece of class work and one for their final assessment. To mark the class
work I pick the best blog post they have produced (see “blogs instead of
exercise books”) and level it against the objectives for that lesson. This
level is shared with the students via a comment on their blog which
always follows the same format:
STRENGTH - Something the student has done well, followed by their level.
TARGET - A broad description (related to DEAL!) of what the student needs
to do to reach the next level.
ACTION - A specific activity the student should attempt in order to reach
the level described in their target.
RESPONSE - An instruction telling the student to respond to my
comment, explaining what they have done (nearly always their action) and the level
they now think they are working at as a result (hopefully their target).
I also often try to add a separate post related to literacy (usually about capital ‘I’).
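A typical comment (this one is invented purely for illustration, not lifted from a real blog) might read:

STRENGTH - You have clearly explained how your quiz uses a variable to keep score. Level 4a.
TARGET - To reach Level 5 you need to Apply your knowledge, not just Explain it.
ACTION - Add a high score feature to your quiz and write a new blog post describing how it works.
RESPONSE - Reply to this comment saying what you changed and the level you now think you are working at.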
The penultimate lesson of the unit (before the assessment) is
always the “Feedback” lesson. In this lesson students read the comments posted
by their teacher, carry out the action and make their response. The aim of
having the “Feedback lesson” prior to the assessment is that students can use
the feedback they receive on their class work to help them prepare for their
assessment.
The second level a student receives is based on their assessment.
To be honest these are hard to calibrate (particularly in exams). After trying
many different techniques, the best fit I’ve found with target grades/classwork
levels is to set an exam containing a mix of Define, Explain, Apply, Link and Innovate
questions, then use the overall number of marks to set grade boundaries. These
boundaries seem to give fairly consistent results across different classes.
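As a rough illustration of what I mean (the boundary marks and levels below are made up for this post, not the ones I actually use), the mapping from a mark total to a level is essentially just:

# Hypothetical grade boundaries - illustrative only, not my real ones.
BOUNDARIES = [
    (42, "7b"),  # some credit gained on Innovate questions
    (34, "6b"),  # Link questions answered well
    (26, "5b"),  # Apply questions completed
    (18, "4b"),  # Define and Explain questions secure
    (10, "3a"),  # mostly Define questions answered
]

def level_from_marks(total_marks):
    """Return the highest level whose boundary the mark total reaches."""
    for minimum, level in BOUNDARIES:
        if total_marks >= minimum:
            return level
    return "working towards 3a"

print(level_from_marks(28))  # prints 5b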
Finally, we moderate a couple of pieces of students’ work during
department meetings to try and ensure consistency across the department.
Both these levels are then recorded on our tracking sheet.
Problems I've encountered
There are a couple of
problems with this method (feel free to post suggestions as comments). Boys in
particular would rather apply their knowledge to create a Scratch game than
explain how the game works. This means that targets are often retrospective,
e.g. “STRENGTH: Excellent! You've created a very complex game! This could be a
Level 5a IF you can explain how it works.”
The other big problem is in
relating technical skills to each other for Apply. For example, creating a game
in Scratch is much easier than creating a similar game in Python. The best
solution I can come up with here is to apply arbitrary limits to
the level a student can get in a particular module. A student who creates a
simple guessing game in Python could get a 5b, but a student would have to
create a pretty complex maze game in Scratch to get the same level.
This isn't a great solution but at least marks across the school for
a particular unit are consistent.
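For context, the kind of “simple guessing game” I have in mind is roughly the following (a made-up example of typical student Python, not a model answer):

# A hypothetical example of the sort of guessing game a student might write.
import random

secret = random.randint(1, 20)
guesses = 0
guessed = False

while not guessed:
    guess = int(input("Guess a number between 1 and 20: "))
    guesses = guesses + 1
    if guess < secret:
        print("Too low!")
    elif guess > secret:
        print("Too high!")
    else:
        print("Correct! You took", guesses, "guesses.")
        guessed = True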