I’ve written about my usual SBG scheme here. It works well, and many students take advantage of being able to learn at a slightly different pace while still getting credit for what they know, once they know it. However, I’m interested in keeping small quizzes primarily in the formative domain while using an assessment tool that is based on clear learning objectives, re-testable, and flexible. This post describes a possible transition from using a few dozen learning objectives in quizzes to a new, larger-goal assessment tool.
This year was the first year that I had to put restrictions on the number of re-assessments students could take. In Term 1 I was inundated with re-test requests; I estimate they increased by somewhere between 300% and 400% this year. It was crazy. I had to stop offering re-tests for a while and then gradually bring them back. I was actually surprised that students didn’t complain about the pause or say that it was unfair. I think this was partly because they recognized that the system overall was quite fair.
This year brought additional challenges, of course, because of the pandemic and the move to online learning. Unlike many of my colleagues, I didn’t even attempt to give out anything that could be considered a summative assessment. In other words, nothing was for marks. As far as I could tell, this didn’t affect most students’ overall effort. I say this based both on my observations of student work and on a survey I gave students.
It was also pretty obvious to me that we had no way of accounting for summative assessments in the VSB secondary schools while online, since there was no chain of custody or invigilation. We would send out an assessment and at some point it would be returned; what happened between those two events is anyone’s guess. I happen to think that most students would be honest with their work, but that’s just a gut feeling.
Looking to next year, I was thinking that my typical SBG quizzes could still be done but kept formative. It then dawned on me that this is really the best use of them anyway. I’ve never liked having to turn a bunch of SBG quizzes into a grade. This of course leads to the next question: what evidence should be used for a grade?
Another possibility for determining a grade is to be deliberate about using “performance tasks.” When I think of performance tasks in physics, I think of assessment tools that are comprehensive and could be somewhat open-ended. A typical test could be a performance task, but these types of tests usually don’t work well with SBG: they are not linked to learning objectives, and the selection of questions and how they are graded can produce very unreliable data. To see how this can happen, look at page 12 of this presentation. When I run ProD workshops on this topic, I get the participants to grade this assessment using points. Without exception, in each workshop the assessment is scored with a range from 45% to 82%.
It then dawned on me that this could be the type of assessment tool I could use with my students. For learning intentions, there would be a big shift. Currently I divide the course into separate units and each unit has maybe 6 learning intentions. For example, for Constant Acceleration:
While I would keep these learning intentions for clarity and for practice with SBG quizzes, the SBG assessment would look more like this, where the main assessment goal is the performance task CA:
We would still do ongoing quizzes on kin.1 to kin.5, but they would be faster and part of entry or exit slips in class. Progress would still be recorded as before. For summative assessment, I would use one problem (a performance task) like the one written by Knight, shown above. The “score” would be based on how the student performed on that task.
My initial thought is that a student would have to repeat their performance twice in a row to lock in a grade. For example, if a student got an “Extending” on the CA task and then an “Applying” two weeks later, their score for CA would be Applying. An “Applying” in week 1 followed by an “Extending” in week 2 would also score as Applying. In that case, however, the student could ask for a re-test, and if they got an “Extending” that time, their score for the learning objective would be Extending. Overall grades would be determined using the proficiency scale for each unit/performance task, with SBG quizzes used as supporting evidence.
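The “two in a row” rule above can be sketched as a small function. This is only a minimal sketch of how I read the rule, not an actual gradebook implementation: the full list of proficiency levels (anything other than “Applying” and “Extending”) and the handling of a single, unrepeated attempt are my assumptions.

```python
# Assumed ordering of proficiency levels, low to high. Only the fact that
# "Applying" sits below "Extending" comes from the post; the rest is a guess.
LEVELS = ["Emerging", "Developing", "Applying", "Extending"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def unit_score(attempts):
    """Score a unit's performance task from a chronological list of attempts.

    Two identical results in a row lock in that level; otherwise the score
    falls back to the lower of the two most recent attempts.
    """
    if not attempts:
        return None
    if len(attempts) == 1:
        # The post doesn't say what a single attempt earns; treat it as a
        # provisional score until the student repeats the task.
        return attempts[0]
    prev, last = attempts[-2], attempts[-1]
    if prev == last:
        return last
    return min(prev, last, key=RANK.get)
```

Under this reading, `["Extending", "Applying"]` and `["Applying", "Extending"]` both score as Applying, while a successful re-test appended to the second sequence, `["Applying", "Extending", "Extending"]`, lifts the score to Extending.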
There are a couple advantages to this scheme:
- The SBG quizzes should be a lot faster because students don’t need to endlessly deliberate over whether or not they can do the question.
- The SBG quizzes still give me and the students good feedback on what they need to improve.
- I can give SBG quizzes online during COVID without worrying about cheating, copying, late submissions, etc.
- The grades will be based on overall understanding of a learning objective / unit goal through the performance tasks.
- The performance task is truly indicative of what I am interested in students being able to do.
- Doing each performance task twice spirals the curriculum and reinforces the idea that “learning is a transfer of knowledge to long-term memory,” where long-term means (perhaps) at least two weeks. You can’t just cram for an assessment once and claim to know it.
There are some challenges with this scheme:
- Even if the SBG quizzes are faster, is this too much time set aside for assessment?
- Since we have 6 units (I think?), each performance task is relatively high stakes.
- Will I be endlessly giving out re-tests to students so they can get two in a row? Will students give up on an objective if they get two “Applying” in a row, meaning that they will need to do at least two re-tests to increase their score?
So far this post has focused on assessment of the knowledge-based curriculum. I’m not ignoring curricular competencies, and I do assess them. What I’ve been leaning towards is setting up tasks that are linked to curricular competencies and assessing and scoring them the same way as other learning objectives. I no longer have separate categories for these tasks; the students’ grade lumps all the scores together. Perhaps I’ll write more about this in another post.
I also teach math (this coming year I will be teaching Pre-Calculus 11), and I’m not sure how this new thinking lines up with it. I haven’t taught Math 11, but I have taught Math 10. Looking back on it, I can’t think of very many comprehensive tasks. I recall a lot of separate, related tasks, but the learning objectives rarely led into a larger, more comprehensive task. I don’t know how this will shake out for Math 11.