Sunday, March 3, 2013

Assessment, Miller's triangle and the importance of reflection #edcmooc

For EDC MOOC, the assessment was to create a digital artefact and to peer review at least three other digital artefacts.

I thought that creating a digital artefact was a meaningful and authentic way to assess this course.  For those not familiar with Miller's triangle of assessment (which is often referred to in medical education): creating an artefact ranks towards the top of the triangle, as it "shows how" in a simulated environment.  The next step is "does", which means that one has incorporated what one has learned in this course into one's work and life.  For those of us who are educators, time will tell whether participation in this class changes what we do in e-learning and e-teaching; I suspect it will.
           
Miller's Triangle (http://pmj.bmj.com/content/80/940/63.full)

 

Creating an artefact was a way for us to experiment with different digital modalities, to express ideas from the course that were meaningful to us, and to demonstrate an understanding of at least some of the course material.

In my job I am involved in work-based assessment: the evaluation of clinical skills.  We categorize assessment as low-stakes or high-stakes, sometimes called formative and summative.  A high-stakes assessment is one that you must pass to move on in your profession; an example would be having to pass a clinical skills evaluation in order to be eligible for a medical license in the US.  With high-stakes assessment, great care must be taken to ensure that it measures what you intend to evaluate, that it is reliable (different evaluators would give the same score), and that it is free of bias.

Low-stakes feedback is given primarily to improve the learner's performance (though, of course, learning should happen with any assessment).  The evaluation of the EDC MOOC digital artefact is a low-stakes assessment.  A certificate is given regardless of the grade received, so the main purpose of the assessment is for both the creator of the digital artefact and the evaluators to learn from the experience.  A specific rubric was given for evaluation, along with explicit instructions regarding the purpose of the feedback and how to learn from it.

One of the difficulties in assessing the artefacts and giving meaningful comments was that we viewed the artefacts in isolation and didn't know the author's learning goals. I wonder whether one of the course requirements should have been a self-reflection/self-evaluation to accompany the artefact itself. It would have been easier to give specific feedback- what worked well, and how it might be improved- if one knew the author's intent for the artefact.  I suspect that a number of people were "stepping out of their comfort and experience zones" in creating their digital artefacts; I certainly was.

Formative feedback can be both positive and corrective.  It should be specific in nature, should stimulate reflection (and perhaps an action plan for the future), and should be given in a supportive environment.  My personal experience with the official feedback was that two evaluators gave thoughtful and specific feedback.  The other two evaluators wrote less than a sentence, with one of them giving me the "feedback kiss of death": "very good".  What's very good? How could it be better?  On the other hand, I did get constructive feedback on my artefact from the Google+ group.  I do know that many people spent a lot of time giving thoughtful feedback and evaluating more than the three mandatory artefacts.

One thing that we all should have learned from this course is that human expression has many forms and perspectives.  We all come from different cultures and experiences.  While I may not have fully understood your artefact, I should not assume you have nothing to say.  I may have been entertained by your artefact, but did I also learn from it? 

2 comments:

  1. Dale, I found this discussion of assessment extremely helpful on a number of levels, the most relevant of which is in assessing my own artifact. I didn't get one completed in time for formal assessment, but I posted it in the forum and on G+ and received some excellent feedback- feedback which was gratifying both because it showed me that at least some people did "get" what I was trying to say (because I was definitely pushing at the boundaries of my skill set), and because it helped me think with greater clarity about where I failed to meet the rubric standards and how I could do better in the next iteration. Your discussion of Miller's Triangle adds yet another layer to my understanding.

    In the rubric, I did well at demonstrating "Knows" - that I had taken in and wrestled with the themes of the course - and "Knows How" - by linking my topic to those themes. Because I struggled to make a clear, substantive statement about how we should practice digital education in light of my artifact (I think the artifact leaves viewers with a bit of a "so what?" feeling), I failed to reach the "Shows How" level of Miller's Triangle - so, you see, it seems to me that this assessment does venture into the realm of formative assessment in the third criterion of the rubric - and I did not have the opportunity to demonstrate "Does" in this project because, as you point out, this assessment did not really demand it.
    I would argue, however, that I could demonstrate - and have in other forums (especially parenting, but also as a digital educator) demonstrated - that I do know how to "do" what I was trying to say we as digital educators should all think about doing. Moreover, had I thought about that aspect of my project, I would have had an easier time working out a statement about the practical implications of the juxtapositions I offer in the artifact.

    I'm very much in favor of self-evaluation or self-reflection in assessment and have used it quite a bit in my capacity as an educator. I think students of this course would benefit from its finding a place in the assessment process. That said, because it was so helpful for me to have assessors "reflect back" what they saw me trying to say, I think revealing a self-reflection element to the assessors before they assess might diminish the value of the assessment, at least in some cases. But adding the self-reflection element as a final step in the assessment might help students work out how they might apply what they have to say about digital education to their own practice of educating in digital modalities.

    As a side note, I found your comments here helpful in thinking about the primary project in my professional life, but I won't go into details about that here.

    1. Because the assessments I do are in the workplace and in real time, it is interesting to think about what might be different in the on-line environment (which is often asynchronous and may be anonymous). After I watch a student/patient interaction, I always ask the student for their feedback about their performance- what they thought went well and what they think could have been improved. It's a useful way to start the conversation, and I can get a sense of their ability to self-assess. You may be right that adding a self-reflection element on-line might change how assessors focus on an artefact and might diminish the quality of feedback. That's something I hadn't considered. Assessment seems harder on-line to me, but I'm also new to it.

      One of the things that I have really enjoyed about this course is all that I have learned from the others in the class. Thank you.
