For EDC MOOC, the assessment was to create a digital artefact and to peer review at least three other digital artefacts.
I thought that the creation of a digital artefact was a meaningful and authentic way to assess this course. For those not familiar with Miller's triangle of assessment (which is often referred to in medical education): creating an artefact ranks towards the top of the triangle as it "shows how" in a simulated environment. The next step is "does", which means that one has incorporated what one has learned in this course into one's work or life. For those of us who are educators, time will tell if participation in this class changes what we do in e-learning and e-teaching; I suspect it will.
Miller's Triangle (http://pmj.bmj.com/content/80/940/63.full)
Creating an artefact was a way for us to experiment with different digital modalities to express ideas from the course that were meaningful to us and to demonstrate an understanding of at least some of the course material.
In my job I am involved in work-based assessment: the evaluation of clinical skills. We categorize assessment as low-stakes or high-stakes, sometimes called formative and summative feedback. A high-stakes assessment is one that you must pass to move on in your profession. An example would be having to pass a clinical skills evaluation in order to be eligible for a medical license in the US. With high-stakes assessment, great care must be taken to ensure that it measures what you intend to evaluate, that it is reliable (different evaluators would give the same score), and that it is free of bias.
Low-stakes feedback is given primarily to improve the learner's performance (though, of course, learning should happen with any assessment). The evaluation of the EDC MOOC digital artefact is a low-stakes assessment. A certificate is given regardless of the grade received, so the main purpose of the assessment is for both the creator of the digital artefact and the evaluators to learn from the experience. A specific rubric was given for evaluation. There were also explicit instructions regarding the purpose of the feedback and how to learn from feedback.
One of the difficulties in assessing the artefacts and giving meaningful comments was that we viewed the artefacts in isolation and didn't know the author's learning goals for them. I wonder whether one of the course requirements should have been a self-reflection/self-evaluation to accompany the artefact itself. It would have been easier to give specific feedback about the artefact (what worked well, how it might be improved) if one knew the author's intent for it. I suspect that a number of people were "stepping out of their comfort and experience zones" in creating their digital artefact; I certainly was.
Formative feedback can be both positive and corrective. It should be specific in nature, stimulate reflection (and perhaps an action plan for the future), and be given in a supportive environment. My personal experience with the official feedback was that two evaluators gave thoughtful and specific feedback. The other two evaluators wrote less than a sentence, with one of them giving me the "feedback kiss of death": "very good". What's very good? How could it be better? On the other hand, I did get constructive feedback on my artefact from the Google+ group. I do know that many people spent a lot of time giving thoughtful feedback and evaluating more than the three mandatory artefacts.
One thing that we all should have learned from this course is that human expression has many forms and perspectives. We all come from different cultures and experiences. While I may not have fully understood your artefact, I should not assume you have nothing to say. I may have been entertained by your artefact, but did I also learn from it?