Retrieval in Korean Language Classroom

This is a targeted comment on “Revisiting the Value of Tests: Learning through Retrieval” by Donggil Song & Eun Young Oh, written for R695-3.

There is a lot of self-referencing going on – remember that you are internal to the research process, so it becomes super important to be explicit. For example, you cite a plethora of studies about retrieval cues. You know all the articles you cited, but your readers might not be familiar with them. So it becomes important to categorize them and describe explicitly what you are referring to and which parts of those other studies are relevant to your own.

In your literature review you discuss the purpose of assessment and firmly categorize assessments as either formative or summative – for this conversation you use only three articles, all nearly a decade old. It is important to remember that practitioners and researchers might categorize assessments in a variety of ways that could include interim, structured, and even non-formal assessments. Within the walls of a classroom, teachers tend to utilize specific assessment approaches repeatedly – sometimes it is a comfort-level thing, and at other times top-down mandates dictate the assessments.

Expand your literature search to find more than one article to support each of your arguments. A couple of your paragraphs are supported by a single research article, and your arguments could be enhanced greatly by a couple of recently published works that reinforce your conclusions. The same can be said for using the same set of articles repeatedly – spice things up. Much of the work used to define the constructs of assessment traces back to Taras (2005), who built on Scriven's theories in social science evaluation, which were not developed specifically for classroom contexts.

When conducting “international” research it is imperative to consider whether the literature used to support your theoretical framework is comparable to your context. For example, an experiment in a foreign language classroom cannot use the same approaches as one in a science classroom (Minstrell & van Zee, 2003), nor can approaches transfer directly from face-to-face to online instruction (Cassady & Gridley, 2005).

As your study looked only at verbal-visual cues, and not auditory or speaker-specific assessments, it becomes imperative to distinguish between the varieties of retrieval cues. While I understand that the retrieval cue condition (IV) and student performance (DV) can be statistically linked, there needs to be a thorough discussion eliminating external factors as they relate to student aptitude and performance.

While you have done a good job of statistically linking the two factors, you have not established causality. There is also no commentary on how the student demographics allowed for group-wise comparison. And finally, when you use a statistical test, you can strengthen your argument by stating how you met its underlying assumptions.
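To make this concrete, here is a minimal sketch of what reporting those assumption checks could look like. I'm assuming the group comparison was an independent-samples t-test; the variable names and the generated scores are purely illustrative, not the authors' data:

```python
# Hypothetical sketch: checking common t-test assumptions before
# comparing two conditions. The group data below are simulated stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
cue_group = rng.normal(loc=75, scale=8, size=30)      # illustrative: retrieval-cue condition scores
control_group = rng.normal(loc=70, scale=8, size=30)  # illustrative: control condition scores

# Assumption 1: approximate normality within each group (Shapiro-Wilk)
_, p_norm_cue = stats.shapiro(cue_group)
_, p_norm_ctrl = stats.shapiro(control_group)

# Assumption 2: homogeneity of variances across groups (Levene's test)
_, p_var = stats.levene(cue_group, control_group)

# Only then run the t-test; fall back to Welch's version if variances differ
equal_var = p_var > 0.05
t_stat, p_value = stats.ttest_ind(cue_group, control_group, equal_var=equal_var)

print(f"normality p-values: {p_norm_cue:.3f}, {p_norm_ctrl:.3f}")
print(f"Levene p: {p_var:.3f}; t = {t_stat:.2f}, p = {p_value:.4f}")
```

Even a short paragraph reporting checks like these – normality, variance homogeneity, independence of observations – would go a long way toward shoring up the statistical argument.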

 

References

Cassady, J. C., & Gridley, B. E. (2005). The effects of online formative and summative assessment on test anxiety and performance. Journal of Technology, Learning, and Assessment, 4(1), 1-31.

Minstrell, J., & van Zee, E. (2003). Using questioning to assess and foster student thinking. In J. M. Atkin & J. E. Coffey (Eds.), Everyday assessment in the science classroom (pp. 61-73). Arlington, VA: National Science Teachers Association Press.

Taras, M. (2005). Assessment–summative and formative–some theoretical reflections. British Journal of Educational Studies, 53(4), 466-478.


Uncertainty Principle, Education (research) Style.

\*\ ^_^  /*/  <– Do a little happy dance!

 

Before I was an educationalist, I was a researcher in the sciences, and we held the idea that whatever we created and “discovered” needed to be revalidated. Basically, when we published results, any lab should have been able to replicate them. I grew as an experimentalist under the guidance and care of my faculty.

However, as I shifted gears, my paradigm and perspectives on research design didn't move as fluidly. Traditionally I have carried a more constructionist epistemology, in which I believed that meanings and understandings are shaped by their surroundings. In qualitative approaches in education, this generally means that social phenomena are engineered within social contexts. And while these phenomena may seem natural, they are in reality influenced in design and artifact, and constrained by the environment around them.

Think of a cell – yes, like the kind in your body. Cells change form when influenced by cold temperatures or wet environments. Sometimes if you poke at them, they bounce back, and sometimes your interactions puncture the fine membrane…and then you get to clean up the mess. Researching in educational environments is kind of the same.

As soon as you start watching how someone interacts with a system, they react to your inspection. When you attempt to change a participant's environment or try interventions, sometimes the mere presence of the observer is enough to elicit a reaction.

So then how are we, as researchers, supposed to create systematic research/data that is replicable? Honestly, sometimes…most times, I don't know. I think the beauty of qualitative methods is that they give space for representing unique contexts as “whole pictures,” often expressed from the perspective of the participants. However, when reading research papers that take a holistic approach, I often question how the role of the researcher shapes the nature of the study.

This all reminds me of the Heisenberg Uncertainty Principle, whereby you can't measure both a particle's position and its velocity at the same instant. The simple act of trying to capture information is disruptive. In fact, the more accurately you measure position, the less accurately you can measure velocity (and vice versa). The very nature of an intervention is disruptive to the natural state of participants, even in educational research. Some call it observer bias; others call it a threat to internal validity.
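For the physics-minded reader, the formal statement of that trade-off (in terms of the uncertainties in position x and momentum p, with ħ the reduced Planck constant) is:

```latex
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```

Shrink the uncertainty on one side of the product and the other side must grow – which is precisely the bargain the observer strikes in a classroom, too.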

While there are methods and designs that can control for things like the Hawthorne Effect, they are more often than not absent from descriptive qualitative research designs. These measures are not taken because the researcher is meant to be part of the interpretive instrument(s) – but then how does that make this *waves hands around in the air* a) representative, b) accurate, c) naturalistically sound, and d) replicable?
