Sunday, November 1, 2009

Research methodology: case study

As discussed in a previous article (Research paradigms, methodologies and methods), methodology is intertwined with or an aspect of a paradigm. Methodologies or approaches include case study, ethnography, action research and discourse analysis. The focus of this article is the case study methodology.

The case study methodology is an approach that is compatible with a range of paradigms, such as phenomenology, interpretivism and even post-positivism (Niglas, 2001; Torrance in Somekh & Lewin, 2005), and is a “detailed investigation of a specific person, place or thing” (Kervin, Vialle, Herrington & Okely, 2006, p. 70). However, depending on the paradigm, the role of the researcher differs; for example, the post-positivist researcher would implement case study methodology as a distant observer, while the interpretivist researcher would more likely provide a perspective from the ‘inside’ (Torrance in Somekh & Lewin, 2005).

Historical development and contemporary applications


The case study methodology emerged in educational research in the 1970s in response to the psycho-statistical framework or quasi-experimental evaluation designs, shifting the focus away from samples towards the social construction of ‘the case’ in situ (Elliott & Lukes, 2008; Torrance in Somekh & Lewin, 2005). It also has a social anthropological background that emphasised participant observation (Elliott & Lukes, 2008), such as the long-term and immersed study of particular social groups or educational settings, where the aim was to gain an insider’s perspective and contribute to social theory (Torrance in Somekh & Lewin, 2005). This approach continues today in case studies within the ethnographic methodology, which often aims to provide detailed descriptions of particulars or to establish why things are the way they are (Elliott & Lukes, 2008; Kervin et al., 2006). Another common contemporary application of case study methodology is in evaluation research, such as evaluating the impact of an existing or innovative programme with the aim of improving decision-making, policy or practice (Elliott & Lukes, 2008; Kervin et al., 2006; Torrance in Somekh & Lewin, 2005).

Generalisation


One of the main criticisms of interpretivism and case studies is that the findings are not generalisable, and generalisability is a major goal of research (Hammersley, n.d.; Torrance in Somekh & Lewin, 2005). However, Robert Stake (1978, in Elliott & Lukes, 2008) argued that good portrayals stimulate readers to gain a richer understanding of their own situation, so that they can generalise from the case, rather than requiring the case to be representative of the whole population. Therefore, ‘thick descriptions’ (Geertz, 1983 in Cousin, 2005) are essential to give the reader the semblance of being there, experiencing and interpreting the case alongside the researcher. Lawrence Stenhouse also addressed this concern by asserting that ‘retrospective generalisations’ are possible, that is, “generalisations can be cumulatively constructed from cases retrospectively rather than taking the form of general principles that enable people to predict in advance how events will unfold in the cases they cover” (Stenhouse, 1979, p. 7 in Elliott & Lukes, 2008). These responses to the generalisation criticism are compelling and hence I believe case studies can generate unique as well as universal understandings (Simons, 1996).

Purpose


Case studies have different purposes: they can be purely descriptive, explanatory or evaluative, or they can contribute to social or educational theory (Babbie, 2008; Kervin et al., 2006; Torrance in Somekh & Lewin, 2005). Using the scenario of researching a learning environment, examples of how the focus would change depending on the purpose of the case study include the following:
  • descriptive – a detailed description of the learning environment in your work context to enable others to understand that specific context
  • explanatory – insight into how students are learning based on a detailed description of the learning environment compared to their academic performance
  • evaluative – how well the existing learning environment is contributing to the academic success of the students
  • contributing to theory – are students with personal learning environments more academically successful than those who just rely on the virtual learning environment?

Minimising subjectivity


Although the researcher’s subjectivity is accepted within the interpretivist paradigm, there are some general principles that help minimise its influence:
  • reflexivity – be aware of, reflect on and critically analyse your own subjectivity and how that might impact on the research (Cousin, 2005; Somekh in Somekh & Lewin, 2005)
  • triangulation – implement a wide range of research methods to collect evidence (Cousin, 2005; Niglas, 2000)
  • thick description – ensure there is enough detail so the reader can share in your interpretation (Geertz, 1983 in Cousin, 2005)
  • collaboration – share with stakeholders your provisional analysis for their comment (Cousin, 2005).

References

  • Babbie, E. (2008). The basics of social research, (4th edn). Belmont: Thomson Wadsworth.
  • Cousin, G. (2005, November). Case study research. Journal of Geography in Higher Education, 29(3), 421–427.
  • Elliott, J. and Lukes, D. (2008). Epistemology as ethics in research and policy: The use of case studies. Journal of Philosophy of Education, 42(S1), 87-119.
  • Geertz, C. (1983). Local Knowledge: Further Essays in Interpretive Anthropology. New York: Basic Books.
  • Hammersley, M. (n.d.). An outline of methodological approaches. Retrieved August 9, 2009, from http://www.tlrp.org/capacity/rm/wt/hammersley/hammersley4.html.
  • Kervin, L., Vialle, W., Herrington, J. & Okely, T. (2006). Research for educators. South Melbourne: Thomson Social Science Press.
  • Niglas, K. (2000, September 20–23). Combining quantitative and qualitative approaches. Paper presented at the annual European Conference on Educational Research, Edinburgh, UK. Retrieved September 7, 2009 from http://www.leeds.ac.uk/educol/documents/00001544.htm.
  • Niglas, K. (2001, September 5–8). Paradigms and methodology in educational research. Paper presented at the annual European Conference on Educational Research, Lille, France. Retrieved August 9, 2009, from http://www.leeds.ac.uk/educol/documents/00001840.htm.
  • Simons, H. (1996). The Paradox of Case Study, Cambridge Journal of Education, 26(2), 225–240.
  • Somekh, B. (2005) in Somekh, B., Burman, E., Delamont, S., Meyer, J., Payne, M. and Thorpe, R., ‘Research communities in the social sciences’. In Somekh, B. and Lewin, C. (eds). Research methods in the social sciences. London: Sage Publications.
  • Stake, R.E. (1978). The Case Study Method in Social Inquiry, Educational Researcher, 7(2), 5–8. In Elliott, J. and Lukes, D. (2008). Epistemology as ethics in research and policy: The use of case studies. Journal of Philosophy of Education, 42(S1), 87-119.
  • Stenhouse, L. (1979). ‘The Problem of Standards in Illuminative Research’. Lecture given to the Scottish Educational Research Association at its Annual General Meeting, Stenhouse Archive, Norwich: University of East Anglia. In Elliott, J. and Lukes, D. (2008). Epistemology as ethics in research and policy: The use of case studies. Journal of Philosophy of Education, 42(S1), 87-119.
  • Torrance, H. (2005) in Torrance, H. and Stark, S., ‘Case study’. In Somekh, B. and Lewin, C. (eds). Research methods in the social sciences. London: Sage Publications.

Saturday, October 10, 2009

Research paradigms: positivism, interpretivism, critical approach and poststructuralism

As discussed in a previous article (Research paradigms, methodologies and methods), paradigms determine the criteria for research (Dash, 2005) and, in this article, some key paradigms are outlined. As an introduction, Lather (2006) maps four key paradigms as follows:
  • Positivism: predicts
  • Interpretivism: understands
  • Critical orientation: emancipates
  • Poststructuralism: deconstructs.

Positivism


Positivism began with Auguste Comte in the nineteenth century (Lather, 2006) and asserts a deterministic and empiricist philosophy, where causes determine effects, and aims to directly observe, quantitatively measure and objectively predict relationships between variables (Hammersley, n.d.; Mackenzie & Knipe, 2006). It assumes that social phenomena can be treated in the same way as objects in the natural sciences.

One major criticism of positivism is the difficulty of separating the researcher from what is being researched. The expectation that a researcher can observe without allowing values or interests to interfere is arguably impossible to meet (Hustler in Somekh & Lewin, 2005). As a result, positivism today, also known as post-positivism, acknowledges that, even though absolute truth cannot be established, there are knowledge claims that are still valid in that they can be logically inferred; we should not resort to epistemological scepticism or relativism (Hammersley, n.d.). Positivist research methods include experiments and tests, that is, particularly those methods that can be controlled, measured and used to support a hypothesis.

Interpretivism


Wilhelm Dilthey, writing in the late nineteenth and early twentieth centuries, was influential in the interpretivist paradigm or hermeneutic approach as he highlighted that the subject matter investigated by the natural sciences is different from that of the social sciences, where human beings, as opposed to inanimate objects, can interpret the environment and themselves (Hammersley, n.d.; Onwuegbuzie, 2000). In contemporary research practice, this means that there is an acknowledgement that facts and values cannot be separated and that understanding is inevitably prejudiced because it is situated in terms of the individual and the event (Cousin, 2005; Elliott & Lukes, 2008). Researchers recognise that all participants involved, including the researcher, bring their own unique interpretations of the world or constructions of the situation to the research, and the researcher needs to be open to the attitudes and values of the participants or, more actively, suspend prior cultural assumptions (Hammersley, n.d.; Mackenzie & Knipe, 2006). These principles are particularly important in ethnographic methodology (Elliott & Lukes, 2008; Hustler in Somekh & Lewin, 2005). Some interpretivist researchers also take a social constructivist approach, initiated by Lev Vygotsky in the early twentieth century, and focus on the social, collaborative process of bringing about meaning and knowledge (Kell in Allen, 2004). The case study research methodology is suited to this approach (Elliott & Lukes, 2008; Torrance in Somekh & Lewin, 2005). Interpretivist research methods include focus groups, interviews and research diaries, that is, particularly methods that allow for as many variables to be recorded as possible.

One of the criticisms of interpretivism is that it does not allow for generalisations because it encourages the study of a small number of cases that do not apply to the whole population (Hammersley, n.d.). However, others have argued that the detail and effort involved in interpretive inquiry allow researchers to gain insight into particular events as well as a range of perspectives that may not have come to light without that scrutiny (Macdonald, Kirk, Metzler, Nilges, Schempp & Wright, 2002; McMurray, Pace & Scott, 2004). A more detailed defence of interpretivism is provided in a separate article (Research methodology: case study).

Critical research


Critical educational research has its origins in critical theory, attributed to Georg Hegel (late eighteenth to early nineteenth century) and Karl Marx (nineteenth century), and critical pedagogy, a key figure being Paulo Freire (twentieth century). These influential figures focused on eliminating injustice in society, and critical researchers today likewise aim to transform society to address inequality, particularly in relation to ethnicity, gender, sexual orientation, disability and other marginalised groups (Hammersley, n.d.; Mackenzie & Knipe, 2006).

Similar to interpretivist researchers, critical researchers recognise that research is not value free, but they go further in that the goal of the research is to actively challenge interpretations and values in order to bring about change. This leads to a common criticism of critical research: that its aim is to support a political agenda (Hammersley, n.d.). However, others argue that this is a necessary consequence because politics and inquiry are intertwined or inseparable, and an agenda of reform can transform all participants’ lives for the better (Creswell, 2003) – this is why the critical approach is sometimes known as the transformative paradigm.

An example of a research methodology that is in agreement with the critical paradigm is action research (Lather, 2006). Research methods used in critical research include interviews and group discussions, that is, methods that allow for collaboration and can be carefully deployed in a way that avoids discrimination (Mackenzie & Knipe, 2006).

Poststructuralism


Key figures in the inception of poststructuralism include Michel Foucault and Jacques Derrida in the twentieth century (Lather, 2006). Poststructuralism is also interested in investigating individuals and social relations but focuses more on selves as constructs and how they are formed through language and gain meaning within specific relations of power (Macdonald et al., 2002). This relationship between meaning and power is embodied in the term discourse, which encapsulates not only what is said and thought but also who has the authority to speak (Ball, 1990 in Macdonald et al., 2002). This means that in contemporary poststructuralist research, there is a strong emphasis on examining language, which provides indicators of power–knowledge relationships.

An example of a research methodology that a poststructuralist researcher is most likely to use is discourse analysis. Typical research methods include observations and audio or visual recordings of interactions, focusing on what is said or not said, how the participants position themselves, and the social and cultural consequences of these interactions.

A criticism of poststructuralism is that it undermines individual agency by implying that people are constructs of their society, beyond their own control (Hammersley, n.d.). However, others argue that precisely because individuals are enmeshed in a complex web of social relations, it is essential to interrogate discourses and reveal those power relationships in order to help those individuals (Macdonald et al., 2002).

Quantitative-qualitative continuum


I have consciously avoided discussing quantitative and qualitative research approaches to limit the scope of this article, but will note here that I find the quantitative–qualitative continuum attractive because, rather than dividing paradigms into two separate groups (e.g. positivism is quantitative; interpretivism is qualitative), it asserts that there is no ‘right paradigm’ (Niglas, 1999; 2000; 2001; 2007; Onwuegbuzie, 2000).

References


  • Creswell, J.W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches, (2nd edn). Thousand Oaks: Sage.
  • Cousin, G. (2005, November). Case study research. Journal of Geography in Higher Education, 29(3), 421–427.
  • Dash, N. K. (2005). Module: Selection of the research paradigm and methodology. Retrieved August 9, 2009, from http://www.celt.mmu.ac.uk/researchmethods/Modules/Selection_of_methodology/index.php.
  • Elliott, J. and Lukes, D. (2008). Epistemology as ethics in research and policy: The use of case studies. Journal of Philosophy of Education, 42(S1), 87-119.
  • Hammersley, M. (n.d.). An outline of methodological approaches. Retrieved August 9, 2009, from http://www.tlrp.org/capacity/rm/wt/hammersley/hammersley4.html.
  • Hustler, D. (2005) in Goldbart, J. and Hustler, D., ‘Ethnography’. In Somekh, B. and Lewin, C. (eds). Research methods in the social sciences. London: Sage Publications.
  • Kell, P. (2004) ‘A teacher’s tool kit: Sociology and social theory explaining the world’. In Allen, J. (ed). Sociology of education: Possibilities and practices. South Melbourne: Thomson Social Science Press.
  • Lather, P. (2006, January–February). Paradigm proliferation as a good thing to think with: teaching research in education as a wild profusion. International Journal of Qualitative Studies in Education, 19(1), 35–57.
  • Macdonald, D., Kirk, D., Metzler, M., Nilges, L.M., Schempp, P. and Wright, J. (2002). It's all very well, in theory: Theoretical perspectives and their applications in contemporary pedagogical research. QUEST, 54, 133–156.
  • Mackenzie, N. and Knipe, S. (2006, October). Research dilemmas: Paradigms, methods and methodology. Issues in Educational Research, 16(2), 193–205. Retrieved August 9, 2009, from http://www.iier.org.au/iier16/mackenzie.html.
  • Niglas, K. (1999, September 22–25). Quantitative and qualitative inquiry in educational research: Is there a paradigmatic difference between them? Paper presented at the annual European Conference on Educational Research, Lahti, Finland. Retrieved September 7, 2009 from http://www.leeds.ac.uk/educol/documents/00001487.htm.
  • Niglas, K. (2000, September 20–23). Combining quantitative and qualitative approaches. Paper presented at the annual European Conference on Educational Research, Edinburgh, UK. Retrieved September 7, 2009 from http://www.leeds.ac.uk/educol/documents/00001544.htm.
  • Niglas, K. (2001, September 5–8). Paradigms and methodology in educational research. Paper presented at the annual European Conference on Educational Research, Lille, France. Retrieved August 9, 2009, from http://www.leeds.ac.uk/educol/documents/00001840.htm.
  • Niglas, K. (2007). ‘Introducing the quantitative-qualitative continuum: An alternative view on teaching research methods courses’. In Murtonen, M., Rautopuro, J., & Väisänen, P. (eds). Learning and teaching of research methods at university. Research in Educational Sciences. Turku: Finnish Educational Research Association.
  • Onwuegbuzie, A.J. (2000, November 18). Positivists, post-positivists, post-structuralists, and post-modernists: Why can’t we all get along? Towards a framework for unifying research paradigms. Paper presented at the annual meeting of the Association for the Advancement of Educational Research, Ponte Vedra, Florida.
  • Torrance, H. (2005) in Torrance, H. and Stark, S., ‘Case study’. In Somekh, B. and Lewin, C. (eds). Research methods in the social sciences. London: Sage Publications.

Saturday, October 3, 2009

Paradigms, methodologies and methods in educational research

There is great debate about how to define paradigms, methodologies and methods. In this article, I have attempted to make sense of some of these debates and present a description of these concepts and related issues.

Paradigms


The concept of paradigms has been attributed to Thomas Kuhn from the natural sciences, who controversially proposed that paradigms are a collection of “concepts, variables and problems attached with corresponding methodological approaches and tools” (Dash, 2005) and that, in time, paradigms are overturned by other paradigms (Pajares, n.d.). However, in the social sciences, paradigms are generally not discarded as others emerge because they represent different frameworks reflecting different points of view (Babbie, 2008) that are selected to suit the researcher or area of research (Somekh in Somekh and Lewin, 2005).

More recently, the term paradigm in educational research has come to mean a framework that determines the way knowledge is studied and interpreted and the motivation and goal of the research (Mackenzie and Knipe, 2006). Egon Guba expands this concept by outlining that paradigms are shaped by epistemological (the nature of knowledge), ontological (the nature of existence) and methodological (how the inquirer should go about finding out knowledge) questions (Gough, 2000).

Key paradigms are positivism, interpretivism, critical theory and poststructuralism. These paradigms are discussed in further detail in a separate article. Briefly, paradigms can be simplistically grouped into two categories: positivism, where knowledge is observable and measurable; and anti-positivism, where meaning is generated from the process of knowing and interpreting phenomena (Dash, 2005).

Methodology


Methodology is intertwined with or an aspect of a paradigm, as mentioned above. In this sense, methodology can also be defined as a conceptual framework (Gale, 1998) but specific to how research is approached and guided, that is, it provides the rationale for the research (Gough, 2000). It is the aspect of a paradigm that emphasises the question of how the research should proceed, not the theory of knowledge or existence, and is influenced by the researcher’s worldview (Gale, 1998; Gough, 2000). Methodologies or approaches include case study, ethnography, action research and discourse analysis.

Methodology is often used interchangeably with the term ‘method’. However, in the following section, a separate explanation of method is provided, distinguished from ‘methodology’.

Methods


Some authors distinguish between ‘method(s)’ and ‘techniques’ (e.g. Gale, 1998), where methods are concerned with how research is conducted and techniques are the instruments that collect and analyse data. However, in this article, I have argued that ‘methodology’ rather than ‘method’ captures how research should proceed. I also follow Gough (2000) in using the terms ‘method’ and ‘technique’ interchangeably, where a research method is the practical technique for data collection and analysis. Methods include tests, surveys, interviews, focus groups and observations. For example, within the case study methodology, common research methods include interviews and observation.

References


  • Babbie, E. (2008). The basics of social research, (4th edn). Belmont: Thomson Wadsworth.
  • Dash, N. K. (2005). Module: Selection of the research paradigm and methodology. Retrieved August 9, 2009, from http://www.celt.mmu.ac.uk/researchmethods/Modules/Selection_of_methodology/index.php.
  • Gale, T. C. (1998). Methodological ‘maps’ and key assumptions: A framework for understanding research. Unpublished paper. Rockhampton, Qld: Faculty of Education, Central Queensland University.
  • Gough, N. (2000, October 8). Methodologies under the microscope. Paper presented at the Deakin University Postgraduate Association research students’ conference, Deakin University, Geelong, Vic.
  • Mackenzie, N. and Knipe, S. (2006, October). Research dilemmas: Paradigms, methods and methodology. Issues in Educational Research, 16(2), 193–205. Retrieved August 9, 2009, from http://www.iier.org.au/iier16/mackenzie.html.
  • Pajares, F. (n.d.). Thomas Kuhn. Retrieved September 6, 2009 from http://www.des.emory.edu/mfp/Kuhnsnap.html.
  • Somekh, B. (2005) in Somekh, B., Burman, E., Delamont, S., Meyer, J., Payne, M. and Thorpe, R., ‘Research communities in the social sciences’. In Somekh, B. and Lewin, C. (eds). Research methods in the social sciences. London: Sage Publications.

Monday, May 11, 2009

Assessment for learning

Many education institutions are still assessing learning rather than assessing for learning, resulting in a poor assessment experience for learners.

"Students can, with difficulty, escape from the effects of poor teaching, they cannot (by definition if they want to graduate) escape the effects of poor assessment." (Boud, D in Knight 1995, p. 35)

One way of defining poor assessment is to describe its opposite, that is, to outline what high quality assessment is. There are many principles to guide high quality assessment, some of which are listed here (Land, in QAA 2005; McMillan 2000; Pitman 1999). Assessments should be:
  • beneficial and sustainable
  • authentic
  • fair and transparent
  • reliable and valid
  • constructively aligned
  • managed efficiently.

A selection of principles is discussed below.

Beneficial and sustainable


High quality assessment should directly influence the learning process in a positive way (Boud 1995; Land, in QAA 2005). Examples of adverse effects include the weakening of morale and motivation (Drew 2001; Land, in QAA 2005) as well as detrimental effects on learning, such as inadvertently rewarding memorisation, focusing on topics that are easy to assess at the expense of more important learning outcomes, and encouraging learners to focus on content that is assessable or on assessments that are weighted more heavily (Boud 1990). Rather than rewarding this kind of shallow learning, assessments should encourage learners to apply critical thinking and become more autonomous or self-determining learners (Boud 1990).

Another way of stating this is that assessments should “meet the needs of the present without compromising the ability of students to meet their own future learning needs” (Boud 2000, p. 15), that is, assessments should be sustainable. Sustainable assessments encourage learners to engage in their own learning, interact with others and focus their attention on the process of learning. This implies an emphasis on formative assessment (for learning), because of the importance of feedback in supporting active learning, over summative assessment (for certification) (Scriven, in Boud 1995). Summative assessment also influences learning but in a way that is not sustainable because it authoritatively states what is important and takes responsibility away from the learner (Boud 2000).

Learners should ultimately be able to determine by themselves if they have met standards appropriate for whatever task is required and seek feedback from their environment (Boud 2000) and even from themselves through self assessment. They should not have to limit themselves to shallow rote learning or learning that is determined by others.

Authentic


Assessments should be authentic, meaning they should reflect a realistic context outside of the course itself (Boud 1998). At a deeper level, authentic assessments are “contextualised complex intellectual challenges, not fragmented and static bits or tasks” (Wiggins 1989, p. 711, in Boud 1998). With the shift away from norm-referenced assessments (which focus on ranking) to criterion-referenced assessments (which assess a learner’s performance against learning outcomes rather than against other learners) (Pitman 1999), assessments are less fragmented and less contrived, that is, more authentic. Together, criterion-referenced and authentic assessments lead to more independent learners who are better able to apply skills learned in higher education in the professional context (Boud 1995).

Constructively aligned


High quality assessment is integrated with learning (Biggs 2002) so that it is part of a teaching system. Having an integrated system helps close the gap between learners who take a surface approach to learning and those who naturally grasp the importance of deep learning (Biggs 1999). This integrated system requires educators to be clear about what learners should actively learn by stating intended learning outcomes (ILOs) and then to set up all aspects of the system so that they are aligned with those outcomes – this kind of system is known as ‘constructive alignment’ (Biggs 1999; Biggs and Tang 2007). It is ‘constructive’ because learners construct meaning through relevant learning activities, and ‘alignment’ refers to how outcomes, teaching/learning activities and assessments need to be aligned (Biggs and Tang 2007). Regarding constructivism, the previous sections on sustainability and authenticity have already discussed how learners should be active and autonomous. Therefore, this section will focus more on how teaching and assessment can be aligned.

The constructive alignment process is as follows (Biggs and Tang 2007):
  1. The ILOs cannot simply state what topics learners need to know about; rather, they need to identify the activities and changes in behaviour required to achieve the outcomes. They need to be expressed in the form of a verb that describes the learning activity and its object or content, and to specify the context and standard required.
  2. The learning environment’s activities need to address the verb and bring about the ILO.
  3. The assessments should also contain that verb so that learners’ performances can be judged against the criteria.
  4. The judgments can be transformed into standard grading criteria.

Biggs (1999) has also developed a hierarchy of verbs, called the structure of the observed learning outcome (SOLO) taxonomy, to help devise ILOs. Examples of verbs from the SOLO taxonomy are ‘memorise’ and ‘define’, which sit at the knowledge or quantitative end of the hierarchy, and ‘theorise’ and ‘reflect’, which sit at the deep understanding or qualitative end. Getting these verbs right has the flow-on effect of guiding the activities and assessments (Osborne, in QAA 2005) as well as ensuring the outcomes at the institutional, programme and course levels are met.

Efficient


As stated above, amongst other principles, high quality assessments need to be beneficial, sustainable, authentic and constructively aligned. Adhering to these principles requires time, effort and resources that academics and institutions cannot afford, especially due to the ‘massification’ phenomenon (Ross, in QAA 2005). Therefore, it is important to also ensure that assessments are efficient in their delivery.

Ross (in QAA 2005) suggests two strategic options for gaining efficiency, based on the work of Gibbs and Jenkins (1992): ‘control’ and ‘independence’ strategies. Control strategies include multiple-choice questions, fewer assessments and shorter assessments. Independence strategies include ‘front-end loading’ (where effort is spent at the beginning of the assessment process, unpacking, engaging and negotiating with the criteria (Hornby, in QAA 2005)), self assessment and peer assessment. By encouraging independent assessment strategies, efficiencies are gained because responsibility is shifted to the learner. Boud also strongly supports self assessment and peer learning because they foster self-determination and lifelong learning skills (Boud 1998; Boud, Cohen and Sampson 1999; Boud and Falchikov 2005). A combination of control and independence strategies can be both effective and efficient.

In closing


High quality assessments promote learning. The focus of assessments should not be on measuring one student's knowledge of content against another student's knowledge. This kind of assessment is subject matter expert or content focused, not learner focused, and mostly encourages shallow learning (e.g. memorisation). Instead, assessments should be, amongst many other things, beneficial, sustainable, authentic and constructively aligned so that learners are self determined and equipped for lifelong learning.

References

  • Biggs, J. (1999). What the student does: Teaching for enhanced learning, Higher Education Research & Development, 18(1), 57-75.
  • Biggs, J. (2002). Aligning the curriculum to promote good learning, Constructive alignment in action: Imaginative curriculum symposium, 4 November, LTSN Generic Centre. Retrieved April 21, 2009 from http://www.palatine.ac.uk/files/1023.pdf.
  • Biggs, J. and Tang, C. (2007). Teaching for quality learning at university, 3rd edn, The Society of Research into Higher Education and Open University Press, McGraw Hill Education, Berkshire.
  • Black, P. and Wiliam, D. (1998). Assessment and classroom learning, Assessment in Education: Principles, Policy & Practice, 5(1), 7-74.
  • Boud, D. (1990). Assessment and the promotion of academic values, Studies in Higher Education, March, 15(1), 101-11.
  • Boud, D. (1995). “Assessment and learning: Contradictory or complementary?”, in P. Knight (ed) (1995), Assessment for Learning in Higher Education, Kogan Page, London, pp. 35-48. Retrieved April 21, 2009 from http://www.education.uts.edu.au/ostaff/staff/boud_publications.html.
  • Boud, D. (1998). Assessment and learning – unlearning bad habits of assessment, Presentation to the Conference ‘Effective Assessment at University’, 4-5 November, University of Queensland.
  • Boud, D. (2000). Sustainable assessment: rethinking assessment for the learning society, Studies in Continuing Education, 22(2), 151-167. Retrieved April 21, 2009 from http://www.education.uts.edu.au/ostaff/staff/publications/db_28_sce_00.pdf.
  • Boud, D., Cohen, R. and Sampson, J. (1999). Peer learning and assessment, Assessment & Evaluation in Higher Education, 24(4), 413-426.
  • Boud, D. and Falchikov, N. (2005). “Redesigning assessment for learning beyond higher education”, in A. Brew and C. Asmar (eds) Research and Development in Higher Education 28, HERDSA, Sydney, pp. 34-41.
  • Drew, S. (2001). Student perceptions of what helps them learn and develop in higher education, Teaching in Higher Education, 6(3), 309-331.
  • Gibbs, G. and Jenkins, A. (1992). Teaching Large Classes in Higher Education, RoutledgeFalmer, London, in Ross, D.A. (2005). “Streamlining assessment – how to make assessment more efficient and more effective – An overview”, in Quality Assurance Agency for Higher Education (QAA), Enhancing practise: Reflections on assessment Vol 1, Scotland. Retrieved May 2, 2009 from http://www.enhancementthemes.ac.uk/publications/Default.asp.
  • Hornby, W. (2005). “Dogs, stars, Rolls Royces and old double-decker buses: efficiency and effectiveness in assessment”, in Quality Assurance Agency for Higher Education (QAA), Enhancing practise: Reflections on assessment Vol 1, Scotland. Retrieved May 2, 2009 from http://www.enhancementthemes.ac.uk/publications/Default.asp.
  • Land, R. (2005). "Streamlining assessment: making assessment more efficient and more effective", in Quality Assurance Agency for Higher Education (QAA), Enhancing practise: Reflections on assessment Vol 1, Scotland. Retrieved May 2, 2009 from http://www.enhancementthemes.ac.uk/publications/Default.asp.
  • McMillan, J.H. (2000). Fundamental assessment principles for teachers and school administrators, Practical Assessment, Research & Evaluation, 7(8). Retrieved March 24, 2009 from http://PAREonline.net/getvn.asp?v=7&n=8.
  • Osborne, M. (2005). “Constructive alignment of learning outcomes to assessment methods – An overview”, in Quality Assurance Agency for Higher Education (QAA), Enhancing practise: Reflections on assessment Vol 1, Scotland. Retrieved May 2, 2009 from http://www.enhancementthemes.ac.uk/publications/Default.asp.
  • Pitman, J., O’Brien, J.E. and McCollow, J.E. (1999). High-quality assessment: We are what we believe and do; A presented by John Pitman at the IAEA conference, Bled, Slovenia, May, Queensland Board of Senior Secondary School Studies, Brisbane.
  • Ross, D.A. (2005). “Streamlining assessment – how to make assessment more efficient and more effective – An overview”, in Quality Assurance Agency for Higher Education (QAA), Enhancing practise: Reflections on assessment Vol 1, Scotland. Retrieved May 2, 2009 from http://www.enhancementthemes.ac.uk/publications/Default.asp.
  • Scriven, M. (1967). "The methodology of evaluation", in R.W. Tyler et al. (eds), Perspectives of Curriculum Evaluation, American Educational Research Association Monograph, Rand McNally, Chicago, cited in D. Boud (1995). Assessment and learning: Contradictory or complementary?, in P. Knight (ed) (1995), Assessment for Learning in Higher Education, Kogan Page, London, pp. 35-48. Retrieved April 21, 2009 from http://www.education.uts.edu.au/ostaff/staff/boud_publications.html.