Understanding research and evidence: a guide for teachers

Susan Feez and Robyn Cox

In recent decades, there have been demands for teachers to implement evidence-based decision-making and practice in their classrooms (Centre for Education Statistics and Evaluation, 2014; Matters, 2006), and even to restrict their practice to those approaches identified as evidence-based (Bruniges, 2005). It is perhaps no coincidence that, at the same time, claims that specific educational approaches are evidence-based have proliferated in the research literature, in policy documents and in commercial teaching resources.

Just what is this evidence? What exactly does evidence mean? How do we gather the evidence? This PETAA paper explores what is meant by terms such as evidence, data, research and methodology, and considers how teachers might best use research evidence to contribute to the goal of achieving improved language and literacy outcomes for all Australian children.

Evidence-based teaching is teaching in which decisions made by the teacher are informed by research evidence. There is, however, much debate about what counts as evidence, and how teachers should apply evidence to their practice. This PETAA paper provides teachers with knowledge they can use to:

  • reflect on what counts as evidence in relation to English education
  • evaluate the strength of evidence derived from different research methodologies, and its relevance to their own practice
  • reassess their current ideas about how to implement the English curriculum
  • extend their interpretations of current debates in relation to research evidence

Teachers have become accustomed to enthusiasts promoting new approaches they claim will lead to gains in student achievement. For this reason, teachers need to evaluate critically claims that a particular practice is ‘evidence-based’ to ensure that these claims are not merely contributing to the apparently endless stream of novel trends that are routinely visited on teachers and their students.

Evidence-based teaching is typically promoted by those who argue that teaching programs should be based on evidence derived from research designed using scientific experimental methods (for example, double-blind studies and randomised controlled trials) in order to generate reliable evidence that can be used to improve decision-making and practice (Hempenstall, 2006; Wyatt-Smith & Gunn, 2007).


The value of this type of evidence for teachers, however, has been called into question. Critics argue that experimental methods designed to answer questions about natural processes are less valid and reliable when applied to cultural and social processes such as teaching and learning, in contexts where double-blind studies, random selection of participants and control of variables are rarely possible, unless they are used to measure the development of limited, or constrained, sets of skills or knowledge which can be learnt relatively quickly (Freebody, 2007, 2010; Paris & Luo, 2010; Taber, 2013).

A consensus, nevertheless, is emerging among educators that ‘a sound basis for action is converging evidence from multiple sources and different perspectives’ because ‘[e]vidence-based convergence lends strength to findings as no single study, methodology, finding or view is considered, in and of itself, a sufficient basis for action’ (Wyatt-Smith, Elkins, & Gunn, 2011, page 3). This view is supported by Myhill, who argues that: ‘the most robust evidence comes from an accumulation of studies, preferably including replication studies conducted by others’ (Myhill, 2016, page 321).

  • How might classroom teachers evaluate claims that a specific teaching practice is based on evidence?
  • How might they then determine the extent to which this approach will be effective, even appropriate, to meet the specific learning needs of the students in their classroom?
  • How should classroom teachers respond to pressure to restrict teaching practices to those claimed to be evidence-based?

The term evidence is used routinely, but not always precisely, to promote particular teaching practices, alongside terms such as research, data and methodology. These terms are explored below.

What is evidence?

The word evidence has its origin in a Latin word which means ‘to see’, the same origin as words such as visible and vision. The e- at the beginning of the word is from the Latin prefix, ex-, meaning ‘out of’. So, evidence emerges out of what we can see. Because evidence is something that has been seen, and so verified, it can be used to support a claim or assertion that something is true.

In the legal system, during a criminal trial in a court of law, lawyers for the prosecution will present evidence, perhaps physical evidence gathered by investigators or evidence provided by witnesses, to support their case that the accused person, the defendant, is guilty of a crime. Lawyers representing the defendant will produce evidence to support the opposite view. The task of the trial judge and jury is to weigh up the evidence before coming to a decision about the guilt or otherwise of the defendant. A comparable task falls to teachers when they are asked to weigh up the evidence before deciding whether to apply a particular practice in their classrooms.

In the field of science, scientists answer their questions and test their theories about natural phenomena by gathering empirical evidence. Empirical evidence is evidence that can be perceived by the senses. It is made up of data collected through observation, experiment and/or measurement. The data are interpreted through statistical analysis, and evidence obtained in this way can be further tested, and ideally, reproduced. Techniques for collecting, analysing and interpreting data to produce evidence vary from one field of academic inquiry to the next. Part of the task for teachers when weighing up evidence is to find out which techniques from which field of academic inquiry were used to analyse the data, and to make judgements about the relevance of these research techniques to their teaching situation.

What are data?

The word data is the plural form of the Latin word, datum, which means ‘something given’. In a scientific study, each piece of information, for instance each observation or each measurement, is one datum. When these pieces of information are collected together, they are called data, a word which in academic English is used as a plural noun indicating that each piece of information can be counted. In everyday English, however, the word data is commonly used as a singular noun, referring to information as a single ‘mass’ rather than as separate countable items.

In controlled experiments, data are collected by measuring, observing and/or recording some elements that can change (variables), while other potential variables are controlled so they do not change. Data collection can be undertaken by participants in the setting or activity under investigation, or by observers looking on, who are not participants.

Examples of data which can be collected in educational settings include:

  • formal or informal observations, recorded as field notes, in a diary, day-book or journal, in a graphic organiser, or against a checklist
  • photographs, audio or video recordings
  • artefacts such as work samples, models
  • surveys, questionnaires, interviews, conferences, focus groups
  • ‘maps’ or charts, for example, classroom or playground layout, student movement, sociogram of student interaction
  • measurement, such as of time taken, distance or area covered, tasks completed, attendance, classroom quiz, term assignment, end of year examination or national test scores (for example, NAPLAN), and scales (such as attitude scales)

When patterns are found in the data, and these are interpreted, data become information. Information organised in an abstract, systematic way so that it can be generalised, transferred and/or re-applied productively in other contexts becomes evidence.

It is interesting for teachers to consider which elements of their practice and of student progress can be counted or measured in some way, and which cannot. School attendance and test scores, for example, can be counted or measured, and expressed as quantitative data. Records of classroom observation, lesson study, student performance or interviews with students and parents, perhaps captured in the form of notes, photographs, audio-visual recordings or work samples, are qualitative data. This leads to questions about how teaching practice and student progress that cannot be easily measured or recorded might be evaluated.

What is research?

Research, according to Dörnyei (2007, page 15), simply means ‘trying to find answers to questions’.

Researchers can find answers to their questions by undertaking an original investigation (primary research) or by searching for what other researchers have already found (secondary research).

Often an investigation begins with secondary research, such as a literature review, to find out what is already known and where the gaps are in our knowledge, followed by primary research to gather data that will help answer questions to fill the knowledge gaps. Primary research can be an experimental and/or theoretical investigation to build new knowledge (pure basic research), to solve a practical problem (strategic basic research), or to build knowledge for a specific application or objective (applied research).

Most educational research is applied research, undertaken to improve learning outcomes for students. Applied research in education draws on theories from a range of academic disciplines, for example, psychology, sociology, linguistics and history. What counts as evidence, and how it is collected, analysed and interpreted, varies from one field of study to the next. This presents a challenge for teachers.

What is methodology?

A methodology is a systematic and principled approach, or framework, used by a researcher to design a study to answer research questions. This approach will usually include a theoretical framework and a set of techniques and methods, which can be either quantitative or qualitative. Quantitative and qualitative methods can be combined in mixed- or multi-method studies.

Quantitative techniques are used to gather data, potentially very large quantities of data, which can be measured and analysed using statistics (1). In the sciences, this is achieved by designing experiments under laboratory conditions so variables can be controlled, the experiment can be repeated, and the findings can be generalised. Randomised controlled trials (RCTs) are sometimes referred to as the ‘gold standard’ of research design. In the social sciences, quantitative research often involves the statistical analysis of large-scale survey responses and test results.

(1) The word ‘statistics’ comes from the Latin word status, which originally meant a place or arrangement. Statistics is the mathematics ‘of the state’, used to analyse large amounts of quantitative data to identify values that might be significant for planning or policy development.

Qualitative techniques are used to gather data through observation, record-keeping and by interpreting what people participating in the research do or say. Because qualitative research investigates human experience, it is much more difficult to control variables. This problem can be overcome to some extent by using techniques also used to collect quantitative data, in which participants are randomly assigned to separate intervention and control groups, and by ensuring the study is double blind, that is, neither the researcher nor the participants know who is in which group while the intervention is happening. If the research is undertaken with one particular group, for example, a case study of students in one classroom, the study is considered to be quasi-experimental, and the findings are not as reliable. Methodologies, research and data collection techniques used in some exemplar English language education studies are listed in Table 1 below.


METHODOLOGY — Ethnography
Research technique: Qualitative, Quantitative
Data collection: undertaking interviews and observations, and collecting documents, to build descriptions of social practices
Exemplar study: Brice Heath, S. (1983) Ways with Words: Language, life and work in communities and classrooms. Cambridge: Cambridge University Press

METHODOLOGY — Verbal protocols
Research technique: Qualitative, Quantitative
Data collection: recalling and thinking aloud while performing a task
Exemplar study: Young, K.A. (2005) ‘Direct from the source: The value of “think aloud” data in understanding learning.’ Journal of Educational Enquiry, 6 (1), 19–33

METHODOLOGY — Discourse analysis
Research technique: Qualitative, Quantitative, Mixed Methods
Data collection: analysing classroom discourse, multimodal and/or written texts
Exemplar study: Fairclough, N. (2003) Analysing Discourse: Textual analysis for social research. Oxon: Routledge

METHODOLOGY — Action and design-based research
Research technique: Qualitative, Quantitative, Mixed Methods
Data collection: investigating problems and intervening iteratively to resolve them
Exemplar study: Edwards-Groves, C. & Davidson, C. (2017) Becoming a Meaning Maker: Talk and interaction in the dialogic classroom. Newtown, NSW: PETAA

METHODOLOGY — Case studies
Research technique: Qualitative, Quantitative, Mixed Methods
Data collection: undertaking instrumental, collective and multiple case studies
Exemplar study: Bissex, G. & Bullock, R. (1987) Seeing for Ourselves: Case-study research by teachers of writing. Portsmouth, NH: Heinemann

METHODOLOGY — Meta-analysis
Research technique: Quantitative
Data collection: reviewing and collating the results of multiple quantitative studies
Exemplar study: Hattie, J. (2009) Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. Oxon: Routledge

METHODOLOGY — Randomised controlled trials
Research technique: Quantitative
Data collection: conducting double-blind studies and large-scale experiments
Exemplar study: Buckingham, J., Beaman-Wheldall, R., & Wheldall, K. (2012) A randomised control trial of a MultiLit small group intervention for older low-progress readers. Effective Education, 4 (1), 1–26

METHODOLOGY — Standardised instruments
Research technique: Quantitative
Data collection: conducting surveys, tests, questionnaires
Exemplar study: Invernizzi, M., Sullivan, A., Meier, J., & Swank, L. (2004) The Phonological Awareness Literacy Screening for Preschool (PALS). Charlottesville, VA: University of Virginia

Table 1: Methodologies, research and data collection techniques used in some exemplar English language education studies

What counts as evidence?

How can teachers evaluate the evidence used to claim that a particular teaching approach, strategy or program is effective?

The first step is to consider whether the evidence is strong or weak. If a teacher tries out a strategy or implements a program once or twice, and the students are engaged and demonstrate improvement in their learning, the teacher is likely to share their personal experience of the success of that strategy or program with colleagues. Anecdotal evidence of this type, however, is considered to be weak.

While it is always worth learning from our colleagues’ personal accounts of successful professional practice, we cannot be sure how or why the practice was successful, or whether successful implementation of the practice could be repeated with different students by a different teacher at a different time.

Case studies

To examine more closely the potential effectiveness of a teaching practice in an actual teaching situation with a specific group of students, the implementation of the practice might be carefully and systematically recorded and described in detail, from the planning stage to final assessment of student achievement. A case study of this type often begins with a systematic description of the wider context in which the practice is being implemented, for example, an ethnographic description, which includes the cultural and social circumstances of the students and their families, the curriculum context and the teaching situation, the age of the students, their stage of development and their learning needs.

In educational research, the ‘case’ being studied might be a particular teaching practice or program, a teaching intervention, the progress made by a single student, one whole class or school of a particular profile (a single case), or a group of students or multiple classes or schools of comparable or diverse profiles (a multiple case).

A case study can include the use of quantitative and qualitative methods, and can be used to illustrate, evaluate, explain and/or explore a teaching practice or program of interest in a particular context. A case study can also be used as a pilot study in preparation for undertaking a study on a much larger scale (Mills, Durepos, & Wiebe, 2010; Yin, 2014).

Case study evidence is not considered to be strong evidence because it is not based on data drawn from a large population, and so cannot be used for statistical generalisation. Nevertheless, evidence based on a case studied in context has the potential to be transferred to other comparable contexts (Jensen, 2008). Case study evidence can also be used to challenge generalisations based on evidence emerging from large-scale experimental studies, and to question whether a general finding is applicable in specific contexts of interest.

Standardised testing

Evidence drawn from standardised testing of large student populations is considered to be strong evidence. In Australia, the National Assessment Program: Literacy and Numeracy (NAPLAN) comprises a collection of standardised tests designed to assess whether Australian students are achieving minimum outcomes in the foundation skills of reading, writing, language and numeracy at key stages of schooling. So that students are tested as consistently as possible, all aspects of the delivery of the test are standardised, including the design of the test items, the conditions under which students sit the tests, and the way student responses are scored and interpreted. NAPLAN results provide a snapshot of what each individual child, each class of students, each year group, school, sector, system or region, and the national cohort, can achieve on one day when tested for a limited set of skills and knowledge. Because the test is standardised, student performance at each of these levels, and across time, can be compared. The findings can be used by parents, teachers, schools, systems and funding providers to track progress and to identify where intervention and/or extra resources might be needed. NAPLAN is also promoted as a means for making teachers and schools more accountable.

Although NAPLAN tests are standardised and NAPLAN data are collected from a large population at regular intervals, the statistical reliability of the data, and the comparability of the results from one year to the next, and from one individual, school or sector to the next, have been questioned. In other words, despite NAPLAN being a large-scale standardised test, there are differing opinions about the strength of the evidence supplied by NAPLAN, and how much it can be relied upon to guide decisions about teaching practice. Moreover, because NAPLAN is a single snapshot of a limited range of skills, it does not reflect student achievement across all areas of the curriculum, which raises questions about the validity of some of the ways NAPLAN data are used (for example, Wu, 2016). While the purpose of NAPLAN is to improve teaching and learning, NAPLAN is becoming increasingly high stakes for teachers and schools, so ‘much teaching is now aimed at improving NAPLAN scores’ (Lingard et al., 2016, page 6), with the potential perverse effect of narrowing the curriculum and reducing the capacity of teachers and schools to differentiate the curriculum to meet the diverse learning needs of students.

Experimental methods

Evidence that is statistically the strongest is based on data collected using experimental methods such as randomised controlled trials. In this type of experiment, participants are assigned randomly to separate test and control groups. All variables are controlled, and kept constant, with the exception of one, which, in the test group, is deliberately manipulated. The responses of participants in both groups are measured. Data collected from the two groups are compared using statistical analysis, during which the effect size of manipulating the one variable is calculated and tested for significance. Experiments of this type are designed so they can be repeated in exactly the same way, with the potential for strengthening the evidence even more. Experimental findings emerging from randomised controlled trials are promoted as delivering the strongest evidence for the effectiveness, or not, of a particular teaching strategy or practice (Kamil, 2012) because the evidence is considered to be generalisable.
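To make the idea of an effect size more concrete, the sketch below (written in Python, using the widely available numpy and scipy libraries) shows one way the scores of a test group and a control group might be compared with a t-test and Cohen's d. The scores, group sizes and variable names are invented for illustration only; they are not drawn from any actual study, and real trials involve many further design and analysis decisions.

    # A minimal sketch of the statistical comparison behind a randomised controlled
    # trial: two groups, one manipulated variable, a significance test and an effect size.
    # All scores below are invented for illustration only.
    import numpy as np
    from scipy import stats

    intervention = np.array([68, 72, 75, 70, 74, 77, 71, 73])  # post-test scores, test group
    control = np.array([65, 69, 70, 66, 68, 71, 67, 69])       # post-test scores, control group

    # Independent-samples t-test: is the difference between the group means significant?
    t_stat, p_value = stats.ttest_ind(intervention, control)

    # Cohen's d: how large is the difference, measured in pooled standard deviations?
    n1, n2 = len(intervention), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * intervention.std(ddof=1) ** 2 +
                         (n2 - 1) * control.std(ddof=1) ** 2) / (n1 + n2 - 2))
    cohens_d = (intervention.mean() - control.mean()) / pooled_sd

    print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")

By convention, an effect size of about 0.2 is read as small, 0.5 as moderate and 0.8 as large, although these thresholds are rules of thumb rather than fixed standards.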

The evidence becomes even stronger when results from multiple studies of a similar or related type are collected and collated. This is achieved when researchers undertake a systematic review or meta-analysis. A systematic review, a structured form of literature review, is a review and synthesis of evidence gathered from a selection of related studies (for example, Chall, 1967; Freebody, 2007; Snow, Burns, & Griffin, 1998). A meta-analysis is the use of statistical methods to synthesise the results of related studies, often to compare the effect sizes of different treatments or interventions (for example, Hattie, 2009; Myhill & Watson, 2014).
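Building on the sketch above, the fragment below illustrates, in simplified form, how a meta-analysis might pool the effect sizes reported by several related studies. A fixed-effect, inverse-variance weighting is shown because it is the simplest common approach; the effect sizes and variances are invented for illustration, and real meta-analyses also test for heterogeneity between studies and often use random-effects models instead.

    # A minimal sketch of a fixed-effect meta-analysis: each study contributes an
    # effect size and its variance, and studies with less uncertainty carry more weight.
    # All values below are invented for illustration only.
    import numpy as np

    effect_sizes = np.array([0.45, 0.30, 0.60, 0.25])  # hypothetical per-study effects (Cohen's d)
    variances = np.array([0.02, 0.05, 0.04, 0.03])     # hypothetical per-study variances

    weights = 1.0 / variances                          # inverse-variance weights
    pooled_effect = np.sum(weights * effect_sizes) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))         # standard error of the pooled effect

    print(f"Pooled effect size = {pooled_effect:.2f} (SE = {pooled_se:.2f})")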

Teachers as researchers

Assessment

To plan an effective program of teaching and learning, teachers collect information to identify what students already know (baseline assessment), as well as student learning needs and goals (diagnostic assessment). Then, during the implementation of the program, teachers monitor and keep track of student progress (formative assessment), giving feedback and adjusting their teaching as needed before assessing student achievement (summative assessment). Teachers use the evidence they gather from this cycle of assessment to evaluate and reflect on their teaching, and to program for subsequent teaching (Centre for Education Statistics and Evaluation, 2015; Di Gregorio, 2016).

Teachers undertake these different types of assessment by drawing on a repertoire of assessment strategies. Depending on the assessment strategy used, the information collected can be represented as quantitative data (such as test scores, numerical marks, percentages, rankings) or as qualitative data (such as observation notes, annotated work samples, reports, portfolios). Teachers use these data to inform their programming decisions, to provide students with targeted feedback, and for reporting. So, in a sense, teachers who integrate assessment effectively into their program are already using evidence to inform their decision-making. In other words, they are already engaging in a form of evidence-based practice.

For classroom assessment to be principled and ethical, it needs to be designed with some key features. First, the assessment must be transparent, so students are clear beforehand about what is expected of them and what is to be assessed; and practical, so that the burden on teachers and students in terms of time, effort and stress is proportionate to the educational benefit derived from collecting the information. Second, the assessment must be valid, that is, it assesses what it claims to assess, and the data collected are used in meaningful ways, and not to support unrelated or over-stated claims. At the same time, assessment must be reliable, so that assessment results are as consistent as possible from one student to the next, from one teacher to the next, and from one time period to the next. Finally, assessment tasks should be authentic, and useful in the context of real-world needs and demands, so that as students prepare for the assessment, they engage with curriculum content that is meaningful and of value for meeting their long-term educational needs and goals.

The principles that apply to the design of classroom assessment also apply to educational research design. These principles can be illustrated in the context of two methodologies for conducting research in classrooms, methodologies that can be considered in some ways as more rigorous and systematic versions of the assessment processes routinely undertaken by teachers in classrooms: action research and design-based research.

Action research

Action research is a form of enquiry teachers can use to understand their work practices and how their work impacts on their students and/or colleagues. Action research is a systematic way for teachers to reflect on their practice, and to investigate questions about how to improve their practice (Burns, 2010). It has been described by Kemmis, McTaggart and Nixon (2014, page 4) as a type of research ‘oriented to making improvements in practices and their settings by the participants themselves’.

Through action research, teachers can evaluate and improve their practice by examining successes to understand how and why they worked and by investigating challenges and problems with the goal of improving learning outcomes. The action research process begins when the teacher formulates a question, for example: When I do X, why does Y improve, but Z does not? How can I improve … ? In order to answer the question, the teacher designs an action research plan. The design of the plan will incorporate:

  • informed action, that is, a principled change to the teaching context or teaching practice with the goal of improving learning outcomes
  • procedures for collecting data before, during and afterwards
  • procedures for reflecting on and interpreting the data to revise the plan and to inform the next action in the cycle

The action research plan must be practical and achievable in the teaching context. The first step in the plan is the collection of baseline data, that is, data collected before any action is taken. The second step is to implement the action, recording its effect. The teacher then reflects on what happened in order to revise and plan for the next action in the series. The data collected during each action-evaluation-reflection cycle are called in-cycle data. Action research findings can be used as evidence to support changes in classroom practice. These changes then need to be monitored to see if they continue to have a positive impact on the learning outcomes of the students in the class over time. The findings can be shared with colleagues, who might trial the same or similar actions, or the findings might be used to generate further questions to investigate.
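As a simple illustration of how baseline and in-cycle data might be compared, the sketch below (Python again, with invented scores) calculates the average gain for the same group of students between the baseline collection and a later point in the cycle. It is a deliberately simplified, hypothetical example: an actual action research project would interpret such numbers alongside qualitative records such as observation notes and work samples.

    # A minimal sketch comparing baseline data with data collected later in an
    # action research cycle, for the same group of students. All scores are invented.
    import numpy as np

    baseline = np.array([12, 15, 9, 14, 11, 13])   # e.g. words read correctly before the action
    in_cycle = np.array([16, 18, 11, 17, 15, 14])  # the same students after one cycle

    gains = in_cycle - baseline
    print(f"Average gain: {gains.mean():.1f}")
    print(f"Students who improved: {(gains > 0).sum()} of {len(gains)}")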

When teachers undertake action research, they often collaborate with university researchers. University researchers can assist with the design of the action research and systematic data collection, as well as the use of theoretical concepts to analyse and interpret the data and to enhance the strength of the evidence emerging from the project. University researchers can also assist with the publication of findings, so these are shared more widely, and can become a starting point for classroom based innovation and research elsewhere.

Design-based research

Another methodology used for research collaborations between teachers and university researchers is design-based research, involving ‘small, pragmatic, planned and classroom data-informed interventions, designed with the intention of developing theory about and demonstrating evidence of effective literacy pedagogic practice’ (Comber, Freebody, Nixon, Carrington & Morgan, 2016, p. 316). This methodology has been developed to blend ‘empirical educational research with the theory-driven design of learning environments’, and it has been described as ‘an important methodology for understanding how, when and why educational innovations work in practice’, and the relation between theory and practice. The aim of this type of research is both to ‘create usable knowledge’, while at the same time advancing ‘theories of learning and teaching in complex settings’ (The Design-based Research Collective 2003, page 5).

[Design-based research] means that teachers collect baseline performance data on an area of students’ learning they are wanting to improve, design and implement a pedagogical intervention informed by theory and related research, and subsequently collect another set of student performance data to compare with the first. Further analysis occurs, and further intervention(s), in iterative cycles of amended practice and data collection, with teachers refining their interventions on the basis of their classroom inquiries and students’ work. Final data and analyses are used to inform theoretical understandings and for dissemination … (Freebody, Morgan, Comber, & Nixon, 2014, page 9).

 

When university researchers and teachers collaborate to improve practice and to generate evidence of effective practice, the design of the project must take into account both the ethics of using students as research participants, and the theory to be applied as a tool for organising, thinking about and interpreting the data.

Research ethics

The ethics of designing classroom-based research echo the principles that apply to the assessment of student learning in classrooms. In other words, classroom-based research must be transparent, practical, valid, reliable and authentic. In addition, researchers must account for the fact that the relationship between students and teachers is not an equal one, so they must ensure the research has ‘merit and integrity’, is justifiable and respectful, of potential benefit, in the best interests of the students, and safe, protecting the students’ physical, emotional and psychological wellbeing (National Statement on Ethical Conduct in Human Research, pages 29–31). Before participating in research, students, and their parents for those under 18, must give their consent. Because educators are in a position of authority, classroom-based research is designed in ways that ensure parental and student consent is informed and freely given. Parents and students need to be assured that there will be no negative consequences if they decline to participate or choose to withdraw at any time.

Research involving children and young people raises particular ethical concerns about:

 

  • their capacity to understand what the research entails, and therefore whether their consent to participate is sufficient for their participation;
  • their possible coercion by parents, peers, researchers or others to participate in research; and
  • conflicting values and interests of parents and children (National Statement, page 29)

Theory

In order to organise, think about and interpret data, researchers use theory. In everyday language, we often use the word ‘theory’ when we are talking about possibilities or guesses, for example: ‘In theory, we should be able to win this.’ Researchers, on the other hand, use the term ‘theory’ to label the conceptual framework, body of knowledge or model they use to organise, explain and interpret data. A theory is like a map the researcher uses to find their way around the research context and the data, and to locate different elements in relation to each other. Often research is designed to test the explanatory power of a theory in a specific context, with the result that the findings might modify or extend the theory.

Practitioners often dismiss theory as not relevant to resolving their day-to-day challenges and problems. Nevertheless, working from a sound theory or model enables practitioners to reflect on their practice systematically and critically, and also to generalise on the basis of reflective practice in order to sustain successful innovations over time and to implement innovative ideas emerging from their findings in different contexts.

The idea that good theory and good practice depend on each other has echoed through the centuries, as illustrated in the following maxims, the first by Leonardo da Vinci in the fifteenth century, and the second, in the twentieth century, by Kurt Lewin, who coined the term ‘action research’.

Those who fall in love with practice without science [knowledge/theory] are like a sailor who enters a ship without a helm or a compass, and who never can be certain whither he is going (The Notebooks of Leonardo da Vinci, Chapter XIX).


There’s nothing so practical as a good theory (Lewin, 1951, page 169).

Evidence-based practice in the classroom: debates and challenges

Particular types of instruction are often promoted by educational policy-makers and administrators, and commercial interests, as evidence-based in ways that imply the approach works for all students in all contexts. Even when a particular teaching strategy or practice is promoted as being ‘evidence-based’, however, classroom teachers remain responsible for making professional judgements about whether and how the strategy or practice should be applied to the design of teaching and learning programs for the students in their specific teaching situation.

To make such judgements, drawing on the principles that underpin high-quality classroom assessment, teachers can ask questions such as the following:

Transparency — Can the practice be implemented in a way that ensures students are clear about what is expected of them and why?

Practicality — Does the benefit of implementing this practice outweigh any burden it might place on teachers and students in terms of time, effort and stress?

Validity — Is the evidence on which the claimed benefit of this practice is based meaningful in this specific teaching context, or are the claims unrelated or over-stated in relation to the needs of this particular group of students?

Reliability — Will the evidence on which the claimed benefit of this practice is based hold true for this group of students and in this teaching situation at this time?

Authenticity — Does the practice address the real-world demands of the curriculum and the learning needs of this group of students? Does it enable the students to engage meaningfully with curriculum content and will it help them meet their long-term educational needs and goals?

One of the biggest challenges facing teachers evaluating the effectiveness of a particular teaching practice or strategy is the number of variables in any teaching situation. Educational researchers may be able to control some of these variables in the design of experimental studies used to evaluate a particular teaching approach, but classroom teachers are rarely afforded this luxury (Freebody, 2007; see also Fehring & Freebody, 2017; Proctor, Freebody & Brownlee, 2015). This raises questions such as the following.

  • Can variables controlled in experimental studies also be controlled in real-world classrooms?
  • Can teachers in real-world settings faithfully replicate the evidence-based teaching strategy?
  • Can teaching resources used in the experimental studies be made available and relevant, or be replicated accurately in real-world settings?
  • Has the evidence collected by researchers to underpin claims that a particular teaching practice or strategy is evidence-based been adjusted to account for diverse real-life classroom settings, for example, through accompanying ethnographic and field descriptions, systematic observation and/or text analysis?

When evidence derived from research designed using experimental methods is tempered by evidence emerging from research designed using ethnographic, case study and text analysis methods, the distinctive features of specific teaching situations and student profiles can be accounted for. Here are some examples.

  • Ethnographic studies, which document the social context in which the research takes place, may produce evidence which teachers find more accessible because it more closely reflects their classroom experience (Purcell-Gates, 2011, 2013).
  • While case study findings may not represent large populations, and so cannot be used statistically to make generalised claims, the findings of a case study that includes a detailed description of the teaching and learning context can be transferred to comparable contexts (Jensen, 2008).
  • When student work samples are analysed using principled frameworks (a theory) for describing consistently the way students respond to the language and literacy demands of the curriculum, the analysis can reveal which aspects of language and literacy development are key to meeting those demands and suggest teaching practices that might contribute to student progress. Frameworks of this type also provide shared tools for assessment, and for comparing the language and literacy demands of different educational contexts, identifying the distinct language and literacy demands of specialised learning areas, such as Science or History, and understanding the way multimodal and digital texts make meaning (Callow, 2013; Derewianka, 2011; Derewianka & Jones, 2016; Humphrey, Droga, & Feez, 2012; Polias, 2016; Rose & Martin, 2012).

With the heavy promotion of teaching practices labelled as ‘evidence-based’ by both policy-makers and commercial interests, teachers need to become critical consumers of this type of promotion. They need to apply their professional judgement in ways such as those described above in order to weigh up competing and contradictory claims, and to avoid ‘cherry-picking’ from the evidence to support a particular agenda, or one side of a debate. An area of English teaching in which teachers must use professional judgement to navigate competing research agendas and high-profile debates is the teaching of early literacy.

The experimental evidence is very strong that intensive, systematic instruction in the sounds of English (phonemic awareness) and in how those sounds correspond with the letters of the English alphabet (phonics) has a positive effect on the English word reading skills of children in the early years of school (e.g. Buckingham, Wheldall, & Beaman-Wheldall, 2013), and also, to some extent, prevents later reading difficulties (Ehri, Nunes, Stahl, & Willows, 2001). As a result, this type of instruction is now heavily promoted, with phonics screening tests soon to be mandated in Australia.

When the high-stakes spotlight is on teaching practices designed to teach sound-letter correspondence (code-breaking), there is a risk that less attention is paid to other critical aspects of learning to read, such as building vocabulary and reading for meaning, as well as using and interpreting the meanings socially and culturally (Freebody, 2007). To counteract this risk, teachers need to be mindful when designing literacy programs that the research evidence also shows the following:

  • Evidence supporting the effectiveness of intensive phonics instruction is ‘based on achievement measures that typically test letter-sound knowledge and word knowledge in isolation … not on reading whole texts and developing reading strategies and skills in the context of reading’ (Xue & Meisels, 2004, pages 194–195).
  • The ability of children to attend to speech units larger than phonemes (e.g. syllables, onset and rime) is a reliable indicator of future reading success (Goswami, 2006). Morphological awareness, that is, awareness of the meaningful parts of words, is also associated with improved reading comprehension (Kirby, Deacon, Bowers, Izenberg, Wade-Woolley, & Parrila, 2012).
  • To learn to read effectively, ‘children need a balanced instructional approach that includes learning to break the code and engaging in meaningful reading and writing’ (Xue & Meisels, 2004, page 222).
  • To be successful throughout the school years, students need to be taught explicitly and cumulatively how to meet the advanced literacy demands of each content area (Freebody & Morgan, 2014, Shanahan & Shanahan, 2014).
  • Research evidence that emphasises teaching letter-sound correspondence does not account for the ‘differences in the developmental trajectories of different reading skills’ (Paris & Luo, 2010, page 321), and the fact that constrained skills, such as decoding, are learned relatively quickly, while unconstrained skills, such as reading comprehension, develop much more slowly over the years of schooling and beyond (see also Luke & Woods, 2008).
  • There is a wide array of research findings in relation to early literacy development, yet, as illustrated in Clark (2017), it is easy for researchers, policy makers and teachers working in this area to foreground evidence that supports a strongly held view, while ignoring evidence that might challenge or contradict that view. Clark (2017) surveys research findings evaluating the implementation of a Year 1 phonics screening test in England, and, on the basis of this survey, argues that before a similar screening test is implemented in Australia, a more comprehensive review of the evidence is needed.

With a diverse array of research findings such as these, the task facing a teacher of early literacy is to survey the research findings in order to design ‘a meaningful assemblage of practices’ that make ‘specific and particular sense for each group of students with whom they work’ (Honan, 2004, p. 37). This involves ‘carefully and thoughtfully’ making ‘a series of professional judgements about what and how to teach’ (p. 39). To assemble literacy teaching practices in a principled way, teachers can use the ‘four resources’ model (Honan, 2004, p. 45). The four resources model can be used to organise literacy resources and practices into four categories: resources used to break the code, resources used to comprehend and participate in the meanings of texts, resources for using texts to meet ‘sociocultural expectations’, and resources for analysing texts to reveal and challenge the predispositions and biases of the writer (Freebody, 2004, page 7). This model is a conceptual tool teachers can use to fill ‘the gaps in the map of practices they have created’ (Honan, 2004, page 45) and, thus, to design a comprehensive and balanced literacy program.

Teachers also need to apply professional judgement, and account for the specific circumstances and needs of their students, when interpreting data emerging from all types of high stakes testing (Goss, Hunter, Romanes, & Parsonage, 2015; Taber, 2013). They need to consider what each high stakes test is designed to assess, how it is constructed, how test results are used and the effects of testing, both positive and negative, on their students and school community (Lingard, Thompson, & Sellar, 2016). Large scale data, collected through NAPLAN for example, can be useful for tracking national, state and sector trends in the development of narrow sets of skills, and where resources need to be targeted, but are less useful for understanding the educational progress and achievement of individual students and classes and evaluating the professional knowledge and skills of individual teachers. While NAPLAN data have ‘some positive uses’, classroom teachers also have to take into account evidence of potential ‘negative impacts on learning and on student well-being’ (Wyn, Turnbull, & Grimshaw, 2014, page 6). These include narrowing the curriculum, teaching to the test, reduced student motivation to learn, increased teacher and student stress, labelling students on the basis of deficits, a reduced emphasis on equity and social justice, and diminished capacity for teachers to differentiate their teaching to meet specific student needs (Lingard et al., 2016).

For the foreseeable future, teachers will be working in a context in which evidence-based teaching is foregrounded, and data collected through high stakes testing will be used to measure the performance of students, teachers, schools and school systems. This context requires teachers to be critical consumers of what counts as evidence, which means recognising both the strengths and limitations of educational research and high stakes testing, the evidence these produce and the ways this evidence is used (Lingard et al., 2016; Prain, 2017; Taber, 2013). Globalisation, combined with the advance of information technologies, has led to what has been termed the global educational reform movement (GERM) (Sahlberg, 2011) and the ‘datafication’ of schooling (Lingard et al., 2016). These developments have been taken up enthusiastically by policy makers, educational administrators and commercial interests. Nevertheless, classroom teachers remain responsible for ensuring research evidence is used for the benefit of their students and their own professional learning.

Conclusion

So, what does this all mean for teachers? For as long as schools and teaching have been a constant in society, teachers and principals have earnestly sought the best outcomes for the learners in their care. To achieve these outcomes, teachers and schools have always worked to reflect on and refine their teaching methods. In more recent times, however, this process has been accelerated, mandated and, now, measured.

This paper provides an introduction to evidence-based pedagogy. It has defined key terms and unpacked some key processes underpinning educational research. The paper attempts to provide some critical knowledge and checkpoints for navigating the complex terrain of educational research and, at the same time, to show how teachers’ classroom practice, and the decisions made by school and system leaders, might be based on research evidence. To make this clearer, Table 2 below takes five well-known primary English teaching strategies or practices and links them to some of the research on which these practices are based. While this table is not exhaustive, it shows some of the evidence that suggests these strategies work. As we have argued in this paper, when choosing strategies and practices to use in their classrooms, teachers must also weigh up the research evidence, both its merits and shortcomings, in relation to the complex configuration of variables that together characterise each unique teaching situation.

In conclusion, teachers and school leaders must be acknowledged for the professional knowledge and skills that enable them to navigate the complexities of educational systems every day, at the same time as they also strive to ensure the best educational outcomes for each learner. Increasingly, within this complex terrain, teachers are required to understand the work of educational researchers and ‘what counts as evidence’. The authors of this paper hope that we have provided some guidelines for English teachers as they undertake this demanding, but professionally rewarding, task.

CLASSROOM PRACTICE DERIVED FROM RESEARCH — 1


A rich preschool education provides a strong foundation for future literacy development. Teachers can impact on children’s literacy trajectories. Literacy development is not a neat, universal, lockstep process. Some children need more time; some plateau and then race ahead.
RESEARCH METHODS

An Australian longitudinal case study including: standardised measures of literacy achievement; teachers as ethnographers; purpose-developed literacy assessment tasks.
Importantly, the case study data were ‘supported by quantitative analysis’.

FINDINGS

Teachers could, and did, make a significant difference to the learning of children from poor communities, and these children can achieve.
The study observed a small number of students who appeared to resist schooling and school literacies.
As an educational community, we need to find innovative ways to harness their creativity, problem solving and imaginative capacities.

RESEARCH WHICH PROVIDES EVIDENCE THAT THIS WORKS

Hill, S., Comber, B., Louden, B., Reid, J. & Rivalland, J. (1998) 100 Children Go to School: Connections and disconnections in literacy experience prior to school and in the first year of school. Canberra: Department for Education, Employment, Training and Youth Affairs.


CLASSROOM PRACTICE DERIVED FROM RESEARCH — 2

Collaborative classroom talk. Learners working with talking partners. Teachers plan for specific talking activities in classrooms.

RESEARCH METHODS

A British-based study using: systematic observation; computer-based text analysis; ethnographic analysis; sociolinguistic discourse analysis; conversation analysis.

FINDINGS

We can identify specific ways in which classroom dialogue can promote learning and the development of understanding. Specific types of talk are identified as being associated with the best learning outcomes.

RESEARCH WHICH PROVIDES EVIDENCE THAT THIS WORKS

Mercer, N. (2010) ‘The analysis of classroom talk: Methods and methodologies.’ British Journal of Educational Psychology, 80, 1–14


CLASSROOM PRACTICE DERIVED FROM RESEARCH — 3


Sustained Silent Reading (SSR); Drop Everything and Read (DEAR); Free Voluntary Reading (FVR). School library programs. Premier’s Reading Challenge. Book Week activities.

RESEARCH METHODS

A USA-based study. Survey methods. Case study methods.

FINDINGS

Students participating in Sustained Silent Reading report that they read more at the end of the SSR program than at the beginning (Pilgreen & Krashen, 1993).

RESEARCH WHICH PROVIDES EVIDENCE THAT THIS WORKS

Pilgreen, J. & Krashen, S. (1993) ‘Sustained silent reading with high school ESL students: Impact on reading comprehension, reading frequency, and reading enjoyment.’ School Library Media Quarterly, 22, 21–23

 
CLASSROOM PRACTICE DERIVED FROM RESEARCH — 4

Text-based teaching. Genre-based writing. Grammar and meaning.

RESEARCH METHODS

Australian study. Questionnaire completed by student writers. 528 written texts collected from 33 writing situations. Discourse analysis of 7 narrative texts written by Year 6 students.

FINDINGS

This doctoral study identified a model for teaching writing (Deconstruction, Joint Construction and Independent Construction) which, because its principal goal is to describe language in use, provides a framework hitherto lacking in documents about teaching writing.

RESEARCH WHICH PROVIDES EVIDENCE THAT THIS WORKS

Rothery, J. (1990) Story Writing in the primary school: assessing narrative type genres. Unpublished PhD thesis, University of Sydney.
Rothery, J. (1996) Making changes: Developing an educational linguistics. In R. Hasan and G. Williams (Eds). Literacy in society (pp. 86-123). London: Longman

CLASSROOM PRACTICE DERIVED FROM RESEARCH — 5
 
The Big Five in Reading: Phonemic awareness; Phonics; Fluency; Vocabulary; Reading comprehension
RESEARCH METHODS

A meta-analysis of studies in the teaching of reading, using the following four criteria for inclusion. Study participants must be carefully described (age; demographics; cognitive, academic and behavioural characteristics). Study interventions must be described in sufficient detail to allow for replicability, including how long the interventions lasted and how long the effects lasted. Study methods must allow judgements about how instruction fidelity was ensured. Studies must include a full description of outcome measures.

FINDINGS

A national panel was assembled to assess the status of research-based knowledge, including the effectiveness of various approaches to teaching children to read. The panel’s report identified phonemic awareness, phonics, fluency, vocabulary and reading comprehension as the key components of effective reading instruction.

RESEARCH WHICH PROVIDES EVIDENCE THAT THIS WORKS

National Institute of Child Health and Human Development (2000) Report of the National Reading Panel: Teaching children to read. Reports of the subgroups. Washington, DC.

Table 2: Five well-known primary English teaching strategies or practices, linked to research on which these practices are based

References

About the authors

Dr Susan Feez is Associate Professor in the School of Education at the University of New England (UNE), Armidale, NSW. She has worked as a classroom teacher of language, literacy and EAL/D, and now teaches and researches in the fields of English language, literacies and EAL/D education. Susan is interested in the development of textbooks and teacher handbooks as a means of fast-tracking into classrooms educational innovation derived from research. This has been the motivation for a series of educational publications to which she has contributed over recent years.

Dr Robyn Cox is currently Associate Professor of Literacy Education at ACU National, Strathfield campus. Prior to this she was Principal Lecturer at the University of Worcester, UK, and a member of the executive committee of the United Kingdom Literacy Association (UKLA). Robyn has also held positions at universities in Australia, Singapore and Brunei Darussalam. She is the author of several international journal articles in the field of literacy research and has been involved in teacher education in four countries over a 20-year period. Robyn is well known for her commitment to the development of a strong professional knowledge base in initial teacher education and remains dedicated to bringing accessible educational research and theory to teacher education students.

How this content relates to AITSL teacher standards


Standard 1: Know students and how they learn

  • 1.2.1 Graduate Understand how students learn. Demonstrate knowledge and understanding of research into how students learn and the implications for teaching.
  • 1.2.2 Proficient Understand how students learn. Structure teaching programs using research and collegial advice about how students learn.

Standard 2: Know the content and how to teach it

  • 2.1.2 Proficient Content and teaching strategies of the teaching area. Apply knowledge of the content and teaching strategies of the teaching area to develop engaging teaching activities.

AITSL Illustration of Practice: Using a Dictagloss to support EAL/D students

AITSL Illustration of Practice: Building the field in science to assist students to make connections

AITSL Illustration of Practice: Developing social media profiles to build and represent content knowledge in geography

  • 2.1.3 Highly Accomplished Content and teaching strategies of the teaching area. Support colleagues using current and comprehensive knowledge of content and teaching strategies to develop and implement engaging learning and teaching programs.

AITSL Illustration of Practice: Strategies for composite language classes

AITSL Certification Evidence: Developing a Cooperative Reading program to address underachievement and disengagement with reading in upper primary

  • 2.5.1 Graduate Literacy and numeracy strategies. Know and understand literacy and numeracy teaching strategies and their application in teaching areas.

AITSL Illustration of Practice: Modelling recount writing and creating text adaptations using technology

AITSL Illustration of Practice: Strategies for developing early literacy and numeracy skills

AITSL Illustration of Practice: Developing critical reading skills using the SQ3R method

  • 2.5.2 Proficient Literacy and numeracy strategies. Apply knowledge and understanding of effective teaching strategies to support students’ literacy and numeracy achievement.

AITSL Illustration of Practice: Improving Sentence Structure knowledge using oral language in Year 1

AITSL Illustration of Practice: Using storyboards to develop multimodal texts

AITSL Illustration of Practice: Achieving multiple literacy outcomes through developing and composing multimodal texts

AITSL Illustration of Practice: Developing early literacy through explicit connections between meaning in text, oral language and image

  • 2.5.3 Highly Accomplished Literacy and numeracy strategies. Support colleagues to implement effective teaching strategies to improve students’ literacy and numeracy achievement.

AITSL Illustration of Practice: Using reciprocal teaching to improve reading with Year 3 and 4 students

AITSL Illustration of Practice: Using Strategies Reading Action to investigate characters in texts

AITSL Illustration of Practice: Explicit teaching of high frequency words through big books

AITSL Illustration of Practice: Modelling focus group teaching in literacy

AITSL Illustration of Practice: Collegiate discussions to improve teaching in literacy

AITSL Certification Evidence: Developing a Cooperative Reading program to address underachievement and disengagement with reading in upper primary

Standard 3: Plan for and implement effective teaching and learning

  • 3.3.1 Graduate Use teaching strategies. Include a range of teaching strategies.
  • 3.3.2 Proficient Use teaching strategies. Select and use relevant teaching strategies to develop knowledge, skills, problem solving and critical and creative thinking.
  • 3.6.1 Graduate Evaluate and improve teaching programs. Demonstrate broad knowledge of strategies that can be used to evaluate teaching programs to improve student learning.
  • 3.6.2 Proficient Evaluate and improve teaching programs. Evaluate personal teaching and learning programs using evidence, including feedback from students and student assessment data, to inform planning.

AITSL Illustration of Practice: Evaluating teaching programs that support early language and literacy development

AITSL Illustration of Practice: Using data to improve differentiation in reading groups

Standard 5: Assess, provide feedback and report on student learning

  • 5.1.2 Proficient Assess student learning. Develop, select and use informal and formal, diagnostic, formative and summative assessment strategies to assess student learning.

AITSL Illustration of Practice: Using formative assessment practices with students in the classroom

AITSL Illustration of Practice: Approaches to ongoing informal assessment

  • 5.1.3 Highly Accomplished Assess student learning. Develop and apply a comprehensive range of assessment strategies to diagnose learning needs, comply with curriculum requirements and support colleagues to evaluate the effectiveness of their approaches to assessment.

AITSL Illustration of Practice: Collegial discussions about forms of assessment

  • 5.3.1 Graduate Make consistent and comparable judgements. Demonstrate understanding of assessment moderation and its application to support consistent and comparable judgements of student learning.
  • 5.3.2 Proficient Make consistent and comparable judgements. Understand and participate in assessment moderation activities to support consistent and comparable judgements of student learning.

AITSL Illustration of Practice: Tracking student progress

  • 5.4.1 Graduate Interpret student data. Demonstrate the capacity to interpret student assessment data to evaluate student learning and modify teaching practice.
  • 5.4.2 Proficient Interpret student data. Use student assessment data to analyse and evaluate student understanding of subject/content, identifying interventions and modifying teaching practice.

AITSL Illustration of Practice: Improving teaching using data about student learning

AITSL Illustration of Practice: Moderating student work with colleagues to identify next steps for teaching and learning

AITSL Illustration of Practice: Developing intervention strategies using ILPs to support student achievement

Standard 6: Engage in professional learning

  • 6.1.1 Graduate Identify and plan professional learning needs. Demonstrate an understanding of the role of the Australian Professional Standards for Teachers in identifying professional learning needs.

AITSL Illustration of Practice: Applying professional learning resources to support student learning

  • 6.1.2 Proficient Identify and plan professional learning needs. Use the Australian Professional Standards for Teachers and advice from colleagues to identify and plan professional learning needs.

AITSL Illustration of Practice: Using feedback to set teaching goals and identify professional learning needs

  • 6.2.1 Graduate Engage in professional learning and improve practice. Understand the relevant and appropriate sources of professional learning for teachers.

Illustration of Practice: Using professional learning to improve teaching with ICT resources

  • 6.2.2 Proficient Engage in professional learning and improve practice. Participate in learning to update knowledge and practice, targeted to professional needs and school and/or system priorities.
  • 6.4.1 Graduate Apply professional learning and improve student learning. Demonstrate an understanding of the rationale for continued professional learning and the implications for improved student learning.

AITSL Illustration of Practice: Seeking professional learning

  • 6.4.2 Proficient Apply professional learning and improve student learning. Undertake professional learning programs designed to address identified student learning needs.

Standard 7: Engage professionally with colleagues, parents/carers and the community

  • 7.4.1 Graduate Engage with professional teaching networks and broader communities. Understand the role of external professionals and community representatives in broadening teachers’ professional knowledge and practice.
  • 7.4.2 Proficient Engage with professional teaching networks and broader communities. Participate in professional and community networks and forums to broaden knowledge and improve practice.