
Introduction

            This paper will analyze a research study related to the use of literacy intervention tools such as the Reading Recovery intervention. The study, published in the Journal of Research in Reading, is titled "Can gains from literacy interventions be sustained? The case of Reading Recovery," and its authors are Rebecca Jesson and Libby Limbrick. Throughout this research critique, a variety of sections will be fully analyzed and critiqued as they relate to the tenets of reliable research in education.


Critique of the Study’s Purpose

            As I reflect on this study, I feel that
the researchers’ purpose was to identify the impact of using early literacy
interventions. Literacy is a topic of importance in much of educational research
because it is a tool that students will use for the rest of their lives. Much
of the research that is conducted around literacy focuses a great deal on
targeting specific factors that may impede or enhance the academic progress of
all students.

            As a result of targeting specific factors that may impact the literacy development and overall success of students, Jesson and Limbrick (2014) focused on the Reading Recovery intervention. These researchers investigated whether reading achievement gains were maintained once students successfully completed the Reading Recovery intervention (Jesson & Limbrick, 2014). In addition, Jesson and Limbrick (2014) examined the variables that were associated with the Reading Recovery growth outcomes of this study.

            Throughout this study, these researchers identified that early literacy programs have been known for increasing the reading achievement skills of students who have had difficulty with literacy concepts in their younger years (Jesson & Limbrick, 2014).

According to Jesson and Limbrick (2014), very little evidence identified how the actual gains from early literacy programs such as Reading Recovery may have contributed to the sustained academic progress of students in subsequent years.

            In similar studies that investigated the impact of using the Reading Recovery intervention, many researchers further examined the benefits of this literacy tool. For example, Schwartz (2005) conducted a study with at-risk first-grade students who engaged in the Reading Recovery intervention during the beginning of the school year. Their overall achievement results were compared to those of another group of students that had participated in this intervention throughout the second half of the school year (Schwartz, 2005).

            The results of this study revealed that the students who had engaged in this literacy intervention at the start of the school year performed better than students who had participated during the second half of the school year (Schwartz, 2005). The results and purpose of this study are similar to those of the Jesson and Limbrick study, because Schwartz examined reading achievement gains in addition to the effects of using this intervention as a long-term support in subsequent years (Schwartz, 2005).

            A study conducted by Vaughan (2011) investigated the sustained effects of using the Reading Recovery intervention on student literacy achievement in the third grade, along with its effects in subsequent years. The results of this study revealed that the use of this literacy intervention significantly improved student literacy levels (Vaughan, 2011). According to Vaughan (2011), however, the long-term effectiveness of this intervention could not be established because the results did not provide comparable data for any of the studied cohorts.

            As I compare these three studies, the overall purpose of the Jesson and Limbrick study definitely addresses a valid concern. Literacy is a topic of importance, and it is an area that many teachers struggle to master. According to Gambrell and Morrow (2014), the goal of literacy instruction is to prepare students with skills that they will be able to use throughout their adulthood so that they can actively take part in our democratic society. As I reflect on the study that was conducted by Jesson and Limbrick, their focus and purpose can definitely provide great insight to educators and other researchers. By conducting this study and providing the results, educators and other researchers can use this knowledge to seek out other reliable tools to ensure that all students are equipped to be literate and successful in today's society.

Critique of the Introduction

            This study definitely builds a case for the issues that are examined throughout the research. The introduction addresses areas where there is a wealth of research regarding reading achievement programs, and it also identifies areas of importance where further research should be conducted to better assess student progress (Jesson & Limbrick, 2014).

            According to Pyrczak and Bruce (2011), an introduction should start by identifying a problem and should be followed by a summary of how the researcher will investigate that problem. Jesson and Limbrick identify the two phases of their study, and they also identify their basis for data collection (Jesson & Limbrick, 2014). These are two essential components of an introduction to any research study, because they introduce the problem, establish its importance, and prepare the reader for the questions that will be answered throughout the research (Pyrczak & Bruce, 2011).

Review of Literature

            The review of literature section of this study definitely presents a variety of literature relevant to the main purpose of the study. Throughout the review, areas such as the sustainability of Reading Recovery effects, the design of the Reading Recovery intervention, and short-term success were all discussed.

            According to Clay, the Reading Recovery intervention was designed to increase the pace of student learning and levels of achievement (as cited in Jesson & Limbrick, 2014). As Jesson and Limbrick (2014) explored other studies relevant to the use of this intervention, they found a substantial amount of research that demonstrated differing views on it.

            According to Lee, 84% of New Zealand students who use the Reading Recovery intervention are discontinued and assumed to be able to perform within the average range of their class (as cited in Jesson & Limbrick, 2014). Lee further indicates that approximately one in 10 students are recommended for long-term literacy supports, while 7% of students leave the intervention and their schools, resulting in outcomes that cannot be traced (as cited in Jesson & Limbrick, 2014). As Jesson and Limbrick (2014) identify the results of Lee's study, they provide further explanation of the ongoing effects of this intervention.

            Reynolds and Wheldall argue that the academic gains from using this intervention are eventually washed out over time (as cited in Jesson & Limbrick, 2014). In support of this claim, Jesson and Limbrick (2014) suggest that some of the washed-out gains may reflect that the intervention provides only a small amount of the knowledge that is needed for future learning.

            Jesson and Limbrick (2014) explored the sustainability of the effects of using the Reading Recovery intervention. According to Hiebert and Taylor, no intervention can protect against future learning difficulties (as cited in Jesson & Limbrick, 2014). Hiebert and Taylor conclude that without the use of ongoing supports, some groups will still be unable to read sufficiently (as cited in Jesson & Limbrick, 2014).

            Overall, the authors definitely accessed a variety of research studies that supported the purpose of their study. They cited studies that could provide beneficial insight regarding the implementation, sustainability, and overall outcomes of using early literacy interventions (Jesson & Limbrick, 2014).

            Each of the studies mentioned throughout this review of literature provided different insight regarding the impact that the Reading Recovery intervention had on students. As I analyzed the reviews of the given literature, I feel that some of the sources that were used failed to provide recommendations for further investigations on these topics. If Jesson and Limbrick had included more of these recommendations in their literature review, I feel that they could have better supported their purpose for conducting this research.

Critique of the Authors’ Solution to the Problem

            This study tested the sustainability of student outcomes from using the Reading Recovery intervention, and it also tested the factors that were connected with sustainable gains (Jesson & Limbrick, 2014).

            Throughout phase 1 of this study, the researchers assessed the reading and writing achievement of students by using standardized tests and unassisted writing samples (Jesson & Limbrick, 2014). They indicated that research assistants used writing rubrics to assess the students' writing samples (Jesson & Limbrick, 2014).

            According to Jonsson and Svingby (2007), the use of rubrics can be beneficial because rubrics can increase consistency in scoring and can provide conclusions that are reliable. By using a writing assessment rubric, these researchers were able to investigate the raw scores and national trends for the study participants (Jesson & Limbrick, 2014).

            Throughout phase 2 of this study, the researchers chose three schools based on two criteria (Jesson & Limbrick, 2014). The first criterion consisted of school-based Reading Recovery student scores that had maintained consistency, while the second criterion consisted of school means that could be compared to the national calculations (Jesson & Limbrick, 2014).

            I feel that the treatment these researchers chose definitely suited the purpose of this study. Jesson and Limbrick (2014) assessed the sustainability of student outcomes and investigated any factors that played a role in reading and writing achievement.

            Overall, I feel that both of these treatments can provide beneficial data for teachers working to implement the Reading Recovery intervention. When using learning interventions, it is essential for researchers to identify how they plan to test the overall solution to a problem. As researchers identify their plan, the outcome of each study can reveal whether the treatment had a significant impact on the participants.

Critique of the Research Design

            For phase 1 of this study, the subjects were students who had previously used the Reading Recovery intervention successfully throughout their second year of attending school (Jesson & Limbrick, 2014). The subjects, according to Jesson and Limbrick (2014), came from differing socioeconomic backgrounds.

             Demographic data, which included ethnicity, the students' home language, and reading levels, were collected and examined (Jesson & Limbrick, 2014). In addition, reading achievement data for the participants were collected and reviewed from tests widely used in New Zealand: the STAR and PAT Comprehension tests (Jesson & Limbrick, 2014). Because these tests were administered to students beginning in their second year of school, the researchers briefly indicated the time frame for this study as occurring between school years 4 and 6 (Jesson & Limbrick, 2014).

            At the start of this study, writing
achievement data were collected from each participant, which consisted of
unassisted writing samples (Jesson & Limbrick, 2014). Research assistants
analyzed this data, and they used writing rubrics to calculate the raw score
and national norms for each participant (Jesson & Limbrick, 2014).

            For phase 1 of this study, the researchers did not indicate the training that the research assistants had received prior to assessing the unassisted writing samples. In addition, the researchers did not indicate whether the administrators of the STAR and PAT Comprehension tests had been provided with specific training to proctor these assessments with fidelity. The demographic backgrounds, home languages, and reading levels of the participants were also not disclosed. Throughout this study, the researchers indicated that these data were collected, but the specifics would have been helpful information to include.

            For phase 2 of this study, the
researchers chose three different schools based on two measures (Jesson &
Limbrick, 2014). The first measure was the mean score of student participation
with the Reading Recovery intervention on a school-based level, and the second
measure was school means compared with national calculations (Jesson &
Limbrick, 2014).

            Throughout phase 2 of this study, the researchers indicated that school leaders and staff members were invited to attend focus groups comprising literacy leaders, educators, principals, and junior-level administrators (Jesson & Limbrick, 2014). These participants engaged in discussions about whether the students discontinued from this literacy intervention program had maintained ongoing success and academic achievement (Jesson & Limbrick, 2014).

            For phase 2, the researchers did not indicate how long the discussions lasted, nor did they include a description of a rubric or tool for measuring these discussions. The only detail that they identified about analyzing the discussions was that they were coded by themes (Jesson & Limbrick, 2014).

            I feel that it would have been beneficial for the researchers to include how the research assistants were trained to conduct these discussions. In addition, it would have been beneficial to include how the final coding of the discussions was determined, because without this information it is difficult to determine how the discussions were actually assessed.

Critique of the Analysis of Data

            The data of this study were analyzed using a method that scaled the test scores of discontinued students against national student achievement (Jesson & Limbrick, 2014). The mean stanine was 0.5 of a standard deviation below the national mean, which indicated that the participants' stanines did not match the national distribution (Jesson & Limbrick, 2014).

            According to Jesson and Limbrick (2014), one-third of the participants achieved at or above the mean scores for both assessments. Overall, approximately 60% of students performed at or above average levels on the PAT and STAR assessments (Jesson & Limbrick, 2014).

            As the reading and writing data were assessed, the researchers found moderate correlations between the scores at each age level (Jesson & Limbrick, 2014). The outcomes of each age level were moderately correlated, at 0.48, 0.57, and 0.51 (Jesson & Limbrick, 2014).
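To illustrate what a moderate correlation of roughly 0.5 looks like, the Python sketch below computes Pearson's r by hand. The scores are invented for illustration and are not data from the Jesson and Limbrick study.

```python
# Pearson correlation computed from its definition:
# r = covariance / (sd_x * sd_y), using deviations from each mean.
# The scores below are hypothetical, for illustration only.
reading = [1, 2, 3, 4, 5]
writing = [2, 3, 1, 5, 4]

n = len(reading)
mean_r = sum(reading) / n
mean_w = sum(writing) / n

# Sum of cross-products of deviations (proportional to covariance)
cov = sum((x - mean_r) * (y - mean_w) for x, y in zip(reading, writing))
# Sums of squared deviations (proportional to each variance)
ss_r = sum((x - mean_r) ** 2 for x in reading)
ss_w = sum((y - mean_w) ** 2 for y in writing)

r = cov / (ss_r * ss_w) ** 0.5
print(f"r = {r:.2f}")  # r = 0.60, a moderate correlation
```

Values near 0.5, like those reported in the study, indicate that the two sets of scores move together but far from perfectly.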

            Jesson and Limbrick (2014) investigated the differences in outcomes by conducting a univariate analysis of students who discontinued the intervention, in relation to the socioeconomic status of the participants according to the statistic deciles. The STAR data revealed a significant difference, which prompted the researchers to conduct a post hoc analysis (Jesson & Limbrick, 2014). The post hoc analysis showed that decile 2 schools had a stanine mean of 4.01, while the middle-high SES (decile 8) schools had a mean of 4.79, about half a stanine higher than the decile 10 school (Jesson & Limbrick, 2014).

            To assess any change in students' level of achievement once they discontinued use of the Reading Recovery intervention, the researchers conducted a univariate ANOVA, which revealed a small difference in the subsequent scores of the discontinued students' reading levels (Jesson & Limbrick, 2014). Jesson and Limbrick (2014) concluded that participants who read at higher levels achieved higher scores on standardized assessments.

            As I reflect on the overall data analysis of this study, I feel that using the ANOVA analysis was a good choice because it allowed the researchers to identify a small but significant difference in the stanine score outcomes (Jesson & Limbrick, 2014). According to Tarlow (2016), the analysis of variance (ANOVA) is one of the most popular statistical techniques for comparing samples in a study. Researchers frequently use ANOVA because it can compare multiple means.

             According to Emerson (2017), both the t-test and ANOVA can compare the mean scores of groups, but the t-test is designed to compare the means of two groups, while ANOVA is designed to compare the means of more than two groups. This study compared more than two groups, so I feel that using the ANOVA method was the best choice for identifying any significant differences or gains in student outcomes.
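The distinction between the t-test and ANOVA can be sketched in Python with SciPy. The stanine scores and group names below are hypothetical examples, not data from the study.

```python
from scipy.stats import f_oneway, ttest_ind

# Hypothetical stanine scores for three school groups
# (illustrative values only, not data from Jesson & Limbrick).
decile_2 = [4, 5, 3, 4, 4, 5, 3, 4]
decile_8 = [5, 6, 5, 4, 6, 5, 5, 6]
decile_10 = [4, 5, 4, 5, 4, 4, 5, 4]

# A t-test compares the means of exactly two groups at a time...
t_stat, t_p = ttest_ind(decile_2, decile_8)

# ...while a one-way ANOVA compares all three group means at once.
f_stat, f_p = f_oneway(decile_2, decile_8, decile_10)

print(f"t-test (2 groups): p = {t_p:.3f}")
print(f"ANOVA (3 groups):  F = {f_stat:.2f}, p = {f_p:.3f}")
```

Running separate t-tests on every pair of groups would inflate the chance of a false positive, which is why a single ANOVA (followed by a post hoc test, as the researchers did) is the standard choice for three or more groups.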

Critique of the Authors’ Conclusions

            The conclusions based on the data from this study were definitely a reflection of the study's outcomes. After a deep analysis of achievement gains from the use of the Reading Recovery intervention, students did indeed maintain levels in the range of their cohort (Jesson & Limbrick, 2014). The evidence of this study suggests that more than half of the students maintained reading level achievement, but the overall outcome was 1 standard deviation below the national average for reading (Jesson & Limbrick, 2014).

            According to Larson and Farber (2015), the standard deviation measures the variation of a data set about the mean and is always greater than or equal to zero. Throughout phase 1 of the study, the researchers indicated that the stanine mean for students was 0.5 of a standard deviation below the national results (Jesson & Limbrick, 2014). This statement supports their concluding thoughts on the maintained reading level achievement for students in the tested cohorts.
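What "0.5 of a standard deviation below the national mean" means on the stanine scale can be shown with a few lines of Python. The stanine scale has a fixed national mean of 5 and standard deviation of 2; the small score list is invented for illustration.

```python
# Stanines are standardized scores with a national mean of 5 and a
# standard deviation of 2. A cohort mean 0.5 standard deviations
# below the national mean therefore corresponds to a stanine of 4.
NATIONAL_MEAN = 5
NATIONAL_SD = 2

cohort_mean = NATIONAL_MEAN - 0.5 * NATIONAL_SD
print(cohort_mean)  # 4.0

# Standard deviation itself measures spread about the mean and is
# always >= 0, computed here for illustrative stanine scores.
scores = [3, 4, 4, 5, 4]
mean = sum(scores) / len(scores)
sd = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
print(f"mean = {mean}, sd = {sd:.2f}")  # mean = 4.0, sd = 0.63
```

A cohort stanine mean near 4, like the decile means reported in the post hoc analysis, is consistent with achievement roughly half a standard deviation below the national average.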

            The authors concluded that the writing outcomes were well below national averages and that comparisons of achievement could not be made (Jesson & Limbrick, 2014). The researchers identified a strong relationship between reading and writing achievement, which may suggest that this intervention can subsequently impact both writing and reading skills (Jesson & Limbrick, 2014).