
Article

Synthesis of Research Symposium at CLD's 35th International Conference on Learning Disabilities: Must Reads for 2012–2013

Learning Disability Quarterly, 1–11
© Hammill Institute on Disabilities 2014
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0731948714523435
ldq.sagepub.com

Deborah K. Reed, PhD1, Kelli D. Cummings, PhD2, Elizabeth A. Allen, PhD3, Beverly L. Weiser, PhD4, Brittany L. Hott, PhD5, and Keith Smolkowski, PhD6

Abstract
The Council for Learning Disabilities Research Committee hosted a "Must Read" session at the 35th Annual International Conference in which they discussed influential articles published between August 1, 2012, and July 31, 2013. Articles were selected in six areas relevant to learning disabilities research and practice: response to intervention, reading assessment, math assessment, reading instruction, math instruction, and research methods. The six articles presented by the panel are summarized and explained with respect to why they are considered a "Must Read."
Keywords
response to intervention, reading, math, instruction, assessment, research methods
Part of the mission statement of the Council for Learning
Disabilities (CLD) is to enhance the education of individuals
with learning disabilities (LD). This requires not only rigorous,
evidence-based research on instructional practices, but also
scientific investigations of how to translate findings into natural school settings where they can make the greatest impact.
This concern with what is happening in schools and classrooms and how best to guide the instructional decision making
of practitioners captured the interest of the CLD Research
Committee as it prepared for the annual conference.
The committee members and authors of this article identified several categories of particular relevance to the field:
response to intervention (RTI), reading assessment, math
assessment, reading instruction, math instruction, and
research methods. We then identified articles exemplifying
current work in these categories published between August 1, 2012, and July 31, 2013, that made a significant contribution to the mission of CLD (see http://www.cldinternational.org/About/AboutCLD.asp). Members presented their selections in a panel session at the annual conference and discussed why the articles represented a "Must Read" for the past year. A cross-cutting theme of the pieces was school-level implementation policies and practices for students with LD.
The first selection, made by Committee Chair Deborah
Reed, addresses features of RTI as it is being implemented
in elementary settings. The assessment pieces were chosen
by Kelli Cummings, focusing on decision-making rules
for curriculum-based measurement (CBM) of reading, and
Elizabeth Allen, examining the diagnostic accuracy of
universal screeners for mathematics. Beverly Weiser

identified a meta-analysis of reading strategy instruction,


and Brittany Hott selected a position paper offering a
model for meeting the needs of students with math disabilities as they achieve more rigorous standards for mathematics achievement. Finally, Keith Smolkowski
identified an article offering a more comprehensive
method of evaluating the impact of instructional interventions. Here, we provide summaries of the literature and the committee members' reflections on the importance of the work.

RTI
Deborah K. Reed

Brief Context/Rationale
As is well known, RTI became an allowable option for identifying students with LD under the 2004 reauthorization of
the Individuals With Disabilities Education Act (IDEA).
1Florida State University, Tallahassee, USA
2University of Oregon, Eugene, USA
3PRO-ED, Inc., Austin, TX, USA
4Southern Methodist University, Dallas, TX, USA
5Texas A&M University, Commerce, USA
6Oregon Research Institute, Eugene, USA

Corresponding Author:
Deborah K. Reed, Florida Center for Reading Research at Florida State
University, 2010 Levy Ave., Ste 100, Tallahassee, FL 32310, USA.
Email: dkreed@fcrr.org


Within 5 years of the reauthorization, all states reportedly had RTI initiatives (Bradley et al., 2011). With the next
reauthorization of IDEA anticipated in 2014 or 2015, attention in the field has been turning to refining aspects of RTI
such as how to predict responsiveness (Compton et al.,
2012) or how a tiered model might be implemented with
secondary school students (Vaughn & Fletcher, 2012).
However, RTI as implemented by researchers might be difficult for practitioners to carry out or sustain (Burns et al.,
2013). To better understand the next steps for applications of
RTI in natural settings, it is worthwhile to explore how
schools have translated the research-based components into
practice, particularly in reading at Grades K-3 where more
robust evidence is available (Gersten et al., 2009).

education teacher). More variation was found among the models for delivery of Tier 2 instruction (i.e., 48% delivered Tier 2 within Tier 1; 20% outside Tier 1; and 32% both inside and outside Tier 1) and in the time allocated to Tier 2 (60–225 min per week) and Tier 3 (90–375 min per week).
There were also various combinations across as well as
within schools of the seven models surveyed for serving
special education students (i.e., Tier 3 or 4 only, Tier 1 + 3,
Tier 1 + 2, Tier 1 + 2 + 3, Tier 1 + 2 + 4, Tier 1 + 3 + 4,
outside of tiers). The authors noted this variation was a positive indicator of individualizing the education program for
students in special education, but it also suggests there are
still challenges in determining the intersection between RTI
and special education.

Jenkins, J. R., Schiller, E., Blackorby, J., Thayer, S. K., & Tilly, W. D. (2013). Responsiveness to intervention: Architecture and practices. Learning Disability Quarterly, 36, 36–46. doi:10.1177/0731948712464963

Why this article is a must read. This article offers increasing evidence that those most intimately involved with RTI
implementation at the elementary school level have greater
knowledge of the hallmark elements (e.g., flexible options
for special education students, benchmarking all students,
monitoring progress of those in intervention tiers, increasing intensity of instruction across tiers) than what was
reported just 3 to 4 years ago (e.g., Mellard et al., 2010;
Tackett et al., 2009). Moreover, it did not matter whether the school was in its 1st or 2nd year of implementation or had 3 to 7 years of experience. Jenkins et al. (2013) rightly note that the self-report nature of the data lends itself to providing "the right answer"; nonetheless, it is encouraging that the RTI leadership in elementary schools, at least, is familiar with what the right answers are. At a minimum,
the jargon has taken hold. And there are other indications
in the extant literature that elementary special education
teachers have gone beyond the superficial improvement in
their knowledge of RTI language to improving their practice (Swanson, Solis, Ciullo, & McKenna, 2012). Swanson
and colleagues (2012) observed teachers in Grades 3 to 5
providing more engaging reading instruction that was better oriented to evidence-based practices than what was
reported just a few years ago (Kethley, 2005; Swanson,
2008).
This does not speak to the capacity of general education
teachers or the quality of differentiated instruction occurring in Tier 1, where there is reason to believe special education students might be experiencing far greater variation
in effective practices (Connor et al., 2010). Because Jenkins
and colleagues (2013) did not have any general education
teachers in their sample (apparently because none had sufficient knowledge of RTI implementation in their schools),
it prompts reflection on what the field is emphasizing in RTI
implementation and the provision of services for special
education students. Have we focused too much attention on
reading intervention to the point of marginalizing the really
hard work of prevention in core instruction, which was a
main thrust of the 2004 IDEA reauthorization?

Summary. Jenkins and colleagues sought to expand on previous studies of RTI implementation at the state and district
levels (e.g., Mellard, McKnight, & Jordan, 2010; Tackett,
Roberts, Baker, & Scammacca, 2009) by surveying a broader
sample of schools to determine school-level practices. The
sample included personnel from 62 schools in 31 districts
across 17 states. Although urban, suburban, and rural locations were represented, the districts were predominantly small (i.e., 20 or fewer schools and 5,000 or fewer students), serving mostly White students, 30% or fewer of whom were on
free/reduced-price lunch. Follow-up phone interviews were
conducted with 71% of the sample. Respondents were those
with detailed knowledge of the school's RTI practices, so the
majority of individuals were principals (n = 13) and RTI
coaches (n = 12). Others were identified as special education
teachers (n = 6), district administrators (n = 6), RTI consultants (n = 4), and reading intervention teachers (n = 2).
All schools were implementing RTI in Grades K-3, and most focused on reading (84%), with fewer focusing on math (62%), behavior (47%), or all three (30%). The results revealed more consistency among schools implementing RTI in reading than noted in the extant literature. For example, most respondents stated they were using benchmark assessments (98%), typically curriculum-based measures (90%), and differentiating instruction (80%). However, the
researchers did not ask participants to elaborate on what
they were doing for students of different reading abilities,
so it is unknown how they were defining differentiation.
There was also consistency in intensifying Tier 3 interventions as compared with Tier 2, such as by providing smaller groups, greater time allocation, increased frequency of progress monitoring, and greater expertise of personnel (e.g.,
Tier 2 was more likely to be delivered by a paraprofessional
and Tier 3 was more likely to be delivered by a special

The other important finding from the Jenkins et al.
(2013) study with implications for refining RTI implementation was the great variation in time allotments for Tiers 2
and 3. It seems intuitive that more instructional time would
be better than less, but the extant literature does not point to
an optimal amount of time for core reading instruction or
reading intervention in K-3. Recently, states have begun
reconsidering the once sacrosanct 90-min uninterrupted
reading block (e.g., Indiana, see Hinnefeld, 2013; West
Virginia, see Miller, 2011). This is likely to result in even
less consistency in intensity across schools unless the field
can generate better guidance for minimum and maximum
minutes of reading instruction and supplemental intervention per week. Earlier research identified optimal group
sizes (Vaughn, Linan-Thompson, & Hickman, 2003), so it
seems a curious gap in the RTI literature to not have
addressed time allotments that drive schools' schedules.

Reading Assessment
Kelli D. Cummings

Brief Context/Rationale
CBM began as a tool to be used for evaluating individual
student gains in special education programs (Deno, 2003;
Deno, Mirkin, & Chiang, 1982) and grew in use until it was considered the most widely used assessment in general education (Graney & Shinn, 2005). The initial use of CBM as a tool
for basic skills monitoring in special education within a
problem-solving model has been associated with improvements in the precision of student individualized education
program (IEP) goals (Deno, Mirkin, & Wesson, 1984;
Drasgow, Yell, & Robinson, 2001; Shinn, 2008; Yell &
Busch, 2012) and even in the achievement gains made by
groups of students (L. S. Fuchs & Fuchs, 1986; Stecker,
Fuchs, & Fuchs, 2005). In addition, at a group level, there is
robust support for the use of CBM to facilitate screening
decisions, initial instructional grouping, and evaluation of
relative group progress (Wayman, Wallace, Wiley, Ticha,
& Espin, 2007). Notably, the use of CBM (rather than professional judgment alone) to make screening decisions has
evidence of reducing disproportionality in special education
referrals, evaluations, and identification rates (Marston,
Muyskens, Lau, & Canter, 2003).
Despite relatively robust evidence supporting the use of
CBM across a variety of educational contexts, there has
been remarkably little attention in the research literature to
the reliability and validity of the various features of progress monitoring for individual student decisions (e.g., goals,
decision rules, and methods for evaluating those rules). In fact, most progress monitoring recommendations are very similar, if not exactly the same, as they were at the initial inception of CBM practice. Thus, for 2013's "must read" selection, I recommend a CBM synthesis appearing in

the Journal of School Psychology (Ardoin, Christ, Morena, Cormier, & Klingbeil, 2013). This review by Ardoin and
colleagues focuses on the degree of correspondence
between progress monitoring recommendations and the
extant literature supporting those practices.
Ardoin, S. P., Christ, T. J., Morena, L. S., Cormier, D. C., & Klingbeil, D. A. (2013). A systematic review and summarization of the recommendations and research surrounding Curriculum-Based Measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51, 1–18. doi:10.1016/j.jsp.2012.09.004

Summary. Ardoin and colleagues (2013) have authored a
detailed literature synthesis in which they describe extant
research that supports the use of CBM in oral reading
(CBM-R), the most widely used and researched CBM for
progress monitoring (Graney & Shinn, 2005). As advances
in special education law take root (i.e., Individuals With
Disabilities Education Improvement Act of 2004, 2004),
the results from student progress data have been used for
increasingly higher-stakes decisions. For example, RTI has
as one of its two defining features a measured rate of student progress that is discrepant from expectations. Thus, the
Ardoin synthesis tackles two primary elements for evaluation. First, the authors provide a descriptive synthesis of "the recommendations regarding how to employ CBM-R decision rules on the progress monitoring data of individual students" (p. 2). Second, they examine the extent to which there is an evidence base supporting those recommendations.
The data set for the literature synthesis conducted by
Ardoin and colleagues (2013) included peer-reviewed journal articles, book chapters, and instructional manuals so
long as those references included either a recommendation
for or an evaluation of progress monitoring decision rules.
Their review resulted in final inclusion of 102 documents
through the 2010 publication year, though not all documents addressed all questions posed in the 2013 synthesis.
Basic descriptive information regarding procedures for
progress monitoring decision making from this sample
revealed two methods for evaluating students' progress. First, the "data point decision rule" (p. 2) uses the number of data points above, below, or both above and below an aimline (i.e., expected rate of progress). This approach is more commonly described in assessment manuals (60%) than it is evaluated in empirical articles (12%). Second, the "trend line decision rule" (p. 2) uses methods for calculating a student's slope and compares this observed slope to an
aimline. What Ardoin and colleagues noted about the trend
line decision rules was that, despite the increased ease of
calculating slope using ordinary least squares (OLS)
regression due to wide accessibility of personal computer
software or CBM databases, there was relatively little


change in the frequency with which student trend lines have been used for decision making over the years. Regardless of whether the data point or trend line rules are used, the findings from the synthesis indicate remarkable variability in the number of data points that have been recommended to reliably evaluate a student's progress. Some
authors in the evaluation sample recommended as few as 3
data points whereas others recommended at least 20, with 7
being the most frequently reported standard.
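
To make the two rule families concrete, the sketch below implements both on a toy progress monitoring series. This is an illustrative example only; the baseline, goal, weekly growth rate, and three-point threshold are hypothetical values, not recommendations from the synthesis.

```python
# Minimal sketch of the two CBM-R decision-rule families described above.
# All values (baseline, growth rate, 3-point threshold) are hypothetical
# and chosen only for illustration.
from statistics import linear_regression  # Python 3.10+

weeks = [1, 2, 3, 4, 5, 6, 7, 8]
wcpm = [42, 44, 43, 47, 46, 49, 50, 53]  # words correct per minute

# Aimline: expected growth of 2 wcpm/week from a baseline of 40 wcpm.
expected_slope = 2.0
aimline = [40 + expected_slope * w for w in weeks]

# Data point decision rule: flag when several consecutive scores fall
# below the aimline (three in a row here, purely for illustration).
below = [score < aim for score, aim in zip(wcpm, aimline)]
flag = any(all(below[i:i + 3]) for i in range(len(below) - 2))

# Trend line decision rule: estimate the student's observed slope with
# OLS regression and compare it with the aimline's expected slope.
slope, _intercept = linear_regression(weeks, wcpm)

print(f"Observed slope {slope:.2f} vs. expected {expected_slope:.2f}")
print(f"Three consecutive points below aimline: {flag}")
```

Either rule feeds the same kind of judgment (intensify, continue, or raise the goal); the synthesis's point is that how many data points make such a judgment reliable has gone largely unexamined.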
With the variation in both type of rule (data point or
trend line) and number of data points needed to make a
decision about progress noted, Ardoin et al. turned next to
review the empirical support for any recommendation that
existed in their defined corpus. What they found was that
the two articles most frequently cited as evidence in support
of a particular decision rule (i.e., Shinn, Good, & Stein,
1989; Good & Shinn, 1990) did not actually examine the
accuracy of any progress monitoring decision rule per se.
Instead, both of these articles examined only the predictive
accuracy of student trend lines when either OLS regression
or the split-middle (SM) technique was used. Although
these two most-cited articles note high degrees of error
around trend lines when 10 or fewer data points are collected, they do not provide direct support for any particular
number of data points to collect or type of decision rule to
implement.
Last, and perhaps most startling, was that Ardoin and
colleagues found no single document in their review that
provided any evidence of the accuracy of CBM-R decision
rules for progress monitoring. This finding, coupled with
the known unreliability of trend estimates, makes typical
CBM-R progress monitoring practices seem questionable at
best. Should we consider more intensive intervention for a
student if he or she does not respond after four data points?
What if we are considering raising the goal and failing to
identify a different student for special education because his
or her trend line indicates progress above the minimum?
Would seven data points be sufficient for making this sort
of change? Ardoin and colleagues conclude their manuscript with the recommendation that at least eight data
points may be necessary to yield reliable decision making
for individual students, whether the data point or trend line
decision rule is used.

Why this article is a must read. Ardoin and colleagues (2013) remind us that many of the standard rules for progress monitoring with CBM-R have little evidentiary basis as they stand and, as a result, are not ready to be used for high-stakes decisions such as RTI, nor for choices about whether to continue, reduce, or remove extra supports for students, such as supplemental intervention. As a result of the findings from the Ardoin et al. synthesis as well as very recent empirical research (e.g., Christ, Zopluoglu, Long, & Monaghen, 2012; Christ, Zopluoglu, Monaghen, & Van Norman, 2013), it has become clear that pre-service trainers as well as educational professionals should exercise extreme caution when determining the number of data points needed to make reliable individual student decisions about short-term progress. The enduring standard of the three- or four-point data rule is certainly no longer defensible.
Ardoin and colleagues are careful to characterize their findings as applying solely to the issue of making individual decisions about student progress and not as undermining any of the existing evidence regarding the use of CBM for universal screening, benchmark assessment, or evaluating group-level progress. Thus, their work can be considered consistent with some of the recommendations from other researchers noting that a student's ranked performance (relative to his or her peers) may appear stable even when that same student's performance differs greatly when compared with his or her prior performance on multiple progress monitoring forms. This individual variation may be due to changes in difficulty of progress monitoring forms (e.g., Ardoin & Christ, 2009; Cummings, Park, & Bauer Schaper, 2013; Francis et al., 2008) that, despite strong relative correlations between forms, do not take into account absolute score differences across them (Albano & Rodriguez, 2011).
This recent literature synthesis on progress monitoring with CBM-R provides a future direction for many doctoral students in search of big research ideas because there is a "dire need for research to develop, evaluate, and establish evidence-based guidelines for use and interpretation of CBM-R short-term progress monitoring data" (Ardoin et al., 2013, p. 14). This synthesis also suggests a reconciling of current practices to the available empirical evidence from research, including recommendations for trainers to work with both current and future practitioners to assist with understanding "the clear benefits of CBM" for examining relative rates of student growth and evaluating the impact of instruction on groups of students, as well as "where caution still needs to be exercised, such as with understanding the error involved in CBM-R (individual) progress monitoring data" (p. 15). This work by Ardoin and colleagues offers a blueprint for at least the next decade of CBM research.

Math Assessment
Elizabeth A. Allen
Brief Context/Rationale
The primary objective of the RTI method of academic intervention is to provide early assistance to children who are at
risk of academic failure. At the heart of the RTI process is
early identification through universal screening. However,
no previously published studies in the area of mathematics
had compared computer-adaptive testing (CAT) measures
with other measures designed for universal screening. This

left a gap in the literature on the relative concurrent and
predictive validity of CBM and CAT measures of mathematics at a time when school districts were evaluating new
instruments to adopt as universal screening measures for
predicting math achievement on end-of-year state achievement tests. My "must read" selection is the first study to
address these issues. Its findings have implications for both
test publishers and test users in developing and selecting
measures for use in universal screening.
Shapiro, E. S., & Gebhardt, S. N. (2012). Comparing computer-adaptive and curriculum-based measurement methods of assessment. School Psychology Review, 41, 295–305.

Summary. Shapiro and Gebhardt (2012) investigated the relationship (both concurrent and predictive) between computer-adaptive measurement and CBM methods using a sample of 352 elementary school students in Grades 1 through 4 in rural Pennsylvania who were tested three times throughout the school year (fall, winter, and spring). Specifically, the authors investigated the relationship between
throughout the school year (fall, winter, and spring). Specifically, the authors investigated the relationship between
two CBMs, the AIMSweb Mathematics Curriculum-Based
Measurement (MCBM; NCS Pearson, 2012) and the AIMSweb Mathematics Concepts/Applications (MCAP; NCS
Pearson, 2012) tests, and a CAT measure, the STAR Math
(STAR-M; Renaissance Learning, 2012) test. Furthermore,
the authors investigated the diagnostic accuracy of the
STAR-M in predicting scores on the MCBM and MCAP
and the diagnostic accuracy of all three measures in predicting end-of-year math achievement scores on the Pennsylvania System of School Assessment (PSSA; Data Recognition
Corporation, 2010).
The reported concurrent correlations between STAR-M
and the MCBM and MCAP across all grades and data points
ranged from .25 to .61, with a median of .46. Overall, concurrent correlations were higher between STAR-M and
MCAP (range = .35 to .61, M = .45) than between STAR-M
and MCBM (range = .25 to .56, M = .41) or between MCAP
and MCBM (range = .22 to .65, M = .44). The authors noted
that within-measure correlations across time (fall to winter,
winter to spring) were better for STAR-M (range = .56 to
.77, M = .66) and MCAP (range = .60 to .75, M = .67) than
for MCBM (range = .37 to .77, M = .56), indicating the
STAR-M and MCAP are more consistent over time.
Correlations between fall and winter administrations of
STAR-M, MCAP, and MCBM and the spring administration of the PSSA ranged from .58 to .63 (M = .60) for
STAR-M, .24 to .45 (M = .36) for MCAP, and .12 to .41 (M
= .29) for MCBM. Hierarchical regression analysis indicated that STAR-M accounted for between 19% and 21% of
the unique variance in the PSSA, followed by MCAP (<1% to 2.2%) and MCBM (<1%).
The authors also investigated the diagnostic accuracy
of these mathematics measures. Diagnostic accuracy

refers to the accuracy with which a test differentiates students with a disorder (e.g., LD) from those without the
disorder. Methods for establishing diagnostic accuracy
include the calculation of a test's sensitivity and specificity. In this context, sensitivity refers to the ability of a test to correctly identify students who do have an LD (the most important attribute of a screening test). Conversely, specificity refers to the ability of a test to correctly identify students who do not have an LD. Educational researchers vary regarding how large a test's sensitivity and specificity should be, but recommendations range from .70 to .90 or higher.
The diagnostic accuracy of the STAR-M, MCAP, and
MCBM was investigated by using each of the measures to
predict outcomes on the PSSA. Because the PSSA uses the
16th percentile as the cut point delineating proficient from
not proficient, it was used as the cut point on all measures.
Across all three measures, sensitivity ranged from .28 to .72 and specificity ranged from .91 to .97, indicating that although all three measures were excellent at identifying students who are not at risk for failure in math (median specificity = .94), none of the three measures was particularly good at identifying students who are at risk of failure in math (median sensitivity = .51). When using
STAR-M to predict outcomes on MCAP and MCBM,
results did not improve (median sensitivity = .47; median
specificity = .93).
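
As a concrete illustration of how these indices are computed, the sketch below classifies students with a screener cut score and checks the predictions against a criterion outcome. All scores and cut points are invented for the example and do not come from Shapiro and Gebhardt's data.

```python
# Hypothetical sketch of a diagnostic accuracy calculation: a screener
# cut score predicts risk, and a criterion test defines true status.
# Every value below is invented for illustration.
screener = [12, 35, 18, 40, 22, 38, 15, 33, 28, 41]
criterion = [10, 55, 20, 60, 45, 52, 18, 58, 25, 62]

SCREEN_CUT = 25     # at or below: flagged as at risk by the screener
CRITERION_CUT = 30  # at or below: not proficient on the criterion

predicted = [s <= SCREEN_CUT for s in screener]
actual = [c <= CRITERION_CUT for c in criterion]

tp = sum(p and a for p, a in zip(predicted, actual))          # true positives
fn = sum(not p and a for p, a in zip(predicted, actual))      # missed at-risk
tn = sum(not p and not a for p, a in zip(predicted, actual))  # true negatives
fp = sum(p and not a for p, a in zip(predicted, actual))      # false alarms

sensitivity = tp / (tp + fn)  # share of truly at-risk students flagged
specificity = tn / (tn + fp)  # share of not-at-risk students cleared
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

With these toy numbers, the screener flags three of the four truly at-risk students (sensitivity = .75) and clears five of the six others (specificity = .83), showing how a measure can look acceptable on specificity while still missing the students a screener most needs to catch.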
Why this article is a must read. When selecting measures
for universal screening, schools desire measures that are
well connected to the curriculum and effectively differentiate students who are likely to be at risk of academic failure
from those who are not. Shapiro and Gebhardt (2012) provide the first detailed study in the area of mathematics comparing a CAT measure with any other type of measure
designed for universal screening. The authors found that the
STAR-M consistently showed stronger relations to and was
a better predictor of end-of-year math achievement than
MCAP or MCBM, but even the STAR-M fell far short of
the recommended levels of diagnostic accuracy needed for
a screening measure.
These results illustrate the importance of examining
diagnostic accuracy rather than relying on correlations and
tests of significance when investigating the validity of
assessments. As these results demonstrate, two measures that are highly correlated do not necessarily yield the same results in studies of diagnostic accuracy. The findings from the article highlight the importance of test publishers reporting studies of diagnostic accuracy and of test users demanding such data when selecting measures for use in universal screening and diagnosis.
These findings also highlight the need for much more
research comparing CAT with CBM measures for universal
screening.


Reading Instruction
Beverly L. Weiser

Brief Context/Rationale
Although there are experimental studies and extant literature on how to develop literacy skills for beginning readers,
there is a paucity of research on best practices to help older
readers comprehend and write about expository text, especially for students with LD. However, it is clear that students with reading difficulties need explicit and systematic
instruction in strategies and routines to use while reading and writing about challenging text across content areas
(Bulgren, Graner, & Deshler, 2013). One such strategy,
self-regulated strategy development (SRSD), developed by
Harris and Graham (1999), blends effective step-by-step
explicit instruction procedures for teachers with self-regulation strategies for students to follow in improving their writing abilities. The teacher steps include developing needed
pre-skills, discussing and modeling the strategy with students, having students memorize the strategy, giving guided
and scaffolded practice, and allowing students to have independent practice. During the process of explicit instruction,
the students are learning procedures for self-regulation such as self-instruction, goal setting, self-monitoring, and self-reinforcement. In a meta-analysis, Mason (2013) examined
eight studies that combined SRSD with other cognitive
reading strategies and found significant effects for students
in Grades 4 to 8 with reading and writing disabilities.
Mason, L. H. (2013). Teaching students who struggle with learning to think before, while, and after reading: Effects of self-regulated strategy development instruction. Reading & Writing Quarterly: Overcoming Learning Difficulties, 29, 124–144. doi:10.1080/10573569.2013.758561

Summary. Mason (2013) gathered together recent literature
and experimental studies to investigate whether interconnecting reading, writing, and language processes with cognitive thinking and discussion has positive effects on
improving the reading comprehension and writing of
expository text for students with and without LD. One such
cognitive routine, "think before reading, think while reading, and think after reading" (TWA), was explicitly taught alongside SRSD while students were reading expository text passages. TWA is a nine-step process that has three steps before reading (i.e., thinking about the author's purpose, what the student already knows, and what the student
wants to learn), three steps during reading (i.e., thinking
about reading speed, linking new knowledge, and rereading for clarification), and three steps after reading (i.e.,
thinking about the main idea, summarizing information,
and what the student has learned). Some of the included
studies added in other aspects of effective instruction: student discourse, peer learning, written language instruction,

or summarization instruction. Effect sizes across the seven studies ranged from 0.33 to 1.38 in reading comprehension, oral retelling, and/or writing, favoring those students
receiving SRSD for TWA over contrast groups receiving
other supplemental instruction (i.e., guided reading or
reciprocal teaching) or typical instruction.
Why this article is a must read. The National Center for
Education Statistics (2011) reports that approximately 85%
of fourth through eighth grade students with reading disabilities and 66% of general education students are unable
to gain knowledge or demonstrate learning from the expository text that dominates their content area textbooks. Given
the numbers of students who struggle with literacy in the
average classroom, educators and researchers need to continue to search for the most effective ways to improve students' outcomes. Findings from Mason's meta-analysis
support the use of multicomponent strategy instruction,
which targets many areas of literacy at the same time while
students are reading challenging content area text. In addition, strong intercorrelations have been found between
reading and writing, oral language and writing, and oral language with reading comprehension (Shanahan, 2008).
Therefore, it seems plausible that interconnecting instruction in these areas will allow students with LD to further
increase their performance in both reading and writing.
As called for in college and career readiness standards
(e.g., National Governors Association [NGA] Center for
Best Practices & Council of Chief State School Officers
[CCSSO], 2010), close reading of complex text involves
moving beyond basic recall of facts toward engaging students with critical thinking questions and writing about the
text. SRSD for TWA looks promising for increasing the
reading comprehension, vocabulary, oral language, and
writing abilities of low-performing students because the
approach not only provides explicit instruction in needed
skills but also incorporates goal setting, self-monitoring,
and critical thinking. Mason's findings suggest SRSD will
be most effective if implemented in conjunction with cognitive strategies such as TWA, discussion, peer learning
opportunities, and explicit writing instruction. This has
implications for both researchers and practitioners as we
continue to develop curriculum and instruction aimed at
providing the best educational supports for all learners,
especially those struggling with the demands and density of
content area expository text.

Mathematics Instruction
Brittany L. Hott

Brief Context/Rationale
Currently, 45 states, the District of Columbia, and the
Department of Defense have adopted the Common Core

State Standards (CCSS; NGA & CCSSO, 2010).
Consequently, many school districts have initiated the transition from locally developed standards or those produced by
the National Council of Teachers of Mathematics (NCTM)
to the CCSS. These changes have resulted in numerous curriculum and assessment modifications that affect students
and teachers. Therefore, it is important that practitioners and
researchers be cognizant of the impact these changes might
have on students with math difficulty (MD).
Powell, S. R., Fuchs, L. S., & Fuchs, D. (2013). Reaching the mountaintop: Addressing the common core standards in mathematics for students with mathematics difficulties. Learning Disabilities Research & Practice, 28, 38–48. doi:10.1111/ldrp.12001

Summary. This article presents an overview of the Common
Core framework for mathematics and implications for students with MD. The authors provide an outline of the CCSS
including domain areas, clusters, standards, and links
between the NCTM recommendations and the CCSS. The
authors also provide a framework for teaching and evaluating the progress of students with MD.
Powell and colleagues' definition of MD includes students receiving special education services due to a documented LD as well as those experiencing difficulties
without a formal diagnosis. The difficulties experienced by
students with MD are primarily with foundational concepts,
and many students struggle with one-to-one correspondence, language comprehension, reading difficulties, and
visual-spatial limitations even after they would be expected
to perform beyond these concepts to meet grade-level
standards.
The authors propose a model, "Roscoe's Mountaintop,"
for teaching students with MD. Teachers choose one or
more clusters of grade-level standards for each student
experiencing difficulty. To climb toward the expected
goal, the student first achieves the easier standards at the
base of the mountain and gradually advances to the more
complex standards near the top. The foundational skills
necessary for the eventual mastery of the grade-level standards are referred to as "base camps" (p. 43) in between
the standards. Students who have difficulty with these
foundational concepts should be provided empirically validated interventions. Several foundational skills may be
addressed at the same time; however, each objective must
be met for the student to progress toward mastery.
Within the Common Core framework, Powell et al. emphasize that teachers of students with MD should (a) become familiar with the standards, (b) select a section of grade-level clusters to serve as the "mountaintop" for a student in
a selected time period, and (c) develop instructional practices that include research-based strategies for addressing
grade-level standards as well as the necessary precursor
skills.

Why this article is a must read. The CCSS are designed to
provide students, teachers, and families with a clear understanding of expectations at each grade level. However, to
date, there are limited empirically validated provisions to
support the estimated 5% to 8% of students with LD in
mathematics and the up to 25% of students who experience
significant difficulty with mathematics (Mazzocco, Feigenson, & Halberda, 2011; Shalev, Auerbach, Manor, & Gross-Tsur, 2000). As students progress through the grades,
performance discrepancies between typically developing
peers and students with MD widen (Judge & Watson, 2011),
and those without solid precursor skills are often ill prepared to achieve increasingly demanding grade-level tasks.
It is important that school districts making changes to align
the curriculum, instruction, and assessment with the CCSS
ensure the needs of students with MD are addressed
through well-designed individual plans that better equip the
students to master mathematical concepts.
The model proposed by Powell et al. for addressing student needs while providing access to the grade-level CCSS
makes this article a "must read" because it is among the first
to bring attention to the specific instructional needs of students with MD within the new framework. This will help
guide practitioners and researchers as they continue to evaluate the needs of students with MD.

Research Methods
Keith Smolkowski

Brief Context/Rationale
Educators understand that many efficacious interventions
fail to offer practical advantages in their classrooms. An
intervention may produce outstanding student performance
gains in controlled research settings yet lack social validity,
acceptability, and relevance to the students who receive the
intervention or their teachers and parents. Although some
interventions may be too challenging to implement or use
undesirable methods, many simply require too much time,
money, or other scarce resources. Such challenges limit teachers' and administrators' ability to adopt, implement, and maintain many interventions. Reading Recovery (RR), for instance, has been found to have positive effects by the What Works Clearinghouse (U.S. Department of Education, 2013), but the program requires a substantial amount of time from trained teachers, who typically support just 8 or 10
students per year. Complete implementation requires that
schools provide for program fees, teacher salaries, instructional materials, and rooms with one-way mirrors. Many
schools cannot sustain such an expensive, time-consuming
program. In contrast, schoolwide positive behavior interventions and supports (PBIS), implemented in tens of thousands of schools, has become popular largely because of its
low costs and high social validity. Blonigen and colleagues


(2008) showed that, aside from initial training (a one-time transition cost), schoolwide PBIS implementation replaces other, possibly less effective, activities and hence requires no additional expenses. Researchers would thus benefit from extending their evaluations beyond tests of efficacy or effectiveness to address additional features of an intervention, such as reach, adoption, implementation, and maintenance.
In my "must read" selection, Cook and Odom introduce a special issue of Exceptional Children on implementation science. They discuss evidence-based practices and how the importance of such practices becomes bounded by educators' or other practitioners' willingness and ability to implement them. They describe how implementation science
sheds light on such concerns and briefly summarize the
articles in their special issue. Cook and Odom also offer a
solid overview of the topic and introduce content not discussed elsewhere in the issue.
Cook, B. G., & Odom, S. L. (2013). Evidence-based practices and implementation science in special education. Exceptional Children, 79, 135–144.

Summary. Unique to Cook and Odom's contribution to their special issue is an overview of Glasgow, Vogt, and Boles's (1999) Reach, Efficacy/Effectiveness, Adoption,
Implementation, and Maintenance (RE-AIM) framework, a
set of critical dimensions that encompass the full impact of
interventions. Briefly, RE-AIM suggests that intervention
developers and evaluators should address (a) reach, the proportion of the target population reached by a practice; (b)
efficacy, the capacity to produce the desired effect; (c)
adoption, the proportion of targeted practitioners who adopt
the practice; (d) implementation, the proportion of interventionists who implement the practice with fidelity; and (e)
maintenance, the proportion of interventionists who maintain the practice over time. Assuming estimates of each
component as proportions, Cook and Odom show that a
highly effective intervention may have limited total impact
through the product of terms: .80 (reach) .95 (efficacy)
.70 (adoptions) .60 (implementation) .50 (maintenance) = .16 (total impact). Researchers traditionally focus
on just efficacy (and effectiveness) research, but as suggested by the above calculation, a moderately effective
intervention may have greater total impact if it has high
rates of adoption, reaches most of all of the intended target
population, and can be implemented and maintained in
schools.
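
The total-impact arithmetic is simple enough to write down directly. In the sketch below, the first call reproduces Cook and Odom's illustration; the second adds a hypothetical comparison (the .70 efficacy and the higher uptake proportions are invented for the example) to show how a less effective but more adoptable practice can win on total impact.

```python
# RE-AIM total impact: the product of the five dimensions, each a proportion.
def total_impact(reach, efficacy, adoption, implementation, maintenance):
    return reach * efficacy * adoption * implementation * maintenance

# Cook and Odom's illustration of a highly effective intervention.
strong = total_impact(.80, .95, .70, .60, .50)     # = .16

# A hypothetical moderately effective but widely adoptable intervention.
adoptable = total_impact(.95, .70, .90, .85, .80)  # = .41

print(f"highly effective: {strong:.2f}; widely adoptable: {adoptable:.2f}")
```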
The RE-AIM framework, for example, can be powerful
when recognized as a tool to support the implementation of
sufficiently effective practices. Returning to PBIS, some
authors in the field of Applied Behavior Analysis have
claimed that PBIS trainers and technical assistance

providers have weakened the implementation fidelity of key behavioral interventions and approaches because they minimize the focus on certain critical details (e.g., the role of
consequences) and eschew the use of certified behavior analysts (Johnston, Foxx, Jacobson, Green, & Mulick, 2006).
Although highly qualified psychologists may lead to more
tightly implemented interventions with greater efficacy, the
limited availability and cost of professional implementation
would likely reduce the reach, adoption, and maintenance. If
PBIS represents an approach that is 70% as effective as the
supports provided by certified behavior analysts, yet PBIS
reaches more students and can be implemented and maintained by indigenous educators, then the RE-AIM framework suggests that the impact could be substantially greater than that of the more intensive, more precise strategies that require
access to scarce behavioral expertise.
The RE-AIM model was first developed for public
health researchers by Russ Glasgow and Shawn Boles and
has broad applicability. Glasgow and colleagues' (1999) framework has spread and been used in a variety of research
areas, such as diabetes self-management, family medicine,
housing improvement, and lifestyle change. Cook and
Odom have now brought the RE-AIM framework to special
education, where I believe its application will lead to interventions with greater overall impact on students' lives.
Why this article is a must read. The value of Cook and Odom's article stems from their thought-provoking treatment of implementation science, which can be a complicated topic. Implementation science, Cook and Odom explain, includes dissemination and diffusion, sustainability, scale-up, and professional development research, all intended to reduce the gap between research and practice. This gap, described by some as a "chasm" (e.g., Donovan & Cross, 2002; Cook & Odom, p. 136), has persisted in special education for
decades, as many interventions have been tested with the
help of university-funded specialists unavailable in schools,
in laboratory settings, or with other features that limit applicability in the real world. Although many papers have
addressed this gap, their impact appears to have been minimal. Practitioners frequently report use of ineffective practices (e.g., teaching based on learning styles, unguided
discovery learning, and teaching self-esteem) often because
these are more acceptable, easier to implement, or cost less.
Cook and Odoms inclusion of the RE-AIM framework
broadens their contribution to the field. The RE-AIM model
was initially developed as a more comprehensive way of
reporting the value of an intervention or practice than the
handful of p values and effect sizes that result from an efficacy trial. Nonetheless, the framework, as well as other
research on implementation science, may best serve the
research community as a guide to intervention design.

When intervention designers integrate all RE-AIM factors
into their development plan, it forces them to consider
numerous details that can influence the overall impact. In
addition to the RE-AIM framework, Cook and Odom introduce and integrate other works on implementation science,
such as McIntosh, Filter, Bennett, Ryan, and Sugai's (2010) principles of sustainable prevention and Fixsen, Naoom, Blase, Friedman, and Wallace's (2005) synthesis of implementation research. These works also offer important contributions but are likely to be more familiar to educators.
Ultimately, Cook and Odom present a motivating overview of implementation science with helpful examples. "Implementation is the next, and arguably most critical, stage of evidence-based reforms" (p. 142). Because "the impact of evidence-based practices on children is unavoidably bounded by implementation" (p. 142), their suggestion that special education researchers attend to all aspects of implementation makes their article my "must read."

Summary and Reflections
The six articles selected by members of CLD's Research
Committee highlight an important feature of translating
research in the field of LD into practice. That is, both our knowledge base and practitioners' implementation of evidence-based practices develop in an incremental fashion.
Each new study leads to additional questions warranting
further research and, potentially, revisions to the instructional implications. Similarly, schools enact changes in
stages as they deepen their understanding of the practices,
attempt to overcome barriers encountered, and react to new
policies or emerging research findings. The gradual process
of growing the knowledge base and translating it into real-world settings can make researchers frustrated with the slow or imperfect change occurring in schools, and practitioners frustrated with the inconclusive or infeasible guidance emerging from academia. Our review of the "must reads" for 2012–2013 provides a reminder that, despite our different roles, we are all interested in, and persistent about, making sound instructional decisions capable of enhancing the education of individuals with LD. As long as we remain vigilant in improving the field, perhaps the medieval philosopher Maimonides was correct that "the risk of a wrong decision is preferable to the terror of indecision."
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with
respect to the research, authorship, and/or publication of this
article.

Funding
The author(s) received no financial support for the research,
authorship, and/or publication of this article.

References
*Items marked with an asterisk were included in the review.
Albano, A. D., & Rodriguez, M. C. (2011). Statistical equating with measures of oral reading fluency. Journal of School Psychology, 50, 43–59. doi:10.1016/j.jsp.2011.07.002
Ardoin, S. P., & Christ, T. J. (2009). CBM of oral reading: Standard errors associated with progress monitoring outcomes from DIBELS, AIMSweb, and an experimental passage set. School Psychology Review, 38, 266–283.
*Ardoin, S. P., Christ, T. J., Morena, L. S., Cormier, D. C., & Klingbeil, D. A. (2013). A systematic review and summarization of the recommendations and research surrounding Curriculum-Based Measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51, 1–18. doi:10.1016/j.jsp.2012.09.004
Blonigen, B., Harbaugh, W., Singell, L., Horner, R. H., Irvin, L. K., & Smolkowski, K. (2008). Application of economic analysis to School-Wide Positive Behavior Support (SW-PBS) programs. Journal of Positive Behavior Interventions, 10, 5–19. doi:10.1177/1098300707311366
Bradley, M. C., Daley, T., Levin, M., O'Reilly, R., Parsad, A., Robertson, A., & Werner, A. (2011). IDEA national assessment implementation study (NCEE 2011-4027). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Bulgren, J. A., Graner, P. S., & Deshler, D. D. (2013). Literacy challenges and opportunities for students with learning disabilities in social studies and history. Learning Disabilities Research & Practice, 28, 17–27. doi:10.1111/ldrp.12003
Burns, M. K., Egan, A. M., Kunkel, A. K., McComas, J., Peterson, M. M., Rahn, N. L., & Wilson, J. (2013). Training for generalization and maintenance in RTI implementation: Front-loading for sustainability. Learning Disabilities Research & Practice, 28, 81–88. doi:10.1111/ldrp.12009
Christ, T. J., Zopluoglu, C., Long, J., & Monaghen, B. D. (2012). Curriculum-based measurement of oral reading: Quality of progress monitoring outcomes. Exceptional Children, 78, 356–373.
Christ, T. J., Zopluoglu, C., Monaghen, B. D., & Van Norman, E. R. (2013). Curriculum-based measurement of oral reading: Multi-study evaluation of schedule, duration, and dataset quality on progress monitoring outcomes. Journal of School Psychology, 51, 19–57. doi:10.1016/j.jsp.2012.11.001
Compton, D. L., Gilbert, J. K., Jenkins, J. R., Fuchs, D., Fuchs, L. S., Cho, E., . . . Bouton, B. (2012). What level of data is necessary to ensure selection accuracy? Journal of Learning Disabilities, 45, 204–216. doi:10.1177/0022219412442151
Connor, C. M., Ponitz, C. C., Phillips, B. M., Travis, Q. M., Glasney, S., & Morrison, F. J. (2010). First graders' literacy and self-regulation gains: The effect of individualizing student instruction. Journal of School Psychology, 48, 433–455. doi:10.1016/j.jsp.2010.06.003
*Cook, B. G., & Odom, S. L. (2013). Evidence-based practices and implementation science in special education. Exceptional Children, 79, 135–144.
Cummings, K. D., Park, Y., & Bauer Schaper, H. A. (2013). Equating DIBELS Next oral reading fluency progress monitoring passages. Assessment for Effective Intervention, 38, 91–104. doi:10.1177/1534508412447010
Deno, S. L. (2003). Developments in curriculum-based measurement. The Journal of Special Education, 37, 184–192.
Deno, S. L., Mirkin, P. K., & Chiang, B. (1982). Identifying valid measures of reading. Exceptional Children, 49, 36–45.
Deno, S. L., Mirkin, P. K., & Wesson, C. (1984). How to write effective data-based IEPs. Teaching Exceptional Children, 52, 99–104.
Donovan, M. S., & Cross, C. T. (Eds.). (2002). Minority students in special and gifted education. Washington, DC: National Academies Press.
Drasgow, E., Yell, M. L., & Robinson, T. R. (2001). Developing legally and educationally appropriate IEPs: Federal law and lessons learned from the Lovaas hearings and cases. Remedial and Special Education, 22, 359–373.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network.
Francis, D. J., Santi, K. L., Barr, C., Fletcher, J. M., Varisco, A., & Foorman, B. R. (2008). Form effects on the estimation of students' ORF using DIBELS. Journal of School Psychology, 46, 315–342.
Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53, 199–208.
Gersten, R., Compton, D., Connor, C. M., Dimino, J., Santoro, L., Linan-Thompson, S., & Tilly, W. D. (2009). Assisting students struggling with reading: Response to intervention and multi-tier intervention for reading in the primary grades. A practice guide (NCEE 2009-4045). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Glasgow, R. E., Vogt, T. M., & Boles, S. M. (1999). Evaluating the public health impact of health promotion interventions: The RE-AIM framework. American Journal of Public Health, 89, 1322–1327.
Good, R. H., & Shinn, M. R. (1990). Forecasting accuracy of slope estimates for reading curriculum-based measurement: Empirical evidence. Behavioral Assessment, 12, 179–183.
Graney, S. B., & Shinn, M. R. (2005). Effects of Reading Curriculum-Based Measurement (R-CBM) teacher feedback in general education classrooms. School Psychology Review, 34, 184–202.
Harris, K. R., & Graham, S. (1999). Programmatic intervention research: Illustrations from the evolution of self-regulated strategy development. Learning Disability Quarterly, 22, 251–262. doi:10.2307/1511259
Hinnefeld, S. (2013, July 18). Will Indiana drop third-grade retention rule? School matters: K-12 education in Indiana. Retrieved from http://inschoolmatters.wordpress.com/2013/07/18/will-indiana-drop-third-grade-retention-rule/
Individuals With Disabilities Education Improvement Act of 2004, Pub. L. No. 108-446, 20 U.S.C. § 1400 et seq. (2004).

*Jenkins, J. R., Schiller, E., Blackorby, J., Thayer, S. K., & Tilly, W. D. (2013). Responsiveness to intervention: Architecture and practices. Learning Disability Quarterly, 36, 36–46. doi:10.1177/0731948712464963
Johnston, J. M., Foxx, R. M., Jacobson, J. W., Green, G., & Mulick, J. A. (2006). Positive behavior support and applied behavior analysis. The Behavior Analyst, 29, 51–74.
Judge, S., & Watson, S. M. R. (2011). Longitudinal outcomes for mathematics achievement for students with learning disabilities. The Journal of Educational Research, 104, 147–157. doi:10.1080/00220671003636729
Kethley, C. I. (2005). Case studies of resource room reading instruction for middle school students with high-incidence disabilities (Unpublished doctoral dissertation). The University of Texas at Austin.
Marston, D., Muyskens, P., Lau, M., & Canter, A. (2003). Problem-solving model for decision making with high-incidence disabilities: The Minneapolis experience. Learning Disabilities Research & Practice, 18, 187–200. doi:10.1111/1540-5826.00074
*Mason, L. H. (2013). Teaching students who struggle with learning to think before, while, and after reading: Effects of self-regulated strategy development instruction. Reading & Writing Quarterly: Overcoming Learning Difficulties, 29, 124–144. doi:10.1080/10573569.2013.758561
Mazzocco, M. M., Feigenson, L., & Halberda, J. (2011). Impaired acuity of the approximate number system underlies mathematical learning disability (dyscalculia). Child Development, 82, 1224–1237. doi:10.1111/j.1467-8624.2011.01608.x
McIntosh, K., Filter, K. J., Bennett, J. L., Ryan, C., & Sugai, G. (2010). Principles of sustainable prevention: Designing scale-up of school-wide positive behavior support to promote durable systems. Psychology in the Schools, 47, 5–21. doi:10.1002/pits.20448
Mellard, D., McKnight, M., & Jordan, K. (2010). RTI tier structures and instructional intensity. Learning Disabilities Research & Practice, 24, 185–195. doi:10.1111/j.1540-5826.2010.00319.x
Miller, D. (2011, June 18). Schools drop rule that thwarted reading. Read Aloud West Virginia. Retrieved from http://www.readaloudwestvirginia.org/miller061811.php
National Center for Education Statistics. (2011). The nation's report card: Reading 2011. National Assessment of Educational Progress at Grades 4 and 8 (NCES 2012-457). Washington, DC: Institute of Education Sciences, U.S. Department of Education.
National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common core state standards. Washington, DC: Author.
*Powell, S. R., Fuchs, L. S., & Fuchs, D. (2013). Reaching the mountaintop: Addressing the common core standards in mathematics for students with mathematics difficulties. Learning Disabilities Research & Practice, 28, 38–48. doi:10.1111/ldrp.12001
Shalev, R. S., Auerbach, J., Manor, O., & Gross-Tsur, V. (2000). Developmental dyscalculia: Prevalence and prognosis. European Child & Adolescent Psychiatry, 9, 58–64.
Shanahan, T. (2008). Relations among oral language, reading, and writing development. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 171–186). New York, NY: Guilford Press.
*Shapiro, E. S., & Gebhardt, S. N. (2012). Comparing computer-adaptive and curriculum-based measurement methods of assessment. School Psychology Review, 41, 295–305.
Shinn, M. R. (2008). Best practices in using curriculum-based measurement in a problem-solving model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 243–262). Washington, DC: National Association of School Psychologists.
Shinn, M. R., Good, R. H., & Stein, S. (1989). Summarizing trend in student achievement: A comparison of methods. School Psychology Review, 18, 356–370.
Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of the research. Psychology in the Schools, 42, 795–819. doi:10.1002/pits.20113
Swanson, E. A. (2008). Observing reading instruction for students with LD: A synthesis. Learning Disability Quarterly, 31, 1–19.
Swanson, E., Solis, M., Ciullo, S., & McKenna, J. W. (2012). Special education teachers' perceptions and instructional practices in response to intervention implementation. Learning Disability Quarterly, 35, 115–126. doi:10.1177/0731948711432510
Tackett, K. K., Roberts, G., Baker, S., & Scammacca, N. (2009). Implementing response to intervention: Practices and perspectives from five schools (Frequently asked questions). Portsmouth, NH: RMC Research Corporation, Center on Instruction.
U.S. Department of Education. (2013, July). Beginning reading intervention report: Reading Recovery. Washington, DC: U.S. Department of Education, Institute of Education Sciences, What Works Clearinghouse.
Vaughn, S., & Fletcher, J. M. (2012). Response to intervention with secondary school students with reading difficulties. Journal of Learning Disabilities, 45, 244–256. doi:10.1177/0022219412442157
Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391–409.
Wayman, M. M., Wallace, T., Wiley, H. I., Ticha, R., & Espin, C. A. (2007). Literature synthesis on curriculum-based measurement in reading. The Journal of Special Education, 41, 85–120.
Yell, M. L., & Busch, T. W. (2012). Using curriculum-based measurement to develop educationally meaningful and legally sound Individualized Education Programs (IEPs). In C. A. Espin, K. L. McMaster, & S. Rose (Eds.), A measure of success: The influence of curriculum-based measurement in education (pp. 37–48). Minneapolis: University of Minnesota Press.
