Reclaiming Accountability: Improving Writing Programs through Accreditation and Large-Scale Assessments

About this ebook

Reclaiming Accountability brings together a series of critical case studies of writing programs that have planned, implemented, and/or assessed the impact of large-scale accreditation-supported initiatives. The book reimagines accreditation as a way to leverage institutional or programmatic change.

Contributions to the volume are divided into three parts. Part 1 considers how specialists in composition and rhetoric can work most productively with accrediting bodies to design assessments and initiatives that meet requirements while also helping those agencies to better understand how writing develops and how it can most effectively be assessed. Parts 2 and 3 present case studies of how institutions have used ongoing accreditation and assessment imperatives to meet student learning needs through programmatic changes and faculty development. They provide concrete examples of productive curricular (part 2) and instructional (part 3) changes that can follow from accreditation mandates while providing guidance for navigating challenges and pitfalls that WPAs may encounter within shifting and often volatile local, regional, and national contexts.

In addition to providing examples of how others in the profession might approach such work, Reclaiming Accountability addresses assessment requirements beyond those in the writing program itself. It will be of interest to department heads, administrators, writing program directors, and those involved with writing teacher education, among others.

Contributors: Linda Adler-Kassner, William P. Banks, Remica Bingham-Risher, Melanie Burdick, Polina Chemishanova, Malkiel Choseed, Kyle Christiansen, Angela Crow, Maggie Debelius, Michelle F. Eble, Jonathan Elmore, Lorna Gonzalez, Angela Green, Jim Henry, Ryan Hoover, Rebecca Ingalls, Cynthia Miecznikowski, Susan Miller-Cochran, Cindy Moore, Tracy Ann Morse, Joyce Magnotto Neff, Karen Nulton, Peggy O’Neill, Jessica Parker, Mary Rist, Rochelle Rodrigo, Tulora Roeckers, Shirley K. Rose, Iris M. Saltiel, Wendy Sharer, Terri Van Sickle, Jane Chapman Vigil, David M. Weed

Language: English
Release date: Apr 6, 2016
ISBN: 9781607324355


    Reclaiming Accountability

    Improving Writing Programs through Accreditation and Large-Scale Assessments

    Edited by

    Wendy Sharer

    Tracy Ann Morse

    Michelle F. Eble

    William P. Banks

    Utah State University Press

    Logan

    © 2016 by the University Press of Colorado

    Published by Utah State University Press

    An imprint of University Press of Colorado

    5589 Arapahoe Avenue, Suite 206C

    Boulder, Colorado 80303

    All rights reserved

    Manufactured in the United States of America

The University Press of Colorado is a proud member of The Association of American University Presses.

    The University Press of Colorado is a cooperative publishing enterprise supported, in part, by Adams State University, Colorado State University, Fort Lewis College, Metropolitan State University of Denver, Regis University, University of Colorado, University of Northern Colorado, Utah State University, and Western State Colorado University.

    The paper used in this publication meets the minimum requirements of the American National Standard for Information Sciences—Permanence of Paper for Printed Library Materials. ANSI Z39.48-1992

    The University Press of Colorado gratefully acknowledges the generous support of East Carolina University toward the publication of this book.

    ISBN: 978-1-60732-434-8 (pbk)

    ISBN: 978-1-60732-435-5 (ebook)

    Library of Congress Cataloging-in-Publication Data

    Reclaiming accountability : improving writing programs through accreditation and large-scale assessments / edited by Wendy Sharer, Tracy Ann Morse, Michelle F. Eble, Will Banks.

           pages cm

        ISBN 978-1-60732-434-8 (pbk.) — ISBN 978-1-60732-435-5 (ebook)

    1.  English language—Rhetoric—Study and teaching. 2.  Report writing—Study and teaching.  I. Sharer, Wendy B., editor. II. Morse, Tracy Ann, editor. III. Eble, Michelle F., 1974– editor. IV. Banks, Will, editor.

PE1404.R379 2015

808'.0420711—dc23

2015011299

Cover illustration © fotographic1980/Shutterstock

    Contents


    Introduction: Accreditation and Assessment as Opportunity

    WENDY SHARER, TRACY ANN MORSE, MICHELLE F. EBLE, AND WILLIAM P. BANKS

    PART ONE: LAYING THE FOUNDATIONS—EDUCATING AND LEARNING FROM ACCREDITING BODIES

    1 Assessing for Learning in an Age of Comparability: Remembering the Importance of Context

    CINDY MOORE, PEGGY O’NEILL, AND ANGELA CROW

    2 QEP Evaluation as Opportunity: Teaching and Learning through the Accreditation Process

    SUSAN MILLER-COCHRAN AND ROCHELLE RODRIGO

    3 Understanding Accreditation’s History and Role in Higher Education: How It Matters to College Writing Programs

    SHIRLEY K. ROSE

    PART TWO: CURRICULUM AND PROGRAM DEVELOPMENT THROUGH ASSESSMENT AND ACCREDITATION

    4 Going All In: Creating a Community College Writing Program through the QEP and Reaccreditation Process

    JONATHAN ELMORE AND TERESSA VAN SICKLE

    5 Moving Forward: What General Studies Assessment Taught Us about Writing, Instruction, and Student Learning

    JESSICA PARKER AND JANE CHAPMAN VIGIL

    6 Making Peace with a Regrettable Necessity: Composition Instructors Negotiate Curricular Standardization

    DAVID WEED, TULORA ROECKERS, AND MELANIE BURDICK

    7 A Tool for Program Building: Programmatic Assessment and the English Department at Onondaga Community College

    MALKIEL CHOSEED

    8 Centering and De-Centering Assessment: Accountability, Accreditation, and Expertise

    KAREN NULTON AND REBECCA INGALLS

    9 Using Accountability to Garner Writing Program Resources, Support Emerging Writing Researchers, and Enhance Program Visibility: Implementing the UH Writing Mentors during WASC Reaccreditation

    JIM HENRY

    10 SEUFolios: A Tool for Using ePortfolios as Both Departmental Assessment and Multimodal Pedagogy

    RYAN S. HOOVER AND MARY RIST

    PART THREE: FACULTY DEVELOPMENT THROUGH ASSESSMENT AND ACCREDITATION

    11 Write to the Top: How One Regional University Made Writing Everybody’s Business

    POLINA CHEMISHANOVA AND CYNTHIA MIECZNIKOWSKI

    12 Everybody Writes: Accreditation-Based Assessment as Professional Development at a Research Intensive University

    LINDA ADLER-KASSNER AND LORNA GONZALEZ

    13 A Funny Thing Happened on the Way to Assessment: Lessons from a Thresholds-Based Approach

    MAGGIE DEBELIUS

    14 Faculty Learning Outcomes: The Impact of QEP Workshops on Faculty Beliefs and Practices

    JOYCE NEFF AND REMICA BINGHAM-RISHER

    15 From the Outside In: Creating a Culture of Writing through a QEP

    ANGELA GREEN, IRIS SALTIEL, AND KYLE CHRISTIANSEN

    About the Authors

    Index

    Reclaiming Accountability

    Introduction

    Accreditation and Assessment as Opportunity


    WENDY SHARER, TRACY ANN MORSE, MICHELLE F. EBLE, AND WILLIAM P. BANKS

In the fall of 2007, the Council of Writing Program Administrators (CWPA) Executive Board called on writing program administrators (WPAs) and writing instructors to respond to large-scale writing assessment and higher education accreditation. The Board’s letter outlines specific actions to help stakeholders understand that only effective, valid, and reliable assessments that are context-specific and discipline-based can be used to improve student learning (Council of Writing Program Administrators 2007). These actions include learning about accrediting bodies and accreditation procedures and meeting with the Director(s) of Assessment at members’ institutions. The message throughout the letter is clear: those who teach writing and those who administer writing programs need to be involved in defining the terms and setting the parameters of large-scale writing assessment so that any changes implemented in response to assessment are in keeping with what research and practice have demonstrated to be truly effective in helping student writers.

    We see this collection as a resource for writing instructors and WPAs looking to answer the call to action in sustainable, research-driven, practice-tested ways. Contributions to the volume help readers to accomplish three key things:

    1. Understand the goals and limits of large-scale writing assessment from both the perspective of the accrediting bodies that require it and the writing instructors and WPAs who design, implement, and, ideally, benefit from it.

    2. Consider strengths and weaknesses of assessment structures and assessment-driven improvement initiatives that have been implemented at a variety of types of institutions (included are contributions from writing specialists from schools of differing sizes, student populations, geographical locations, and writing program structures). As the CWPA points out in their letter of fall 2007, specific stories about successful assessment processes are compelling to a range of audiences. The examples offered in the chapters that follow thus provide invaluable support for writing instructors and WPAs as they attempt to persuade other faculty and upper administrators to implement responsive and responsible assessment processes. At the same time, contributors to this collection do not hesitate to identify struggles and setbacks that they encountered in the processes of conducting large-scale assessment and responding to assessment results. The examples are useful for the support as well as the cautions that they provide to readers.

    3. Use ongoing accreditation and assessment imperatives to cultivate productive campus-wide conversations that increase faculty members’ ability to meet students’ writing and learning needs. A benefit of large-scale assessment that receives attention in a number of chapters in this collection is the fruitful and revealing discussion about what is valued in writing across disciplinary contexts. Institution-wide assessment initiatives, then, can be occasions to discover and disrupt unstated and incorrect assumptions that good writing is good writing regardless of where it occurs in the university. Writing specialists and WPAs are in a unique position to lead the development of authentic writing assessment on their campuses and, through their efforts, to change campus understandings of and approaches to writing instruction in ways that have not been possible since the birth of the Writing Across the Curriculum (WAC) movement.

We recognize that what is true of writing is also true of assessment: context always affects what can be effective in any given situation. In response, we bring together here a series of critical case studies of writing programs from across the country that have planned, implemented, and assessed the impact of large-scale, accreditation-supported initiatives. Some of the chapters explore writing program responses to the Southern Association of Colleges and Schools (SACS) requirement that member institutions design, implement, and assess a Quality Enhancement Plan (QEP), a university-wide, five-year program aimed at improving some aspect of student learning. Other chapters explore writing-focused institutional responses to the demands of other higher education accrediting bodies. For example, the North Central Association of Colleges and Schools’ Higher Learning Commission requires an Academic Quality Improvement Program (AQIP), and both the Western Association of Schools and Colleges and the Middle States Commission on Higher Education require institutional reviews that include programmatic assessments.

This list of accrediting agencies requiring assessment and documented improvement initiatives shows that the impetus for large-scale assessment and public accountability has moved into higher education. Many readers may recognize that we have borrowed part of our title from Chris Gallagher’s (2007) book, Reclaiming Assessment: A Better Alternative to the Accountability Agenda, which chronicles the grassroots, context-appropriate assessments for local school districts in the state of Nebraska during the early years of the high-stakes testing environment of No Child Left Behind. Gallagher argues that engaging in this accountability work and focusing on local contexts is best for students and teachers when it comes to assessment and improving learning. As institutions of higher learning have increasingly been asked to account for student learning and to take action to improve that learning, accreditation and the large-scale assessment it entails have become one driver of this work. At East Carolina University, our home institution, we are witnessing first-hand the kinds of large-scale change that can result from engaging in institution-wide, context-specific assessment and accreditation processes. The work that the four of us have done as part of ECU’s accreditation through SACS has substantially benefitted our program, our institution, and our students. Among other things, we have gained

    • a major curricular revision in our composition program that moves the second of our required writing classes to the sophomore year and refocuses that course on the transition to writing in disciplinary contexts;

• a new, technology-rich University Writing Center space;

    • a new, tenure-track position for a Director of our University Writing Center;

    • a set of coherent learning outcomes for our WAC program;

    • a Writing Liaisons program: Writing Liaisons are faculty from across the university who regularly teach writing-intensive courses in their departments and who meet several times during the year with the leadership of our composition and WAC programs to ensure that students get consistent information about writing expectations and writing strategies;

    • an electronic portfolio structure in which students compile writing samples and self-analyses of their own writing across their time at the institution; and

    • expanded professional development opportunities for faculty.

    Getting to this point has not been easy. Details about how WPAs at other institutions have used accreditation and large-scale assessment projects to bring positive change—as well as failure narratives and cautionary tales about working with faculty, upper administration, and accrediting bodies in such projects—would likely have made our journey much smoother, but such material has not been easy to come by. Rather, it has been located primarily within institution-specific accreditation documents and assessment reports, which only some schools have elected to make publicly available on their websites. When we set out to find the most productive ways of using assessment and accreditation opportunities at our institution, there were few published accounts of how these kinds of endeavors had been pursued at other institutions. This collection helps fill that gap.

    The contributors to part one consider how specialists in composition and rhetoric can work most productively with accrediting bodies in order to design assessments and initiatives that meet requirements while also helping those agencies to better understand how writing develops and how it can most effectively be assessed. Angela Crow, Cindy Moore, and Peggy O’Neill explore the historical connections between accrediting agencies and writing programs, urging readers to draw on lessons learned from past assessment mandates. The field, Crow, Moore, and O’Neill make clear, has been and must continue to be vigilant in monitoring and responding productively to assessment mandates so that our programs and our students are not held accountable to standards that do not reflect what our research has shown us about writing pedagogy and the processes of learning to write. With growing calls from the public and politicians for stronger regulation of and greater comparability across higher education curricula, our attention to accreditation processes is critical if we wish to maintain the kinds of context-informed assessments that we know to be most valid and useful.

    One way to advocate for what we know to be good practice in assessment, Susan Miller-Cochran and Rochelle Rodrigo suggest in chapter two, is to work with and for accrediting agencies. More specifically, Miller-Cochran and Rodrigo explore the role of the QEP Lead Evaluator, a member of the SACS team that visits institutions undergoing reaccreditation. If writing is the focus of a school’s QEP—and this is often the case because writing, as we know, is essential to student learning across the curriculum—the QEP Lead Evaluator selected by SACS will, in all likelihood, be a composition specialist from another institution. Through the role of QEP Lead Evaluator, Miller-Cochran and Rodrigo explain, composition experts can share knowledge and research about writing pedagogy and writing assessment with faculty and key administrators at other institutions, thus providing invaluable external validation for and reinforcement of proposed curricular revisions and programmatic changes.

    In the final chapter of part one, Shirley Rose offers an overview of the principles that have historically guided the work of regional accrediting associations and the elements that have traditionally been part of the accreditation process, including cyclical review, institutional self-study, face-to-face interaction with stakeholders, and detailed, individualized institutional reporting of accreditation results. Rose argues that collaborations like those undertaken by Miller-Cochran and Rodrigo are extremely valuable because they position writing program administrators and composition specialists within accrediting organizations that, since their origins in the nineteenth century, have spearheaded the process of determining criteria by which evolving initiatives in higher education will be evaluated. Having writing specialists involved with this work is particularly important because these initiatives, such as the recent growth of dual-credit programs that offer both high school and college credit for the same course, frequently have a direct impact on first-year writing.

    The remaining two parts of the collection present case studies of how institutions have used ongoing accreditation and assessment imperatives to better meet student learning needs through programmatic changes and faculty development. Our goal in presenting these case studies is to provide concrete examples of productive curricular (part two) and instructional (part three) changes that can follow from accreditation mandates while, at the same time, highlighting and providing guidance for navigating challenges and pitfalls that WPAs may encounter within shifting, and often volatile, local, regional, and national contexts. We hope that the successes detailed within the chapters of parts two and three can be used by readers to bolster arguments for resource commitment from upper administration. In addition, contributors have included materials within their chapters and appendices that might be revised and repurposed by readers who are planning for and implementing assessment-driven change at their own institutions. At the same time, contributors have consciously included discussions of what has not gone as planned so that readers might be alert to these challenges and better prepared to respond if similar issues arise.

    Within part two, chapters are organized according to the relative development of the institution’s writing programs prior to assessment-driven change. The section begins with guidance for faculty who are looking to use the accreditation moment to build an institution-wide writing program from the ground up. Jonathan Elmore and Teressa Van Sickle, WPAs at Beaufort Community College, a small two-year institution in coastal North Carolina, used the SACS QEP mandate to establish a WAC program, a writing center, and an online writing lab. As Elmore and Van Sickle explain, large-scale assessment demands can provide the spark needed to start a writing program where one had not existed before.

The next two chapters provide insight into how large-scale assessment can drive significant change to existing programs that have resisted change. Jessica Parker and Jane Chapman Vigil discuss how a reaccreditation visit from the Higher Learning Commission (HLC) to Metropolitan State University of Denver, Colorado, spurred change to the university’s general education program, a program that had remained fairly static at this urban school of over 22,000 students for 20 or more years. The chapter recounts the composition program’s journey from square one—with no stated learning outcomes, no method of collecting materials for assessment, and no system in place for conducting assessment of any kind—to a robust process of assessment and, as indicated by the assessment, curricular revision and implementation of co-curricular support. Drawing from the struggles they encountered along the way, Parker and Vigil provide pointers for other writing instructors and WPAs embarking on an assessment journey. These tips include strategies for constructing rubrics, calibrating readers and aligning different perspectives on what good writing means, fostering instructor buy-in for the assessment process, and ensuring that what is learned through assessment is clearly applied through changes in curriculum, curricular resources, or faculty support.

In a similar vein, David Weed, Tulora Roeckers, and Melanie Burdick provide an in-depth exploration of curricular streamlining within the composition program at Washburn University, a public, open-admissions institution of roughly 7,000 students in Topeka, Kansas. Concerns at Washburn about institutional reaccreditation through the Higher Learning Commission led to a university-wide mandate that all general education courses be redesigned to incorporate common assessments worth 30% of the course grade (112). The composition program at Washburn did not have a history of standardization across sections of composition, but this mandate necessitated some level of it. Weed, Roeckers, and Burdick explain how they met the challenge of establishing a common structure for the composition program while also maintaining a good deal of autonomy for sometimes resistant instructors who were used to operating independently. The chapter details the collaborative processes the authors used to identify common course objectives and to transform those objectives into an assessment rubric. Interviews, focus groups, and artifact collection from several pilot sections that used the new objectives and assessment rubric highlighted for Weed, Roeckers, and Burdick the importance of involving faculty in the process of establishing shared outcomes that are broad enough to allow individual instructors some control over their pedagogy yet narrow enough to ensure that instructors and students recognize that they are participating in a larger, coherent program.

Accreditation concerns similarly propelled the development of an identifiable writing program at Onondaga Community College. It took pressure from the Middle States Commission on Higher Education, Malkiel Choseed explains, to prompt the creation of a WPA position, Writing Program Coordinator, in 2003. From this beginning, the teaching of writing at OCC evolved from a curriculum based on lore and the literary training of most faculty to one based on composition theory and writing research. Furthermore, Choseed highlights how large-scale, accreditation-driven assessment can lead to meaningful collaboration among WPAs, writing instructors, and stakeholders beyond the institution. More specifically, the alignment of writing-related learning outcomes across courses in a program, across programs in an institution, and across institutions in an educational system, such as the State University of New York system of which OCC is a part, can bring momentum and power for change both to individual institutions and to the institutional networks of which they are a part. In the case of OCC, connections Choseed made with other professionals during the processes of large-scale assessment and reaccreditation provided a foundation for additional curricular (and financial) development in the form of a multi-institutional collaboration, with funding from the US Department of Labor, aimed at preparing working adults for career changes.

    Accreditation and assessment demands can also be channeled into innovation for programs that have been actively engaged with current, research-based curricular practices. For example, Karen Nulton and Rebecca Ingalls recount how they have used the Middle States Commission on Higher Education’s accreditation process at Drexel University in Philadelphia to improve their composition program, a program that already embraced rhetorical awareness and a writing-about-writing approach. Nulton and Ingalls parlayed the Middle States process into an occasion to engage faculty, both within and beyond the composition program, in productive conversations about what they really value in student writing and to establish procedures for assessing the contributions that the composition program makes to students’ progress toward those outcomes. In discussing their program revisions, Nulton and Ingalls introduce a theme that runs through the rest of the collection: the fact that large-scale assessment of writing across the university can spur extensive discussion about differences and similarities in what writing is and does in different disciplinary contexts and can enable the creation of a shared vocabulary about writing and a mechanism—in this case, a reflective analysis—for assessing student achievement across writing contexts.

The final chapters in part two provide insight into additional innovations that large-scale assessment and accreditation processes can bring to schools with established writing programs. Jim Henry used accreditation pressure from the Western Association of Schools and Colleges (WASC) to establish a program that embeds writing tutors, known as writing mentors, in composition courses at the University of Hawai’i at Mānoa. In addition to providing more individualized instruction that benefits student writers, this new program enabled faculty in the composition program to advance financial resources for the program, advance research agendas for faculty and graduate students, and advance the program’s institutional reputation and visibility (163). Henry recounts the moves through which he and his colleagues navigated the assessment and accreditation process while also garnering financial support locally and nationally; involving graduate students in research, presentations, and publications drawn from the writing mentors initiative; and raising the institutional visibility of the composition program online, within the university’s Assessment Office, and at institution-wide conferences and workshops. These moves, Henry suggests, can be adapted to other institutional contexts for similar benefits in the wake of accreditation.

Ryan Hoover and Mary Rist round out part two with details about the implementation of electronic assessment portfolios as part of accreditation-related activities in the composition program at St. Edward’s University, a four-year school in Austin, Texas, that has offered an undergraduate major in Writing and Rhetoric since 1987. Not only has this program been in place for over two decades; it is also currently the sixth-largest major on campus. Yet even this well-established program seized the moment of accreditation to make productive changes. Portfolios not only made assessment easier for the program by tracking evaluation scores and calculating historical trends, but also brought additional benefits for students and faculty within the program and across the university (187). Through ePortfolios, multi-modal composing became a serious topic of discussion and instruction across disciplines, students’ investment in reflecting on their work as represented in the ePortfolio grew, and ePortfolios were adapted with good results for faculty involved with program assessment and for students in majors across the university as they planned for careers beyond the university.

    Part three builds on part two by exploring examples of a common benefit fostered through large-scale assessment and accreditation processes: professional development programs that foster campus-wide commitment to writing and that focus on teaching writing in different disciplinary contexts. In chapter 11, Polina Chemishanova and Cynthia Miecznikowski present a QEP that brings together curricular change and professional development. The SACS requirement enabled the creation of a WAC program within a context of productive, cross-curricular conversations about what good writing means. Chemishanova and Miecznikowski’s experience with establishing a WAC program at the University of North Carolina at Pembroke, a four-year, MA-granting institution of approximately 6,000 students in rural southeastern NC, illustrates how a seemingly top-down requirement, imposed from outside of the institution, can be used to facilitate, rather than impede, faculty-driven change. Linda Adler-Kassner and Lorna Gonzalez then detail the accreditation-inspired processes through which they, in collaboration with faculty from across the University of California at Santa Barbara, developed writing assessment instruments that respond to diverse disciplinary expectations for writing. These collaborations led participating faculty to reevaluate their approaches to teaching writing as they became more aware of the variations in writing that students encounter as they take courses across disciplines and even as they take courses within the same discipline. Such revelations have, Adler-Kassner and Gonzalez explain, led faculty to request additional professional development opportunities related to writing and the teaching of writing.

    In a similar vein, Maggie Debelius explores how Georgetown University’s reaccreditation through the Middle States Commission on Higher Education became an occasion to engage faculty from diverse disciplinary backgrounds in discussions of what they value in writing. As part of their accreditation plan, Debelius and her colleagues worked with faculty to identify threshold concepts or specific core ideas within a discipline that are fundamental to success in that discipline. Debelius explains how this exploration of threshold concepts shifted faculty’s perceptions of writing from something that can be used to test students’ achievements to something that disciplinary members actively use to accomplish shared goals. The process of accreditation, then, enabled a significant change in how faculty understand writing and the teaching of writing: the perception of writing in disciplinary contexts moved from a view of writing as a static entity that students should be taught how to replicate to a view of writing as a dynamic process that students should be taught how to participate in. This shift brought about a change in how faculty across the university viewed responsibility for writing instruction and writing assessment, with faculty in individual departments taking ownership of writing-related curricula and establishing writing assessment procedures within their own disciplinary areas.

    Joyce Neff and Remica Bingham-Risher provide further insight into the potential impact of accreditation-related professional development. Two central initiatives of Old Dominion University’s QEP involve faculty development: a series of workshops designed to help faculty discover strategies for using writing-to-learn and for teaching writing in disciplinary contexts, and an internal grant program that supports faculty innovation in the teaching of writing. Recognizing that changing faculty perceptions and practices is often essential to the kinds of broad institutional change called for by reaccreditation processes, Neff and Bingham-Risher suggest that other schools consider developing Faculty Learning Outcomes (FLOs) to supplement and complement the Student Learning Outcomes (SLOs) that are often required as part of reaccreditation processes. To help other institutions with the development and assessment of FLOs, Neff and Bingham-Risher provide a draft of their university’s FLOs and detail the procedure that they implemented to measure the impact of professional development workshops on faculty members’ construction, revision, and implementation of writing assignments in courses across the disciplines.

To conclude part three and the collection, Angela Green, Iris Saltiel, and Kyle Christiansen write from the perspective of having finished their accreditation cycle. With their five-year impact report on their writing-focused QEP approved by SACS, Green, Saltiel, and Christiansen reflect on the long-term, substantial impact that their accreditation-related assessment work has had at Columbus State University in Georgia. Most notably, they describe and document how this work has led to a fundamental shift in faculty attitudes about the role that writing can play in classrooms across the university (306). Although participation in the CSU QEP was voluntary, the institutional priority given to the accreditation work, along with the careful planning and timing of implementation, enabled QEP leadership to effect the kinds of changes that a number of other institutions discussed in this collection are working toward: “Writing moved from being the responsibility of the English department to being viewed as the responsibility of all, as well as a significant mode of teaching and learning in any subject” (306). While ultimately successful, the accreditation-driven changes at CSU did not occur without challenges. These challenges, the authors explain, are likely to be faced by others endeavoring to make similar changes at their institutions. To help others meet similar challenges, Green, Saltiel, and Christiansen provide examples of successful initiatives and assessment structures and discuss how they were able to overcome obstacles such as administrative/personnel turnover during the accreditation cycle and the inevitable and unpredictable budgetary restrictions faced by state schools in tough economic times.

    References

Council of Writing Program Administrators. 2007. “Executive Board Letter re Writing Assessment and Higher Education Accreditation.” Accessed January 5, 2015. http://wpacouncil.org/node/887.

    Gallagher, Chris. 2007. Reclaiming Assessment: A Better Alternative to the Accountability Agenda. Portsmouth, NH: Heinemann.

    Part One

    Laying the Foundations

    Educating and Learning from Accrediting Bodies

The chapters in part one address the first objective of this collection: to help readers better understand the potential advantages and drawbacks of large-scale writing assessment from both the perspective of the accrediting bodies that require this assessment and the writing instructors and WPAs who, ideally, design, implement, and benefit from it. Compositionists have long recognized that assessment, if it is to be valid and meaningful, cannot operate at a distance from local contexts. As early as 1996, Brian Huot outlined a “New Theory of Writing Assessment” based upon “assessment procedures [that] recognize the importance of context, rhetoric, and other characteristics integral to a specific purpose and institution. The procedures are site-based, practical, and have been developed and controlled locally” (Huot 1996, 552).

Just as local contexts—local concerns, local student populations, local faculty attributes and attitudes, local budgets, local communities—need to be studied for assessment to be meaningful and for resulting initiatives to be effective, the larger contexts in which the demand for assessment is shaped and through which local assessments are coordinated also need to be considered. Together, contributors to this first section provide a broad historical account of the longstanding, sometimes cooperative and sometimes conflicted relationship between writing programs and accrediting agencies. Cindy Moore, Peggy O’Neill, and Angela Crow explore how compositionists have responded to assessment mandates dating back to the middle of the last century. Often, these responses have served to undermine the validity of writing assessments based on testing scenarios and to promote process-oriented approaches such as portfolio assessment. Shirley Rose provides additional historical context through her account of the evolving purposes and goals of regional accrediting agencies and how those purposes and goals intersect with the institutional location and mission of first-year writing. These intersections, she explains, mean that accrediting agencies can be valuable partners with writing program faculty and administrators in providing a social good by “making the work of higher education both visible and legible to those it serves” (52–53).

In keeping with the belief that there are benefits to be gained through involvement in the accreditation process, each of the three chapters in part one includes suggestions for steps compositionists can take to further the tradition of working with and through accrediting bodies. The writers of each of the three chapters in this section emphasize that members of the field should become more aware of and involved in the regional and national mechanisms that establish the terms of assessment data collection and analysis. Susan Miller-Cochran and Rochelle Rodrigo provide a specific example of how to do this, using their work with the Southern Association of Colleges and Schools (SACS) to illustrate how writing specialists can exert influence outward by educating accrediting agencies and campus administrators about what we know about teaching writing, learning to write, and measuring writing abilities. Collaborating with these agencies, Miller-Cochran and Rodrigo point out, can do more than just protect writing programs from ineffective assessment mandates: it can create excellent opportunities for the field as a whole to gain a better understanding of how different writing programs are structured and how they function within and across diverse institutional contexts. This accreditation-driven insight can then inform policies and actions taken by the field’s own regional and national organizations.

    References

Huot, Brian. 1996. “Toward a New Theory of Writing Assessment.” College Composition and Communication 47 (4): 549–66. http://dx.doi.org/10.2307/358601.

    1

    Assessing for Learning in an Age of Comparability

    Remembering the Importance of Context


    CINDY MOORE, PEGGY O’NEILL, AND ANGELA CROW

    As compositionists attempt to navigate the ever-changing landscape of contemporary higher education accreditation, we can draw on our rich history of using external assessment mandates to our advantage. Given recent calls for more standardized assessment methods, and developments in processing, sharing, and analyzing big data sets based on those standardized methods, it is imperative that we work with accreditors, employing our expertise, as we have in the past, to develop approaches to assessing students and teachers that improve learning and are consistent with our disciplinary theories about writing, teaching, and assessment. Below, we review compositionists’ responses to earlier assessment mandates and illustrate how a knowledge of such work can help current program administrators and faculty negotiate an accreditation context increasingly influenced by public calls for higher ed accountability and the new technologies that offer a means to achieve it.

    Learning from the Past: Writing Assessment and Accountability in Context

    Because of the consequences of assessment for both teaching and learning, writing administrators and faculty have, for decades, seen externally inspired writing assessment initiatives as opportunities to document student achievement, gather information about program strengths and challenges, and use that information to improve curriculum and instruction. The literature in our field is replete with articles and books that illustrate our willingness—and ability—to make assessment mandates serve our more specific interests while also satisfying requirements imposed by state legislators and accreditors.

During the mid- to late-twentieth century, as the college-bound student population diversified, calls for writing placement and proficiency tests increased, and composition faculty became more involved in writing assessment. Understanding the sociopolitical implications of these tests for students and teachers (Greenberg 1982, 743), many compositionists worked throughout the 1970s and 1980s to ensure that externally imposed writing assessments were informed by current theory and research and served the needs of their particular students (Cooper and Odell 1977; Gere 1980; Greenberg 1982; White 1984). For example, one study investigated “the effect of different kinds of testing upon the distribution of scores for racial minorities” (White and Thomas 1981, 276), examining the consequences of a standardized multiple-choice test of usage and a local exam designed by California State University faculty in conjunction with testing experts. The study showed that scores for white students were similar for the standardized exam and the essay portion of the CSU exam, but the results for black and Latino students were quite different: the standardized multiple-choice exam “rendered a much more negative judgment of these students’ use of English than did the evaluators of their writing” (1981, 281). White and Thomas argued that the results “cast some real question upon the validity of usage testing as an indicator of writing ability” (1981, 280).

Through research such as this, writing scholars realized the power of assessments to influence teaching and learning and wanted to minimize the chances that “writing teachers [would] find themselves administering writing proficiency tests that [bore] little relationship to their perception of college-level writing ability and that administrators would use the results of a test they consider inadequate or inappropriate for placement or promotion” (Greenberg 1982, 367). While such efforts were not directly linked to accreditation demands, they highlighted the importance of balancing the needs of a particular local context with outside interests such as university administrators or policymakers (Greenberg 1982).

In the 1980s, one of the most influential responses to an external assessment mandate was the portfolio assessment system developed by Pat Belanoff and Peter Elbow to replace a university-mandated proficiency exam. Belanoff and Elbow (1986) argued that the portfolio assessment, which involved teachers working in groups for norming during the semester and then again at the end of the semester for the formal portfolio evaluation, promoted better teaching and more learning because, in part, the teachers developed shared evaluation criteria and the group discussions informed their work with students in their classrooms. The portfolio program Belanoff and Elbow designed kick-started the portfolio movement in college composition, a movement that shaped writing assessment, teaching, and research. Though Belanoff and Elbow did not frame the portfolio assessment in terms of accreditation, it was part of a university assessment mandate that undoubtedly would have been reported to accreditors.

    While many compositionists focused on research and development of individual placement and proficiency tests, some compositionists created more comprehensive writing programs in response to university assessment mandates. The best-known example of this type of work was done by Richard Haswell (2001) and his colleagues at Washington State University. Haswell led the effort to build a multi-tiered, integrated writing assessment program in response to a university general education program that mandated assessment of the composition program. Their adventure into writing assessment resulted in a program that encompassed placement testing upon entrance, portfolio exit assessments for first-year composition, a WAC requirement, and a rising junior portfolio that combined an impromptu essay, a reflective piece, and a collection of papers produced for courses across the curriculum. At all levels writing teachers were involved in the assessments, and a comprehensive writing center provided support for students, including required small-group sessions for those who did not pass the junior portfolio (Haswell 2001).

The WSU writing program, which received a commendation from its accrediting agency in 1999, illustrates some of the basic principles of writing assessment documented in the composition scholarship: (1) assessing and teaching writing are closely linked, and (2) the goal of writing assessment is to facilitate students’ development as writers and improve students’ writing. Such principles not only influenced how compositionists responded to earlier assessment mandates, but they also offered theoretically grounded guidance for negotiating an accreditation landscape that came to be informed by similar principles. Since the mid-1990s, accreditation agencies have been focusing more on how university resources, including those dedicated to instruction and assessment, are being used to promote student learning. Though this shift from a concern with inputs (such as faculty expertise and per-student expenditure) to the outcomes of such investments reflected educational theory and research, it was also a response to growing public pressure to ensure that students were acquiring the knowledge and skills promised by higher ed institutions. Because the only way to really show that learning has occurred is through work completed by students, accreditors began asking schools to demonstrate, with direct assessment evidence, that students had met learning outcomes. Writing, as compositionists have long understood, is one of the best, most direct methods for making student learning visible. Thus, as accrediting agencies turned their attention to what and how much students were learning, results from writing assessments began to figure more prominently in accreditation processes.

As they did when faced with earlier placement and proficiency mandates, writing specialists have used changing accreditation requirements to their advantage, documenting student learning and program effectiveness for their own purposes while also helping their institutions meet accreditation standards (Carter 2003; Walvoord 2004). One of the best examples of such work has been documented by John Bean and his colleagues at Seattle University, who developed discourse-based assessments to respond in part to pressures from the Northwest Association of Schools and Colleges (Bean, Carrithers, and Earenfight 2005, 6). Bean
