
Pop Music Mini-Project Assessment Reflection

Pick another assessment from the inventory that you feel could be improved:
1. Discuss the characteristics of the assessment, being sure to address the following:
The assessment is called the Pop Music Mini-Project. This project is simultaneously formative and summative. As we are
finishing up a short unit on poetry, this assessment serves a summative purpose because it assesses students' ability to identify and
analyze figurative language and sound devices in poetry. However, Ms. F will return to poetry later in the school year and will review
similar topics. In this way, the assessment also serves a formative purpose because the later unit on poetry will dig deeper into higher-level reading and analysis skills.
a. How well does the assessment differentiate between students' skill levels?

To begin the assessment, I modeled each step of the process for students in both small-group and whole-group settings.
Students had complete examples of what their projects should look like in their Reading Notebooks. Students had a choice of two out
of four songs to work on. Before beginning the assessment, I told the students about a particular song that was much harder than the
others. Out of 53 students, only three chose to attempt this song. Apart from this distinction, the assessment requires all students to
demonstrate mastery of the same skills: annotation and analysis of sound devices and figurative language in pop songs. Though the
same skills are assessed, each song had varying types of figurative language and sound devices. The students were required to find at
least four different examples of figurative language and four different sound devices (e.g., this could mean two or three hyperboles, as
long as they are different examples in the song). This meant that each student's annotations could look very different from a peer's. I
would argue that this flexibility in annotating and identifying figurative language and sound devices does demonstrate some
differentiation between students' skill levels. However, there may have been too much choice based on interest, which could have
limited alignment in skill level, particularly for the three students who chose the song I announced as more challenging.
b. How confident are you in the assessment's consistency (e.g., reliability)?

In order to grade this assessment, I created a rubric with four categories to assess:

- Annotations for Song #1 (25 pts)
- Annotations for Song #2 (25 pts)
- Analysis Part One: Listing Elements of Author's Craft for BOTH Songs (25 pts)
- Analysis Part Two: Explaining the Meaning of ONE Example of Figurative Language from EACH Song (25 pts)

Within these categories, particularly the annotations, the point values were based on the inclusion of a certain number of
correct annotations; four correctly annotated sound devices and four correctly annotated examples of figurative language earned
students full credit (25 pts) for songs one and two respectively. I believe this portion of the assessment is mostly consistent, but it does
not completely rule out the grader's subjectivity. Considering figurative language, there are some nuances in what counts as what,
particularly regarding metaphor and hyperbole, since these often have an implied meaning or comparison. Additionally, a single phrase
could count as more than one example. I personally would count each annotation as one of the four, while another grader might count
the phrase only once despite two different markings. Finally, because I spent so much time working with students individually and in small
groups to scaffold their learning on this project, I had many opportunities to hear them explain some of their thinking. I know this
impacted what I counted for credit when I was grading. In this way, the assessment itself is not totally reliable because of the other
factors that impacted my judgment as I was grading. This might not be replicated if students did the assessment completely at
home or if another person were grading.
In the analysis portion, I did not clearly outline how many points each part of the analyses was worth. Though I do not
believe the point discrepancies are completely subjective, I am aware that the lack of explicit numerical values somewhat influenced how I
distributed points while grading, even with the rubric. This alone raises questions about reliability; I'm not sure that this portion of
the grade was evenly distributed among all students.
c. How well does the assessment cover the intended learning domain?

The learning domains for this project were annotating and analyzing poetry. Specifically, in the analysis, students were to
apply their knowledge of author's craft by listing elements of author's craft that they had annotated. Considering the annotating, I believe this
domain was covered quite well. Students were required to find examples within songs that differed from those used in whole-group practice and
exposure. To do this, they used our class annotation code to mark the examples they identified. This allowed me to assess their ability to
use annotation to identify figurative language and sound devices in context.

Analysis Part One of the project was meant to assess students' understanding of author's craft. Here, they needed to list four
elements of author's craft from each song that they annotated. However, I provided students with a formatted example using a sound
device. I also instructed the students to label their elements as figurative language or sound devices. Additionally, our work and
conversations in class drifted away from the term author's craft and focused on the fact that they were simply listing some of the
things they annotated. Because of these factors, the assessment does not adequately assess students' understanding of the term author's
craft. Instead, this assessment functioned more as universal exposure to the term.
Analysis Part Two required students to choose one example of figurative language from the song and explain its meaning.
Students needed to list the example, identify it, and provide a literal or real-world meaning. I provided students with two
different modeled examples (one was on the handout and another was in their reading notebooks). The rubric addresses this
component in terms of thoughtful ideas, complete sentences, and minimal spelling or grammar errors. The assessment
focused more on the thoughtfulness and completeness of their answers, and there were no clear components of the rubric addressing
analysis. Generally, I think this assessment is a starting place for assessing poetic analysis, but I don't think it covers it fully.
2. What would you do to improve the above assessment characteristics (differentiation, consistency, and coverage)?
Differentiation: I would differentiate this assessment the same way in the future; however, I would more clearly determine levels of
challenge. In this way, I could make suggestions based on skill level for individual students instead of letting them choose blindly.
Consistency: Particularly for Analysis Parts One and Two, I would assign explicit point values on the rubric to simplify the math and
eliminate some of the subjectivity in grading. These values would make it clear how many points were lost, particularly for somewhat
incomplete submissions.
Coverage: The coverage of author's craft is the least aligned portion of the assessment. In the future, I would ensure more instruction
to teach and practice identifying elements of author's craft OR eliminate this part of the assessment altogether. Additionally, if
author's craft is included, Part Two of the analysis could be extended to include both the meaning and a discussion of why the author
would choose to include this example of figurative language. This would take the analysis deeper and make it more specific on the rubric.