“I can’t wait to find out!”

As stated in the last post, Learning from Failures, I decided to adjust my approach to having students analyze and discuss data. We’d put a lot of time into working out many of the kinks, and it was time to move on to scatterplot representations of data. My students already knew a lot about scatterplots and best-fit lines, so I was able to dive right in with some data.

Rather than stating a claim, I started with a description of the data and four questions:

I have the following measurements (in cm) for 54 students: height, arm span, kneeling height, hand span, forearm length, and wrist circumference.

  1. Which pair(s) of variables do you think might show the strongest correlation? (And what would a strong correlation look like in a scatterplot?)
  2. Which pair(s) of variables do you think might show the weakest correlation? (And what would a weak correlation look like in a scatterplot?)
  3. Which variable (from the list above) do you think would be the best predictor of a person’s height (in cm)?
  4. Write one claim statement about the class data variables.

These questions forced them to think about the data and make some predictions about what they might see once they could access it. We hadn’t talked much about correlation yet, so I was really interested in their responses to what strong and weak correlations look like on a scatterplot.

Generally speaking, their descriptions of strong correlations included:

  • look like a line
  • can almost see a line
  • looks like a more defined line
  • looks pretty linear

and their descriptions of weak correlations included:

  • look like randomly placed dots
  • have points that are far from the line
  • looks more spread out and scattered
  • has dots all over the place

As for question 3, there was quite a debate over whether arm span or kneeling height would be the best predictor of a student’s height. One side (6 students) argued that arm span would be the best predictor because “everyone knows that your arm span is about the same as your height.” The other two students claimed that kneeling height would be a better predictor because “it’s part of your height.” Both sides stuck to their convictions – neither could be swayed, not even by what I thought was the astute observation that kneeling height is probably about 3/4 of height. That observation came from a student in the arm span camp!

Students each received their own copy of the data and investigated their claims. During the next class, we took a look at a couple of those claims together. The plot on the left is height vs arm span, with the line y = x (height = arm span). The plot on the right is height vs kneeling height, with the line y = (4/3)x (equivalently, kneeling height = (3/4) height).

More debate ensued, though most admitted that kneeling height had a stronger correlation with height than arm span did (for this data, at least). And maybe 3/4 wasn’t the best estimate, but it was pretty close. They also noticed a few unusual points, which led to a conversation about outliers and influential points.
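(For anyone who wants to recreate plots like these, here’s a minimal sketch in Python. The numbers are simulated stand-ins, since the actual class measurements aren’t posted here; only the two reference lines come straight from the plots described above.)

```python
# Minimal sketch of the two plots above. The measurements are simulated
# stand-ins for the real class data (54 students, values in cm).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
height = rng.normal(165, 10, 54)
arm_span = height + rng.normal(0, 6, 54)                # roughly equal to height
kneeling_height = 0.75 * height + rng.normal(0, 2, 54)  # roughly 3/4 of height

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Left: height vs arm span, with the reference line y = x.
ax1.scatter(arm_span, height)
xs = np.linspace(arm_span.min(), arm_span.max(), 2)
ax1.plot(xs, xs, label="y = x")
r = np.corrcoef(arm_span, height)[0, 1]
ax1.set(xlabel="arm span (cm)", ylabel="height (cm)", title=f"r = {r:.2f}")
ax1.legend()

# Right: height vs kneeling height, with the reference line y = (4/3)x.
ax2.scatter(kneeling_height, height)
xs = np.linspace(kneeling_height.min(), kneeling_height.max(), 2)
ax2.plot(xs, (4 / 3) * xs, label="y = (4/3)x")
r = np.corrcoef(kneeling_height, height)[0, 1]
ax2.set(xlabel="kneeling height (cm)", ylabel="height (cm)", title=f"r = {r:.2f}")
ax2.legend()

plt.tight_layout()
plt.show()
```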

Moving from Class Data to Cars

I took a similar approach with the next data set.

I have some data about cars, including highway mpg (quantitative), curb weight (quantitative), and fuel type (categorical: gas, hybrid, electric). Think about how these variables might be related and make some predictions.

  1. How might the highway mpg and curb weight be related?
  2. How might the curb weight and fuel type be related?
  3. How might highway mpg and fuel type be related?
  4. Do you think there might be any outliers or influential points? If so, what might they be?

Through some class discussion, we came up with the following claims and predictions.

[Photo: the claims and predictions we recorded as a class]

Students still had not seen the data, and one of them said, “I really can’t wait to see what this looks like!” Another said, “Yeah, I’m not usually all that interested in cars, but I really want to know.”
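(For the curious, here’s roughly what checking those predictions could look like once the data is in hand. Everything below is an invented stand-in; the column names and numbers are mine, not the actual car data. The useful bit is the shape of the analysis: one scatterplot of highway mpg vs curb weight, colored by fuel type, touches all three relationships at once.)

```python
# Rough sketch with simulated stand-in data; "curb_weight", "hwy_mpg",
# and "fuel_type" are invented labels, not the real data set's columns.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 60
curb_weight = rng.uniform(1100, 2500, n)              # kg
fuel_type = rng.choice(["gas", "hybrid", "electric"], n)

# Build in the relationships the class predicted: heavier cars get
# fewer mpg, and hybrids/electrics get a mileage bonus.
bonus = {"gas": 0, "hybrid": 15, "electric": 60}
hwy_mpg = (70 - 0.02 * curb_weight
           + np.array([bonus[f] for f in fuel_type])
           + rng.normal(0, 3, n))

# One plot covers questions 1-3: mpg vs weight, with fuel type as color.
for ft in ["gas", "hybrid", "electric"]:
    mask = fuel_type == ft
    plt.scatter(curb_weight[mask], hwy_mpg[mask], label=ft)
plt.xlabel("curb weight (kg)")
plt.ylabel("highway mpg")
plt.legend()
plt.show()
```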

1 Comment

Filed under BMTN, technology

Learning from Failures

Continuous improvement in my practice is about identifying a specific process that can be improved, applying a change idea, collecting data, and analyzing the results. This term, I am attempting to apply change ideas in my Statistical Analysis class, a twelve-week introductory class focusing mostly on descriptive statistics. My goal is to have my students reason more about what the statistics are telling them and to justify their claims with evidence. Our 9th grade team has put an emphasis on the structure of claim-evidence-reasoning across the content areas, meaning that students use this structure in humanities, science, and math. I wanted to continue that structure with my 10th graders in this statistics class. So I revamped my approach to the course.

My idea was to use claims to drive the data analysis. It started off well enough. I created some claims and used a Pear Deck to ask students to consider the kind of data that they might need to collect and analyze. (Pear Deck allows them to think individually and respond collaboratively.) Here are the claims:

  • Women who win the “Best Actress” Academy Award are typically younger than men who win the “Best Actor” Academy Award.
  • Sales of vinyl records are rising and will soon overtake the number of digital downloads.
  • Opening box office for sequels in movie franchises (for example, Captain America, Star Wars, Harry Potter, Hunger Games) is typically higher than for other movie openings.
  • LeBron James is the best professional basketball player of all time.
  • For-profit colleges are more likely to recruit low-income individuals for admission.
  • More African American males are incarcerated than any other group of Americans.

Conversation around these claims also included predictions about whether or not the students thought they were true.

Remember, though, the goal was to use the structure of claim-evidence-reasoning, and my kids needed a model. So I gave them this one. After a conversation with a humanities colleague, the students analyzed my example using the techniques they had learned in humanities class (highlighting claims and evidence in two different colors). This led us to create “criteria for success” and a structure for a five-paragraph essay. The analysis showed me that my example could be improved, so I came back after Christmas break with a second draft. We had some discussion about what had changed and whether the second draft was an improvement. Seemed like all was well. Time for them to “have at it.”

But I wanted them to practice with a single, agreed-upon class claim first. So we brainstormed lots of different claims they could research and settled on:

Original films are typically better than newer entries or sequels.

They had this document to remind them about what to write, and off they went to collect whatever data they thought was relevant. And then snow season began. During the first 3 weeks of January we had only 6 classes, thanks to holidays, workshop days, snow days, and a broken boiler (no heat). Even though we ask kids to do work during snow days, my students were making very little progress on this assignment. Colossal failure. I gave them too much all at once. They were wallowing in the data collection.

I regrouped. I looked at all of the data that they had collected, gave them this data set to analyze, and gave them this document for writing their essays. Problem solved, right? Wrong, again. Still too much. At the end of week 4 of this “practice” assignment (interrupted by two more snow days), and after talking with my Better Math Teaching Network colleagues and my humanities colleague, I realized that I had never actually taught them how to write a paragraph that interprets a specific kind of statistic (even though they had examples).

So, at the end of January, I tackled how to write those body paragraphs. We started with writing about means and medians. Given box plots of the average critics’ ratings for originals and sequels, I asked students to write what the medians say about the claim.
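(As a reference point, the kind of comparison the students were writing about can be sketched in a few lines of Python. The ratings below are simulated stand-ins, not the class’s actual data; the structure, two box plots and a comparison of medians against the claim, is the part that matters.)

```python
# Sketch of the median comparison, using simulated critic ratings as a
# stand-in for the data the class collected.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
originals = rng.normal(72, 12, 25).clip(0, 100)  # invented scores, 0-100
sequels = rng.normal(58, 15, 25).clip(0, 100)

print(f"median rating, originals: {np.median(originals):.1f}")
print(f"median rating, sequels:   {np.median(sequels):.1f}")

plt.boxplot([originals, sequels], labels=["originals", "sequels"])
plt.ylabel("average critics' rating")
plt.show()
```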

Thinking it would take them about 5 minutes to write, I expected we’d be able to critique the paragraphs before the end of class. Wrong, again. But we were able to take a look at what they wrote during the next class. (It’s a very small class.)

I called on my humanities colleague once more, and she helped me create some scaffolding so students could organize their thoughts. This time, the focus was variability. Each group of two received one of the variables to analyze and organize a paragraph around. Once again, we shared the paragraphs they wrote for each measure. I’m not sure how I feel about this, since all of the paragraphs are basically the same. But I guess the point was to focus on the statistics that they included as evidence, not the specific language used. Were the paragraphs “quality”? Here’s a first draft of a rubric to measure that.

As January turned into February, and the snow-making machine really kicked in, I cried uncle, feeling like I had eventually learned something – along with my students – and decided to move on. (We only have 12 weeks, after all.) I’m not sure if this is one iteration, two iterations, or three iterations of my change idea. However many it is, it led me to a slightly different approach with scatterplot analysis.

But that’s another blog post.


3 Comments

Filed under BMTN, teaching

“I’m not good at math.”

I can’t tell you how many times I’ve heard this from students. I guess I was lucky. Growing up, nobody ever told me that I wasn’t good at math (or anything else, really). More importantly, nobody ever made it okay for me not to be good at math, or school, or whatever I was interested in learning about. But not all of my students have the family support that I did (and continue to have). So part of nurturing their talent falls to me. I’ve always told my students that I want them to be fearless problem solvers – to face life unafraid of what lies ahead. To nurture this, I have to allow space within my classroom for some of the “messy stuff,” like playing around with patterns and numbers, wondering about data and statistics, or building seemingly impossible 3D geometric structures. And then pointing out how they just did math – and they were good at it.

You see, when my students say, “I’m not good at math,” they really mean, “I’m not good at quickly recalling math facts when I think everyone is looking at me and waiting for me to respond in some brilliant way.” They equate math with arithmetic and math facts and speed. I try to point out the difference between math and arithmetic (math facts), which sometimes helps. I tell them how bad I am at subtracting numbers quickly in my head.

So what do I do to develop fearless problem solvers? I pose a problem for my students to solve. Then I step back. I observe. I listen. I ask questions. I make them all think before anyone is allowed to speak. I make them talk to me about what they’re thinking and I make them talk to each other, especially to each other. That way I get to listen more. I practice wait time, sometimes for awkwardly long, silent moments. Eventually, I no longer hear, “I’m not good at math.” Except when they want to compute something quickly, on the spot, in the moment, and it isn’t working. And then they turn and say, “Sorry, arithmetic.”

3 Comments

Filed under MTBoS Challenge, teaching

Fresh Blogging Opportunity

Welcome to the Explore the MTBoS 2017 Blogging Initiative! With the start of a new year, there is no better time to start a new blog! For those of you who have blogs, it is also the perfect time to get inspired to write again! Please join us to participate in this year’s blogging initiative! […]

via New Year, New Blog! — Exploring the MathTwitterBlogosphere

1 Comment

Filed under MTBoS Challenge

My Favorite Games

My advisory students are now seniors. We started together four years ago, along with the school. It’s a humbling journey to spend four years with the same group of students, helping them navigate through high school, getting them ready for whatever adventure follows.

We do a lot of work in advisory – research about “Life after Baxter,” prepping for student-led conferences, creating and maintaining digital portfolios, keeping track of academic progress, and completing any required paperwork, for starters. Even though we meet three times a week for about 35 minutes each time, we still have some “down” time.

We like to play games together. We play Set, Farkle, and Joe Name It along with various card games. Taking some time to play and laugh together is important to building those relationships.

1 Comment

Filed under Baxter, MTBoS Challenge

Standards-Based Grading

There’s lots of talk out there, and especially in New England, about standards-based education. Whatever you think about standards-based, or proficiency-based, or competency-based education (they are all the same to me – just using some different words), the bottom line is that we teachers are now supposed to be able to certify that, regardless of any other factors beyond our control, our students are able to _________. Fill in the blank with your skill or habit of choice. This is tricky business. The tricky part is

  • not to distill learning into a checklist of discrete items that have no connection to each other.
  • to maintain a cohesive, robust curriculum with a clear scope and sequence.
  • to develop cross-curricular, integrated courses that give students rich opportunities to build those skills.
  • to build an assessment system that students, teachers, and parents have a common understanding of.

My school has put a lot of energy into creating a standards-based assessment (and reporting) system. Since we are still a new school, there is nothing to change except our own perceptions. We started out using the old 1-2-3-4 system, but ran into trouble with different interpretations of what those numbers represented and of what levels students could reach. Some teachers maintained that standards in a course were global and that there was little chance for a 9th grader to demonstrate at a level higher than a 2. Other teachers defined course standards as local, so that students could earn a 3 or even a 4 on the standards within that class. Clearly, this was a problem.

The other problem is that any time grades are represented using numbers, people want to operate with them, or break them down further (using 2.3, for example). But those numbers represent discrete categories of performance or understanding. A 2.3 doesn’t make any sense if it isn’t defined. So we had to create a brand new system.

Each reporting standard – those big things like Algebra & Functions – has indicators connected to each level on the big scale toward graduation benchmarks. These are defined in a rubric. For any given course, we identify the “target” knowledge & skills: the level of the rubric we are targeting. For example, in the Modeling in Math class, the target level is Entering.

During a course, we report whether a student is “below target,” “on target,” or “above target” for an assessment on a particular indicator of a reporting standard. This way a student can be “on target” – meaning that the student is making solid progress and is doing what is expected in the course – but still not be at the graduation benchmark for that standard. After all, Modeling in Math is the first course that our 9th graders take. It’s unlikely that they will meet the graduation benchmark after just this one twelve-week class.
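(To make those mechanics concrete, here’s a toy sketch of the comparison in Python. Only the level name “Entering” and the below/on/above target language come from this post; the other rubric level names and the code itself are placeholders of my own.)

```python
# Toy model of "below/on/above target" reporting. Only "Entering" is
# named in the post; the other rubric level names are placeholders.
RUBRIC_LEVELS = ["Entering", "Developing", "Proficient", "Distinguished"]

def course_report(student_level: str, course_target: str) -> str:
    """Compare a student's demonstrated rubric level to the course target."""
    s = RUBRIC_LEVELS.index(student_level)
    t = RUBRIC_LEVELS.index(course_target)
    if s < t:
        return "below target"
    if s == t:
        return "on target"
    return "above target"

# A 9th grader in Modeling in Math (target level: Entering) can be
# "on target" while still being below the graduation benchmark.
print(course_report("Entering", "Entering"))  # -> on target
```

The point of the separation shows in that last line: “on target” is relative to the course, not to graduation.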

Report cards and transcripts report the big picture status toward graduation. So that 9th grader who was “on target” during the class has made progress toward graduation, but still has work to do to meet that benchmark. And that work could happen in a series of courses or through some combination of courses and portfolio, giving the student control over her education.


Leave a comment

Filed under Baxter

#LessonClose versions 1.1 & 1.2

WordPress tells me that I created this draft 3 months ago. I had every intention of posting updates along the journey of my Lesson Close adventure. Alas, that didn’t happen. Here’s what did happen …

I found it very difficult to decide, in the moment, which survey to send to students. So I edited the survey to allow the students to choose what they wanted to tell me about the class: what they learned and how they learned it. I used the same survey structure as before, but this time students made a choice. I honestly thought that, given a choice of what to reflect on, students would engage more. Wrong.

I asked them what happened: too many choices, completing it electronically was too much of a hassle, and there wasn’t enough time at the end of class to complete it.

Enter version 1.2: paper option, fewer choices, a few extra minutes. Still didn’t work. So I asked again: Still too many choices, still not enough time. One student said, “Even though the posting reminder came up with 5 minutes to go, our conversations about the math were so engaging that we didn’t want to stop to do a survey.” Another said, “The first question was fine, but I really didn’t want to take the time to write stuff for the second question.” This was the general sentiment.

When I reflected on this sequence of events with my colleagues at the Better Math Teaching Network, one teacher (who also has several years of teaching experience) said, “I feel like exit slips are just data for someone else who isn’t in my classroom. I know what my kids know and what they don’t know because I talk with them.” And I thought, she’s absolutely right. Here I was, trying to do something with exit polls – trying to get my students to reflect on the class, to be meta-cognitive about their learning. They were telling me through their actions and class engagement that they were learning just fine, thank you.

I have lots of formative assessment strategies, but this is the last time that I try to implement exit slips for the sake of implementing exit slips. I know what my kids know because I talk to them.

2 Comments

Filed under BMTN, teaching