interesting numbers v2

This is a follow-up post to my first day math problem from last year.

I’ve only posted once in a while over the past few years, but I’m very glad I spent the time last year to write about how my first day math problem went. After re-reading my post from last year, I was able to rapidly load the problem and how it went back into my memory, and I decided to make some of the changes I had written about.

Here’s the updated version of the problem, which adds scaffolding and will hopefully keep groups who aren’t as comfortable with this type of open-ended exploration moving along.



And here’s the checklist I’m going to use to keep myself organized as I walk around and monitor each group’s progress. (This idea comes from 5 Practices for Orchestrating Productive Mathematics Discussions by Margaret S. Smith and Mary Kay Stein.)

Feel free to pass along any feedback or take this a step further!

Resources:

math is like a pomegranate

This is my contribution to The Virtual Conference of Mathematical Flavors.

“Math is like a pomegranate—intimidating, and kinda scary looking at first, but also incredibly fascinating and vibrant.”

In order to figure out what flavor of math I’ve been serving up in my classrooms over the past six years, I’m going to take a stab at Sam Shah’s idea of working backwards from what students have written about their experiences in my math classes. (Spoiler alert: the answer is apparently pomegranate; who knew?!) Of course, not all of my students have had transformative experiences and others have straight up had a bad time. But right now, I’m going to focus on the students who have been positively impacted in order to articulate what the best implementation of my ideals has felt like.

But to be honest, it feels way scarier to share the positive things students have written about me over the years than anything critical. When I was younger, I used to brag and show off; I thought that if people knew about all the things I was good at, they would have to like me. Once I figured out that this is not how relationships work, the pendulum swung hard in the other direction for me. I grew increasingly uncomfortable accepting compliments and I minimized my achievements, working to avoid even the appearance of self-promotion. It’s an ongoing struggle to get right-sized, but lately I’ve begun to internalize the idea that being excessively self-effacing is its own barrier to connection.

So with that confession out in the open, here are some of my favorite reflections students have written. (The title of this post comes from one of these!)



A few years ago, I started asking students to write advice to next year’s students. And when I remember, I make sure to share this advice once the new group arrives. Here are some examples of what my students have written.



Reading these, I can see that some of the core beliefs I bring to my teaching are apparent to students in different courses and at different schools. I hope that my students internalize them as well:

  • Math is something to get excited about. I want my math-skeptical students to stay curious about why certain people openly love math, and I want them to find reasons of their own for loving math. I’m not shy about telling them when I get goosebumps talking about math, and I don’t hesitate to make corny memes—and be super proud of them—to show how highly I think of a mathematical idea or how much their understanding has grown. (See the “extending the definition of sine and cosine” trigonometry meme I made this year.)
  • Math is a playground for creativity. You can ask and answer your own questions. There are games to make up and play, connections to establish, new approaches and representations to develop, and structures to create and explore. Some of my favorite moments come when a student (or even better, a group of students) comes up with a solution pathway I’ve never considered or notices a pattern I’ve never seen before.
  • Engaging with math is an opportunity to build confidence. No matter where you are in your mathematical journey, there are ideas to wrestle with in math that are hard, but not impossible. It’s like an infinite gym for your brain with an endless selection of workouts. Realizing that you can do something you’d previously found scary, intimidating, or intractable is incredibly empowering.
  • Expect and welcome obstacles. In math, there’s nothing wrong with being wrong, and getting stumped is an invitation to push your thinking deeper or to try something else. For this reason, I react like it’s the most normal thing in the world when a student tells me their approach didn’t work or when they don’t know what to do next. Sometimes they get a little peeved when I don’t rescue them right away, and that’s okay!
  • Math is especially enjoyable when shared with others in a caring and trusting community. Despite the cultural trope of the solitary mathematical genius, there is no rule saying that math has to be a solo sport. The process of guiding another person to a mathematical idea you’ve uncovered requires patience, clear thinking, and careful consideration of what the other person is comprehending. Similarly, the practice of asking for and receiving guidance requires humility, self-awareness, and careful articulation of what you’re understanding and where you’re feeling fuzzy. To help with this, I treat the word obvious like a cuss word in math class, and students usually buy in pretty quickly!

As a final thought, it feels liberating to put this out there in a less formal way than I’ve articulated aspects of my educational philosophy in the past. For comparison, this is what I’ve used in previous job searches, and almost all of it was composed in 2014, before any of the student reflections above were written.



I still stand by everything in this document, but I really appreciate the type of unencumbered sharing that Sam’s framing of the prompt for this virtual conference has facilitated. In other words, asking “What mathematical flavor are you serving up?” rather than “What’s your theory of mathematics education?” seems more likely to inspire folks to share a healthy multiplicity of approaches instead of competing formal philosophies. And it gives us an opportunity to celebrate our wins instead of worrying about all of the things we’re not doing.

initiation problems for optimization unit

I’ve had the pleasure of teaching standard-level calculus with Sam Shah this school year, and recently we’ve been working with the students on optimization. Rather than starting with the canonical calculus optimization problems, we decided to jump in with maximizing the area of various shapes under curves:

[animation: maximizing the area of shapes under curves in Desmos]

I think I first saw a problem like this in a textbook called Advanced Mathematics by Richard G. Brown (page 167, #12):

[image: the textbook problem (Advanced Mathematics, p. 167, #12)]

I’ve always been fascinated by these types of problems because they’re easy to understand and make guesses about but often have unexpected solutions. I wanted to use Desmos to bring this problem to life, so I put together a Desmos activity and companion sheet (.docx version) to look at four of these problems.

Students worked in groups, and while they all had their own screens, they were expected to move together and come to consensus on the best possible shape before moving on. This fostered lively debate among students as they tried different shapes and improved their guesses by manually calculating areas.

For the “isosceles triangle under a parabola” problem shown above, there were a variety of responses, but there was convergence around the optimal triangle, whose vertex in Quadrant I is (√3, 6):

[image: student responses to the isosceles triangle challenge]

Students were also asked to identify the constraints of their shapes before moving on to the next challenge. After locking in their guesses for each challenge, students had to really dig into the second challenge (the “rectangle under a parabola” problem). This time, we provided a slider that would calculate the area of the rectangle for them as they changed the x-coordinate of the vertex in Quadrant I:

[animation: a slider reporting the rectangle’s area as its Quadrant I vertex moves]

Students were expected to record these data points on the companion sheet to form a sketch of the area function:

[image: the area-function sketch from the companion sheet (challenge 2)]

Many students initially assumed that this area function was going to form a parabola, but after plotting more points, the class decided that it couldn’t be one because of the lack of symmetry. But this function has a peak—how could they find it? This is where the calculus kicked in!
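As a quick numeric check of that observation, here is a tabulation assuming, purely for illustration, that the curve is y = 9 - x² (this is my guess; it’s consistent with the optimal triangle vertex (√3, 6) above, but the activity may use a different parabola). A rectangle with its Quadrant I vertex at (x, 9 - x²) has area A(x) = 2x(9 - x²):

```python
# Tabulate the rectangle's area under the assumed parabola y = 9 - x^2.
# The Quadrant I vertex sits at (x, 9 - x^2), so the rectangle is 2x wide.
def area(x):
    return 2 * x * (9 - x**2)

for x in [0.5, 1.0, 1.5, 1.7, 2.0, 2.5]:
    print(f"x = {x:.1f}  ->  area = {area(x):.2f}")
```

The values climb to a peak near x ≈ 1.7 and fall away at different rates on either side, which is exactly the lopsidedness the class noticed.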

After carefully taking the derivative of the area function, setting it to 0, solving, and determining the dimensions of the best possible rectangle, students were able to finally determine which group came the closest with their initial attempt. They were also ready to tackle the remaining challenges on the second part of the companion sheet (.docx).
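Under the same assumed parabola, that calculus step looks like this:

\[
A(x) = 2x(9 - x^2) = 18x - 2x^3, \qquad A'(x) = 18 - 6x^2 = 0 \quad\Longrightarrow\quad x = \sqrt{3},
\]

so the best possible rectangle is 2√3 units wide and 6 units tall, with area 12√3 ≈ 20.8. The cubic form of A(x) is also why the plotted points couldn’t have formed a parabola.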

Before jumping in, I was also able to recognize groups for getting the closest to the best possible shape, while pointing out that they could do even better!

[image: one group’s near-optimal guess]

All in all, I thought this was a super fun way to kick off the optimization unit while keeping engagement high and providing valuable practice with the non-calculus algebra that can trip students up. Most importantly, calculus was positioned as the aspirin for the headache posed by the Desmos activity.

I would love to see these types of optimization problems become more popular!

Resources:

P.S. See Sam’s post on a lovely lesson he put together called POP! Popcorn Optimization Problem, which is a far more engaging way for students to tackle traditional optimization problems that usually look like this:

[image: a traditional textbook optimization problem]

first day math problem 2017–18: interesting numbers

I decided to re-sequence my start-of-the-year activities and to lead with a low-floor, high-ceiling problem in assigned random groups of three or four students.

Here is the problem, which comes from Phillips Exeter Academy’s Math 1 curriculum:

[image: the problem statement from the Exeter Math 1 curriculum]

I told the groups to figure out everything they could about this situation with prompts like, “What do you notice about interesting numbers? What do you wonder about them?”

As I watched twelve groups of students explore this problem over three classes, I began to see students latch onto different aspects of this problem. All of these questions and discoveries are inter-related, so I’m writing them down now so that I can map them out in the future.

Questions:

  1. Which numbers up through 20 (or so) are interesting?
  2. Why are powers of 2 interesting?
  3. Are powers of 2 the only interesting numbers?
  4. Are there any interesting odd numbers?
  5. What happens when I sum any two consecutive positive integers?
  6. What happens when I sum any three consecutive positive integers?
  7. If n is odd, what happens when I sum any n consecutive positive integers?
  8. If n is even, what happens when I sum any n consecutive positive integers?
  9. How can I decompose any odd number?
  10. How can I decompose any multiple of 3?
  11. If n is odd, how can I decompose any multiple of n?
  12. How can I decompose any even number?
  13. Is there a general algorithm for decomposing any number?
  14. How many ways are there to decompose a given number?

Realizations:

  1. All powers of 2 are interesting.
  2. Only powers of 2 are interesting.
  3. No odd numbers are interesting.
  4. The sum of two consecutive positive integers is odd.
  5. The sum of three consecutive positive integers is a multiple of 3.
  6. If n is odd, the sum of n consecutive positive integers is a multiple of n.
  7. If n is even, the sum of n consecutive positive integers is n/2 more than a multiple of n.
  8. There is an algorithm for decomposing even numbers.
  9. There is exactly one way to decompose a prime number greater than 2.
  10. The powers of 2 are exactly the whole numbers without odd factors.

There was a split between groups that started by trying to answer (the very natural) question #1 (and thus getting to realizations #1 and #2) and those that started by generating and then trying to answer questions #5 and 6 (and thus getting to realizations #4 and #5). There was also one group in one class that decided to explore the sum of the first n consecutive integers (i.e., they wanted to know about the triangular numbers).
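For future reference, realizations #6 and #7 fall out of a short derivation: the sum of n consecutive positive integers starting at a is

\[
\sum_{k=0}^{n-1} (a + k) = na + \frac{n(n-1)}{2} = n\left(a + \frac{n-1}{2}\right).
\]

When n is odd, (n - 1)/2 is a whole number, so the sum is a multiple of n (realization #6). When n is even, the sum is na + (n/2)(n - 1), and since n - 1 is odd, that second term sits exactly n/2 above a multiple of n (realization #7). Realizations #1–#3 can also be checked by brute force; here’s a small script (the function name and the bound of 100 are my own choices, just for illustration):

```python
# Check that a number has no decomposition as a sum of two or more
# consecutive positive integers exactly when it is a power of 2.
def decompositions(n):
    found = []
    for length in range(2, n):                    # at least two terms
        for start in range(1, n):
            total = sum(range(start, start + length))
            if total == n:
                found.append(list(range(start, start + length)))
            if total >= n:
                break                             # larger starts only overshoot
    return found

for n in range(1, 101):
    is_interesting = (decompositions(n) == [])
    is_power_of_two = ((n & (n - 1)) == 0)
    assert is_interesting == is_power_of_two
print("Realizations #1-#3 check out for 1 through 100.")
```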

I think I will definitely use this problem again, with perhaps a bit more structure and guided mini-explorations along the way as groups arrive at various questions and realizations. It would probably be worth making a checklist for each group to help keep me organized as I keep tabs on each group’s progress.

Related:

more on grading—a synthesis of some of my favorite thinkers (part two)

[A continuation of part one.]

Cohen, Guskey, Schimmer, Wormeli

Many teachers worship at the church of the arithmetic mean.

In Fair Isn’t Always Equal (2006), Rick Wormeli writes:

… it’s easier to defend a grade to students and their parents when the numbers add up to what we proclaim. It’s when we seriously reflect on student mastery and make a professional decision that some teachers get nervous, doubt themselves, and worry about rationalizing a grade. These reflections are made against clear criteria, however, and they are based on our professional expertise, so they are often more accurate. Sterling Middle School assistant principal Tom Pollack agrees. He comments, “If teachers are just mathematically averaging grades, we’re in bad shape.” (p. 153)

The best case I’ve been able to find for why the practice of averaging is so fraught is given by Thomas Guskey in On Your Mark (2014):

Can you imagine, for example, the karate teacher suggesting that a student who starts with a white belt but then progresses to achieve a black belt actually deserves a gray belt? (p. 89)

Tom Schimmer hammered this point home in a December 2013 webinar called “Accurate Grading with a Standards-based Mindset”:

Adults are rarely mean averaged and certainly, it is irrelevant to an adult that they used to not know how to do something. Yet for a student, these two factors are dominant in their school experience.

In his article published in the April 2016 issue of “Educational Leadership,” Guskey echoes Wormeli’s point that defensibility and the perception of objectivity are highly prized among many teachers:

In teachers’ minds, these dispassionate mathematical calculations make grades fairer and more objective. Explaining grades to students, parents, or school leaders involves simply “doing the math.” Doubting their own professional judgment, teachers often believe that grades calculated from statistical algorithms are more accurate and more reliable.

In this blog post, David B. Cohen makes the case for reforms that many folks in the TTOG community have long been pushing for:

We need to relinquish our preconceptions about the meanings of specific numbers and percents. Giving up the idea of points altogether would help; points are a convenient fiction, as long as you don’t think too hard about what they supposedly represent.

Cohen recommends ditching the 100-point system:

Why do we need 100 points then? That’s a level of definition that has no meaning. It would be like having a weather report stating today’s high temperature was 58.3 degrees, or including cents in conversations about rents or mortgage payments.

All of these points and reforms encounter institutional resistance, however, because they ask teachers to make major shifts in their practice.

For me, though, it’s worth it. I was so glad to see this article by Alex Carpenter and Alberto Oros in the August 2016 edition of “Educational Leadership,” which made explicit the connection between grading practices and enacting a social justice pedagogy. The authors implore us to “take a moment, right now, to think about how we can modify our gradebooks in the name of justice.”

I’ll reiterate my questions from a year ago, because they are still very fresh on my mind.

A couple questions on my mind

  1. What practices do you, your department, and/or your institution have in place to facilitate difficult conversations about grading, reporting, and assessment?
  2. To what extent would it be a useful exercise for each department within a school to produce its own purpose statement for grading? (“The purpose of grades within the ___ department at ____ School is …”)

more on grading—a synthesis of some of my favorite thinkers (part one)

This is part one of a series I’ll be writing on grading.

Guskey, Kashtan, and Reeves

On his blog, Douglas Reeves writes:

I know of few educational issues that are more fraught with emotion than grading. Disputes about grading are rarely polite professional disagreements. Superintendents have been fired, teachers have held candle-light vigils, board seats have been contested, and state legislatures have been angrily engaged over such issues as the use of standards-based grading systems, the elimination of the zero on a 100-point scale, and the opportunities for students to re-submit late or inadequate work.

Miki Kashtan, co-founder of Bay Area Nonviolent Communication, succinctly and insightfully explains what’s needed to ground intense conversations in cooperation and goodwill:

Focusing on a shared purpose and on solutions that work for everyone brings attention to what a group has in common and what brings them together. This builds trust in the group, and consequently the urge to protect and defend a particular position diminishes.

In On Your Mark (Solution Tree, 2014), Thomas Guskey backs up Kashtan and calls upon the work of Jay McTighe and Grant Wiggins on backward design when he writes, “Method follows purpose.” (p. 15)

Guskey continues to emphasize the importance of beginning with the end in mind when we come together to discuss our craft with other educators:

Reform initiatives that set out to improve grading and reporting procedures must begin with comprehensive discussions about the purpose of grades … (p. 21)

Summary

  • Discussing grading can quickly become prohibitively emotional. (Reeves)
  • Focusing on a shared purpose helps those of us who have already put a stake in the ground to be willing, eager and able to move it. (Kashtan)
  • Before considering the “how” of grading, deeply consider the “why.” (Guskey)

A couple questions on my mind

  1. What practices do you, your department, and/or your institution have in place to facilitate difficult conversations about grading, reporting, and assessment?
  2. To what extent would it be a useful exercise for each department within a school to produce its own purpose statement for grading? (“The purpose of grades within the ___ department at ____ School is …”)

More to come.

thoughts on resilience & grading

I spent the last three days helping to facilitate a leadership retreat for some of our rising 10th, 11th, and 12th graders. This year’s theme was resilience, which we linked closely to one’s relationship with failure.

In several different ways, we asked students to reflect on the extent to which the school provides opportunities for them to fail, process what happened, make adjustments, and persevere through a difficult situation.

As we concluded the retreat this morning, we invited the students to consider how they and the adults at our school could facilitate the development of resilience during the upcoming school year. I was overjoyed with the first comment a boy put forward, which he intended for both students and adults:

Too often we get so focused on grades that we lose sight of the learning. Let’s keep the conversations about the learning rather than the grade.

I was blown away because I had hoped a student would bring this up, and this boy came right out with it. I’d like to make some strategic changes in my messaging around grading, reporting, and assessment this school year, and making the connection to resilience explicit could help keep these shifts rooted in a value to which the community has expressed a commitment.

My guiding question is this: What grading, reporting, and assessment practices (and policies) most effectively promote resilience in students?

Many broad categories of issues come to mind, but in my current context I’d like to focus on redos and retakes.

I would like to try to assemble the most concise, convincing evidence that allowing multiple attempts at demonstrations of mastery facilitates the development of resilience. (I would go further and say that the practice of averaging in the scores of unsuccessful attempts impedes the development of resilience.)

Here’s a selection of articles I’ve read that support this view.

As Thomas Guskey writes in On Your Mark, we won’t get very far if we don’t agree on the purpose of grades, so the goal here is to convince someone who believes that the primary purpose of grades (in math class especially) is to summarize performance on one-time tests (via the arithmetic mean).

What do you think?

  1. What grading, reporting, and assessment practices (and policies) most effectively promote resilience in students?
  2. What is the most concise, convincing evidence you know of that allowing multiple attempts at demonstrations of mastery facilitates the development of resilience?

P.S. The value of mastery-based (competency-based) learning has begun to make its way to the independent school world as well: in this article from 2014, David Cutler writes about his expectation that traditional grades will be obsolete by 2034.

planning for accelerated precalculus

This fall, I’ll be teaching a group of very strong students in the highest of three levels of math my school offers. The goal is to give students an intense “Honors Precalculus+” treatment and get them started on calculus (up through the product rule or so) by the end of the school year so that they can jump right into BC Calculus the following fall.

I’m working on developing the standards for the course, and I’m using the model of “performance indicators” and “learning targets” I grew familiar with when I worked at a mastery-based learning school in New Haven. (For background, see the Great Schools Partnership’s document Proficiency-Based Learning Simplified.)

I would welcome your thoughts on these learning goals. Do any of them feel too easy? Too difficult? How is the balance? If you had to write an essential question capturing these standards, what would it be?


Finally, here’s some additional background on where I’m coming from.

Source Materials

I’m building this course based on a few sources of problems and materials:

Influential Books

Here are a few books I keep thinking about as I plan this course:

early steps in grading transformation: an email to my colleagues

Hello fellow math teachers,

I’d like to share an email I sent to my department this morning describing my experiences moving forward with mastery-based learning and standards-based grading. I’m working to move towards the ideas articulated by many in the #ttog community.

I’d welcome your feedback on the sample progress reports, grading frameworks, or presentation of ideas I’ve put forward below.

– Tom


Dear math colleagues,

I hope you’re having a wonderful summer! I’ve been back in DC for about a week now after teaching two math classes at Phillips Academy in Andover, MA for a residential program called (MS)^2: Math and Science for Minority Students.

After reading On Your Mark by Thomas Guskey at the beginning of the summer, I decided to use the classes I taught at Andover as an opportunity to put together a “proof of concept” for a standards-based method of grading and reporting. In the spirit of moving forward with the conversation several of us began at the end of the school year, I’d like to share with you a method of grading and reporting I have been working on for a few years and had a chance to refine this summer.

I’ve attached a sample end-of-summer progress report for each class I taught:

A few notes for context:

  • I saw each class of 13–14 students for 110 minutes in the morning and 70 minutes in the evening every weekday for five weeks.
  • “Math IA” had the bottom third of the rising sophomores and “Math IC” had the top third.
  • Phillips Academy uses a 1–6 scale for summative grades rather than letter grades. The official labels are as follows:
    • 6—High Honors [at least ~93%]
    • 5—Honors [at least ~85%]
    • 4—Good [at least ~77%]
    • 3—Satisfactory [at least ~69%]
    • 2—Low pass [at least ~60%]
    • 1—Fail [at least ~40%]
    • I included a key with more specific interpretations of these labels in the progress reports.
  • The back-end of these progress reports comprises an Excel spreadsheet and a mail merge in Word, so it’s relatively easy to produce report cards on the fly once it’s all set up.

I wanted to reflect these ideas in putting together this system:

  • Each course was designed backwards from the learning targets, which were given to students up front so that they knew exactly what the expectations were.
  • No summative grade was attached to any particular assessment. Students received written feedback on their work as well as progress reports reflecting their current level of mastery on each learning target.
    • Scores were attached to skills rather than assignments.
  • Each learning target was scored on a 1–4 scale. (A key for these is also included.)
    • The code to the left of each learning target is a reference to a section in the textbook so that students could easily look up examples and additional information.
    • The summative grade for each unit was achieved by averaging the learning-target scores for that unit.
  • The final exam, which was cumulative, focused on those skills for which the class as a whole had the lowest scores, so as to provide the greatest opportunity for demonstrating improvement.
    • Students could bump all the way up from a “1” to a “4” for a particular learning target if they demonstrated mastery on the final.
    • If a student had significant trouble with a learning target on the final, they could bump down at most 1 level. If they already had a “2,” that score remained. (These adjustment rules are sketched in code after this list.)
  • The summative grade for the course was achieved by averaging all of the learning targets from the course.
    • The method for converting from 1–4 to 1–6 is described below.
  • Throughout the summer, students had the chance to demonstrate that they now understood something they previously did not.
    • This could take the form of a short interview or answering a brand new question addressing a given learning target.
  • In order to earn the right to another attempt, students were required to engage in additional learning (making corrections, completing practice problems and checking answers, making flash cards or graphic organizers, etc.).
    • In addition, students could not ask to demonstrate new learning on the same day they’d received tutoring from me. I would tell them, “I need you to sleep on it and try it tomorrow without my help so we can make sure it made it into long-term memory.”
    • Students were repeatedly told, “Over the course of the summer, you will have multiple opportunities to show what you have learned. The only truly final opportunity to show what you know will be the final exam.”
      • Consequently, students could always improve their scores on each learning target. Scores of “1” and “2” were treated as “not yet” rather than “failing.”
      • The stakes for any one assessment did not feel unmanageably high.
  • Homework completion was reported separately from mathematical achievement.
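Here’s a minimal sketch of those final-exam adjustment rules in code (the function name and the string labels are mine; this is a translation of the bullets above, not the spreadsheet I actually used):

```python
def adjust_for_final(current_score: int, final_result: str) -> int:
    """Adjust a 1-4 learning-target score based on the cumulative final exam.

    final_result is one of "mastery", "significant_trouble", or "as_expected".
    """
    if final_result == "mastery":
        return 4                      # demonstrating mastery bumps all the way up
    if final_result == "significant_trouble":
        # drop at most one level; a score of 2 (or 1) stays where it is
        return current_score if current_score <= 2 else current_score - 1
    return current_score              # otherwise the score stands

# For example, a student holding a 2 who struggles on the final keeps the 2.
assert adjust_for_final(2, "significant_trouble") == 2
assert adjust_for_final(1, "mastery") == 4
```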

After a period of adjustment, nearly all students came to internalize the growth mindset implicit in this method of grading and reporting, and reviews were very positive.

Naturally, there were plenty of areas for improvement as well:

  • I tried to capture too many learning targets, and they were often too granular.
    • For example, I’m not sure that “I can identify the intervals over which a function is increasing, decreasing, and constant” is significant enough to merit its own learning target. Perhaps this specific skill belongs under a broader learning target.
    • On the other hand, I found “Using a table, a graph, or an equation, I can explain what it means for a function to be quadratic” to be a useful piece of information to capture and report on.
  • By averaging all the learning targets, I sent the message that all learning targets were equally important.
    • In reality, I’ve written learning targets requiring different depths of knowledge. It might be better to explicitly group learning targets by DoK and to calibrate the distribution, and I imagine this distribution would vary based on the level of the course.
  • Broader learning goals, such as mathematical practices and habits of mind, were omitted.
    • Goals such as communication, mathematical reasoning/proof, modeling, and attention to detail/precision are not explicitly measured or reported.
    • A colleague of mine has done some excellent work in enumerating these types of goals, and I’d like to try to pick a few of them to focus on this fall.
    • This summer, I generally didn’t penalize students for careless mistakes if the core understanding seemed to be there. However, I don’t want to send the message that attention to detail isn’t important, so I’d like to find a way to capture some data about precision.
  • The conversion process to achieve summative grades was somewhat arbitrary.
    • Here was the scale I used; note that the bar is slightly higher for the upper-level class:
      • 6: At least 3.7
      • 5: At least 3.2 (For Math IC, 3.3)
      • 4: At least 2.7 (For Math IC, 2.8)
      • 3: At least 2.3 (For Math IC, 2.3)
      • 2: At least 1.7 (For Math IC, 1.8)
      • 1: At least 1
    • I’d like to explore how this might look for converting to letter grades. (A minimal sketch of the current conversion arithmetic follows this list.)
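As promised just above, here’s a minimal sketch of that arithmetic: average the 1–4 learning-target scores, then apply the cutoffs. The function name is mine, and I’ve used the regular-section cutoffs (Math IC’s were slightly higher); this illustrates the scheme rather than reproducing the actual spreadsheet logic.

```python
# Convert an average of 1-4 learning-target scores to the 1-6 summative scale
# using the cutoffs listed above (regular-section values).
CUTOFFS = [(3.7, 6), (3.2, 5), (2.7, 4), (2.3, 3), (1.7, 2)]

def summative_grade(learning_target_scores):
    average = sum(learning_target_scores) / len(learning_target_scores)
    for cutoff, grade in CUTOFFS:
        if average >= cutoff:
            return grade
    return 1

print(summative_grade([3, 3, 4, 3, 4, 3, 2, 4]))  # average 3.25 -> 5
```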

What I’d especially like feedback on:

  • How many learning targets seem reasonable for a math class with ten units?
  • What range of cognitive demand (depth of knowledge) should be required by a learning target?
    • How should the answer to this question change based on the level of the class?
    • Should learning targets be framed in terms of the Mathematical Tasks Framework, the Transfer Demand Rubric (Proposed Grading Framework), or some combination of the two along with Webb’s DoK taxonomy?
  • What types of cutoffs might make sense for converting from a 1–4 scale to a letter grade scale?
    • For example, should the gap between a B– and a B be congruent to the gap between an A and an A+?
  • What is the most effective way to measure and report attention to detail, precision, and avoidance of careless mistakes?
  • Anything else that comes to mind.

Thanks for taking the time to read. Again, no pressure to reply—just wanted to get these thoughts out while they’re fresh.

OK, back to summer!


Attachments: