“Oh, I get it,” the students say, and I tell them that that's great, and that it is not enough. I tell them that learning math is a lot like learning to play a musical instrument, or learning to dance, or to play a sport. Having once successfully played through a piece of music, or having once correctly executed a move, is all very well as a starting point - but only practice and repetition with feedback will ensure that they perform reliably and smoothly. “Getting it” goes only so far; learning involves something more than that.
This is what I tell my students, so I do “get” that insight is not enough, that learning requires changing behaviors and habits, and yet it seems I have not learned this lesson at a behavioral level myself. And so it was that one of the most useful experiences of the Math conference at Asilomar yesterday was cutting the last class to stare at the ocean and think about what I had wanted to do and not do this semester, and about what is actually happening in my classroom. It’s been an unexpectedly difficult semester for private reasons (conditions at school would have predicted the best teaching year ever), with limited occasion for reflection on my work, and the Math conference provided a chance to take stock and regroup.
I “get” that assigning a large number of practice problems has little or no advantage over assigning fewer problems and is even likely to be counterproductive. I see that assigning homework problems that a large number of students won’t be able to do correctly on their own is silly. I know that having the students do more work than I have time to check is a waste of their time as well as mine. And I believe that there is much value in spending some minutes of class time every now and again on activities whose purpose is building relationships rather than practicing math skills. Yet, judging from what I’m doing, it would seem I disagree with all of the above. Somehow, all kinds of resolutions made late last year got lost in the flurry of starting up again this year, and I’ve found myself teaching in ways I had made definite decisions not to.
So, what to do? Since it’s not a matter of “getting it,” not a matter of knowledge or conviction about what to do, I’m thinking about what kinds of feedback mechanisms to set up toward changing my behaviors. I could write down what I want to do and keep rewriting it daily to remind myself. I could talk to colleagues about what I want to do and ask them to do random five minute observations and let me know what things are looking like – and I am fortunate enough to have several fantastic colleagues who would be willing to do this. I could tell the students that there’s a cap of a certain number of homework problems and ask them to remind me if I forget my own policy. I’m sure they’d be delighted to oblige.
On another note, it’s been nice to rediscover the online math ed community recently after having neglected these readings for lengthy periods at a time this fall. New ideas and debate are like food and sleep in that they’re easily given lower priority during hectic times, and yet these things are all necessary for teaching and learning well in the long run. Glad for all of you who are still around and writing...
Sunday, December 7, 2008
Sunday, August 3, 2008
Algebra 2 is amorphous and has multiple heads
and defies reduction into compact and self-contained little parts. I am still trying to do just that, however. This continues the discussion about applying Dan Meyer's assessment system for Algebra 2, and anyone not deeply interested in this narrow topic may as well go on to the next item in their Reader.
Some time in June, Glenn Waddel wrote:
I have a rough draft of my skills checklist done right now. I am not sure I am going to post it yet. I am not happy with it. I think I am stuck in the “do I have to assess everything?” mode.

More than a month later, that's where I am still... and since time is running out I'm going to post the incomplete work in case that helps accelerate the process.
Glenn has meanwhile posted a carefully worked out list, and written about the tension between assessing specifically enough without introducing an intimidating number of concepts. Reading his list reminded me that the differences between the various versions of Algebra 2 around are significant, and that our final lists will have to be different to accommodate our respective course specifications and student groups.
In particular, my Intermediate Algebra course leaves conics and discrete math for the Trig/Precal course, which uses the same textbook and picks up where my course leaves off. I do not need to include assessments on these topics, then. On the other hand I must make sure that graphical features of quadratics, polynomials and exponentials are covered carefully, as this will not be repeated in Precal, and this increases the number of skills for these topics beyond what may be needed in Glenn's course. Also, Intermediate Algebra is for the students who do not make it to Honors Algebra 2, and so I need to include a lot of Algebra review. Dan Greene's version of Algebra 2 is similar to mine in the topics it covers, but students arrive directly from Algebra 1 without Geometry between, and so may need somewhat less review. I'm guessing that Sam Shah's course is for relatively advanced students. However, while our lists will need to be different in order to take these things into account, comparing notes could still be very useful.
Dan Greene and I met a few weeks ago, and made some progress on breaking down the chapter on Exponential and Logarithmic Functions, a unit where I am replacing all of my concept test items from last year. So far, the unit on Numbers and Functions has been the most demanding unit, I think. There are so many big, abstract and quite unfamiliar ideas there. On the one hand, the process of breaking down this chapter into parts that can be practiced separately may therefore be all the more necessary in order to make it accessible to students. On the other hand, much of the point is for students to recognize an abstract idea, such as the transformation of a function, across pretty different contexts, and this is just hard to assess in a piecemeal way. Or so it seems to me.
Finding a convenient format or platform for technical discussion of how to slice a topic into discrete skills and concepts is a challenge of its own, though. A series of blog posts, one for each chapter, seems both clunky and overly time-consuming (and school starts in just over a week over here). Instead, I've stored my work-in-progress on this Google Site, and if you have time and inclination to think some about what are the essential things to test for each topic or anything else, suggestions would be much appreciated.
Monday, July 28, 2008
Coffee and math ed readings
This summer I've met with a doctoral student of mathematics education a couple of times. Her area of interest is mathematical learning disabilities (MLD). Last time we met in a coffee shop to discuss an article by Geary et al.1 on students' placing of numbers on a number line, a topic I've been fascinated by for some time.
As it turns out, number lines constitute an active area of study in cognitive psychology and neuroscience, of theoretical interest
because magnitude representations, including those that support the number line, may be based on a potentially inherent number-magnitude system that is supported by specific areas in the parietal cortices ... (p. 279)

Geary's article cites earlier work by Siegler and Opfer2 which suggests that young children use a more or less logarithmic scale when placing numbers. Children tend to perceive the difference between 1 and 2 as being greater than the difference between 89 and 90 in a semi-systematic way (p. 279), so that most numbers get clustered to the left-hand side of the number line. This tendency is thought to reflect the postulated "inherent number-magnitude system."
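To make the contrast concrete, here is a small sketch (my own illustration, not code from the article) of accurate linear placement next to the log-compressed placement model commonly fitted in this literature:

```python
import math

def linear_position(n, top=100):
    """Accurate placement: position on the line is proportional to the number."""
    return n / top

def log_position(n, top=100):
    """Log-compressed placement: most numbers crowd toward the left end."""
    return math.log(n) / math.log(top)

# Under the log model, 10 lands at the midpoint of a 0-100 line, and the
# gap between 1 and 2 dwarfs the gap between 89 and 90:
gap_small = log_position(2) - log_position(1)    # roughly 0.15 of the line
gap_large = log_position(90) - log_position(89)  # roughly 0.002 of the line
```

This is just the shape of the hypothesis: a child using the log-like scale would place 10 near the middle of a 0-to-100 line, which is exactly the kind of placement the instruction discussed below aims to correct.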
Geary et al. compared first and second graders' placements of numbers on a blank number line. They found some evidence that mathematically learning disabled students' placements not only failed to conform to the linear pattern at a rate comparable to that of their peers, but that their pre-instructional number placements also looked less like the logarithmic placements of non-disabled children:
Even when they made placements consistent with the use of the natural number-magnitude system, the placements of children with [mathematical learning disabilities] and their [low achieving] peers were less precise than those of the [typically achieving] children in first grade, that is, before much if any formal instruction on the number line. The implication is that children with MLD and LA children may begin school with a less precise underlying system of natural-number magnitude representation. (p. 293)

Geary et al. report correlations between performance on the number line tests and a battery of other cognitive tests. Unfortunately I know neither enough statistics nor enough cognitive psychology to extract terribly much information from these parts. Of rather more immediate interest to me as a teacher is, in any case, the question of how to go about eliciting the kind of cognitive change that's needed here. It certainly is not the case that all kids have the linear scale all figured out by the end of second grade - many of my 9th and 10th graders last year had not. The good news is that for this important topic, instruction tends to work. Cognitive Daily reports on more recent work by Siegler and Opfer showing that second graders responded quickly to some targeted feedback on their number placements, and that
once the linear form is learned, the transformation is quick, and permanent.
In other news, this thing of chatting about research in MLD over a morning coffee has been immensely enjoyable. The readings are demanding enough that I'd be much less likely to work through them if I were studying alone, but I'm awfully glad to be learning some of this.
I'm wondering how much interest there would be for some kind of regular math teacher/ math ed researcher meetups, such as a discussion of a predetermined article over coffee on Saturday mornings. Many new math teachers already have ed classes scheduled at that time, though, and older math teachers typically have family to take care of during weekend mornings, so how many would remain? And would there really be many researchers interested in talking with teachers? Still, given the curious absence of contact between researchers working on math education and math instructors working in schools, even in cases where their buildings are in the same geographical area, it would seem that some thinking should be done on how to afford more "vertical alignment" in Dina Strasser's sense of the term.
1. Geary, D. C., Hoard, M. K., Nugent, L., and Byrd-Craven, J. (2008). Development of number line representations in children with mathematical learning disability. Developmental Neuropsychology, 33(3), 277-299.
2. Siegler, R. S., and Opfer, J. (2003). The development of numerical estimation: Evidence for multiple representations of numerical quantity. Psychological Science, 14, 237-243.
Have you used Algebra tiles?
I introduced integer tiles to my Algebra classes early last fall, and then quickly gave it up. Several students balked at using such middle-school measures for studying math, and my argument that it is valuable to be able to represent math statements in many different ways, including with concrete objects, failed to persuade. In a class of insecure freshmen still figuring out their relative positions in the class, and in some cases still stinging from having been placed in Algebra rather than in Geometry after the placement test, using materials perceived as childish just wasn't socially acceptable.
I quietly dropped the project, only including a problem on modeling integer subtraction as an extra credit problem on a unit test some time later. Not one student got it right. Later in the year, when a number of students continued to demonstrate confusion about combining signed integers and combining like terms, I sometimes wished I'd stuck with the manipulatives a little longer.
Now I'm trying to make up my mind about whether - and, if so, how - to use tiles in my Algebra classes in the fall. Apart from the probable social issues to deal with, I'm wondering about the efficacy of Algebra tiles. A point I've picked up in passing while reading this summer (I'm sorry I can't recall where!) is that the same students who are likely to have much trouble with elementary Algebra are also likely to have difficulty picking up how to manipulate Algebra tiles.
There is, after all, no magic involved. The rules for representing addition and subtraction of integers with bi-colored tiles are not self-evident or trivial. Even for me, the representation of subtraction problems by adding the necessary number of "zero pairs" came as a bit of a surprise. And while I found the very idea cool and exciting, I cannot take for granted that my students will, even if they prove able and willing to master the rules of the game.
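For what it's worth, the zero-pair procedure can be stated precisely. Here is my own formulation of it as a sketch (not taken from any particular curriculum): to compute a - b with tiles, lay out tiles for a, then add zero pairs until there are physically enough tiles of the right color to take away b.

```python
def subtract_with_tiles(a, b):
    """Model a - b with bi-colored integer tiles.

    Lay out tiles for a, then remove the tiles representing b, adding
    zero pairs (one positive plus one negative tile) whenever the
    needed color runs short.
    """
    pos, neg = max(a, 0), max(-a, 0)             # tiles laid out for a
    take_pos, take_neg = max(b, 0), max(-b, 0)   # tiles to remove for b
    while pos < take_pos or neg < take_neg:      # not enough to take away?
        pos += 1                                 # add a zero pair:
        neg += 1                                 # one +1 and one -1 together
    return (pos - take_pos) - (neg - take_neg)

# Example: 3 - (-2).  There are no negative tiles to remove, so two zero
# pairs are added; removing the two negatives leaves five positive tiles.
```

Spelling it out this way makes the point in the paragraph above vivid: the procedure is a small algorithm, and students who struggle with the formalism may struggle just as much with the algorithm.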
So, what are your experiences with Algebra tiles? How do you go about changing the image of tiles as being all too elementary? And assuming that you have gotten your young charges to take the tiles seriously, how much do you feel that the students learn this way that they would not learn just as well by simply reiterating the formalism of signed numbers and like terms?
Saturday, June 14, 2008
Applying Dan's assessment system, Part II - scoring
Note: A discussion of more general lessons learned while applying this assessment system is posted here. This entry is a dry, technical discussion of scoring and grade calculations, of interest only to teachers thinking of applying this system themselves.
Dan Meyer assesses students at least twice on every test item, scoring out of 4 each time. At the second round of assessment, he alters the possible points from 4 to 5. If a student scores a 4 on both rounds of assessment, she nets a 5. Otherwise, her highest score applies. Dan makes the second round of assessments a little harder than the first, so that a second 4 indicates greater skill than the first 4.
Altering the possible points from 4 to 5 entails that students who do not actually improve their performance from one assessment to the next automatically see their grade drop. For example, if a student scores a 3 on the first assessment, and then another 3 on the second, more demanding assessment of the same skill, the grade on that particular concept drops from a 3/4=75% to a 3/5=60% - from a solid C to a D-. This caused problems that almost made me abandon the system early last fall. For one, students got upset when their grade dropped without their knowledge having changed. Secondly, having grades drop after a progress report has been issued is not actually legal - or so I was told by my Department Head. An evening shortly after the first progress reports had been printed found us manually going through all the scores in the gradebook, altering the scores so that the grades would be the same as before, for example by changing 3/5 to 3.75/5, since that is equal to 3/4=75%.
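For concreteness, here is the arithmetic of Dan's rule as I've described it, sketched in code (my paraphrase of his system, not his own formulation):

```python
def dan_grade(first, second):
    """Dan's rule as described above: both assessments are scored out of 4;
    after the second round the denominator becomes 5.  Two 4s net a 5/5;
    otherwise the highest score applies, now out of 5."""
    score = 5 if (first == 4 and second == 4) else max(first, second)
    return score * 20  # percent: score out of 5

# A student who scores 3 twice drops from 3/4 = 75% to 3/5 = 60%,
# even though nothing about her performance changed.
```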
Another problem with this system was that a scale from 0 to 4 seemed fairly coarse grained. Students who made a mistake significant enough not to merit a top score on the first assessment would be marked down by 25 percentage points, and if they did not improve markedly by the second assessment they would net a D-. Improvement from this D- would be possible only if they subsequently scored a perfect score. I first thought that the large number of skills and the repetition of assessments would lead to an adequate continuity of the total grading scale, that students might average a C by scoring perfectly on some skills and poorly on others. However, some students seemed, even when working hard, to be unable to ever score a 4. They'd always make some or other significant mistake, but not enough to make a D- seem appropriate. Now I am sure that in the mutual adjustment of quiz difficulty and scoring practice there is some wiggle room for making this work in a fair way, and I assume Dan Meyer has figured out a balance here. However, I ended up changing my grading scale.
Solving these problems proved pretty difficult without losing important features of the original system, however, and I found no perfect solution. I wanted my score assignment to do what Dan's did, in particular, to make it necessary for students to take every assessment twice, in order to ensure stability and retention of knowledge. Dan's practice of increasing the possible points does just that - students can not just be satisfied with their 3/4=75% and decide not to attempt the second assessment of the same skill. In the end I decided not to report students' scores online until they had had both assessments. I made the two assessments of equivalent difficulty (which simplified things for me) and then grades were assigned based on students' best two scores according to the following table:
In summary: For scores of 3 or lower, the higher score applies. If both scores are above 3, the grade is the average of the two. If one score is above a 3 and the other below, the grade is the average of 3 and the higher grade. With this score assignment, students still had an incentive to demonstrate perfect mastery twice, in order to net a grade of a 100.
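Since the table itself didn't survive into this post, here is the summary rule in code form. The conversion from the 0-4 scale to a percent by dividing by 4 is my assumption, chosen so that two 4s net a 100 as described; I've grouped a score of exactly 3 with the "below" cases, which gives the same result either way.

```python
def composite_grade(a, b):
    """Combine a student's best two scores (each 0-4) per the rule above:
    both at or below 3 -> the higher score counts; both above 3 -> average
    of the two; one above 3 and one at or below -> average of 3 and the
    higher score."""
    lo, hi = sorted((a, b))
    if hi <= 3:
        score = hi
    elif lo > 3:
        score = (lo + hi) / 2
    else:
        score = (3 + hi) / 2
    return score * 25  # assumed percent conversion: score out of 4

# Two 4s still net a 100, so demonstrating mastery twice stays worthwhile,
# while a single weak score no longer drags a strong student down to a D-.
```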
A disadvantage of this system is its clunkiness compared to Dan's simpler system. Much of the appeal of this whole approach to grading was its transparency to students, the clarity it could afford them about what to focus on. Some of this is lost with this conversion table. Also, since the best two scores count, the system appears to have somewhat more inertia; poor scores don't go away as fast as they seem to in the original system, where the better score always counts. This slower improvement is more appearance than reality, since two 4's are necessary to achieve a 100 in Dan's system too, but appearance matters in this context. The main disadvantage, however, was switching to this different scale after the first progress report, which caused some confusion and, I think, some loss of buy-in from students. They seemed a little less enthusiastic about completing their tracking sheets after that.
As an alternative, I experimented a little with just entering both of the best two scores into PowerGrade this spring, labeling the entries "Skill 14A" and "Skill 14B," for example, and assigning half weight to each. I am undecided about whether I will do this in the fall or just enter the composite grade. It is of paramount importance that the students understand the relation between the scores on the papers they get back and the scores on their grade printout, and this system would help in that regard, but it would make for a large number of gradebook entries, which means more messiness.
Finally, a note on the scoring of any quiz item: In some cases it made sense to assign a point value to different components of the test item, and sometimes I wrote the test items to make this possible. Other times, I evaluated the complete response to the test item as a whole, and assigned scores as follows:
Frankly, for some skills that did not lend themselves well to decomposition into parts with point values for each, I'd score based on my mental image of what a D-, a B- and an A would look like. If grades are supposed to be derived from scores rather than the other way around, that introduces some circularity that one might argue about, but I don't care. I think grades as descriptors of performance levels rather than as translations of some numerical score make more sense anyway. But that is another story that would make for a separate discussion.
And since this scoring business turned out so much trickier than I'd anticipated, well-thought out suggestions for making it clearer and fairer would be appreciated.
Dan Meyer assesses students at least twice on every test item, scoring out of 4 each time. At the second round of assessment, he alters the possible points from 4 to 5. If a student scores a 4 on both rounds of assessment, she nets a 5. Otherwise, her highest score applies. Dan makes the second round of assessments a little harder than the first, so that a second 4 indicates greater skill than the first 4.
Altering the possible points from 4 to 5 entails that students who do not actually improve their performance from one assessment to the next automatically see their grade drop. For example, if a student scores a 3 on the first assessment, and then another 3 on the second, more demanding assessment of the same skill, the grade on that particular concept drops from a 3/4=75% to a 3/5=60% - from a solid C to a D-. This caused problems that almost made me abandon the system early last fall. For one, students got upset when their grade dropped without their knowledge having changed. Secondly, having grades drop after a progress report has been issued is not actually legal - or so I was told by my Department Head. An evening shortly after the first progress reports had been printed found us manually going through all the scores in the gradebook, altering the scores so that the grades would be the same as before, for example by changing 3/5 to 3.75/5, since that is equal to 3/4=75%.
Another problem with this system was that a scale from 0 to 4 seemed fairly coarse grained. Students who made a mistake significant enough not to merit a top score on the first assessment would be marked down by 25 percentage points, and if they did not improve markedly by the second assessment they would net a D-. Improvement from this D- would be possible only if they subsequently scored a perfect score. I first thought that the large number of skills and the repetition of assessments would lead to an adequate continuity of the total grading scale, that students might average a C by scoring perfectly on some skills and poorly on others. However, some students seemed, even when working hard, to be unable to ever score a 4. They'd always make some or other significant mistake, but not enough to make a D- seem appropriate. Now I am sure that in the mutual adjustment of quiz difficulty and scoring practice there is some wiggle room for making this work in a fair way, and I assume Dan Meyer has figured out a balance here. However, I ended up changing my grading scale.
Solving these problems proved pretty difficult without losing important features of the original system, however, and I found no perfect solution. I wanted my score assignment to do what Dan's did, in particular, to make it necessary for students to take every assessment twice, in order to ensure stability and retention of knowledge. Dan's practice of increasing the possible points does just that - students can not just be satisfied with their 3/4=75% and decide not to attempt the second assessment of the same skill. In the end I decided not to report students' scores online until they had had both assessments. I made the two assessments of equivalent difficulty (which simplified things for me) and then grades were assigned based on students' best two scores according to the following table:
In summary: For scores of 3 or lower, the higher score applies. If both scores are above 3, the grade is the average of the two. If one score is above a 3 and the other below, the grade is the average of 3 and the higher grade. With this score assignment, students still had an incentive to demonstrate perfect mastery twice, in order to net a grade of a 100.
A disadvantage of this system is it's clunkiness compared to Dan's simpler system. Much of the appeal of this whole approach to grading was its transparency to students, the clarity it could afford them about what to focus on. Some of this is lost with this conversion table. Also, since the best two scores count, the system appears to have somewhat more inertia; poor scores don't go away as fast as they seem to in the original system, where the better score always counts. This slower improvement is more appearance than reality, since two 4's are necessary to achieve a 100 in Dan's system too, but appearance matters in this context. The main disadvantage, however, was switching to this different scale after the first progress report, which caused some confusion and, I think, some loss of buy-in from students. They seemed a little less enthusiastic about completing their tracking sheets after that.
As an alternative, I experimented a little with just entering both of the best two scores into PowerGrade this spring, labeling the entries "Skill 14A" and "Skill 14B," for example, and assigning half weight to each. I am undecided about whether I will do this in the fall or just enter the composite grade. It is of paramount importance that the students understand the relation between the scores on the papers they get back and the scores on their grade printout, and this system would help in that regard, but it would make for a large number of gradebook entries, which means more messiness.
Finally, a note on the scoring of any quiz item: In some cases it made sense to assign a point value to different components of the test item, and sometimes I wrote the test items to make this possible. Other times, I evaluated the complete response to the test item as a whole, and assigned scores as follows:
Frankly, for some skills that did not lend themselves well to decomposition into parts with point values for each, I'd score based on my mental image of what a D-, a B- and an A would look like. If grades are supposed to be derived from scores rather than the other way around, that introduces some circularity that one might argue about, but I don't care. I think grades as descriptors of performance levels rather than as translations of some numerical score make more sense anyway. But that is a story for a separate discussion.
And since this scoring business turned out to be much trickier than I'd anticipated, well-thought-out suggestions for making it clearer and fairer would be appreciated.
Friday, June 13, 2008
Applying Dan's assessment system, Part I
Dan Meyer breaks his courses into some 35 discrete skills and concepts, keeps separate records on students' performance on each skill, and keeps retesting students and counting their highest scores. The following two entries are some notes on things I learned while applying an adapted version of his system to my Algebra 1 and Intermediate Algebra classes this year. The second entry is a dryly technical discussion of scoring.
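The bookkeeping behind such a system is simple. Here is a minimal sketch (my own illustration, not Dan's actual setup), where each skill keeps its full history of attempts and only the highest score counts:

```python
from collections import defaultdict

# skill name -> list of scores from successive attempts
attempts = defaultdict(list)

def record(skill, score):
    attempts[skill].append(score)

def counted_score(skill):
    # retesting can only help: the highest score is the one that counts
    return max(attempts[skill])

record("Skill 14", 2)
record("Skill 14", 4)
record("Skill 14", 3)
```

After these three attempts, `counted_score("Skill 14")` is 4; the later weaker attempt does not pull the grade down.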
In accordance with my Department Head's recommendation, I did not entirely replace traditional comprehensive tests with this more piecemeal system. For Algebra 1, these concept quizzes were weighted at 40% of students' grades, while comprehensive tests made up the remaining 30% of the assessment component. For Intermediate Algebra, I weighted the two types of assessments at 35% each. My experiences were that...
...this system worked significantly better for Algebra 1 than for Intermediate Algebra.
In Algebra 1, I felt that pretty much everything the students really needed to know was covered by the concept quizzes – I might as well not have done chapter tests at all. For Intermediate Algebra, however, the skills tended to get cumbersomely complex or impossibly many, and the supplemental chapter tests were necessary and useful.
One reason is that Intermediate Algebra, which is essentially the first 70-80% of Algebra 2, covers much more.1 Another reason is that synthesis and solution of multi-step problems are inherent, irreducible goals of Algebra 2, and these skills need to be assessed, too.2
... for diagnosing and remedying deficiencies in basic skills, this system was beautiful.
At some point early in the semester I realized that a number of incoming Algebra 1 students did not know the concept of place value and could place neither decimal numbers nor fractions on a number line. Writing an assessment on placing decimals on the number line made it possible to separate out who was having trouble with this, and to know when a critical mass of students had caught up in this area. As a tool for probing missing background skills and for placing these skills clearly and definitely on the agenda this was powerful.
... writing effective assessment items was harder than I thought.
When an assessment may potentially be repeated two, three, even five or six times, what it measures had better be really important, and the assessment had better actually capture the intended skill. It is not as easy as it may sound to decide which elements of the course really are that important; which are the parts on which other understanding hinges. My list of concepts to be assessed always tended to get too long, and trimming down to the real essentials was a constant challenge. As for designing valid measurements of students' skills, I guess only experience makes it possible to figure out what kinds of problems will really show that they know what they need to know, what kinds of problems plough just deep enough without getting too involved, what kinds of misunderstandings are typical and must be caught in order to make necessary remediation possible.3
... assessments are not enough. Improvement is not automatic.
That's obvious, of course. How silly to think otherwise. Frankly, part of what I found attractive about this assessment system was the idea that with goals broken down into such small, discrete pieces, students would become empowered and motivated and take the initiative to learn what they needed to make the grade. That was actually to a significant extent the case. Tutoring hours were far more efficient due to the existence of these data, and students knew what to do to "raise their grade." However, a lot of students continued to score poorly, repeating the same mistakes, after three, four, five rounds of assessment on the same topic. Some would come during tutoring hours to retake a quiz and still make exactly the same mistakes... For weaker students especially, then, it is important to remember that the assessment data are tools for me to actually use. There is no automaticity in the translation of this very specific feedback into actual understanding.
... the transparency of the system means bad things are out there for everyone to see.
That's what we want, isn't it? The direct and honest reporting involved was a major appeal of this system. However, it takes some foresight for this not to lead to discouragement. While it is pretty common practice among math teachers, any teachers, to rescale test scores so that the class average turns out okay, this could not be done in any simple way with these conceptwise assessments. The only way to improve class grades was by reteaching the material and testing again. This involved a time delay during which the grades, which were published in an online gradebook, could be quite low. This was especially true during the first month or two of school, when the grades were based on relatively few entries, and - well - the first months of school may not be the time you want parents to worry about what you're doing when you're a new employee. In the early stages I ended up scaling chapter tests a good deal in order to compensate for some low concept quiz scores and make the overall grades acceptable. With time, a combination of rewriting certain concept quizzes that were needlessly tricky and teaching some topics better made this less necessary.4
In conclusion, I am definitely keeping this system for Algebra 1, probably increasing the weighting of these assessments and reducing the number and importance of comprehensive tests. For Intermediate Algebra I am keeping chapter tests, and writing a new set of piecemeal assessments to cover just the basics, so that I can have the hard data on who is really lost, but without even trying to force these assessments to cover the entire curriculum. I'll need to make sure that the first skills are very well taught and mastered before the first round of assessments: thinking a little strategically to make sure the early results are good increases buy-in, and student ownership is after all much of the point here.
Notes
1 By way of example, a comparison of the content of the chapters on exponents in the two courses: To assess mastery of this chapter for Algebra 1, I needed to check that students knew the definition of a natural power as repeated multiplication, that they could apply the power rules to simplify expressions, that they could deal with negative and zero powers, and that they could complete a table of values of a simple exponential function such as 2^x and plot the points to sketch a simple exponential graph. For the chapter on exponential and logarithmic functions for Intermediate Algebra, however, I needed to check whether students could do all of the above, plus convert between exponential and logarithmic form, apply the properties of logarithms, solve exponential and logarithmic equations by applying one-to-one properties, solve such equations by applying inverse properties, apply the change-of-base formula, apply the compound interest formula, identify transformations of the exponential function, and understand that exponential and logarithmic functions are inverses of each other, plus a few other things that I just skipped. The number of chapters to be covered is pretty much the same for both courses, but the number of concepts and skills? Different stories. Writing broader concept tests for more advanced courses is a possibility, but the advantages of this piecewise assessment system over the usual comprehensive test system are quickly lost this way.
2 For an example of how some core skills of Intermediate Algebra are by nature multi-step and integrative, consider the case of solving a third-degree polynomial equation by first finding a root by graphing, then dividing by the corresponding linear factor, then applying the quadratic formula to find the remaining roots. This task is too complex for a conceptwise assessment to be very useful. I had separate assessments on (1) identifying factors given the graph of a polynomial, (2) polynomial division and rewriting a polynomial using the results of the division process, (3) stating and applying the factor theorem, and (4) applying the quadratic formula. I still wanted to check whether the students could put it all together.
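To make the multi-step nature concrete, the whole chain can be sketched in code. The cubic x^3 - 2x^2 - 5x + 6 is my own hypothetical example, chosen so that x = 1 is a root a student might spot from a graph:

```python
import math

def synthetic_division(coeffs, r):
    """Divide a polynomial (coefficients from highest degree down) by (x - r)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]          # quotient coefficients, remainder

def quadratic_roots(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)  # assumes real roots for simplicity
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# Solve x^3 - 2x^2 - 5x + 6 = 0, with the root x = 1 read off a graph:
# divide out (x - 1), then apply the quadratic formula to what remains.
quotient, remainder = synthetic_division([1, -2, -5, 6], 1)
other_roots = quadratic_roots(*quotient)
```

A zero remainder confirms the graphed root, the quotient is x^2 - x - 6, and the quadratic formula supplies the remaining roots 3 and -2 — each step its own skill, but the sequencing is a skill of its own.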
3 As for the assessment being valid, actually capturing the important skill, here's an example of a failed attempt: I wrote one concept quiz about identifying the difference between an equation and an expression, about distinguishing the cases where you solve for a variable from the case where you can only simplify – but success on this assessment did not mean an end to confusing these two cases. Does that mean that the assessment was poorly written, or rather that this distinction just doesn't lend itself to being assessed once and for all in a little concept quiz? Are understanding equivalence, and distinguishing equations as statements that are true or false from expressions that just are, ideas too abstract to be covered this way? I don't know, but my impression is that the quiz did little to eradicate the common mistake of treating expressions as if they were equations, for example by adding or subtracting new terms in order to simplify.
4 This is at a private school, where determining the required level of mastery of each standard is to a larger extent up to the teacher, since no state testing is involved in defining the bar.
Saturday, May 24, 2008
I love inverses :)
It's sheer nerd joy, finding the inverse of an exponential or a quadratic function; confirming that entering the output of a relation into its inverse really does return the original input; finding that the graphs of a relation and its inverse really are reflections of each other in the line y = x. I think that requiring all Intermediate Algebra students to do this would be demanding a bit too much, so I offer some worksheets on inverses as extra credit opportunities, and under such conditions many students are more than willing to try. With appropriate enthusiasm, one student highlighted parts of this graph of a quadratic and its inverse in red pencil before turning it in.
I find that this work on inverses deepens students' understanding of the meaning of solving equations, and helps them appreciate the idea that the operations needed to isolate the variable are operations that undo operations previously performed on it. The students need a lot of help on the first examples, and then are quite pleased with themselves when they find they can do this initially hard bit of algebra on their own.
Following Dan Greene, I emphasize the three representations of a relation (Equation! Table! Graph!) again and again, and it is helpful to reiterate these alternative representations when working with inverses. We can find the inverse by interchanging x and y in the equation, by interchanging the values of x and y in the table, or by interchanging the coordinates of each point on the graph of a relation. Talking this way in the context of finding inverses in turn reinforces the idea of equations, tables and graphs as representations of the same information - another nice thing about working with inverses.
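The "confirming" step lends itself to a quick computational check as well. A minimal sketch (my own, for illustration) using the exponential function 2^x and its inverse, log base 2:

```python
import math

def f(x):
    return 2 ** x          # the relation y = 2^x

def f_inv(y):
    return math.log2(y)    # its inverse, y = log2(x)

# entering the output of the relation into its inverse returns the original input
round_trips_ok = all(math.isclose(f_inv(f(x)), x) for x in (0, 1, 2.5, 5))

# interchanging the coordinates of points on the graph gives the inverse's graph
points_on_f = [(x, f(x)) for x in range(1, 5)]        # on y = 2^x
points_on_inverse = [(y, x) for (x, y) in points_on_f]
swapped_ok = all(math.isclose(f_inv(a), b) for (a, b) in points_on_inverse)
```

The same swap-the-coordinates check works for the table and graph representations too, which is exactly the Equation! Table! Graph! point.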
Worksheets:
- Exponential and logarithmic functions as inverses, Word and PDF
- Quadratics and square root relations as inverses, Word and PDF
Friday, April 25, 2008
Maybe manipulatives aren't the answer?
This NYT article suggests that
... it might be better to let the apples, oranges and locomotives stay in the real world and, in the classroom, to focus on abstract equations ... Dr. Kaminski and her colleagues Vladimir M. Sloutsky and Andrew F. Heckler ... performed a randomized, controlled experiment. ... Though the experiment tested college students, the researchers suggested that their findings might also be true for math education in elementary through high school ...
In the experiment, the college students learned a simple but unfamiliar mathematical system, essentially a set of rules. Some learned the system through purely abstract symbols, and others learned it through concrete examples like combining liquids in measuring cups and tennis balls in a container.
Then the students were tested on a different situation — what they were told was a children’s game — that used the same math. ... The students who learned the math abstractly did well with figuring out the rules of the game. Those who had learned through examples using measuring cups or tennis balls performed little better than might be expected if they were simply guessing. Students who were presented the abstract symbols after the concrete examples did better than those who learned only through cups or balls, but not as well as those who learned only the abstract symbols.
The problem with the real-world examples, Dr. Kaminski said, was that they obscured the underlying math, and students were not able to transfer their knowledge to new problems.
“They tend to remember the superficial, the two trains passing in the night,” Dr. Kaminski said. “It’s really a problem of our attention getting pulled to superficial information.”
Tuesday, March 25, 2008
Completing the Square
Toward the goal of sharing more of the humdrum, everyday business of teaching math, here are a few notes about how we do completing the square in my classes. I have no cool tricks or creative activities for this, and I would very much like to see more of yours. Nevertheless, completing the square is, inexplicably, a favorite topic of mine.
The first time I taught it I relied somewhat on the formulaic addition and subtraction of the square of half the middle coefficient, but that left the class utterly confused and frustrated. Now I rely less on this rule and more on pattern recognition and intuition. This appears to make for better retention, though the students do have trouble applying it to numerically messy cases, where a formulaic approach - if ever mastered, that is - would be safer.
We start by reviewing how to square a binomial, because while we have of course worked on multiplying binomials and using standard factoring patterns earlier, the error of squaring a binomial by squaring each term is remarkably resistant to instruction. We always need to refresh that by writing out the factors and multiplying, carefully, term by term. Indeed, any time that a review of completing the square is called for later, squaring binomials from scratch is the point I will return to, and invariably it will turn out that many students have forgotten what the squared binomial looks like. After we've done the multiplication from first principles for a handful of examples, I point out the pattern in the middle and last terms of the product and ask the students to pick up speed, which they do. I write up a few binomials where the second term is a fraction and remind the students that one advantage of fraction form over decimal form is that fractions are really easy to square.
I'll call on individual students or have the class shout out answers, and will alternate between having students suggesting problems and solving them ("J., will you give us a binomial?" "S., will you square that for us?") and I have repeatedly been surprised at how engaged the students tend to become during this exchange, since the topic, after all, isn't that inherently exciting, and we aren't doing anything particularly nifty. Part of the reason may be that it is easier than for many other topics to sense just where the students are and to tailor the next example so that it matches their readiness.
When I notice that the class is beginning to get that "now what...?" feeling, we reverse the process: I write up a perfect square trinomial and have students factor it. We keep doing this until the students again reach the point where this is too easy, and then start looking at cases where only the quadratic and linear terms are given and the students need to figure out what numbers would fit in the blank spaces in a form such as this one. Later, writing this form on the board will be sufficient to cue a large fraction of the students to what they are trying to do.
We move on to rewriting simple quadratics (where a=1 and b is an integer) in vertex form. Later I will show them that adding and subtracting the square of half of the coefficient of the linear term will give us just what we want, but at this stage we simply identify the squared binomial, multiply it out, and compare this with the original quadratic to see what we need to add or subtract. For example, to write x^2 - 8x + 3 in vertex form we will recognize that (x-4)^2 is the square term, and since expanding this gives a constant term of 16 we'll need to subtract 13 in order to ensure that we have the same quadratic that we started with: x^2 - 8x + 3 = (x-4)^2 - 13. This approach seems to stick fairly well in students' memories. Many students who do not correctly add and subtract the half of the middle coefficient later (they'll insert an x in there, or halve it incorrectly, or something) will be able to rewrite simple quadratics in vertex form, and I can see from their scratch work in the margin that they're just comparing the expanded square with the original quadratic. I'm pleased with that, because the equivalence of the quadratic in its two forms is one of the big ideas I want them to take away, and the fact that we aren't dealing with different quadratics even though they do look different isn't nearly as self-evident to the young ones as it is to us.
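The compare-and-adjust step can be captured in a small sketch (my own illustration; the function name is made up). It takes the b and c of x^2 + bx + c and returns the h and k of the equivalent (x + h)^2 + k:

```python
def vertex_form(b, c):
    """Rewrite x^2 + bx + c (a = 1) as (x + h)^2 + k; returns (h, k)."""
    h = b / 2          # half the middle coefficient goes inside the square
    k = c - h * h      # subtract what expanding the square adds on
    return h, k

h, k = vertex_form(-8, 3)   # x^2 - 8x + 3  ->  (x - 4)^2 - 13

# the two forms really are the same quadratic at every point we test
same_quadratic = all(x**2 - 8*x + 3 == (x + h)**2 + k for x in range(-10, 11))
```

The final check is the big idea in miniature: the two forms agree for every input, so they are the same quadratic dressed differently.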
So, that was not terribly exciting or innovative, I concede. But how do you teach completing the square?
Saturday, March 22, 2008
Joke
- What's the difference between an outgoing Physicist and one who is not?
- The outgoing Physicist looks at your shoes while talking to you.
Oof.
Thursday, March 6, 2008
Pi Day
What do you all do for Pi Day? In particular, what might be worth the while in an Algebra 1 class?
Would anyone with a high-traffic blog mind posting some version of this question, in order to cast a wider net? That would be nice of you...
Monday, February 18, 2008
Unequal Methods
Colleagues - I could use some advice. I just graded the Algebra 1 tests on Inequalities and Absolute Value, and they were quite awful. While some of that is due to the distractions of Spirit Week and me being sick a few days, there's more to it, and any reports on successful approaches to teaching inequalities in general and absolute value inequalities in particular would be most appreciated.
A first hurdle is to help students actually understand the number line graphs that they draw of solutions to inequalities. Many use middle school mnemonics about arrows pointing in the same direction as the inequality sign, and draw their graphs from these rules (and go wrong when the variable appears on the right hand side of the inequality, of course). We've worked on listing a few actual numbers that are part of the solution, plotting these, and then drawing the graph afterward. That helps a good deal after a while.
The next hurdle is to understand the difference between AND and OR inequalities. The most effective approach so far has been a combination of the above mentioned insistence on a list of actual, specific numbers that satisfy the conditions, and lots of problem quartets like the following:
- x > 2 and x < 5
- x > 2 or x < 5
- x > 5 and x < 2
- x > 5 or x < 2
This way, we always get one inequality with no solution, one satisfied by all real numbers, and a couple of plain vanilla and- and or-inequalities for the same pair of numbers. It's not magic, but it does seem to help to vary one thing at a time.
Then enter the absolute value inequalities, and what a mess they are. There are so many different ways of solving them, and of talking about them, and I've made the mistake of covering several instead of sticking to one geometric approach and one algebraic approach. Now students are, quite predictably, using messy combinations of these.
Geometrically, the absolute value of x - 2 can be understood as the distance of x - 2 from zero, or the distance of x from 2. Last semester, with the Intermediate Algebra class, I relied on the former and had the students set up inequalities such as | x - 2 | > 3 by drawing a number line, placing "x - 2" more than three units away from zero on either side of zero, reading the resulting inequalities ( "x - 2 > 3" and "x - 2 < -3" ) from their sketch, and solving from there. It didn't really stick. I am not sure whether that was due to inadequate repetition or due to this approach being conceptually confusing.
Anyhow, with the Algebra 1 group this semester, I instead belabored the geometric interpretation of | x - 2 | as the distance between x and 2. I taped a large number line under the blackboard and we checked this definition by walking back and forth: -1 is three steps from 2, and sure enough, | -1 - 2 | = 3, and so forth. In order to solve inequalities such as | x - 2 | > 3 I had two students walk three steps from 2 on the number line in either direction, and we talked about what numbers were more than 3 units away from 2. This was difficult for many students (and not only the small group that always tunes out when I use any concrete representations because they think that's too middle school). My hunch is that there's some relation between this confusion and the difficulties Mr. K's students had with the meaning of "more."* Once students did pick up the idea it seemed to stick, but many never really got it. Maybe it's harder to ask when you're confused about what the walking up and down the number line is supposed to mean than when the material is more evidently academic.
I had first hoped to rely on this geometric approach to help students remember the direction of the inequality sign for the two linear inequalities in terms of which they will rewrite their absolute value inequalities, but gave up on that and introduced the approach that follows naturally from the algebraic definition of absolute value. If | x - 2 | > 3 then either x - 2 is greater than 3 or else the opposite of x - 2 is greater than 3.
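Written as a quick Python check (the function name and return shape are my own invention), the two-case split and its solution look like this:

```python
# The algebraic split: |x - a| > b means x - a > b, or -(x - a) > b.
def solve_abs_gt(a, b):
    """Solve |x - a| > b by the two-case definition; return the solution
    as text plus a checker function for testing specific numbers."""
    # case 1: x - a > b     ->  x > a + b
    # case 2: -(x - a) > b  ->  x < a - b
    return f"x > {a + b} or x < {a - b}", (lambda x: x > a + b or x < a - b)

text, check = solve_abs_gt(2, 3)
print(text)                           # x > 5 or x < -1
print(check(6), check(-2), check(3))  # True True False
```

Comparing the checker against abs() directly for a handful of numbers is the programmatic version of predicting and interpreting answers geometrically.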
Now I wish I'd used the geometric approach only for predicting and interpreting answers and stuck religiously to the algebraic approach to setting up the inequalities - because now students are using strange combinations of the two, such as x + 2 > -3 or -x + 2 > -3. In other words, complete confusion, with neither a clear concept nor a clear method to rely on. That's pretty discouraging even before thinking about the many students who did not even acknowledge the fact that there are two solutions to absolute value problems, that a distance can be in either of two directions... So, math teachers, what do I do now?
*We tend to assume too much about students' immediate grasp of the very idea of comparing quantities, let alone the isomorphism from this ranking of magnitudes to a spatial ordering along a line. Bob Moses, with his interest in pre-mathematical concepts that must be in place in order to succeed at Algebra, would presumably have a lot to say about this.
Sunday, February 3, 2008
Sharing worksheets
Discovered Box quite serendipitously a few weeks ago, while figuring out something for my Midsession class - and this solved part of that problem of having multiple versions of worksheets on multiple computers, only some of which are connected to a printer. It turns out it can also solve the problem of sharing materials. My goal is still to contribute to I Love Math, but my materials are mostly being written for the first time this year and I'm constantly fixing typos - constantly updating files once they're posted to I Love Math is just not going to happen this year. Meanwhile, the worksheet versions I'm using, warts and all, can be made available to anyone with a simple click if I store them at Box, as I had started doing anyway. Here are two assignments for my Algebra classes for this week - and when I get time to tidy up my files a little and tag them somehow, I'll put more materials in the public folder. It won't happen this week - there's an ed class deadline looming now - but hopefully sooner rather than later. Constructive criticism will be welcome.
Update (August 2): Moved everything except tests over to a public Box after some cleanup this week. Not that rummaging around in other people's files is generally very edifying, but making decisions worksheet by worksheet about whether to share it or not has ended up meaning that it just hasn't gotten done, and besides, the very idea that my files are public just might make me keep them a little more organized. Maybe. Without guarantees (of anything), though, they're at http://public.box.net/hmath
Saturday, January 26, 2008
Pink Dragons and other Real World Applications
In order to determine what would make science classes appear relevant to learners, researchers of the ROSE project actually asked the students. What they found was that teenagers care little for learning about plants in their local area, how car engines function, and how chemicals interact. They are significantly more interested in learning about how atomic bombs function, why stars twinkle in the night sky, and phenomena that science still cannot explain. And the item of most interest to the young learners was the possibility of life outside Earth.
So much for the mantra that science must be made "relevant to students' daily lives."
I should not be surprised. I can dutifully and with some determination work up a bit of interest for the functioning of car engines, but only a little bit. I majored in Physics.
Why do we think that math problems will be more engaging to students if they are about bake sales, CD shopping, and other real world applications? And those little vignettes in the textbook that purport to explain how useful and applicable all this math will be - why do they always seem so contrived? Who thinks that a note in the margin stating that "If you become an ornithologist, you may use polynomial functions to study the flight patterns of birds!" will be more convincing to the kids than it is to us? And if the value of high school math for students' daily living were so clear cut, why isn't the case made more forcefully after so many years of textbooks?
Svein Sjøberg of the ROSE project argues that the reason why all students should learn science is not primarily that this knowledge will be so useful to them in their daily lives, nor should it be society's need for a sufficient supply of engineers and technicians. He instead emphasizes 1) the cultural argument and 2) the democratic argument. All citizens need to learn science because science, like arts and history and poetry, is a part of our common human heritage. Also, political decisions about issues involving science ought to be made by an informed electorate.
By the same line of reasoning, primary rationales for learning math could also be the cultural and political weight that this subject carries. Humans have calculated, devised and solved puzzles, and developed multiplicities of algorithms in all kinds of cultures throughout thousands of years. Accessing some of this heritage is part of the enculturation of a person in today's world - it is a privilege, not something we need to excuse or justify with awkwardly implausible future employment scenarios. As for the democratic significance of math, must not an informed electorate be able to interpret data displays and ask critical questions about statistical statements?*
There are times when I feel that my subjects are gatekeeper courses rather than essential components of a well-rounded education, as when I see a student aspiring to be a nurse struggling with logarithmic functions, and I wonder who ordered this, who has an interest in setting up this barrier between a dedicated and in many ways talented student and her choice of profession? On the other hand, thinking of math in terms other than job training makes teaching it so much more interesting. I can happily create ridiculous word problems about pink dragons and syrup fountains, and remember that "relevance" for a teenager need not have much to do with usefulness in some narrow technical sense. The "relevance" of a math problem may have to do with the investment in completing it faster than the neighboring team, the joy of working together with a classmate on it, or the beauty of the graph when it is done in colored pencil.
*If we take the democratic argument seriously, maybe we should consider replacing most of Geometry with Applied Statistics as a graduation requirement and make formal, proof-based Geometry a college prep class rather than a course mandated for all citizens.
Wednesday, January 16, 2008
Emergency Math
Sarah at Mathalogical has suddenly gotten her course load increased to four preps (General Math being the latest addition) with little curriculum attached, and she's asking for suggestions. I'm responding here because the comment got too long.
First, four preps without textbooks or curriculum is rough. I did that last year, am very, very glad it's over, and wasn't proud of the results. On the positive side, it gives you exposure to a large range of typical conceptual hurdles in a short amount of time, and your toolkit will grow very quickly. You'll know a lot more about just what your students in later courses aren't getting, due to your experience with this course. In order not to get too discouraged, it may sometimes be necessary to remind yourself of how much you're learning when you don't get enough time to prepare what it takes to have the students learning enough, selfish and futile as that may sound. And starting this marathon now rather than in August means you can try things out knowing that you can start over again in just one semester.
The three resources I found of most use last year were
- I Love Math
- The Math Worksheet Site (this costs $20 per year), and
- The National Library of Virtual Manipulatives
There was no time for dreaming up a coherent curriculum with much by way of unifying themes or red threads, so in the General Math type courses I prioritized according to what skills I thought were hindering students the most in accessing more math. Some areas I focused on were
- Integers on the number line. The Math Worksheet Site has neat pages of number lines with addition and subtraction problems that the students solve by diagramming the problem on the number line. A large number of 10th graders could not deal with negative integers, and in most cases these number line problems helped. The very idea of associating the numerical operations of addition and subtraction with the geometrical idea of motion along a line is the Big Idea that students just have to get in place, it's much less obvious than we like to think, and missing skills in this area really holds the students back.
- Place value, and decimal numbers on the number line. First, placing these on the number line was a priority - though in many cases I did not succeed in teaching this. Dan Greene has great stuff on it (as you would already know) - but teaching place value just is not easy. It's awfully important, though, as the kids trip badly over this missing skill when they attempt to do more advanced stuff, so if you can do anything for them in this area, you're helping, even if it sucks up quite a bit of time. The Math Worksheet Site has lots of practice sheets for translating between Decimals, Percents and Fractions, and they're tidy and neat for what they do. As for resources for placing the numbers on the number line, the worksheets at this site aren't that satisfying. There must be animations out there that let you zoom in on a piece of the number line to study place value - but I haven't found anything great, and spent quite some time searching for it last year.
- Solving simple linear equations. The common student error that bothered me the most was students' insistence on subtracting the coefficient of the variable instead of dividing by it - my explanations just did not work, and they were inelegantly wordy. What did work for many students was practicing with the Algebra Scale Balance at the National Library of Virtual Manipulatives. After working on this site the incidence of that error went down very noticeably, and it's the concrete representation that does the trick - doing a verbal version of this lesson, well, good luck. For practice problems, the "Partner Problems" worksheet for equations at I Love Math is great. It has two columns of problems of increasing difficulty, and horizontally aligned problems have identical solutions, so that the students can get near immediate feedback on their solutions. The students liked that sheet, and would gladly redo it if I photocopied it onto paper of a different color (and yes, they did need the repetition).
- The basic operations. Many kids were more likely to settle down and do something when their assignment was a boring worksheet on practicing multi-digit multiplication, a fact that always puzzled me - my "interesting" discovery activities were much less likely to elicit absorbed concentration (they would involve reading a line or two of directions for each task - bad, bad idea :) The Math Worksheet Site has lots of practice worksheets, at various levels of difficulty, and the card game Top Deck at I Love Math (in the Middle School Folder) is a lot of fun. (Digression: The card games for practicing skills with fractions worked less well, because students tended to devise their own rules that defeated the purpose of the activity: for example, they'd agree to match denominators of different fractions rather than matching fractions for equivalence, as I wanted them to do!)
- Area and Perimeter. If students can just get the difference between the two, never mind formulas for calculating anything, that helps - it was a defining moment for me that October day when I realized that the students truly were unable to distinguish the two; that was when my ideologically rigid commitment to grade level standards started to give. A hands-on activity (measure the area of your desk in terms of the number of colored paper squares you need to cover it; measure the perimeter of your desk in terms of the number of standardized pieces of string you need to reach around it) did some good, but only some. A worksheet from a colleague, which involved drawing rectangles on a grid that all had the same area but different perimeters, or the same perimeter but different areas, did more good. There were still plenty of students who had plenty of trouble with just counting up line segments to find the perimeter of an irregular shape, though, and - well, I don't know what to do about that.
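Since several of the items above lean on the number line, here is the addition-as-motion idea in a tiny Python sketch. It's entirely my own illustration, not anything from the worksheet sites:

```python
# Addition as motion to the right, subtraction as motion to the left:
# trace every position visited so the arithmetic and the walk stay linked.
def walk(start, steps):
    """steps is a list of signed moves like [5, -3]; return each position."""
    positions = [start]
    for step in steps:
        positions.append(positions[-1] + step)
    return positions

print(walk(-3, [5]))      # [-3, 2]     :  -3 + 5 = 2
print(walk(2, [-6, 1]))   # [2, -4, -3] :  2 - 6 + 1 = -3
```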
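The balance-scale moves for simple linear equations - undo the added constant first, then divide by the coefficient rather than subtracting it - can be sketched like this. The function and its output format are my own toy illustration, not anything from the NLVM site:

```python
from fractions import Fraction

# One legal balance move at a time: subtract b from both sides,
# then DIVIDE both sides by a (the classic error is subtracting a instead).
def solve_linear(a, b, c):
    """Solve a*x + b = c, recording each intermediate equation."""
    steps = [f"{a}x + {b} = {c}"]
    c2 = c - b                 # subtract b from both sides
    steps.append(f"{a}x = {c2}")
    x = Fraction(c2, a)        # divide both sides by the coefficient
    steps.append(f"x = {x}")
    return x, steps

x, steps = solve_linear(3, 5, 17)
print(steps)  # ['3x + 5 = 17', '3x = 12', 'x = 4']
```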
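And the same-area, different-perimeter contrast from the grid worksheet, enumerated in Python (again, just my sketch of the idea):

```python
# All integer-sided rectangles with a fixed area: the area stays put
# while the perimeter varies, the same contrast as the grid drawings.
def rectangles_with_area(area):
    pairs = []
    for w in range(1, area + 1):
        if area % w == 0 and w <= area // w:  # list 4x6 but not also 6x4
            pairs.append((w, area // w))
    return pairs

for w, h in rectangles_with_area(24):
    print(f"{w} x {h}: area {w * h}, perimeter {2 * (w + h)}")
# 1 x 24 has perimeter 50; 4 x 6, only 20 - same area throughout.
```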
Friday, January 4, 2008
Student-friendly blogs?
My school has a two-week "Midsession" between the fall and spring semesters, during which time we get to teach pretty much anything that we can persuade enough students to sign up for, for two hours per day. It's one of those really-too-much-fun-to-get-paid-for things, for sure. I've got a gathering of some 10 students for "Technology for Communication," wherein we'll be reading and writing blogs, playing around with PowerPoint, and maybe - just maybe - creating a simple podcast, though since I've never ever done that before myself and have no idea how to do it, that might be wildly unrealistic. I was thinking it would be fun to teach a course that I'd learn a lot from myself, and for these two weeks anything that the students are enjoying as well as learning something from seems to be okay.
The students' familiarity with technology is going to be all over the place, with some students barely able to use e-mail and others - actually, I have no idea about the other end of the spectrum. My plans are still somewhat vague - in part because I'm half expecting to have to rewrite them in an intensive night after learning about the students during the first class.
One of the first things we'll do is subscribe to a few blogs, and now I'm looking for good reads for high school girls - preferably clustered around a theme or three. I was thinking Study Hacks, Cake Decoration (I used to be somewhat into novelty cakes before starting to teach), and - I don't know about a last theme. My question to the all-wise blogosphere is: what themes or blogs would you recommend for this reader group? I mostly read edublogs of various kinds, with a very small number of political blogs sprinkled in. Not terribly exciting for my students, I'm afraid. Of course, I could delay this part of the course and find out about the students' interests, first - maybe that would be better...?
So - any suggestions (on any aspects of the course, actually)?
Thursday, January 3, 2008
Approaching word problems
My students tend to give up in frustration as soon as they see a word problem, and so I increasingly avoid assigning such problems for homework and make sure we spend class time on them instead. There's a strategy for working with word problems that I read about somewhere - can't remember where, unfortunately - that involves paraphrasing the word problem within the constraint of an upper word limit, then paraphrasing the shorter version with an even tighter word limit, and so on. After a sufficient number of iterations, use of mathematical symbols becomes necessary to condense the information further, and so the word problem becomes translated into algebraic formalism.
I have not tried this method as stated, but it would be interesting to do that some time. The graphic organizer* I used a few weeks ago for systems of linear equations is inspired by this idea, however. There are little boxes** for each of the following:
- What exactly is the question? (What are you asked to find?)
- What are your variables?
- What information is given? List it or write a table.
- What equations can you write relating the quantities?
- Solve the equations.
- What is your answer?
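To make the boxes concrete, here is a made-up example of what steps 1-5 might produce (the dragon, the prices, and all numbers are my own invention for illustration, not from the actual worksheet):

```python
# Hypothetical word problem: "A pink dragon buys 3 apples and 2 pears for
# 12 coins. An apple and 4 pears cost 14 coins. What does each fruit cost?"
# Box 1 (question): the price of an apple and the price of a pear.
# Box 2 (variables): a = price of an apple, p = price of a pear.
# Box 3 (given info): 3 apples + 2 pears -> 12 coins; 1 apple + 4 pears -> 14 coins.
# Box 4 (equations): 3a + 2p = 12  and  a + 4p = 14.
# Box 5 (solve): eliminate a by multiplying the second equation by 3
# (3a + 12p = 42) and subtracting the first, leaving 10p = 30.
p = (3 * 14 - 12) / (3 * 4 - 2)  # = 30 / 10 = 3.0
a = 14 - 4 * p                   # back-substitute into a + 4p = 14 -> 2.0
print(a, p)                      # prints 2.0 3.0
```

Box 6 would then hold the sentence answer in context: an apple costs 2 coins and a pear costs 3 coins.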
It took a while for most students to realize that the variables they were to define were directly related to the question stated in the previous box, that the variables basically are symbols for those unknown quantities. Many tried to assign variable names to known quantities instead. I might try to rearrange the worksheet to visually reinforce the idea that the box containing the question and the box where the variables are defined belong together.
In response to the prompt to list the given information, students were again inclined to be somewhat long-winded, and we'll need to work more on extracting the essential information and writing a table. Maybe insisting on a table is moving a little too fast, actually - once that is done we're practically in the next box already. As an intermediate step, maybe just listing the numbers in the problem together with a key word for what they quantify might be better.
The next part, writing down equations relating the known and unknown quantities, remains somewhat hard - but at least it's easier now that the students don't jump directly from skimming the problem to this step! I've given the students 2-3 out of 5 points on test items just for completing steps 1-3 above. That may sound like watering things down, but it really has resulted in more students even attempting the word problems - and once they have completed the first 3 steps they are much more likely to be able to complete the rest anyway.
The "what is your answer" box is for a sentence answering the question in the first box, and this answer has to make sense in the real-world context of the problem: units are included, and answers of the kind "4 remainder 2 buses" wouldn't work there, of course.
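The bus example comes down to rounding in context: integer division gives a quotient and a remainder, but a real-world answer has to round up. A tiny sketch with made-up numbers (30 students, 7 seats per bus):

```python
import math

students, seats = 30, 7
# Integer division gives the "4 remainder 2 buses" non-answer:
print(divmod(students, seats))       # prints (4, 2)
# The context demands rounding up: 5 buses are needed.
print(math.ceil(students / seats))   # prints 5
```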
*inconveniently on my school computer just now.
**there's nothing like little boxes for prompting students to write something and not skip a step!
When I make up my own "real-world" problems they often involve pink dragons with purple wings and silvery scales. Some students roll their eyes then, but the dragon problems make me happy, and at any rate it would take a lot to make problems more boring than the ones in the textbook. Why are they all about ticket sales, long-distance phone calls, and cars? Yawn.