The Keys to College and Career Readiness

By Anna Litten

My 9-year-old doesn’t know how to type. I don’t often worry about the typing skills of kids, but since he’s taking the PARCC test on a computer this week, his words per minute really count.

This year, my son is one of 4,370 third graders in Boston taking the PARCC assessment. According to the Massachusetts Department of Education, PARCC *require(s) students to speak and write in a variety of formats and support their ideas with evidence from authoritative sources.* PARCC calls on students to bring high-level thinking skills to their work. I love the push to ask kids to think and engage with information, but I also wonder about that typing. Will poor typing get in the way of real assessment?

Throughout the loud debate around Common Core-aligned instruction and assessment, I have been a moderate. I believe that in the interest of equity, teachers must have tools to assess student learning in order to improve and adjust their instruction to better meet the needs of all students. At the elementary school my children attend, lower test scores for black and brown children have been a rallying cry for improvement. I see teachers and administrators I know and trust use assessment data to focus work on improving teaching and to provide supports for families so we can narrow the achievement gap between kids of color and their white peers. This is what assessment should do: give our schools the information to identify where we fail students, and fix it.

Computer-based PARCC asks kids to answer the kind of bubble questions adults often think of when they hear *standardized testing*, but PARCC also asks kids to answer open-response questions. Third graders are asked to read short passages and write brief responses based on question prompts. As a librarian and a writer, I love asking kids to read, think, and write. As a parent, I have no idea how asking my son to do this on a computer will provide anyone with a useful assessment of his skills and abilities. Like many younger children, he simply isn’t a good enough typist to show off his work in an essay.

My son isn’t alone. A February 2016 Education Week article, *PARCC Scores Lower for Students Who Took Exams on Computers,* examined discrepancies in test results between computer-based test takers and pencil-and-paper test takers. The reasons for lower scores on computer-based tests are not always clear, but the evidence points to familiarity with the testing platform as a key marker of a child’s success on this assessment. For my son and many others, typing skills will certainly affect PARCC scores.

My son’s class uses Typing Club and other apps in class to boost familiarity with the keyboard and touch typing. He’s learning to touch type, but for the most part, he’s still hunting and pecking away. We haven’t really pushed typing at home. He’s a budding magician, and I’d much rather see him practice his fine motor skills shuffling a deck and wowing me with a card trick than chasing 50 words per minute.

In many ways, I don’t care that my son won’t show his best work or that the testing will be difficult for him. A child’s job is to learn to problem-solve, or better yet to problem-find, and to navigate difficult situations. My hope for him is that he takes important lessons from his PARCC experience in doing his best with the resources he has, asking questions about what he needs to do better, and advocating for himself and others.

My child, who is aware of his strengths as a reader as well as his limitations as a typist, finds the idea of computer-based testing kind of funny. I’m not worried that he’ll find computer-based testing stressful or disheartening. I do question the usefulness of this type of assessment for 8- and 9-year-olds. If we truly want to use assessments as a tool for discovering where we fail kids, we need to find another way to assess students, one that will allow teachers and administrators to learn what students know apart from their familiarity with a keyboard and an assessment platform.

Anna Litten is a librarian and the mom of three Boston Public School students. Send comments to Jennifer@haveyouheardblog.com

32 Comments

  1. It seems to me that no matter the communication technology, whether it is typing, writing by hand, or reading, there is a danger that shortcomings in using the technology might be misunderstood as shortcomings in a student’s understanding. I don’t think there is a way around this as long as communication from one person to another is part of the assessment process.

  2. Beyond the typing part, I’ve heard that the writing on the PARCC test will be computer-graded.

  3. Typing aside, what useful information have your children’s teachers ever gotten from standardized testing? Specifically, what information have they gotten that they couldn’t have gotten without it, simply by being teachers?

    1. Dienne,

      I don’t think a teacher will learn much about the students in their classes, but I also don’t think that is what standardized tests are for.

      Teachers can learn something about themselves and how they teach from standardized exams. Teachers, and I certainly include myself in this, are very susceptible to the God complex. (If you’re interested in this idea, you might look at this TED talk: https://www.ted.com/talks/tim_harford?language=en . Another talk that has convinced me that standardized tests can tell a teacher a lot about themselves and their teaching is this talk by Professor Mazur: https://www.youtube.com/watch?v=WwslBPj8GgI .)

      People outside the classroom can learn about what is going on inside the classroom. What does it mean that a student graduated from New Trier High School last year? Does it mean the same thing as graduating from Marshall Metropolitan High School, another traditional public high school only 19 miles away?

      1. What standardized testing tells us about the kid from New Trier vs. the kid from Marshall is that the kid from New Trier is affluent (and probably white), while the kid from Marshall is poor and his school was probably underfunded.

        Of course, we could figure out those things without standardized tests….

        1. Dienne,

          And what impact does affluence have on education? Is it simply that the relatively poor are less likely to graduate from high school, or are there substantial differences in the academic achievement of graduates from the two schools? How do you define “underfunded” and what impact does it have? These are questions that require you to compare students across schools, and this is why a standardized assessment is useful.

          1. There are substantial differences in the academic achievements of graduates from the two schools, but testing the students year after year has done little or nothing to alter this fact. A 9th grade English teacher at New Trier is starting out with students who have been taught how to write essays (and not just “extended responses” for tests), who have attended school environments rich in literature and the arts, who have for the most part been brought up in stable homes, who have adequate shelter and food and rest and medical care and dental care and mental health care when they need it, who have always been encouraged to speak their minds (politely), who have positive experiences with school and schooling, and many of whom — for good measure — attended one of the oldest progressive school systems in the nation, where early learning is supported through play, conversation, and other social interactions. The 9th grade teacher at Marshall has students with few or none of these advantages — and then is hampered in his or her own work by having fewer school resources and a heavier teaching load. At this point, it is clear that having test-score data from the two schools has not led to any influx of public resources or support for Marshall, its teachers, or its students. So what is the point of continuing to administer the tests? Why waste the money?

          2. Gloria,

            I don’t think tests are intended, by themselves, to improve student learning. They can help a teacher understand if they are teaching effectively or not, but the teachers must be willing to change if they are confronted by evidence that other approaches work better. Tests can pressure policy makers to make changes, but the policy makers must respond to that pressure. I agree that it has not YET led to substantial changes at Marshall (though the percentage of students who are chronically truant has dropped from 100% to 91%, and surely that will help a bit), but hope remains for the future.

            I am curious about a couple of things in your comment. You say there are substantial differences in academic achievement between graduates of the two high schools. Do you base this statement on standardized test scores or do you have another metric? Your statement about teaching loads at the schools is also interesting. The average class size at New Trier seems to be about 21 students, while at Marshall it is 17 students. Do Marshall teachers teach more classes, or is your statement based on the relative ease of teaching a student at New Trier compared to a student at Marshall?

  4. Teachingeconomist: My parents and grandparents went to New Trier. I know graduates of and teachers at New Trier. I mention differences in academic achievements because I know that students’ and teachers’ work at New Trier is different from the work in most public high schools. I know that many students at New Trier are engaged in reading, writing, and analysis that go beyond what is typical at public high schools — because that is what many of their students are ready for at 14 years of age. It is unreasonable to ask the teachers at Marshall to accomplish the exact same educational goals when their students’ starting point is not the same. That is not to say Marshall’s students will never be capable of demanding intellectual work, or should not be engaged in learning that will get them there. But many of them are not there at 14. If we want all children to reach the peak of high school readiness before they begin 9th grade, we should be honest about the many factors, in and out of school, that have enabled New Trier students to get there. You cannot replicate New Trier without also replicating the conditions in which most of its students have grown and thrived.

  5. Teaching Economist,

    One ninth grader writes a good essay. How much of that can we attribute to the ninth grade English teacher? Good organization? Maybe she learned that in 4th grade. Good ideas? Maybe, if the essay was about softball, the ideas came from the softball coach. Good sentence structure? Maybe the parents are eloquent and model elaborate sentences. Good word choices? How much of the ninth grader’s vocabulary (say it’s 30,000 words) was learned in ninth grade English class? It seems to me that you cannot make any valid inferences about the teacher from a writing test. It’s similar with reading tests. It’s somewhat different with math and subjects that entail discrete bodies of knowledge, though even here it’s fraught with problems.

    1. Ponderosa,

      If a student writes an essay at the beginning of 9th grade that is as good as any student could write at the end of 9th grade, why on earth is the student in the 9th grade class? Students should enroll in classes because they will learn things in the class. If they do not learn anything in the class, it was a waste of time for the student and the teacher. No doubt we can agree on this, right?

      The issue is always how taking a class changes the abilities of a student. If there are no changes, the class was pointless for the student. It seems to me that one has to entertain the possibility that the pointlessness of the class might be the result of the person teaching the class.

      1. The question is not whether or not there are any lousy teachers in this country who waste the time of their students. Common sense tells us that there must be some of them somewhere.

        The question is whether or not the current trend of devoting ever more time and money to standardized testing does more harm to students than good. Certainly the analysis by the American Statistical Association suggests that standardized testing is not likely to be some side-effect-free magic bullet for ferreting out pedagogical time-wasters:
        http://www.amstat.org/policy/pdfs/asa_vam_statement.pdf

        1. Newark,

          I have read the ASA statement and the response of Chetty et al. I think we might have a good discussion about the relative merits of the two points of view. Would you like to engage in that discussion? In case you have not read the response, here is a link: http://www.rajchetty.com/chettyfiles/ASA_discussion.pdf

          It seems to me that most of the issues with using increases in test scores to evaluate teachers stem from the increase in the proportion of class time devoted to explicitly preparing students to take the exams. The solution to this problem is to stop doing this test prep. The opponents of using these tests should not have any problem with this, as they routinely argue that teachers have no significant impact on the test scores. If teachers have no impact on test scores, test prep has no significant impact on test scores.
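
          A minimal sketch may make concrete what evaluating teachers on score increases means in its simplest form: a naive gain-score comparison. This is an illustration only, not the value-added models the ASA statement and Chetty et al. actually analyze (those also adjust for prior achievement, demographics, and measurement error), and every teacher name and score below is hypothetical.

          ```python
          # Naive gain-score comparison: average spring-minus-fall score change
          # per teacher. Illustration only; real value-added models also adjust
          # for prior achievement, demographics, and measurement error.
          # All names and scores are hypothetical.
          from statistics import mean

          # Each record: (teacher, fall score, spring score) -- hypothetical data.
          records = [
              ("Teacher A", 52, 61),
              ("Teacher A", 47, 58),
              ("Teacher B", 55, 57),
              ("Teacher B", 60, 63),
          ]

          # Group score gains by teacher.
          gains = {}
          for teacher, fall, spring in records:
              gains.setdefault(teacher, []).append(spring - fall)

          # Report each teacher's average gain.
          for teacher, g in sorted(gains.items()):
              print(f"{teacher}: average gain {mean(g):.1f} points")
          ```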

          1. Teaching Economist,

            What you don’t seem to understand is that some things are more testable than others. A test can tell if I taught my students the rivers of Europe. I don’t believe any test can tell if I taught my students to write good essays because the mental ingredients of writing a good essay are so complex. No one teacher can claim credit for such an accomplishment. If you narrow the definition of good writing to “has a topic sentence”, then maybe you can nail a teacher for an essay that lacks one. But then you’re not really testing writing ability; just a tiny component of it. Such tests beg for test prep, because a teacher who bestows lots of the other untested ingredients will be labeled a failure.

          2. Ponderosa,

            Certainly some things are more testable than others, but how is it that you are able to assess that a student’s ability to write a good essay has improved over the last school year? My plan would be to compare an essay written at the beginning of the year to an essay written at the end of the year.

            Are you saying that it is in principle impossible to tell good teaching from bad by looking at the progress students make during the school year, or that the current tests designed to measure that progress do a poor job but other exams or methods would do a good job? This is an important distinction.

  6. I’m saying it depends on the subject. If the subject is a discrete body of knowledge, e.g. the rivers of Europe, then you MIGHT be able to make some valid inferences about the quality of the teacher (however, if the class is saddled with a maniacal psychopath child who constantly interrupts instruction and will not be suspended because of our national allergy to discipline, then it’s not really the teacher’s fault the material does not get transmitted, is it?). But if the subject is something complex and nebulous, e.g. wisdom or character, then it’s definitely not possible to make inferences about the teacher by comparing pre- and post-tests. Suppose there were a class in the course catalog called Wisdom 101 and its aim was to impart wisdom. Absurd, right? Wisdom is not something that can be transmitted from a teacher to a student through direct instruction or any other means. A teacher might be able to nudge a student in the direction of wisdom, but we all know wisdom is something that’s elusive and slow to develop, and that its sources are shrouded in mists. I’m arguing that writing is more like wisdom than geography –i.e. that its sources are hidden to most of us, and that it’s slow and uneven in its development, so that even if we call a class Writing 101, the teacher cannot take much credit or blame for a student’s writing ability at the end of the course. Just because Writing 101 (or 9th Grade English) is there in the course catalog right next to Geography 101 and Economics 101, that does not mean that it is the same species of learning endeavor.

    1. Ponderosa,

      If schools routinely taught Wisdom 101 I would be more concerned with the difficulty in evaluating how much wisdom a student had learned. Luckily, that is not a course that is frequently (if ever) taught.

      I think it would be a worthwhile exercise to think about the subjects where a student’s progress cannot be evaluated by comparing the student’s performance at the beginning of the school year to their performance at the end of it. It seems to me that we can do that with writing, and teachers often do: they assign grades based on their assessment of the student’s ability to write. Other subjects, like music, would require an audio/visual record of a student’s work. My youngest had to do playing tests regularly in his four years of orchestra. These could be inexpensively recorded and evaluated by someone outside the classroom.

      Your point about teachers not being able to take much credit or blame for a student’s progress in a class is part of a fundamental dilemma that I think faces the folks arguing against changes in education. If teachers cannot take credit or blame for a student’s progress, then why require special forms of education for teachers and encourage teachers to pile on even more education through the salary structure? I think that you have to choose a horn here: either teachers have an important impact on student outcomes, so encouraging teachers to develop their craft is something society should be willing to pay for (this is the horn I select), or teachers do not have an important impact on student outcomes, so society should reduce resources devoted to teachers developing their craft beyond a basic level and use those resources in other ways that do have an important impact on student outcomes.

      1. I’d like to throw a spanner in the works here in the form of a fascinating study on non-cognitive ability, test scores and teacher quality: http://www.nber.org/papers/w18624 I learned of it from Paul Tough’s new book but didn’t have room to include it in my interview. The researcher created a proxy measure of how engaged a student was in school: attendance, discipline problems, how hard s/he worked in class, etc. The proxy measure turned out to be a much more accurate predictor of whether kids went to college, how much they earned as adults, arrests, etc., than test scores were. But the economist who did the study also found that the teachers who were doing the best job of raising test scores were not the same as the teachers who were raising student scores on the non-cognitive measure. And, alas, since this study was based on teachers in North Carolina, I’m guessing that most of the latter group are gone, since they don’t show up in what the state values, which is the ability to boost test scores. So while I agree with Teaching Economist (for once) that there is a fundamental dilemma when teachers end up arguing that they have no effect, the narrow definition of what gets measured is a big problem too. I’ll be interested to see what you make of the study. My favorite finding was that English teachers were more likely to influence students’ levels of engagement and persistence than algebra teachers, which is not a big surprise. My English teachers helped me to persist in school despite the algebra teachers 🙂

        1. I suspect that we agree on a great many things, so I am not surprised that you agree about this.

          I agree that Kirabo Jackson’s paper is interesting, and note that another thing we apparently agree on is the use of econometrics in analyzing teacher impact on students. My one worry is that his non-cognitive measure ends up simply identifying African American male students.

          If you look at short-term suspensions, for example, in the 2008-9 school year there were 211,841 short-term suspensions given to boys and 80,784 to girls; 166,844 went to African American students and 85,897 to white students. Combining the two, there were 5.7 African American boys suspended for every 10 African American boys enrolled in public education in North Carolina in 2008-9, 1.6 white boys suspended for every 10 white boys enrolled, 2.57 African American girls suspended for every 10 African American girls enrolled, and 0.51 white girls suspended for every 10 white girls enrolled. Long-term suspensions follow a similar pattern.
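
          Since a per-10 rate is just the ratio of suspension counts to enrollment, here is a minimal sketch of that arithmetic. Every count below is a hypothetical placeholder, chosen only so the output roughly reproduces the 5.7 and 1.6 figures quoted above; none of them are taken from the NC report linked below.

          ```python
          # "Suspensions per 10 enrolled students" is 10 * suspensions / enrollment.
          # All counts below are HYPOTHETICAL placeholders (picked to roughly
          # reproduce the 5.7 and 1.6 rates quoted above), not figures from the
          # NC report.
          def per_10_enrolled(suspensions, enrolled):
              """Short-term suspensions per 10 enrolled students."""
              return 10 * suspensions / enrolled

          # Each entry: (suspensions, enrolled) -- hypothetical numbers.
          example_groups = {
              "African American boys": (120_000, 210_000),
              "white boys": (75_000, 470_000),
          }

          for group, (susp, enrolled) in example_groups.items():
              print(f"{group}: {per_10_enrolled(susp, enrolled):.1f} per 10 enrolled")
          ```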

          I know that Kirabo Jackson included student demographics in his study, but I did not see the results in the paper. Perhaps, when and if it is published, the journal referees will ask him to include that work as well.

          Source for suspensions by race and gender in North Carolina: http://www.ncpublicschools.org/docs/research/discipline/reports/consolidated/2012-13/consolidated-report.pdf

  7. TE:

    I teach my students how the introduction of Islam changed West African civilization. Before the lesson, they did not know; after the lesson, they knew (I can tell because I orally quiz them and then quiz them a week later on paper). Clear impact.

    With writing it’s different. Sure you can teach and test for a handful of the accoutrements of “good” writing –e.g. do they have an intro, a body and a conclusion, and use topic sentences? –but I think those are just proxies, and bad ones, for the essence of good writing. I think Paul Krugman is a good writer on economics and politics (who defies the conventional teachings about paragraph structure, BTW). What makes him so? A naturally sharp mind with a good memory. A ton of knowledge about the topics he writes about, especially economics. Time spent digesting that knowledge and thinking even more deeply about it to clarify the thoughts (a writing teacher can prescribe organization, but only understanding the topic and thinking about it can reveal a fitting organization of the material –a writing teacher cannot give that). And, critically, a rare command of the English language. Which of these did his 9th grade English teacher give him? I’m sure she helped –probably with teaching grammar, something we neglect these days. The novels she assigned maybe added 1,000 or so words to his 100,000-word vocabulary (but which 1,000 words? No test can tease this out). The compositions she assigned limbered up some writing muscles. What made a big difference in my own expository writing was reading the New Republic for decades after college –those pieces imprinted templates in my mind that inform my now-less-bad writing. No doubt Krugman’s extensive reading has done the same. We trip ourselves up with labels. Just because it’s called “writing” doesn’t mean its origin must be in a writing class. I suspect you are reluctant to think hard about what writing is because it fatally complicates your beautiful vision of evaluating teachers with tests.

    I love the study that Edushyster brought up. It shows that there’s a deeper alchemy going on in the classroom than what tests scores reveal. And that this alchemy is real and important. Seeds are germinating. Just because the sprouts haven’t broken the surface of the soil does not mean that the rain and the light did not make their impact.

    1. Ponderosa,

      So when you assign a student a grade in ELA, what is that based on? The NBER working paper points out that research finds student grades are generally based on a mix of student academic performance and how biddable the student is in class. Perhaps being biddable figures prominently in ELA classes.

      Would you really want to follow through with the policy recommendations given in the Jackson working paper? They are 1) to identify those observable teacher characteristics associated with effects on the non-cognitive factor and select teachers with these characteristics, 2) to incentivize teachers to improve the non-cognitive factor, and 3) to identify those teaching practices that cause improvements in the non-cognitive factor and encourage teachers to use these practices (through evaluation, training, or incentive pay).

      All seem reasonable to me, but the only observable characteristic of teachers that impacts the non-cognitive measure of students appears to be scores on certification exams. A teacher’s years of teaching experience, for example, have no discernible impact on this non-cognitive measure of student ability, though of course more years of experience are associated with higher cognitive test scores.

  8. I have never seen the word “biddable” before. It’s rare that I encounter a word I haven’t seen. Exciting. I had to look it up. You didn’t teach it to me, TE, but you made the assist…I wouldn’t have looked it up if you hadn’t used it. So, in effect, you advanced my reading education. Thank you, reading teacher.

    Maybe we can evaluate the chemistry and geography teachers with pre- and post-tests. What I’m saying is, please don’t evaluate the ELA teachers with reading and writing tests, because it’s unfair. These abilities have myriad “mothers and fathers” and it’s ridiculous to pretend that they originate strictly from ELA class. Deal?

    1. Any response to the substance of my post? Is the grade for your ELA course entirely based on how biddable the student is?

  9. An ELA grade could be based on grammar quizzes, reading quizzes, projects, homework completion, essays, short stories, poems, literary analyses, participation in class discussions, among many other things. Some of these may be, in essence, credit for being biddable –i.e. you did the HW so you get points, regardless of whether it’s right or not.

    1. Ponderosa,

      As I thought, you are able to evaluate students’ written work, the essays, short stories, poems, literary analyses, etc., and summarize your evaluation in the form of a grade. It seems to me that if you can evaluate student writing, others can as well, so it is possible to meaningfully compare samples of a student’s written work at the beginning of the year and at the end of the year to see how much improvement there has been over the course of the year.

  10. No. Our school actually used to do this. It was an enormously time-consuming exercise for the teachers, who had to score and re-score hundreds of essays at the start of the year and then at the end. The essay prompt at the beginning was different from the one at the end, so right there it was apples and oranges. A fashion-minded kid might have a lot to say about the school dress code, but flounder when asked to be persuasive about a change to the school calendar. The same kids who used advanced vocabulary at the beginning used advanced vocabulary at the end. The ones who wrote crudely at the start still wrote crudely at the end. Sure, if you narrow your focus to something like “does it have five-paragraph structure,” and you drill that all year, you might infer something about the teacher’s efficacy on that one criterion. But progress on that one criterion does not, to me, signify becoming a better writer. It signifies becoming better on that one shriveled little criterion. We used rubrics, but on each criterion there is so much subjectivity and guesswork that the data we generated amounted to mush. It told us almost nothing. There was only the illusion of precision. Read Todd Farley’s Making the Grade to see what a farce essay grading is among the “pros” at ETS and Pearson. Until we’re honest about what writing is and what its sources are, all these attempts to extract data about it are going to be a waste of time. Writing ability is not a discrete packet located somewhere in the brain. For any given writing task there is a convergence of disparate elements located in diverse parts of the brain (vocabulary, background knowledge, familiarity with certain conventions, memories of things one’s read), and it is variable within an individual depending on the unique profile of these factors in their brain. A “bad” writer who plays baseball may write better about baseball than a “good” writer who does not.

    1. My university depends entirely on teacher-assigned grades for admission. K-12 students depend on teacher-assigned grades to let them know if they can benefit from future education. Apparently that is a foolish idea for both my university and for the students, as actually assessing students is too much trouble for K-12 teachers.

      Is there a stronger argument for standardized tests than Ponderosa’s post? Perhaps, but I am hard-pressed to think of one.

      1. Now you’re talking about something else. I’m saying tests that purport to test “writing ability” are very tricky, and that it’s easy to make false inferences from them, not that chemistry tests are bad measures of what a kid has learned. An A in AP Literature and Composition may well mean that the student is a good writer (at least in the genres and topics addressed in that class), but it would be wrong to credit all of that “value” to the AP teacher, since a ton of it may come from the student’s language-rich household and his ability to read the novels he wrote about with understanding (largely a function of slowly accumulated background knowledge and vocab)…all of which was acquired before entering 11th grade. And gauging the AP teacher’s “added value” in the area of writing ability with a pre-test and post-test is fraught with difficulties for the reasons I mentioned above. Your mania for measuring may work in some realms, but not all. Personally I think it suffices to put a well-educated English major in front of the room and trust that their passion for the subject will engender good things. If the teacher stinks, the local authorities will hear about it. Somehow civilization has survived for a long time without quantitative measures for every stinking thing.

        1. Ponderosa,

          When you grade student work you produce a quantitative measure of a student’s writing ability. Comparing student work at the beginning of the year to student work at the end of the year is fraught with exactly the same difficulties as grading student work for the purpose of assigning a grade in the class.

          There are a few high schools that do not assign grades to students, but generally we have given quantitative measures to every stinking class a high school (and college) student takes, and we have been doing it for a very, very long time.

          1. We cannot accurately measure the “value” an ELA teacher “adds” to a student’s ability to write. This fact does not inhibit many measurement maniacs who would rather use deeply flawed measurements (e.g. SBAC and PARCC, as well as the old state ELA tests) than have no measurement at all. Must the economic mind-set have imperial dominion over everything? Will everything go to hell if it’s not converted into data and graphs?

          2. Ponderosa,

            Perhaps we should take this one step at a time. When computing a grade, can you measure a student’s writing ability accurately enough to be confident that the student should not graduate from high school (if you assign a failing grade in a required class) or that a student should not be admitted to my university (if you and your colleagues assign grades that result in a GPA in academic classes below 2.0)?
