October 4th, 2008
Assess this

Posted under: jobs, students

We had quite an interesting conversation this summer about Centers for Teaching and Learning, and their evil twin timewaster, “assessment.”  For those of you who aren’t yet in the know, “assessment” is an administrative exercise in which faculty have to find data to prove that their departments and programs are effectively educating students. 

Never mind that your department has 600 majors despite having only 24 tenure lines.  Never mind that you, all of you, several times a semester administer quizzes, tests, and essay assignments to your students, which you then “assess” so that you can assign them “grades.”  Never mind that you take all of those graded assignments and use them in your individual, personalized assessment of each student at the end of the term, which you deliver in the form of a “final grade.”  Never mind that a whole bunch of your majors manage to graduate every year, and that they’re replaced by even greater numbers of enthusiastic majors.  That’s not enough evidence that you’re doing your job!  No, you have to find “data” outside of numbers of majors, the grades they receive, and their graduation rates.

Well, I have nothing new or more positive to offer than this incisive critique of assessment by an anonymous faculty member at a large Southeastern university (via Rate Your Students–stop me if you’ve heard this one before!).

The money quote is at the end:

The appetite for data is only strong when the decisions about power and paradigm have already been made.

Those of you who have been roped into assessment exercises before will know exactly what that means.  What I want to know is:  what other industries demand “assessment” beyond the obvious proof that products are being produced and/or sold, clients are being served, buildings and roads are being built up and torn down, and that money is being made?  Is it because our work doesn’t have an obvious bottom-line monetary value that we’re forced to engage in these redundant exercises, which sap the time and energy we need to serve students, publish research, and improve our teaching by reading a recently published book or two?  Just asking.

15 Responses to “Assess this”

  1. e.j. on 04 Oct 2008 at 8:48 am #

    Maybe we should return to the medieval model of university assessment and personally charge for our services by the class. Or follow the legal model, and bill hours.

    Both of those, however, would most likely just illustrate how much more money we should be making, so I’m guessing administrators wouldn’t get on board.

  2. Delilah on 04 Oct 2008 at 9:08 am #

    I spent many years as a district and state curriculum specialist (MA in French/German/Research Methods; BA in English/German/Theatre), and I’m married to “Dr. DC Demographer” (who was a univ. prof when we married). Assessment has peppered our conversations for over 2 decades now.

The inherent barrier teachers face regarding assessment is that we have no long-term capability of proving our worth as educators. Consequently, administrators can question our methods (teaching “skills”) and our in-house results (student grades) with impunity. They know they have the upper hand in every teacher evaluation meeting because of this.

    That’s why teachers are the perfect target for politicians, school board wannabes, and all administrators.

    The main problem I faced when moving from the classroom into curriculum was to convince the teachers that they could trust me not to screw them. That took 3 years. GGGRRR!!!

I’m convinced that every industry assessment is probably based on lies, or at least on exaggeration of data. My husband agrees.

He now “assesses” several major foundation grant proposals and annual reports submitted to the Justice Dept., and the only way these foundations can meet the detailed 3-5 year budget-plan requirements is to make shit up. Who knows how much chartering buses for field trips in Ohio is going to cost 3 years from now?

    Bottom line: Good teachers know who’s mastered the material and offered a spark of enthusiasm and/or delight in class. And yes, some of the assessment is subjective.

Administrators, in order to justify their existence, must reduce everything to its lowest form: the objective checklist.

When a principal asked me once why I bothered to teach the subjunctive because no one uses it today, I answered, “I would that it were so.”

    He left me alone after that.

  3. Indyanna on 04 Oct 2008 at 11:05 am #

    I’d just offer three things on this:

a) Assessments purport to dowse up and evaluate something called learning “outcomes,” but outcomes, as they are understood in the rhetoric of this business, epistemologically don’t exist. The actual outcome of (an) education is the lived life, and the best assessment instrument yet devised is the obituary. When I offer this to our cookie-baker wing in the department (I’m using Wild Walter’s term, not referring to the immediate previous post), they literally blanch, as if some heresy had been uttered. But the burden is on them, via some 49-year Framingham Study sort of longitudinal exercise with willing assessees (and -ors), to demonstrate that such “outcomes” exist. By that time, unless my TIAA-CREF thing has *really* tanked, I won’t be interested any more.

b) A lot of instructional units (I’d say departments, but I’m trying to pretend that this is not an actual allegation) just make this crap up (the assessments) and hit the send/flush button. Google “Electronic Evidence Room” to see where it gets flushed to. And from there it’s stripped of identifiers, aggregated, and represented to be “data.” How this can be distinguished from what’s called research misconduct is beyond me. The professional associations would be justified in intervening and calling it such, and demanding that their members refrain from participating in it. But a lot of them are in bed with the Ed.Docracy.

c) Even the supposed “demand for accountability” used to justify this is largely chimerical, beyond the special interest groups that support and benefit from assessment. What’s left in my TIAA-CREF will be sent (via my brother-in-law, a general in Nigeria) to anyone who can find anyone on their block who will say, without being prompted as in a push poll, “the three things that worry me and the missus are the cost of gas, our adjustable rate mortgage, and whether or not that little college on the edge of town can prove that it’s actually educating its students.” The demand is made up by Ed.Ducrats, with their allies in the state legislatures, who will back anything that can generate an annual stream of self-serving press releases.

  4. Buzz on 04 Oct 2008 at 11:24 am #

We spent most of our faculty meeting yesterday talking about assessment. It’s a time sink for our department, but at least it isn’t being used by administrators to interfere with our operations. We need to prove to the accreditation agencies that we’re doing some form of assessment, but we are the only ones who use the actual assessment data. And the data itself is not exactly useless. A couple of other professors and I have been plotting out some changes to our graduate program, based on what does or does not seem to be working.

  5. Erica on 04 Oct 2008 at 12:54 pm #

What I want to know is: what other industries demand “assessment” beyond the obvious proof that products are being produced and/or sold, clients are being served, buildings and roads are being built up and torn down, and that money is being made?

    That gave me a good laugh! I recently finished working as an engineer for the automotive industry. EVERYTHING gets assessed there — assessed stupidly.

I’d get a good review for having only one quality failure in a month (despite that failure costing us a hundred grand), then a bad review for having over three failures in a month (failures which didn’t affect functionality or cost anything). I even got judged on the absenteeism of workers in my department, which I had NO control over (I even promised to bake them cookies if they would come in more; it didn’t work!)… but it was a number, so they slapped it on the evaluation list.

    I must admit, industry productivity is much more quantifiable than humanities (or even science!) education. So assessment really ought to be straightforward — are we selling quality things and making money, or not? But it ends up turning into a fiasco because they don’t assess the right things. (And when they do ask the right questions, the managers don’t know enough about the math to see through creative manipulation by engineers, so it’s garbage anyway…)

    Everybody wants statistics so they can have an easy, nice-looking number. Whether that statistic is meaningful is completely irrelevant!

  6. Indyanna on 04 Oct 2008 at 3:16 pm #

I think there is, or can be, a difference between program assessment–rational beings sitting around talking about what is and isn’t working–and “outcomes” (sic) assessment, in which the prof who just entered the final grade that’s deemed insufficiently probative of whatever then–before going off on holiday–applies a different (but doubtless proxy) gridded rubric that’s another way of saying the same thing but that IS suddenly deemed probative.

    As far as not being used by administrators: at my institution you can see the slippery-slope elements like the melted chocolate chips on a kid’s face. Everything is understood to be a beta version, just to get the accreditors off the Ed. School’s back until the next visitation. Then in the next breath we’re hearing about the need to establish a “culture of assessment” institution-wide. I wouldn’t buy into any claim that it’s just for internal use. My senior colleagues (now mostly retired) recall how course evaluations came in on little cat’s feet, just for faculty self-critique, etc. Wouldn’t think of making them into personnel criteria, etc. Right.

    What amazes me is why faculties nationwide don’t conspire to topple these regimes that bleed in through the side membranes of the suite, without consultation, to say nothing of consent. If they (as was said in the good old days) gave an assessment and nobody hit the “submit” button, coast to coast, is there an “Air Traffic Controllers” solution (cf. Reagan, 1981) that could be used to bust the strike? Fire ‘em all? If there is, it would be a dream come true for all the nervous dissertators out there, almost like the earthquake we imagined hitting the AHA back in ‘??. But, as Nikita Khrushchev once said, the living would envy the dead in the nuclear winter of the reconstructed university.

  7. PZ on 04 Oct 2008 at 7:42 pm #

    One of the departments I work for came up with a good one for this: the final assessment is done by the student, and they assess the major and what it has done for them.

  8. Historiann on 05 Oct 2008 at 8:16 am #

    Thanks for carrying on the conversation here without me yesterday!

    I like PZ’s suggestion, although it sounds suspiciously non-quantitative and so I don’t know if it would fly. Indyanna’s “Electronic Evidence Room” numbers generator approach might be the way to go. Who would notice, I wonder?

I also like ej’s notion of billable hours, but you’re right: that’s not the kind of data “assessment” demands. “Assessment” is an exercise in which we’re supposed to identify “areas for improvement” rather than congratulate ourselves on a job well done. Erica’s stories about “assessment” among engineers were depressing, but they highlight the arbitrary (if not totally perverse) nature of the data we’re supposedly collecting.

Delilah, thanks for your insider perspective on assessment. I especially liked your diagnosis as to why assessment is a particular disease of the Educrats:

The inherent barrier teachers face regarding assessment is that we have no long-term capability of proving our worth as educators. Consequently, administrators can question our methods (teaching “skills”) and our in-house results (student grades) with impunity.

    Perhaps Indyanna’s prankish suggestion that we submit obituaries of majors is the way to go. (But, the problem with that approach is that today obituaries are, God willing, an assessment of a department at work 50 years ago…)

I guess the question is: when, historically, did people in higher education feel the need to explain why and how, specifically, education is of itself a good thing? (This is a trend that goes farther back than Republican takeovers of state legislatures, although that’s been a potent recent reality for many of us in public higher ed.) Why did we neuter ourselves politically by deigning to answer questions like that? That’s like asking farmers to prove that food is good for human growth. Just because some of us make our livings providing food and education doesn’t make the endeavor suspicious or corrupt.

My guess is that this has something to do with the professionalization of education in the twentieth century, as well as feminism. In many ways, the democratization of education in the 20th C relied on the underpaid or free labor of women teachers (including nuns). That is, around the turn of the 20th C, as women moved into education and men moved out of it, education became cheaper and therefore feasible to offer to more people, thus making secondary school standard for all instead of just elementary ed. But, by the 1960s and 1970s, when other professional options besides “teacher” or “nurse” were open to bright young women, those bright young women abandoned education as a career for higher-paying jobs. This put public school districts and private schools in the position of having to pay higher wages; hence the demand for “accountability,” “assessment,” etc. No one much cared about education until teachers had a modicum of leverage to insist on better wages. The fact that many of those bright women went to graduate school, got Ph.D.s, and to some extent feminized (some of) the humanities is part of the bigger picture for the advent of “assessment” in colleges and universities.

    But, this is all just a guess. Those of you who are twentieth century historians and/or historians of education may disagree and will perhaps offer a more compelling explanation.

  9. Roxie on 05 Oct 2008 at 10:01 am #

Indyanna nails it with her comments on the slippery slope and the brilliant suggestion of a nationwide strike against the idiocies of learning outcomes assessment (LOA). LOA is at best a huge sinkhole that takes time away from faculty research and departmental action on serious issues. At worst, it’s an assault on academic freedom and a covert means of assessing teachers and programs. The program administrator in our house fears that those “results” we all more or less make up may be used to justify cuts in instruction or programs somewhere down the line. But don’t tell her dean she said that.

  10. Notorious Ph.D. on 05 Oct 2008 at 10:17 am #

    In all seriousness, I like Indyanna’s idea of a strike, but it only works for the tenured. Last year, in my last professional review before tenure, I got glowing reviews, except that I needed to specify my “expected learning outcomes” in my syllabi — already usually 6 single-spaced pages. This year, for tenure review, they’re in there. I seriously considered putting in a paragraph-length ode to assessment in my narrative, but restrained myself.

  11. Historiann on 05 Oct 2008 at 10:34 am #

    Notorious, I’ve seen these “Expected Learning Outcomes” on other (usually more junior) people’s syllabi, and I’ve wondered, “um, isn’t that what a syllabus IS already?” After all, syllabi at least roughly describe the course content, then list the assigned books, and then list specific topics for study, sometimes even lecture titles and study questions, and of course specific assignments (especially on detailed 6-page syllabi like yours). Now I understand: “ELO” is a code restating what any sentient reader could pick up from reading the rest of the syllabus.

I’m so sorry that you’ve been put through this exercise in pointless hurdle-clearing. I take pleasure in the pretty certain knowledge that if anyone on my department’s T & P committee complained that a junior colleague didn’t have a statement of ELO, the comment would be roundly derided as pointless nitpicking and the commenter would be told, “DUH, read the frackin’ syllabus.”

    Again, I ask: when did we consent to this self-neutering and second-guessing as a way of life? Sometimes I think that many of our colleagues don’t see the value in their work. (How adding an ELO statement addresses that, I have no idea. I think therapy would be a better investment of one’s time and money.)

  12. Indyanna on 05 Oct 2008 at 12:32 pm #

If people want to see a foundational document in this national project for the “high-schoolization of the collegium,” check out _Greater Expectations_, a group-written paean (?) to the goal of “alignment” as an uber-value of higher education. Sponsored by the Carnegie Something or some other such think tank and staffed by a who’s who of the academic managerial class. Just read the “Executive Summary” if you don’t have time to wade through. It begins with a chilling attack on the concept of “faculty ownership” of courses. By this is not meant the specialized question of copyright or intellectual property rights in distance education materials. Rather, it refers to the traditional idea that a university hires presumed experts in this or that field and then gives them career-long chunks of time to decide what is best to teach and how to do it. (You know, what you experienced up there in college.)

    That was good enough back in the 20th century, when only a fraction of the population went to college. Now that it’s as necessary as high school was in your grandpa’s day, we need to organize college more like high school. So you’ll come back from a summer in the archives and the new executive vice dean of curriculum management [think of an Assistant Superintendent] will clap sharply and say: listen up, this is what we’re going to do this year, I need you to do this, you all to do that, etc. That’s the new Greater Expectations vision statement of how it needs to work. The newly implemented (if a bit Orwellian) “alignment” of everything with just about everything else will flatten the speed bumps that traditionally made some students decide to do something else besides stay in college. This is good for retention. _Greater Expectations_, it turns out, provided the entire template for my institution’s ongoing (and just aborted) project to revise its “Liberal Studies Program” in order to embed this “culture of assessment” in it.

Oh, yeah, and forget about that part about coming back from a summer in the archives. Just joking. _Greater Expectations_ prominently features reams of the newly obligatory boilerplate whine about the regrettable cult of institutional reward for research and publication over continuing alignment studies. So we’re going to cut that part out too…

  13. Fratguy on 05 Oct 2008 at 5:25 pm #

Erica, I appreciate your perspective on the place of assessment and quantification in private industry. Indeed, it is the mantra of most of my friends in the business world that it is impossible to know anything or do anything without data. You are absolutely right as well that the data generated seldom serves the purposes of those from whom it is gathered; it is much more for the benefit of the management class.

Any exercise in quantification and prediction runs into the dichotomy of applicability vs. generalizability. A hypothesis is tested to generate data (for example, how effective a particular teaching technique is). If more a priori stipulations and specifications are placed on how a particular technique is tested (i.e., this particular technique, taught by professors from this particular school of thought, to this set of students with a minimum level of education, etc.), the data that is generated will more accurately reflect expected student outcomes WITHIN THE DEFINED CONFINES OF THE EXPERIMENT. In order to turn the experiment from a mere parlor trick into something that is generalizable, that can be picked up off the shelf and used universally, preconditions need to be lifted, and the data consequently becomes much less robust.

From the descriptions in this post, it appears that there are no experimental conditions imposed on the data that is gathered. It is infinitely generalizable and therefore nothing more than a numeric description of what has happened. Without preconditions or descriptions of the experiment, the “data” is utterly meaningless. It cannot tell you how to improve your teaching; it does not even define “improve” other than as a change in the data, maybe. People who traffic in tautologies should be called out for the BS artists that they are.

The resulting data, or rather descriptive numbers, though meaningless, are nonetheless very potent in the wrong hands. It is clear that management loves this stuff as a means of justifying itself and its predetermined ends. If the data is meaningless, it can be made to mean anything. In my field this “data” is used as a bludgeon to alter behavior (how the hell else are you going to get through the skulls of a generally male and upper-middle-class workforce?). Sometimes this is a good and necessary thing, when the desired outcome is demonstrably desirable and measurable. We have all been conditioned to slobber when the grade bell is rung. On the other hand, when the ends are not as neatly definable, or demonstrably good, I sit in meetings, let the drone of numbers wash over me, and pretend that I am the HUD secretary under LBJ. The droning CEO becomes Secretary McNamara talking about kill ratios. I can only hope these people will be judged as poorly by history.

  14. Indyanna on 05 Oct 2008 at 8:03 pm #

I like that last analogy, Fratguy. It set me to thinking that if I were in that cabinet room I’d maybe like to be channeling Ramsey Clark going off-message, or Stew Udall. Poor Dean Rusk, even. HUD didn’t even have an “Edu-function” in those days. Who’d have thought we’d reach a point where you could make a case that one of the country’s big mistakes of the last half century was ignoring Ronald Reagan’s plea to just say yes and kill off the federal DOE?!?

  15. Are you part of the solution, or part of the problem? : Historiann : History and sexual politics, 1492 to the present on 27 Aug 2009 at 9:47 am #

[...] of you who remain blissfully ignorant of “Outcomes Assessment,” allow me to explain:  academic departments are asked to invent new tests and measures by which to measure their students’….  That’s right, friends!  It’s redundant work for everyone, except for the [...]