President's Speeches & Writing Archive | Remarks: Middle States Conference 2002

What to Expect When the Evaluators Arrive….

or… The Seven Deadly Sins of Accreditation Visits

Many thanks to Middle States for inviting me to share a few thoughts at the close of this conference. It’s late on a Friday afternoon, and I’ve been asked to address the topic of what to expect when the evaluators arrive now that the new Characteristics and assessment handbook are our guiding lights. Well, I could be short; you know the drill: “Hi. We’re from Middle States. We’re your friends. Have you validated your measurements today?” I feared my audience would be long gone by this point, and I needed a hook to get the few remaining accreditation aficionados to stay. So, since it’s also Lent, I decided to take a different tack and entitle my remarks “The Seven Deadly Sins of Accreditation Visits.”

Now, it’s obvious that accreditation has always included elements of the Deadly Sins — who among us has not felt Envy when a neighboring school bragged about its excellent Middle States report, and we may even have found ourselves Lusting after their assessment plan. Our Pride is always at stake on these visits. Yet, that old devil Sloth sometimes prevents the task forces from really doing as good a job as they might have. So, of course you, the chief academic officer, feel Anger that a better self-study could not have been prepared — or maybe, heaven forbid, you feel a generalized anger at the whole process, but certainly never, ever at Middle States — and then, feeling bad that you’ve been angry at hard-working colleagues, you may even proceed to relieve that feeling with Gluttony — has anyone been able to prove that a well-done two-year process of accreditation preparation and visitation might actually add to the girth, if not the stature, of the academic vice president and steering committee chair?

The Seven Deadly Sins I have in mind are not quite so dramatic — or fun. But they can be traps for the unwary or harried chief academic officer, the frustrated or inattentive president. And, appropriately, in light of the topics you’ve been discussing the last two days, all of these sins somehow emanate from the true Original Sin of Accreditation — dismissing outcomes assessment as unworthy of our prestigious, tradition-bound, elite intellectual enterprise.

But, of course, no one in this audience would do that — so let me shift to the other pitfalls, the hidden dangers that may not be quite so obvious until the visiting team lands on your campus.

The First Deadly Sin: The Mind-Numbing Descriptive Narrative

I could also call this “Boring your team to death.” Not a good idea. But I see this problem often in self-study reports. The writers substitute lengthy description for insightful analysis, and the result is page after page of catalog-like copy:

Fictional example: “The Academic Resource Center is the Center where students access Academic Resources. Academic Resources include Learning Skills Support, Advising and Career Services. The Office is open on weekdays, evenings and weekends. As part of the Center’s effort to reach more students, the director has instituted a ‘Bagels with Buffy’ program every Thursday. The Center receives more than 100 magazines and catalogs about careers annually. We have a website.”

Well, you get the drift; it goes on and on. Bagels with Buffy might be nice, but that’s not what the team is really interested in. What we want to know is how the resources provided through this Center help to improve student learning outcomes. A more fruitful paragraph might read: “As a result of the ARC’s math tutorial programs, more than two-thirds of the freshmen who had previously received grades of ‘F’ on one or more Calculus I exams improved to passing grades, with nearly half achieving grades of ‘C’ or higher.”

What does this mean for visiting teams? How does it relate to the new Characteristics?

I think the new Characteristics and assessment guidelines hold out the promise of a truly intellectually stimulating self-study and site visit. Your visiting team might actually arrive in a high state of excitement and eagerness to dialogue with you about your success with retention or improvements in sophomore performance on statistical methods courses. Of course, teams have always been eager — but done properly with the new standards and guidelines, an assessment-based self-study should have more character than the old “inputs” focused reports, perhaps even a little drama (“Will Buffy’s Bagels play a role in improving post-graduate employment rates? Stay tuned for the next chart and graph….”), and, certainly, a more complex layering of analysis of how the institution measures its performance comprehensively, programmatically, and in individual coursework.

But I think it’s going to take some time for steering committees and institutional self-study writers to be liberated from the old affection for descriptive narrative about inputs, because it sure does help to lay it all out for the visiting team: let’s describe what we do; isn’t that what they want to know? Frankly, no, we don’t want to know what you do so much as we want to know how what you do has an impact on your students, and whether what you do is related to the goals you set out in your plan and mission statement.

You do have goals, don’t you?

More on that one, later.

Final point: while my topic today is headlined “What to expect when the team arrives,” I have suggested to Jean Morse and others on the staff that the new Characteristics and assessment framework may actually cause some dramatic change in the pre-visit activities, starting with the preliminary visit of the team chair, and, also, what happens when the final self-study lands on the chair’s desk. These pre-visit activities should become even more focused as a result of the new standards, and could even result in some changes for the timetable of visits if the preliminary work shows serious gaps in the institution’s ability to address outcomes and analysis properly.

Moving along, the sin of mind-numbing descriptive narrative is, in fact, a fig leaf covering the real sin:

The Second Deadly Sin: Defending the Data-Free Campus

Herein lies the heart of assessment: we are being asked to prove that our students know what we claim to teach them. The proof must be data-driven; anecdotal observations, while mildly interesting (“Although we don’t collect any data, the ARC knows that most of the students do get good jobs sometime after they graduate….”), are no substitute for cold, hard facts: “The ARC conducts an annual survey of the last three graduating classes, and the results for the last three years show that, on average, 92% of the class is employed on a full-time basis within three months of graduation. Moreover, 40% of those who are employed are in jobs that closely match their major fields….” (All quotes are hypotheticals.)

What does this mean for the visiting team? How does it relate to the new Characteristics?

The new Characteristics and handbook talk about the ‘climate for evidence’ on the campus, and this is a very significant new term to consider. I think one of the biggest changes we might forecast in visiting team behavior within the rubric of the new Characteristics will be an increased demand for evidence. Now, we’ve always wanted to see the figures behind the asserted facts, but let’s face it: for some institutions, getting the data act together has been more heavy lifting than they can do. We are a highly verbal profession, and many of us are quite good at explaining ourselves without any data safety net at all. I have certainly read self-study reports where, but for the required data for IPEDS and the Middle States annual survey, not too many numbers appeared to trouble my brain.

Teams will no longer accept data-free reports. Moreover, our visits are quite likely to involve more time in the reading room reviewing the back-up data reports, rather than simply a string of polite interviews that don’t often elicit much more information than what was in the written report. Similarly, our on-campus interviews are more likely to be focused on an analytical probing of the data and the methods used to gather them, and how the evidence will be used to provoke institutional change and improvement, rather than the old interview methods with open-ended questions that evoked widely discursive answers.

The Third Deadly Sin: Reams of Raw Data, No Analysis

On the opposite end of the spectrum from those who live in a data-free environment are those who hide in the thicket of data reports, not helping the intrepid team captain figure out how to distinguish the oaks from the pines from the cedars. I like the new Characteristics, but I fear that in the wrong hands, they could give rise to data fatigue, a condition as potentially dangerous as boring your team chair with the deadly narrative.

A corollary problem is the danger that the Institutional Research guru is the only person who understands the data, and the entire self-study and visitation process becomes a parallel universe experience, in which the IR people speak in numbers and the front line people — faculty and staff — continue to speak in descriptive narrative. Your visiting team will expect everyone to be conversant with the full report and the impact of the evidence on your plans, and the real value of the site visit becomes interpretive dialogue that facilitates analysis of the indicators. But if the IR person is the only one who knows the data, how could it possibly be useful for improvement at every level of the institution? It’s not enough to write a good report; your campus constituencies have to understand the report as well.

A related danger I felt tugging at me as I read the draft of the new handbook on student learning outcomes: will we have “reliability and validity” police on our teams, experts in the nuances of difference among “construct validity,” “face validity” and “criterion validity,” all wrapped in high-minded probing of institutional understanding of the difference between formative and summative assessment? My nightmare as a team chair is always when one “expert” on the team suddenly starts driving the process, and I don’t have the technical expertise in the area that’s suddenly taken over the visit. Most presidents are not statisticians, but we chair a lot of teams; vice presidents, too. We will have to go back to school on some of this — I urge Middle States to provide very thorough new training for teams and chairs on the new material — but we also have to insist on a healthy balance. I, for one, sure would work hard to steer team members away from too much preoccupation with the technical nuances and semantics of assessment, for fear we’d truly be lost in the forest.

Analysis stated in plain English is key. Having done the counts and crosstabs and correlations and validations and extrapolations, the question is, what did you learn about your students and their achievements, your programs and their effectiveness, your mission and goals and whether they are being achieved in your classrooms and co-curricular programs? You must do the translation from statistical jargon to analytical reporting. Otherwise, your team visit will become preoccupied with micro-data points we don’t understand and we’ll miss the big picture.

I also am concerned that we will wind up spending enormous amounts of time and energy chasing down the wrong end of the assessment funnel, poring over course syllabi and individual course assessment methods, rather than focusing on the macro. Here again, as a team chair, I will be wary. I certainly think it’s part of our job to be sure, through review of syllabi and the institutional assessment plan, that appropriate course assessment methods are in place. But that’s not where I want my teams spending their time, and I hope that the training of teams for the new era will focus more on the macro programmatic and institutional assessment evidence, and how that evidence impacts strategies and tactics for continuous quality improvement across the enterprise.

The Fourth Deadly Sin: No Strategic Plan or Process

As is very clear in the Characteristics and assessment materials, all good assessment has to start with clarity about the institution’s mission and goals, which are the critical framework for ongoing work in strategic planning, benchmarking and assessment. The new Characteristics are more specific than ever before about the need for a written strategic plan, assessment plan, and other documentation of institutional plans and processes.

Yet, I can’t tell you how many hours I’ve spent looking for a strategic plan as part of an institutional review, only to have to draw the conclusion that there is no plan. Not that there’s a lack of words — we do words very well! In this day and age, certainly no institution would actually have the hubris to admit NOT having a strategic plan — any more than they would admit to NOT having an institutional research office. But much of what gets presented as a plan is, most decidedly, not.

Teams are very eager to see the strategic plan in full, and to understand its relationship to all other planning documents and processes. We expect the curriculum design to relate to the strategic goals, and the assessment plan to flow from that.

We also expect the strategic plan to state goals that are sufficiently measurable that institutional performance in relation to the plan is clear. This requires some careful consideration of what is a goal, what is a mission statement, what is a value best articulated in the vision statement. Too often, lofty statements of values, which are certainly important, are set forth as if they were strategic goals, e.g., “Middle States University will educate students for global leadership.” So do we all. Such a sentence sounds more like a vision and value than a strategic goal. Here again, I do not want to get tied up in semantics, but somewhere in your strategic plan you truly need to reflect strategies, not simply aspirations.

Mission is sometimes a real stumbling block in this whole process. Over the course of a number of visits to different campuses, my teams have detected a real lack of understanding in the campus communities about mission and how mission drives goals. This is particularly true for the “special mission” institutions, of which there are many in the Middle States region — women’s colleges, religiously-affiliated colleges, historically black colleges, institutions devoted to a particular profession or discipline. Smaller institutions feel a great conflict between cherishing their specialness and traditions, and wanting to emulate the big universities.

More to the point, smaller institutions that don’t have the resources of bigger places often wind up feeling that they will be judged more harshly in accreditation because they simply don’t look like what they perceive as ‘normative’ by other people’s standards. So, they sometimes wind up feeling conflicted about these very special missions that make them distinctive, yes, but that also limit growth and market share. In the self-study, they wind up sounding very apologetic and ambivalent about mission, and they focus entirely too much, at times, on the limitations that a special mission imposes on resources and growth potential. Self-flagellation seems to be a particular specialty of the smaller institutions, and while it’s not the deadliest of accreditation sins, it sure gets in the way of honest analysis as much as self-congratulation and puffery.

The Fifth Deadly Sin: Be So Unique that You Have No Benchmarks

Now, the related problem among those institutions that resist planning, and therefore resist assessment, is the resistance to benchmarking. I can tell this story on ourselves at Trinity: this was one of the hardest bad habits to break. For the longest time, we had a pervasive culture that simply assumed that we were so different from any other institution that we could not possibly find anything useful in seeking comparative data sets. A colleague called this “State of the Art Syndrome.”

Some of the problem arose from the fact that the available benchmarks that are widely published tend to be for cohorts that really are not good comparisons. So, for example, using Peterson’s Strategic Indicators might be useful to establish certain kinds of norms, but many institutions have characteristics that cause variations outside of the traditional four-year private or public model.

When we did our last strategic plan at Trinity, Beyond Trinity 2000, our Board insisted that the plan had to have measurable goals, and in order to do that, we had to do benchmarking. It took us two years, and we used a wide variety of reference materials. Peterson’s Strategic Indicators, the data that’s readily available in US News, various NACUBO models, and an exceptionally valuable ongoing study by the Women’s College Coalition, all provided excellent benchmarking data. But in addition to those sources, we found it quite useful to construct our own cohort using similarly-sized institutions with similar characteristics.

You can and must do benchmarking to have a solid strategic plan, which is the basis for institutional assessment; you can create excellent benchmarks by defining a cohort of institutions that truly fits your circumstances while also provoking growth and healthy change.

The Sixth Deadly Sin: Letting the Flat Earth Society Prevail

No administrator in their right mind ever intends for this to happen, of course. No progressive faculty chairs of task forces on steering committees want it, either. Yet, invariably, the rumor of formative and summative assessment language in the air stirs up the bones of the Flat Earth Society, who manage to rise once more, rattle their boxes of chalk, and proceed to denounce the whole expedition as… “Soooo typical of Middle States, trying to tell us what to do!” or “The greatest assault on academic freedom since Joseph McCarthy,” and “A good example of the decline of Western Civilization as we know it.” The latter might actually be true, and worthy of a celebration! But I always believe that, as you set sail for new horizons, you need to wave goodbye to the Flat Earth Society hugging the shoreline. When you get to your destination, send them postcards from the edge…

But, sometimes, in some institutions, the weight and sheer nastiness of the Flat Earthers wind up grinding down those who understand accreditation and assessment, with debilitating results for the whole accreditation process. Flat Earthers have a variety of tactics: they paralyze the self-study process with endless demands for review and consultation, preferably with the full faculty assembled continuously throughout the process; they object to the use of data as “trendy” and assert that their time-honored practices (captured once and forever on long yellow pads) are venerable and “true” or “authentic” liberal education; perhaps, worst of all, they take up the time of visiting teams by having circuitous debates with their colleagues through the team interview process about whether the academic dean or president usurped the ‘traditional’ power of the faculty to teach whatever and however they want because now “this assessment stuff” has been “forced on us” through — scandale! — “administrative fiat.”

Team chairs and members know that the Flat Earth Society has cells on every campus; we have our own; we are unimpressed with yours. We usually do not let the huffing and puffing distract us, so do not get overly distracted yourselves. But our posture of studied disregard can quickly evaporate if the president, vice president or deans, or steering committee chairs, use the resistance as an excuse for not getting the job done.

Let me be clear about this point: your job is to see that the processes work, and that the accreditation moment is healthy and life-giving for your institution. Accreditation is a group process, yes, but you are accountable, and you cannot let any individual or sub-group stand in the way of the whole. Do not let too much obeisance to hoary, dysfunctional governance structures get in the way of getting the job done well. I have another whole speech on the need for reform in the conceptual framework and reality of governance in an era of educational transformation, but let me boil it down to this philosophy statement: when reform becomes impossible, revolution becomes imperative!

The Seventh Deadly Sin: Blame it on the Faculty-Driven Steering Committee

When all else fails, some presidents and deans, knowing their assessment plans are weak and their self-study reports are flawed, will blame the whole thing on the presence of faculty on the steering committee. I am not making this up; I’ve heard it with my own ears. And, of course, the very existence of this complaint, aside from its substance, is illustrative of a self-study process gone awry, and the visiting team is likely to view this symptom as something that warrants much deeper investigation. It’s a very bad move to try to justify an inadequate self-study as somebody else’s fault. You are accountable for the results. Your job is to see that the process works, and we’re not particularly interested in why it didn’t work, except insofar as that failure then tells us the institution has some serious problems.

The blame game is a very serious challenge on team visits. The new Characteristics and assessment guidelines will require some very hard work on the part of institutions, and more teamwork among disparate groups on campus than ever before. Some faculty will be resistant (see Sin #6). That’s not an excuse for allowing the process to fall off the tracks, or for a final product that does not satisfy the expectations of the accrediting process.

Now, lest you leave thinking only of sin, and deadly sin at that, let me leave you with the thought that you can always avoid these seven deadly sins of accreditation visits if you practice the cultivation of the accreditation virtues. These are very simple.

First, you must have faith in the peer review process — it really does work, and it can be a highly effective means to ensure that your institution meets its goals for student outcomes continuously.

Second, have hope that the visiting team is skilled in its task, and composed of individuals who are true peers with experience in institutions that are similar to yours, so they’ve walked in your shoes and know the challenges and opportunities inherent in your mission and goals.

Finally, is there a place for charity in thinking about accreditation and assessment? Well, sure. But not “charity” in that least-common-denominator sense of giving someone a break who maybe hasn’t quite made it. That’s not at all what we mean in this context.

The charity inherent in good accreditation and assessment is really that profound sense of love — love for our students and the mission we strive to accomplish in their lives each day — that drives our work and makes us want to do even better, because our students will receive even greater benefits as we continuously improve the quality of our educational programs. Our work in planning, assessment and accreditation is truly our labor of love, all done for the sake of our students and those whose lives they will influence for the better because we’ve done our jobs well.

Thanks for listening!


Patricia A. McGuire, President, Trinity, 125 Michigan Ave. NE, Washington, DC 20017
Phone: 202.884.9050   Email: president@trinitydc.edu
