The Inefficiency of Injustice

More excerpts from my revisions to the Social Justice and Standardized Testing manuscript, which should be off to Routledge next week.

The points here are applied to testing, but they reflect a very broad trend in social systems. Efficiency ought to be subordinate to justice. When the priorities are reversed, both suffer: overvaluing efficiency tends to create injustices, which in turn tend to undermine efficiency. When efficiency trumps justice the result is injustice, and the result of injustice is inefficiency. This 'side effect' of an overemphasis on efficiency is discussed below as the inefficiency of injustice. It is one of the most powerful insights yielded by the study of the relations between efficiency and justice: injustice is inefficient, while justice promotes efficiency.

(That is Charlie Chaplin stuck in the gears, from Modern Times.)


The inefficiency of injustice

Because of the injustices they create, efficiency-oriented testing infrastructures can, counterintuitively, result in higher levels of organizational inefficiency, typically through escalating surveillance and enforcement costs (Bowles & Gintis, 1986). These costs are incurred when an organization's members come to view its structures as obstacles to their interests, or when any member is systematically disincentivized from performing their role. This was noted in the discussion of efficiency-oriented testing in Chapter 3, where we saw that quality-control measurement infrastructures in general tend to be expensive to build, maintain, and implement. This inevitably raises concerns about the cost of the surveillance needed to exercise certain kinds of quality control. In this case, NCLB can be understood to have created an educational environment in which teachers and students were the subjects of quality-control surveillance on a massive scale.

The costs of surveillance are not only financial (ibid.). There are significant impacts on organizational cultures and individuals' self-understandings when surveillance (and related methods of enforcement) become necessary aspects of organizational functioning. The result is organizations wherein individuals do their jobs a certain way not because they agree with the effectiveness of the technique and the appropriateness of the task, but because the "quality" of their work is closely monitored and deviations from the mandated methods are punished. If they were not so strictly surveilled, they would prefer to do things differently, not out of laziness or incompetence, but because they believe there really is a better way. The result is that less work gets done at a lower quality, accompanied by steadily increasing costs of surveillance, employee turnover, burnout, and discontent. Distractions and low morale impact enforcement practices, as do the increasing probabilities of active expressions of worker discontent, such as sabotage (e.g., as in the Introduction, cheating under NCLB could be understood as a kind of sabotage). Bowles and Gintis (1998, p. 6) speak to the literal price (in dollars) of injustice:

Institutions supporting high levels of inequality [and injustice] are often costly to maintain. [There is] a cost in enforcing inequality, in such forms as high levels of expenditure on work supervision and security…. [But there is a] positive relationship between efficiency and equality in that more equal societies may be capable of supporting [higher] levels of cooperation and trust…. Cooperation and trust are essential to performance [and efficiency]. Of course trust and cooperation do not appear in conventional economic theory.

Costs of surveillance rise further when the objectivity of the instruments used is hard to maintain. Poorly built equipment and the likelihood of human error (or deception) require that quality testing be done under stricter and more exacting conditions, which are more expensive both financially and psychologically. Costs also rise faster in industries where the 'output' being measured is not a simple object (like grain or a car) that can be tested and measured by means of uncontroversial physical instruments. In so-called service industries, as some would have education become, where the product is more intangible, quality-control monitoring is not so easy and is often much more invasive, subjective, and expensive (ibid.). Moreover, as mentioned in Chapter 3, there is always a tradeoff between the damage done to the product by testing it and the gains in quality that can be made through more testing (Busch, 2011). Apples must be tasted, fuel burned, and medicines used in order to determine and improve their quality. The more you test something, the better sense you will have of its quality and of how to improve it, but in doing so you will also have destroyed more of the product. The product loses value as the number of tests increases, and the overall process becomes increasingly expensive. All of these lessons about the dynamics of institutionalized measurement apply in thinking about NCLB, where testing as quality-control surveillance was exercised in blanket fashion throughout the nation's educational system.

As discussed in the Conclusion, the latest testing infrastructure being built as part of the CCS&A continues what has been a more general trend of investing in surveillance technology. 'Improved test security' is a major leg of the argument for investing, at the federal and state levels, in an entirely computerized testing infrastructure. The technology specifications are modeled on the platforms pioneered by ETS and its test-security and computer-center subsidiaries. It is not a coincidence that these new high-tech tests will make it impossible for teachers to get their hands on students' answer sheets at exactly the time when students' answers will begin to be used to officially determine each teacher's value-added. While there is a small countervailing discourse about the need to include teachers in building and evolving assessment practices (see the work compiled by FairTest), the general trend is quite the contrary. The systemic disempowerment and de-professionalizing of teachers are understood as part of an effort to improve the quality of their practices. This is the kind of theory/practice inconsistency that marks an institutional configuration as unstable and crisis-prone (Bhaskar, 2013).

No doubt, it is important to monitor the quality of the educational processes that take place in schools, to ask questions such as: How good are the teachers in this school? How much has this child learned? This is essential. But the use of testing infrastructures as the dominant index of quality leads to a distortion of value. This is a distortion in the meaning of what counts as a good education, and it creates a new ideal of what teachers and students ought to be and do. Testing can distort the perception of value to such an extent that ‘quality control’ becomes a counterproductive undertaking.

Attempts to 'steer' a complex system (such as a school system) typically fail when they are undertaken by focusing narrowly on one aspect of the system. This is 'steering' based on feedback that tracks only a few true but partial representations, and it is bound to fail (Buck & Villines, 2007). When the system being steered is one constituted by complex human relationships (like those between teachers and students), a narrow measure of quality control will distort these relationships, leading to an increasing sense of injustice. False measures engender false consciousness, disingenuousness, and systematically distorted communication (in the fully Habermasian sense of this term); or they occasion widespread discontent, disruption, pushback, subversion, and revolt, as discussed in Chapter 1 and as witnessed in the Atlanta Public Schools.

In the long run, 'steering' organizations like schools according to limited and misleading measures will create situations in which even the limited functions that are officially and objectively monitored start to decline. This decline in efficiency is in fact a result of systemic disruptions in the culture and social relations of the organization, disruptions initiated as part of an attempt to improve the very functions they are diminishing. This is the inefficiency of injustice: when the injustices that result from a policy undermine the goals for which that policy was initiated.

Recall the discussion from Chapter 1 about the unjust bureaucracy. Efficiency experts in agriculture bought (or seized) vast tracts of land, precisely measured and parceled out lots, set metrics for production, gave the peasants occupying the land heavy machinery (mostly unfit for their local conditions), and then asked them to more than quadruple their output. These modern efficiency techniques led to peasant revolts, equipment damage, and widespread crop failures and famine. They created conditions far worse than those before the "scientific improvement" of what were ancient practices.

The inefficiency of injustice is a common and problematic pattern that besets many modernizing practices (Bowles & Gintis, 1998; Porter, 1995), especially authoritarian forms of modernization (Scott, 1998; Apple, 2001). NCLB was beset by inefficiencies stemming from injustice. Cheating, test-prep pedagogy, and rising costs of surveillance ultimately resulted in the 'side effect' of inefficiency due to injustice. Some of these problems can be resolved by improving testing practices—making better and more secure tests across a greater range of subjects. And this is the direction in which testing is headed. Once again bolstered by technological advances and opportune political climates, testing infrastructures are expanding in size, scope, and significance.