Living in an Extreme Meritocracy Is Exhausting
Victor Tan Chen
A century ago, a man named Frederick Winslow Taylor changed the way workers work. In his book The Principles of Scientific Management, Taylor made the case that companies needed to be pragmatic and methodical in their efforts to boost productivity. By observing employees’ performance and whittling down the time and effort involved in doing each task, he argued, management could ensure that their workers shoveled ore, inspected bicycle bearings, and did other sorts of “crude and elementary” work as efficiently as possible. “Soldiering”—a common term of the day for the manual laborer’s loafing—would no longer be possible under the rigors of the new system, Taylor wrote.
The principles of data-driven planning first laid out by Taylor—whom the management guru Peter Drucker once called the “Isaac Newton … of the science of work”—have transformed the modern workplace, as managers have followed his approach of assessing and adopting new processes that squeeze greater amounts of productive labor from their employees. And as the metrics have become more precise in their detail, their focus has shifted beyond the tasks themselves and onto the workers doing those tasks, evaluating a broad range of their qualities (including their personality traits) and tying corporate carrots and sticks—hires, promotions, terminations—to those ratings.
But beyond calculating how quickly, skillfully, and creatively workers can do their jobs, the management approach known as Taylorism can be—and has been—“applied with equal force to all social activities,” as Taylor himself predicted. Today, Taylorism is nearly total. Increasingly sophisticated data-gathering technologies measure performance across very different domains, from how students score on high-stakes tests at school (or for that matter, how they behave in class), to what consumers purchase and for how much, to how dangerous a risk—or tempting a target—a prospective borrower is, based on whatever demographic and behavioral data the credit industry can hoover up.
In her illuminating new book, Weapons of Math Destruction, the data scientist Cathy O’Neil describes how companies, schools, and governments evaluate consumers, workers, and students based on ever more abundant data about their lives. She makes a convincing case that this reliance on algorithms has gone too far: Algorithms often fail to capture unquantifiable concepts such as workers’ motivation and care, and discriminate against the poor and others who can’t so easily game the metrics.
Basing decisions on impartial algorithms rather than subjective human appraisals would appear to prevent the incursion of favoritism, nepotism, and other biases. But as O’Neil thoughtfully observes, statistical models that measure performance have biases that arise from those of their creators. As a result, algorithms are often unfair and sometimes harmful. “Models are opinions embedded in mathematics,” she writes.
One example O’Neil raises is the “value-added” model of teacher evaluations, used controversially in New York City schools and elsewhere, which decides whether teachers get to keep their jobs based in large part on what she calls a straightforward but poor and easily fudged proxy for their overall ability: the test scores of their students. According to data from New York City, she notes, the performance ratings of the same teachers teaching the same subjects often fluctuate wildly from year to year, suggesting that “the evaluation data is practically random.” Meanwhile, harder-to-quantify qualities such as how well teachers engage their students or manage their classrooms go largely ignored. As a result, these kinds of algorithms are “overly simple, sacrificing accuracy and insight for efficiency,” O’Neil writes.
How might these flaws be addressed? O’Neil’s argument boils down to a belief that some of these algorithms aren’t good or equitable enough—yet. Ultimately, she writes, they need to be refined with sounder statistical methods or tweaked to ensure fairness and protect people from disadvantaged backgrounds.
But as serious as their shortcomings are, the widespread use of decision-making algorithms points to an even bigger problem: Even if models could be perfected, what does it mean to live in a culture that defers to data, that sorts and judges with unrelenting, unforgiving precision? This is a mentality that stems from Americans’ abiding faith in meritocracy, the belief that the most talented and hardest-working should—and will—rise to the top. But such a mindset comes with a number of tradeoffs, some of which undermine American culture’s most cherished egalitarian ideals.
First, it is important to recognize that the word “meritocracy,” coined by the British sociologist Michael Young in his 1958 book The Rise of the Meritocracy, originally described not some idealized state of perfect fairness, but a cruel dystopia. The idea was that a society evaluated perfectly and continuously by talent and effort would see democracy and equality unravel, and a new aristocracy emerge, as the talented hoarded resources and the untalented came to see themselves as solely to blame for their low status. Eventually, the masses would cede their political power and rights to the talented tenth—a new boss just as unforgiving as the old one, Young suggested.
The new technology of meritocracy goes hand in hand with escalating standards for what counts as merit. To hold down a decent job in today’s economy, it is no longer enough to work hard. Workers need brains, creativity, and initiative. They need salesmanship and the ability to self-promote, and, of course, a college degree. And they need to prove themselves on an ever-expanding list of employer-administered metrics.
A mindset of exhaustive quantified evaluation naturally appeals to elite tech companies like Amazon, where work life—judging from a New York Times exposé last year of the online retailer’s corporate culture—can be nasty, brutish, and often short. In the article, former Amazon workers described how supervisors would put them through relentless performance reviews across a wide range of measures. The critiques were so sweeping that they included the secret assessments of coworkers—who apparently weren’t above backstabbing with a confidential tip—and so savage that employees regularly cried at their desks. One human-resources executive summed up the office culture, approvingly, as “purposeful Darwinism.”
While Amazon’s fixation on metrics and merit, as described by the Times, was especially intense, it speaks to a variety of trends that have ramped up the competition in workplaces everywhere. At more traditional corporations like Accenture, Deloitte, and G.E., managers may have grown skeptical of the hidebound annual performance review, but many have simply replaced it with a more or less perpetual performance review—at G.E., reviews are conducted through a smartphone app. In the service sector, too, workers have found their daily lives meticulously regulated and continually assessed, from just-in-time scheduling at Starbucks to instant customer-service rankings at Walmart. In fact, Silicon Valley aside, alienation by algorithm may be a greater problem for this latter, less-educated group of workers: “The privileged,” O’Neil notes, “are processed more by people, the masses by machines.”
In the job search, the sorting is particularly relentless. Employers want a seamless work history and snazzy résumé, of course. But they want a spotless personal record, too—not just a clear background check, but also an online identity free of indiscretions. This is not just the case for tech workers in Seattle. Those further down the economic ladder are also learning that a good work ethic is not enough. They can’t just clock in at the factory anymore and expect a decent paycheck. Now they have to put on a smile and pray customers will give them five stars in the inevitable follow-up online or phone survey.