
The media–technology–military–industrial complex

In a world of so-called fake news and post-truth politics, the influence of largely invisible qualities of concentrated power over media, public and policy agendas warrants renewed and urgent scrutiny.


Justin Schlosberg 



Invisibility is the essence of the radical view of power developed in 1959 by US sociologist C. Wright Mills, according to which concentrated power in late capitalist democracies was no longer to be found in the observable decision-making and conflicts of day-to-day partisan politics. Two years later, it was echoed in the concept of a ‘military–industrial complex’, first articulated by the then US Republican President Dwight D. Eisenhower. In his farewell address in 1961, Eisenhower issued a famous warning to the American people:


We must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.

Mills, like Eisenhower, reflected on the exponential growth and consolidation of corporations, the military establishment and government bureaucracy during the post-war period, along with the rapid development of communication technologies and infrastructures. These were not coincidental and autonomous processes but mutually constitutive of an ever more integrated elite power structure, one that transcended the formal checks and balances of the political system.


But for critics of Mills, the suggestion of any kind of definable club at the top echelons of state–corporate power lacked empirical foundation and flew in the face of what seemed to be an opposite and prevailing trend. This was characterized by growing disunity among elite factions as the political economy became increasingly complex and fractured. As Daniel Bell observed in respect of corporate power in post-war America: ‘I can think of only one issue on which the top corporations would be united: tax policy. In almost all others, they divide.’


Bell pointed out some of the fault lines that divided industrial interests in the post-war period, including those between railways, truckers and airlines; or between coal, oil and natural gas. In this essay I address similar fault lines in the digital information economy, which have manifested themselves in public squabbles and legal battles between content owners (especially publishers), intermediaries (such as search and social networking sites) and network operators (including Internet Service Providers (ISPs) and app platforms).



From net neutrality to ancillary copyright, these titanic struggles suggest – on the surface at least – a far more profound disunity among the established and emergent gatekeeping powers than the industrial tensions to which Bell pointed. In short, the media–technology complex hardly seems to reflect anything like the ‘interlocking directorate’ that Mills ascribed to the power elite, much less the hegemonic consensus that radical critics of the media have long identified.


But on closer examination, the picture is much less fractious than it appears. In the discussion that follows, we review the underlying and overall consonance of interests between different players in the information economy, as well as evidence of an intensifying alliance and collaboration that extends to the wider military–industrial complex. Although the composition of the power elite inevitably varies according to place and time, the essential characteristics of revolving doors, intimate social relations and strategic partnerships remain as pertinent today as they were in the 1950s.


This does not mean that the tensions between corporate interests, both within and across communications sectors, are a charade. But, just as Mills suggested, these are not the whole story, and perhaps not even half the story. In a world of so-called fake news and post-truth politics, the largely invisible qualities of concentrated power that Mills highlighted, along with their potential influence over media, public and policy agendas, warrant renewed and urgent scrutiny.


The blood, the veins and the heartbeat

To get to the heart of the matter, we have to consider how concentration and consolidation in media markets are intensifying under the shadow of digital monopolies like Google and Facebook.


Indeed, what is truly unprecedented about the market power of these platform monopolies is not the extent of dominance within their own core markets (search and social networking), but the immense influence they wield over others. This is precisely because they occupy the hinterland between industries built on network and copyright control. In so doing, they have assumed control of something of far wider consequence: the means to connect these industries with end users. If ‘referral traffic’ is the blood that now sustains much of the cultural industries, and the pipes and networks through which that traffic flows are the veins, then intermediaries provide the heartbeat. And there are no industries now more dependent on that heartbeat than news. Facebook and Google together account for more than 70% of users directed to the websites of major news publishers. From any perspective this translates into a stunning degree of market influence.


To understand the impact on concentration of news markets, we have to get to grips with how dependence on referral traffic has raised capital costs in the world of digital journalism and erected new barriers to market entry. Although newsgathering may be cheaper than ever before, this is countered by the growing costs of competing on volume, while the ever-expanding information noise means that prospective new entrants often need sky-high marketing budgets in order to compete.


This is seen not only in rising advertising costs, as major brands outbid smaller players in keyword auctions, but also in the development of new marketing specialisms, namely strategies of search engine and social media optimization that have particular resonance for the news industry. These in turn have spawned a whole new professional class of skilled marketers and agencies that make competing with the big names a very costly business.
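Search advertising has historically run largely on generalized second-price auctions, in which each slot winner pays roughly the bid of the advertiser ranked just below. A minimal Python sketch (the names and bid values are invented for illustration) shows how a deep-pocketed entrant pushes smaller bidders down the rankings and raises the effective price of visibility:

```python
def gsp_prices(bids):
    """Toy generalized second-price keyword auction: advertisers are ranked
    by bid, and each slot winner pays the bid of the advertiser ranked just
    below. Names and bid values are invented for illustration."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, ranked[i + 1][1]) for i, (name, _) in enumerate(ranked[:-1])]

# Before a major brand enters, a small publisher holds the top slot cheaply.
print(gsp_prices({"small_publisher": 0.40, "niche_site": 0.30}))
# -> [('small_publisher', 0.3)]

# Once a deep-pocketed brand bids, smaller players slide down the rankings;
# regaining the top slot now means outbidding 2.00 per click.
print(gsp_prices({"major_brand": 2.00, "small_publisher": 0.40, "niche_site": 0.30}))
# -> [('major_brand', 0.4), ('small_publisher', 0.3)]
```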


The tyranny of automation

In spite of these obstacles, the last decade or so has seen the rise of a small number of new entrants in mature digital publishing markets, from the Huffington Post to The Intercept. But their overall audience still tends to be marginal compared to dominant television and newspaper brands, and it remains to be seen how much of a challenge they present to mainstream consensus agendas.


What is clear is that offering such a challenge is, from a commercial perspective, a high-risk business. This is partly because major news algorithms disproportionately favour not only established large-scale brands, but also a consensus news agenda. In May 2016, five whistle-blowers revealed the existence of a specialist ‘curating’ team within Facebook, responsible for manually editing its trending topics. Housed in the basement of its New York offices, this team was widely accused of peddling an anti-conservative editorial bias, although this proved to be more a reflection of the personal political sensibilities of the curators than any top-down editorial directive.


What was fed down from the top was an explicit instruction to defer to a mainstream agenda consensus: curators were to ensure that stories attracting substantial coverage in mainstream media and on Twitter were given a boost if they were not trending on Facebook ‘organically’.



Deference to a mainstream news consensus can also be embedded inadvertently in algorithmic design. Arguably the closest proxy for a news agenda in the social media world is Twitter’s trending topics (a forebear of Facebook’s equivalent). These highlight the most popular issues discussed on the social network in any locality or region, at any given time, as denoted by the hash-tag label for particular topical discussion threads. In 2011, considerable controversy was stirred when activists from the Occupy movement – a global direct-action protest network born out of the fallout from the 2008 financial crash – noticed that the hash-tag for Occupy Wall Street (OWS) never seemed to make it onto the trending topics list in New York. This seemed particularly bizarre because OWS was at the heart of a movement that was attracting significant attention from mainstream media at the time. #OccupyWallStreet had also been ‘trending’ regularly all over the world, but never in the city where its direct action and protest activity was taking place. Even more bizarrely, the same thing was happening with the #OccupyBoston hash-tag, which was regularly trending in cities and regions other than Boston but never in Boston itself.


Not surprisingly, the social network was accused of cooperating with local authorities in censorship and efforts to suppress the movement. Part of the suspicion stemmed from the fact that the technical apparatus of trending topics has always been hidden from public view. But in a brilliant ‘reverse engineering’ data analysis, Gilad Lotan showed how the anomalies in Boston and New York were not in fact the function of any intentional manipulation by Twitter or the authorities, but rather the unintended consequences of a particular algorithmic feature.[i]


Contrary to what might be assumed, Twitter’s determination of ‘trending’ is not based exclusively on the volume of tweets attracted by any given hash-tag at any given time. This is because one of Twitter’s principal concerns with trending – as the term suggests – is to do with ‘newness’. So its algorithm rewards particular terms and topics that experience ‘spikes’ in users’ attention and participation, rather than those that attract consistent and prolonged activity. The reason that #OccupyWallStreet and #OccupyBoston had never trended in their respective cities was because they had, from the start, attracted a gradual and sustained growth of local attention, as opposed to simply spiking around particular events that attracted broader mainstream media focus. As Lotan remarked, ‘There’s nothing like a Police raid and hundreds of arrests to push a story’s visibility’.
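A minimal sketch of how such a spike-based rule behaves (the thresholds and window are invented for illustration; Twitter’s actual parameters are not public):

```python
def is_trending(counts, window=6, spike_ratio=3.0, min_volume=100):
    """Toy spike detector: a hashtag 'trends' only when its current volume
    jumps well above its own recent baseline, not when volume is merely
    high. Thresholds and window are invented for illustration."""
    if len(counts) < window + 1:
        return False
    baseline = sum(counts[-window - 1:-1]) / window  # average of recent intervals
    current = counts[-1]
    return current >= min_volume and current >= spike_ratio * max(baseline, 1)

# Steady, high local attention never registers as a trend...
print(is_trending([900, 950, 920, 940, 930, 960, 940]))  # False
# ...while a sudden jump in attention (say, around a police raid) does.
print(is_trending([40, 35, 50, 45, 40, 38, 600]))        # True
```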


So this was not, after all, censorship – or at least not in the way that many had suspected. But it did reveal an important feature of Twitter that has potentially profound implications for the news agenda at large, and for the way that information flows across the network. Trending topics have become a key mechanism by which certain ideas or perspectives gain visibility in the digital domain. They have become a symbol of newsworthiness. Most would assume that they reflect the most popular topics at any given time in any given place, but that’s not strictly true. Spikes are more likely to be driven by headlines that are still predominantly determined by editors in traditional newsrooms. So, rather than offering a challenge to the editorial agenda set by mainstream media, trending topics may serve in many ways to reinforce that agenda.


Size matters

As for Google, its news-service algorithm has for some time been weighting news providers according to a broad spectrum of what it considers reliable indicators of news quality. But one look at Google’s most recent patent filing for its news algorithm reveals just how much size is used as a proxy for quality in the world of digital news: the size of the audience, the size of the newsroom, and the volume of output.


In relation to audience, Google rewards providers with an established record of click-throughs from its pages; those that feature prominently in user surveys and data collected by market research agencies; and those with a relatively global reach as detected by clicks, tweets, likes and links from users based in other countries. For newsroom capacity, Google embeds metrics into its algorithm that ‘guesstimate’ the number of journalists (with reference to by-lines) as well as the number of ‘bureaus’ operated by the news provider.
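To make the logic concrete, here is a toy sketch of how such signals might be combined into a single source weighting. The feature names, normalization and weights are assumptions for illustration, not Google’s actual formula:

```python
def source_quality(signals):
    """Toy size-as-quality weighting over the kinds of signals the patent
    describes: click-through record, survey presence, international reach,
    by-line and bureau counts. Each signal is assumed pre-normalized to
    [0, 1]; the weights are invented for illustration."""
    return (0.3 * signals["click_throughs"]
            + 0.2 * signals["survey_presence"]
            + 0.1 * signals["intl_reach"]
            + 0.2 * signals["bylines"]      # rough proxy for newsroom headcount
            + 0.2 * signals["bureaus"])     # rough proxy for newsgathering reach

wire_service = {"click_throughs": 0.9, "survey_presence": 0.8, "intl_reach": 0.9,
                "bylines": 0.9, "bureaus": 0.8}
local_startup = {"click_throughs": 0.1, "survey_presence": 0.0, "intl_reach": 0.05,
                 "bylines": 0.05, "bureaus": 0.0}
print(source_quality(wire_service))   # ~0.86: every term grows with size
print(source_quality(local_startup))  # ~0.05: small on every axis, so 'low quality'
```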


It’s not hard to see how these metrics can disproportionately favour mainstream news providers over more specialist or alternative outlets. Above all, Google’s quality weighting hangs on volume. According to the patent filing:


A first metric in determining the quality of a news source may include the number of articles produced by the news source during a given time period […] [and] may be determined by counting the number of non-duplicate articles […] [or] counting the number of original sentences produced.

Some volume metrics favour long-form and original news, which are fairly uncontentious indicators of quality (even if they still favour news organizations with relative scale and resource advantage). But others are more problematic. For instance, Google rewards organizations that provide a ‘breadth’ of news coverage, which penalizes more specialized outlets. Yet specialization is often the only way that potential new entrants, lacking the resources and scale of existing providers, can compete: by offering an in-depth and ‘quality’ news alternative within a niche.


Perhaps the most contentious metric is one that purports to measure what Google calls ‘importance’ by comparing the volume of a site’s output on any given topic to the total output on that topic across the web. In a single measure, this promotes both concentration at the level of provider (by favouring organizations with volume and scale), as well as concentration at the level of output (by favouring organizations that produce more on topics that are widely covered elsewhere). In other words, it is a measure that reinforces both an aggregate news ‘agenda’, as well as the agenda-setting power of a relatively small number of publishers.
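A toy sketch of how a measure along these lines could compound both effects (the aggregation rule here is an assumption, not the patent’s exact formula):

```python
def importance(site_counts, web_counts):
    """Toy 'importance' score: each article counts in proportion to how
    heavily its topic is covered across the web, so both sheer volume and
    alignment with the aggregate agenda raise a provider's score. The
    aggregation rule is an assumption, not the patent's exact formula."""
    total_web = sum(web_counts.values())
    return sum(n * web_counts.get(topic, 0) / total_web
               for topic, n in site_counts.items())

web = {"election": 10_000, "local_housing": 100}
print(importance({"election": 500}, web))      # ~495: high volume on a mainstream topic
print(importance({"local_housing": 60}, web))  # ~0.6: dominates its niche, still scores low
```

Under this kind of scoring, a niche outlet can dominate its own beat and still rank far below a high-volume provider covering the topics everyone else covers.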



Google favours automated indicators because they rely less on human subjective interpretations of news value. But while they may be free of subjective bias in one sense, they rely on quantitative indicators of quality, which produce their own bias towards large-scale and mainstream providers.


Google engineers may well argue that the variety of volume metrics embedded in the algorithm ensures that concentration effects counterbalance pluralizing effects, and that there is no more legitimate or authoritative way to measure news quality than relying on a full spectrum of quantitative indicators. Rightly or wrongly, Google believes that ‘real news’ providers are those that can produce significant amounts of original, breaking and general news on a wide range of topics and on a consistent basis.


At face value, that doesn’t sound like such a bad thing. In a world saturated with hype, rumour and fake news, it’s not surprising that most people are attracted to media brands that signal a degree of professionalism. But there is little evidence to suggest that mainstream media brands have offered a meaningful corrective to fake news stories and considerable evidence to suggest that they have served to amplify them.


Consider, for example, an open letter calling for the re-election of the Conservative Party during the 2015 British general election campaign. The letter was published on the front page of the Daily Telegraph and presented as a spontaneous initiative by the small business community, with some 5,000 apparent signatories and a statement imploring voters to give the Conservatives a chance ‘to finish what they have started’. It was duly picked up by the BBC and other television news channels and largely covered without critical scrutiny, on a day when the Conservatives happened to launch their small business manifesto and incumbent leader David Cameron gave a speech to an audience of small business leaders in London.


Within hours, however, it emerged that the letter had in fact originated from the Conservative Party’s campaign headquarters, and it was not long before Twitter users identified several duplicate signatories, as well as references to companies that no longer existed or claimed not to have signed. They even found Conservative Party candidates among the signatories. But by then, the uncorrected news story had already reached many more millions of prospective voters, courtesy of the mainstream broadcasters. For its part, Google pre-emptively regards major news brands like the BBC as more likely to produce what it considers quality news. The company made as much clear in its patent filing, stating that ‘CNN and BBC are widely regarded as high quality sources of accuracy of reporting, professionalism in writing, etc., while local news sources, such as hometown news sources, may be of lower quality’.



When major western news brands are held up as a definitive benchmark of news quality, we start to run into real problems from the perspective of media diversity. For one thing, Google’s quality metrics give favoured news organizations a prior weighting, which means that the ranking of stories is not exclusively matched to the keywords of any given search. An article by a relatively unknown provider may thus find itself outranked by competitors with greater scale and brand presence, even if the article is more keyword-relevant, in-depth and original.


Perhaps of greatest concern, Google’s news algorithm discriminates against providers that focus on topics, issues and stories beyond or on the fringes of the mainstream agenda. Even its ‘originality’ metric – which purports to favour diverse perspectives in the news generally – is limited to measuring the number of ‘original named entities’ that appear in any given article in comparison with related coverage on the same story or issue. 
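A crude sketch of what such a measure might look like, with capitalized phrases standing in for real named-entity recognition (purely illustrative):

```python
import re

def original_entities(article, related_articles):
    """Toy 'originality' metric: named entities in an article that appear in
    no related coverage of the same story. Capitalized phrases stand in for
    real named-entity recognition; purely illustrative."""
    def extract(text):
        return set(re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b", text))
    seen_elsewhere = set().union(*(extract(a) for a in related_articles))
    return extract(article) - seen_elsewhere

story = "Renee Domingo asked Scott Ciabattari for demos in Oakland."
coverage = ["Oakland officials discussed the Domain Awareness Center.",
            "Protesters filled City Hall in Oakland."]
print(original_entities(story, coverage))
# {'Renee Domingo', 'Scott Ciabattari'}: novel details within the same story
```

The point is that ‘originality’ in this sense rewards novel details within a story that is already being covered, not coverage of stories that no one else is reporting.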


This underlying alliance between Google and major news publishers is very much at odds with the public war of words that has surrounded issues such as ancillary copyright. In 2013, the German government passed a law attempting to force Google to pay publishers for the use of cached content in its search listings. Yet within a matter of weeks, the law was rendered defunct after publishers lined up to issue Google a royalty-free licence. It became clear that much as Google values the news content of major publishers, the latter are even more dependent on the referral traffic that Google provides.


Double speak

Arguably even testier than Google’s relationship with publishers in recent years has been its relationship with the US and British governments in the battle over surveillance and encryption.


In 2013, classified documents leaked by Ed Snowden suggested that the US National Security Agency (NSA) had surreptitiously tapped into the backbone infrastructure of a number of intermediaries, including Google, prompting a chorus of outrage over what appeared to be a hacking of their servers. Intermediaries also responded by installing or upgrading encryption of their servers and software, prompting the US government to look to the courts in order to force open the ‘back door’, and the British government to enshrine similar measures in proposed new legislation.


Google in particular reacted with characteristic outrage to the Snowden revelations, decrying the US government for its surveillance over-reach and failure to protect the privacy of its users. Yet at the very same time, we now know that the company was actively seeking to collaborate with state surveillance programmes. 


On 18 February 2014, hundreds of privacy and civil liberty activists filled City Hall in Oakland, California, protesting against the local government’s state-of-the-art surveillance system known as the ‘Domain Awareness Center’. The programme was based on a centralized hub receiving real-time CCTV (closed-circuit television) and other audio, video and data feeds from around the city, and integrating them with a range of surveillance applications including face-recognition software. Funded by the federal government, it was hailed by officials as an innovative and comprehensive public safety initiative.


This was not, however, enough to convince concerned local citizens, for whom the scope and reach of the programme posed, from the outset, unprecedented threats to privacy and civil liberties. But the protestors at this particular meeting had even bigger worries on their mind. After the Public Records Office forced the disclosure of reams of internal emails, it became clear that the programme was not just about protecting residents in the event of a natural disaster or terror attack, as officials proclaimed. It seemed to be aimed at least as much at political activists and civil disobedients, in a way that touched a nerve in a city with a troubling history of police brutality. In the event, the protestors won a significant concession from the authorities, which agreed to limit the project to cover surveillance only at the city’s port and airport rather than the entire metropolitan area as originally planned.



But there was a little-noticed sting in the tail. Among the thousands of emails disclosed was an exchange between a City Hall official, Renee Domingo, and Scott Ciabattari, a ‘strategic partnerships manager’ at Google. In one email in particular, Domingo asked Google for a presentation of ‘demos and products’ that could work with the Domain Awareness Center, as well as more general ideas of ‘how the city might partner with Google’. The company appeared eager to participate in the very practices of blanket public surveillance that it had publicly scorned in response to the Snowden revelations.


The Interlock

This was no isolated example of Google’s keenness to develop partnerships with the surveillance and military state. Consider Michelle Quaid, Google’s Chief Technology Officer for the Public Sector between 2011 and 2015 and voted the most powerful woman by Entrepreneur Magazine in 2014. Before joining Google, she had built a prodigious career in roles spanning the Department of Defense and several intelligence agencies. At Google, she styled her job as that of a ‘bridge-builder’ between big tech and big government, especially the worlds of military and intelligence.


Other senior positions in Google’s ‘Federal’ division exemplify the company’s efforts to cash in on lucrative partnerships with the military and security establishment. Perhaps the most senior is Shannon Sullivan, head of Google Federal, the company’s government-facing division: a former defence director for BAE Systems, the world’s largest arms manufacturer, and a former senior military adviser to the US Air Force.


But it’s not just the security state that has developed entrenched links with Google. Notwithstanding the temporary spat over surveillance revelations in 2013, the Obama administration had from the outset forged a long-term love-in with Silicon Valley. The regular exchange of senior staff between the top branches of government and the boards of big tech companies has produced not so much a revolving as a spinning door between Big Tech and the White House. Louisa Terrell, former legal counsel to Obama, joined Facebook as Head of Public Policy in 2011 before being appointed Advisor to the Chairman of the Federal Communications Commission (FCC) in 2013. And in 2015, Facebook hired former FCC Chairman Kevin Martin to direct its mobile and global access policy.



Tech companies have also ratcheted up their political donations in recent years, establishing ‘political action committees’ or PACs to front their political lobbying efforts and campaign contributions during election cycles. Not surprisingly, Google’s is the largest PAC and has grown exponentially since its inception in 2006. In the 2014 mid-term elections, Google spent $1.6 million compared to a mere $40,000 in 2006, and in the 2016 election cycle, it spent $2.2 million, most of it on Republican candidates. Two years earlier, Google’s Michelle Quaid joined the board of the campaign technology company Voter Gravity, which provides services to Republican candidates and technological support for a number of conservative groups.


During the 2015–16 electoral cycle, Google spent almost $12 million on lobbying US representatives, and three out of four of its lobbyists had previously held senior government posts. In 2015, Google had 10 employees devoted to lobbying European politicians, an investment that appears to have borne some fruit, at least with the British government. According to an investigation by the Observer newspaper in 2015, ‘Britain has been privately lobbying the EU to remove from an official blacklist the tax haven through which Google funnels billions of pounds of profits’. In 2014, towards the end of his stint as EU Competition Commissioner, Joaquín Almunia complained bitterly of the pressure applied by member state governments to go easy on Google. Almunia had spearheaded anti-trust investigations into the company during his four-year tenure and, coincidentally perhaps, was also revealed to be one of the victims of the British Government Communications Headquarters (GCHQ) and NSA surveillance in a target list leaked by Ed Snowden.


There have also been a number of recent key cross-appointments between intermediaries and media organizations. In 2010, Google hired Madhav Chinnappa, former head of development and rights for BBC News, to lead its partnerships team for Europe, the Middle East and Africa, while in 2015, senior Google executive Michelle Guthrie was poached by Australia’s leading broadcaster, the ABC. The following year, Facebook recruited the editor of Storyful – News Corp’s social media news agency – to manage its journalism partnerships, while Google’s vice president for communications and public affairs in Europe, the Middle East and Africa is (at the time of writing) Peter Barron, former editor of the BBC’s Newsnight.


Communications and PR roles have also sustained a bridge between newsroom and government employment. In Britain, the conviction and imprisonment of former News of the World editor Andy Coulson in 2014 was a PR disaster for the then Prime Minister, David Cameron, who had hired Coulson to direct his communications after Coulson left the paper. But less prominent is the interlocking directorate between media, the state and the defence industry. William Kennard, for instance, has served on the boards of the New York Times, AT&T and a number of companies owned by the Carlyle Group, a major US defence contractor. His full-time roles have included serving as Chairman of the FCC (1997–2001), managing director of the Carlyle Group (2001–2009) and US ambassador to the EU from 2009 to 2013.


Perhaps more significant than the formal links between big tech, media and the state are the various milieus and forums in which their representatives congregate, both socially and professionally. The annual Sun Valley conference in Idaho, for example, is credited with spawning major tech–media mergers such as Comcast’s purchase of NBC in 2009, and the deal that put the Washington Post in the hands of Amazon founder Jeff Bezos in 2013.



As for social cliques, Britain’s ‘Chipping Norton Set’ refers to a gang of media and political elites based in the upmarket Oxfordshire village of the same name. Its members include David Cameron, Elizabeth Murdoch (daughter of Rupert), Rebekah Brooks (now CEO of Murdoch’s UK newspaper operations) and Rachel Whetstone (former Google director of communications and public policy). The resilience of such intimate ties in the aftermath of the phone-hacking scandal was demonstrated in December 2015, when the Murdochs hosted Cameron, among others, for a Christmas drink. This followed persistent meetings between Murdoch and senior government ministers in the year leading up to the 2015 general election.


Of course, there is nothing legally or perhaps even ethically wrong with politicians having meetings or developing close friendships with media executives. The problematic question concerns the degree to which this kind of interaction – which takes place beyond public scrutiny or participation – yields a trickle-down influence both over media and policy agendas. One of the most striking features of testimony given to the Leveson Inquiry in 2012 by former prime ministers (including close friends of Rupert Murdoch) was the frank admission that their views were affected by, in the words of Tony Blair, ‘how we are treated by them’.


Though the examples above are by no means exhaustive, they paint a picture of a complex network of institutional power, with media, communications and technology players occupying key nodes and playing crucial enabling roles within it.


This does not mean that the ‘club’ functions as an entirely exclusive, cohesive, centralized and coordinated vehicle of elite power. It does not even tell us much about how or to what degree power is mobilized to produce an agenda consensus. But these are all empirical questions that are raised by the emergent media–technology–military–industrial complex. And they are questions that are overlooked by those who assert or imply that the concept of a power elite or ideological hegemony belongs to an outdated ‘control paradigm’ in media studies.


Both activists and researchers must remain vigilant in a world where established media brands still account for the vast majority of news consumption on all platforms; where the peddling of fear-mongering nationalism in much of the commercial press has been exploited by far-right political actors; and where there remain heightened concerns about journalists’ autonomy against the background of austerity, technological disruption and, in Pentagon-speak, ‘the long war’.


[i] Lotan, G. (2011) ‘Data reveals that “occupying” Twitter trending topics is harder than it looks’, Giladlotan.com, 12 October. Available at: http://giladlotan.com/2011/10/data-reveals-that-occupying-twitter-trending-topics-is-harder-than-it-looks/

Luckerson, V. (2015) ‘How Google perfected the Silicon Valley acquisition’, Time.com. Available at: http://time.com/3815612/silicon-valley-acquisition/ (retrieved 2 January 2017).
