
Reading this tweet by Maciej Ceglowski makes me want to set down a conjecture that I've been entertaining for the last couple of years (in part thanks to having read Maciej's and Kieran's previous work, as well as talking a lot with Marion Fourcade).

The conjecture (and it is no more than a plausible conjecture) is simple, but it straightforwardly contradicts the conventional wisdom that is emerging in Washington DC, and other places too. This wisdom is that China is becoming a kind of all-efficient Technocratic Leviathan thanks to the combination of machine learning and authoritarianism. Authoritarianism has always been plagued with problems of gathering and collating information, and of being sufficiently responsive to its citizens' needs to remain stable. Now, the story goes, a combination of massive data gathering and machine learning will solve the basic authoritarian dilemma. When every transaction that a citizen engages in is recorded by tiny automatons riding on the devices they carry in their hip pockets, when cameras on every corner collect data on who is going where and who is talking to whom, and use facial recognition technology to distinguish ethnicity and identify enemies of the state, a new and far more powerful form of authoritarianism will emerge. Authoritarianism, then, can emerge as a more efficient competitor that can beat democracy at its home game (some fear this; some welcome it).

The theory behind this is one of strength reinforcing strength – the strengths of ubiquitous data gathering and analysis reinforcing the strengths of authoritarian repression to create an unstoppable juggernaut of near-perfectly efficient oppression. Yet there is another story to be told – of weakness reinforcing weakness. Authoritarian states were always particularly prone to the deficiencies identified in James Scott's Seeing Like a State – the desire to make citizens and their doings _legible_ to the state, by standardizing and categorizing them, and reorganizing collective life in simplified ways, for example by remaking cities so that they were not organic structures that emerged from the doings of their citizens, but instead grand chessboards with ordered squares and boulevards, reducing all complexities to a square of planed wood. The grand state bureaucracies that were built to carry out these operations were responsible for multitudes of horrors, but also for the crumbling of the Stalinist state into a Brezhnevian desuetude, where everybody pretended to be carrying on as normal because everybody else was carrying on too. The deficiencies of state action, and its need to reduce the world into something simpler that it could comprehend and act upon, created a kind of feedback loop, in which imperfections of vision and action repeatedly reinforced each other.

So what might a similar analysis say about the union of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One – that while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling out, e.g., particular groups that are regarded as problematic for particular police attention, leading them to be more liable to be arrested and so on), the bias may feed upon itself.
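The self-feeding loop in that last sentence is concrete enough to caricature in code. The sketch below is my own illustration, not anything from the post: two groups with identical real offence rates, a slightly biased historical arrest record, and one crucial invented assumption – that patrols are allocated *superlinearly* (here, by squared shares) toward wherever past arrests cluster. The group names, numbers and allocation rule are all made up for the example.

```python
# Toy model of self-reinforcing bias in data-driven policing.
# Both groups offend at the SAME real rate; group B merely starts
# with a slightly larger historical arrest record.

ARRESTS_PER_CYCLE = 500.0          # total arrests the force makes each cycle
arrests = {"A": 100.0, "B": 110.0} # biased historical record (invented numbers)

start_share_B = arrests["B"] / sum(arrests.values())

for cycle in range(20):
    # "Send the patrols where the arrests are": squaring the counts
    # makes allocation superlinear -- this is the amplifying assumption.
    weights = {g: n ** 2 for g, n in arrests.items()}
    total_w = sum(weights.values())
    for g in arrests:
        # Since real offence rates are equal, each group's new arrests
        # simply track the share of patrol attention it receives.
        arrests[g] += (weights[g] / total_w) * ARRESTS_PER_CYCLE

end_share_B = arrests["B"] / sum(arrests.values())
print(f"Group B's share of recorded arrests: {start_share_B:.1%} -> {end_share_B:.1%}")
```

With proportional (linear) allocation the initial bias would merely persist; it is the superlinear rule – a stand-in for "the algorithm flags the already-flagged" – that makes the gap widen every cycle, which is the feed-upon-itself dynamic the paragraph describes.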

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.

These corrective tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely exist. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to observe and classify it, and by the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, with no ready ways to correct it. This, of course, is likely to be reinforced by the ordinary politics of authoritarianism, and the typical reluctance to correct leaders, even when their policies are leading to disaster. The flawed ideology of the leader (We must all study Comrade Xi thought to discover the truth!) and of the algorithm (machine learning is magic!) may reinforce each other in highly unfortunate ways.
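The contrast between the two regimes can be caricatured as the presence or absence of a negative-feedback term: both amplify error by acting on their own biased outputs, but only one faces serious external correction. The function and its coefficients below are purely illustrative inventions, not estimates of anything real.

```python
# Caricature of the argument as a single feedback equation: each
# policy cycle multiplies the standing error by an amplification
# factor (acting on biased data), damped by a correction factor
# (press, courts, protest, mobilized out-groups).

def error_after(cycles: int, amplify: float, correct: float, start: float = 1.0) -> float:
    err = start
    for _ in range(cycles):
        err = err * amplify * (1.0 - correct)
    return err

# Same amplification in both cases; only the strength of the
# negative feedback differs (all numbers invented for illustration).
open_society = error_after(30, amplify=1.10, correct=0.15)    # net damping
closed_society = error_after(30, amplify=1.10, correct=0.02)  # net growth
print(f"error after 30 cycles: open={open_society:.2f}, closed={closed_society:.2f}")
```

The point of the toy is only that the qualitative outcome – errors that decay versus errors that compound – flips on the size of the correction term, not on the sophistication of the amplifier.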

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency toward bad decision making, and reducing further the possibility of negative feedback that could help correct errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighur today. The second will involve more ordinary self-ramifying errors, which may lead to widespread planning disasters; these will differ from those described in Scott's account of High Modernism in that they are not as immediately visible, but they may also be more pernicious, and more damaging to the political health and viability of the regime, for just that reason.

So, in short, this conjecture would suggest that the conjunction of AI and authoritarianism (has someone coined the term 'aithoritarianism' yet? I'd really prefer not to take the blame) will have more or less the opposite effects of what people expect. It will not be Singapore writ large, and perhaps more brutal. Instead, it will be both more radically monstrous and more radically unstable.

Like all monotheoretic accounts, you should treat this post with some skepticism – political reality is always more complex and muddier than any abstraction. There are surely other effects (another, especially interesting one for big countries such as China, is to relax the assumption that the state is a monolith, and to think about the intersection between machine learning and warring bureaucratic factions within the center, and between the center and periphery). Nevertheless, I think it is plausible that this at least maps one significant set of causal relationships, which may push (in combination with, or against, other structural forces) towards very different outcomes than the conventional wisdom imagines. Comments, elaborations, qualifications and disagreements welcome.


Source: https://crookedtimber.org/2019/11/25/seeing-like-a-finite-state-machine/
