The Value-Neutrality Thesis: A Response to Morrow’s Refutation

The Value-Neutrality Thesis asserts that technology is intrinsically value-neutral: any positive or negative consequence of its use can be attributed entirely to the praiseworthy or blameworthy intentions of the user. In this paper, I argue against David Morrow’s refutation of the Value-Neutrality Thesis, primarily by dismantling the first and fifth premises of his argument, which he builds around behavioral changes as responses to changes in incentives. I then employ a device similar to Rawls’ veil of ignorance to reinforce the value neutrality of technology, contending that rather than proving technology value-laden, Morrow’s argument shows only that societal values are reflected in the preferences of technology users, not in any intrinsic value system of technology itself.

In his paper, Morrow argues that the Value-Neutrality Thesis is false: a non-negligible number of technologies can bring about bad consequences even when used by fully knowledgeable, morally decent individuals. His argument rests on five premises:

(1) people respond to incentives, in the aggregate and in the long run;

(2) technologies change people’s incentives by changing how easy it is to do certain things;

(3) technologies change people’s behavior, on average and in the long run, in particular ways;

(4) induced behavioral changes can result in good or bad consequences; and

(5) the changes that technologies cause in people’s behavior can lead to good or bad consequences even in the absence of ignorant, blameworthy, or praiseworthy preferences;

and thus the Value-Neutrality Thesis must be false (Technology Ethics 18). Morrow continues by identifying collective action problems and short-term thinking as two instances supporting the fifth premise, in which morally decent individuals using technology with morally acceptable preferences may still bring about bad consequences (Technology Ethics 19). Collective action problems arise in situations where cooperative group action could achieve some good outcome, yet each individual has a self-interested incentive to act contrary to the group. Short-term thinking is rooted in the phenomenon of discounting, whereby individuals tend to place higher value on the present and near future than on the distant future.

Although Morrow’s premises in his refutation of the Value-Neutrality Thesis seem to progress naturally, closer examination of his explanations reveals that, especially in the first and fifth premises, he makes sweeping claims that disregard differences in individual values and preferences. I take issue first with Morrow’s first premise, in which he argues that, in the aggregate and in the long run, people respond to incentives, and that this principle serves as bedrock for many of the social sciences. While Morrow acknowledges that not everyone will change their behavior, he asserts that, looking at a group of people as a whole, changing people’s incentives will change their behavior (Technology Ethics 19). Although I agree that people respond to incentives, it is critical to recognize that not everyone responds to every incentive, nor do individuals who respond to the same incentive respond in the same manner. Further, I argue that those who respond to incentives do so because of some unique personal desire, and thus we cannot group them together.

Morrow illustrates his first premise with the example of an instructor in a large class who offers extra credit to students who complete a certain section of a reading that takes considerable effort to access (Technology Ethics 19). Now imagine the instructor makes the reading much easier to obtain, say by publishing it online. Morrow claims that in each case, a change in the students’ incentives changes the students’ behavior. Here, though, we must be extraordinarily careful about averaging a group’s response to an incentive. A student in the class who has no interest in completing the extra credit assignment will be completely unaffected by the change in accessibility. While Morrow’s conclusion that, on average, more of the students will do the reading is true, the students swayed by the change in incentives are only those who considered completing the task in the first place. To clarify, a student who completes the assignment after the change in accessibility does not do so simply because the reading was posted online; in either situation, the decision to complete the task stems from a personal, self-interested desire to, in this case, improve one’s own grade. Though these desires are not necessarily blameworthy or praiseworthy, we must recognize them as the source of any change in behavior. Technology and changes in incentives only affect me if I am vulnerable to being affected.

The importance of recognizing deeply ingrained desires as the root of the consequences of technology use applies to short-term thinking and collective action problems as well, the two phenomena Morrow uses to support his fifth premise, though these complicate the matter considerably. In articulating the Value-Neutrality Thesis, we define the users in question as reasoning agents who are fully knowledgeable about the potential consequences of using the technology. On this definition, neither of these instances sufficiently holds in Morrow’s argument.

In the case of short-term thinking, Morrow argues that discounting the future can lead people to act in ways that bring about bad consequences, even if they would prefer otherwise (Technology Ethics 22). However, given that these individuals are fully aware of technology’s role in exacerbating problems of short-term gratification, it seems we can place the fault in their weakness of will and flawed intentions. Though I agree with Morrow’s assessment that technology is made for humans, not saints or supercomputers, I maintain that the vast majority of people have the strength of will to overcome instant gratification when the consequences differ drastically; consider the most basic example: a child who knows to do their homework and study before watching television or playing video games.

In the case of collective action problems, I turn to a device similar to the veil of ignorance, a thought experiment devised by John Rawls for considering how societies should operate. The veil of ignorance is a hypothetical state that promotes impartial decision making: people decide questions such as resource allocation without knowing their own position in society, prompting decisions made from the perspective of the worst-off. When we consider, from behind the veil of ignorance, the ways technology can be used societally, our conclusion depends heavily on the values and norms the society holds. In addressing collective action problems, especially intractable ones in which a single individual’s choices make no difference, Morrow posits that technology influences collective action problems, either exacerbating or reducing them, and thus leads to good or bad consequences even if all users are morally decent people acting on morally decent preferences. While technology does play a role in influencing collective action problems, I maintain that this does not indicate that technology itself is value-laden. Rather, given that the members involved are fully knowledgeable about the consequences of their collective action, the problem lies with the societal norm of deliberate, knowledgeable avoidance of consequences, something that would likely persist regardless of technology’s role. Just as norms, values, and behaviors differ from society to society, the use and resulting consequences of the same technology can differ drastically, an idea that again contrasts starkly with a value-laden view of technology and instead traces the resulting consequences back to user preferences.

Robson, Gregory J., and Jonathan Y. Tsou, editors. Technology Ethics: A Philosophical Introduction and Readings.