nonsensical demarcation

practical ethicists

Technology is not only a cultural object; it's also political.

This might seem confusing; you might ask, how can my toaster be political? And you'd be right to ask, but I mean political in the sense that every single technology has values embedded in its design. It's the reason we have human-centered design, open source software, or privacy by design: these are ways of making technology that put certain values above others.

So, to answer you: yes, your toaster is political; its design has values embedded in it. It might have a minimalist design so that it isn't the center of attention in your kitchen, it might be designed to fail at a certain point so that you have to repair it or buy a new one, or it might be designed so that you can't repair it at all.

Furthermore, technology is not an abstract concept or something out there that doesn't affect us. The toaster, for example, affects how we eat: we are inclined to use it just because we have it, and we buy bread or bagels because it has become part of our routine. Technology, therefore, mediates our relationship with the world in a very significant way.

Not only that, but technologies continuously shape our moral landscape. In 2008, philosopher of technology Peter-Paul Verbeek published an article showing the moral relevance that obstetric ultrasound has in decisions about abortion and medical care. In a later book he summarized this point as follows:

"This technology is not merely a neutral interface between expecting parents and their unborn child: it helps to constitute what this child is for its parents and what the parents are in relation to their child. By revealing the unborn in terms of variables that mark its health condition, like the fold in the nape of the neck of the fetus, ultrasound ‘translates’ the unborn child into a possible patient, congenital diseases into preventable forms of suffering (provided that abortion is an available option) and expecting a child into choosing for a child, also after the conception."

materializing morality

The implication of this way of analyzing reality is that, as tool-makers, the design decisions we make will have far-reaching consequences in the lives of users. Technologies are not brute objects that just happen to exist; rather, they mediate our relationship with the world. And by doing so, they change or co-construct our moral landscape and how we think about our lives. In Verbeek's words:

“Since technologies are inherently moral entities, designers have a seminal role in the eventual technological mediation of moral actions and decisions. Designers are in fact practical ethicists, using matter rather than ideas as a medium of morality”

In a recent paper in the journal Ethical Theory and Moral Practice, Danaher and Sætra developed a taxonomy of six mechanisms of techno-moral change drawn from this emerging field. In essence, they found that technology affects our moral landscape in three main domains: decisional (how we make morally loaded decisions), relational (how we relate to others), and perceptual (how we perceive situations). Their work is valuable because it starts to deepen, in a systematic way, how we can think about our responsibility as tool-makers. Along similar lines, Verbeek developed a "Guidance Ethics" to materialize his ideas. This guide is also valuable, as it sketches a possible way to engage with the question of how to create technologies that are morally sound. Having said that, both bodies of work (Verbeek's and Danaher's) are still abstract in nature.1

internet freedom space

These ideas should fit our space like a glove; the question is how. We talk about values all the time: on the project's website, at events, in the titles of our talks, in reports, when we try to explain what we do to relatives. But we don't normally talk about them in the design process. Sometimes the values are self-evident and a detailed map of features and values would be unnecessary, but I think a framework built on these ideas would be beneficial when we come to a crossroads between features. I'm thinking of those moments when there's an internal debate about the direction the project should take.

On the other hand, there are particular challenges to doing research in this space, and this seems like a rather abstract conversation to have with users. Though to be fair, in the guidance ethics Verbeek talks about discussing effects, which is a lot more concrete, and I think it's possible to construct a guide or framework, using the taxonomy Danaher developed, to open up this discussion with users in a tangible way.

Both approaches would need exploration (internal and external), but it seems like this metaphysical structure would give the team certain capabilities, such as making clearer design decisions by aligning visions, and communicating transparently2 to users why the technology took a certain direction.
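To make this a bit more tangible, here is a minimal sketch of what such a feature-to-value map could look like in practice. Everything in it is hypothetical and only for illustration: the feature names and value tags are invented, and nothing here comes from an actual project; only the three domains (decisional, relational, perceptual) are borrowed from the taxonomy mentioned above.

```python
# Hypothetical sketch: a map from features to the values they promote,
# the values they put in tension, and the domains (from Danaher and
# Sætra's taxonomy) in which they reshape the moral landscape.
# Feature names below are invented for illustration only.

FEATURE_MAP = {
    "bridge_autoselection": {
        "values": ["censorship resistance", "ease of use"],
        "domains": ["decisional"],   # shapes which choices users face
    },
    "connection_telemetry": {
        "values": ["reliability"],
        "domains": ["perceptual"],   # shapes what the team can 'see'
        "tensions": ["privacy"],     # value it may trade away
    },
    "in_app_user_feedback": {
        "values": ["transparency"],
        "domains": ["relational"],   # shapes how users and team relate
    },
}


def values_at_stake(feature_names):
    """Collect the values promoted and the tensions raised by a set of features."""
    promoted, tensions = set(), set()
    for name in feature_names:
        entry = FEATURE_MAP[name]
        promoted.update(entry["values"])
        tensions.update(entry.get("tensions", []))
    return promoted, tensions


if __name__ == "__main__":
    promoted, tensions = values_at_stake(["connection_telemetry", "in_app_user_feedback"])
    print("promoted:", sorted(promoted))
    print("in tension:", sorted(tensions))
```

Even something this crude could serve as a conversation starter in those crossroads moments: which values does each candidate feature promote, which does it put at risk, and in which domain does it act on users.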

bonus track: 'open' ai

Last year, David Gray Widder, Sarah Myers West, and Meredith Whittaker published an article dissecting what 'open' AI really means. Their work matters because it shows the community, and most importantly the policymakers who will eventually regulate the industry, what is really at stake. What is interesting for our discussion of being 'practical ethicists' is precisely how they proceeded to elucidate the confusing use of terms in the industry. It goes something like this:

They start by contextualizing the tensions in defining these terms ('open' and 'AI'), the key actors involved and their goals, and why the discussion matters, since it will define the industry's future. Then they analyze how the technology works and some of the key limitations, problems, and risks that developing it entails. They recount the history of open source technology, showing how deeply connected it is with business and profit rather than with ideals of free knowledge, and they trace how the company OpenAI went from a non-profit to an LP structure. Finally, with all this context in place, they analyze the arguments for and against 'open' AI, realistically demarcating the claims from the marketing and propaganda these businesses engage in.

I believe this paper is an example of everything I've discussed. When we talk about technologies we do so under these metaphysical concepts, and some companies leverage this to claim more than what they actually do, for their own benefit. We are keenly aware of the impact technologies have in the world (in economic terms, but also in social and political ones) and therefore of how important it is to regulate an emerging market that is changing our lives. Thus, mapping their claims against what the technology actually does is an excellent way to clarify what is really at stake.3

  1. Some context is needed: Verbeek's project is much wider, and the guidance is its most practical end. He stands within the longstanding field of ethics and the much newer field of philosophy of technology. He actually used the guidance with the Dutch Ministry of Health when they were developing their COVID app. Finally, his approach sets expert ethicists to the side and brings forth users. This is highly unusual for a philosopher to do; normally it's the ivory tower of experts explaining or condemning a technology as ethical or not. But because his approach begins with mediation, the conversation must be with those who use the technology and about the world it creates for them.

  2. Additionally, it could help call out technologies that claim certain values but whose claims, if one looks into their features and what they actually allow users to do, turn out to be false or overreaching. I'm thinking of two cases: (1) the VPN ecosystem, where this was a problem identified in the 2022 VPN Community Initiative, and (2) the paper on what 'open' AI actually means.

  3. Though, to be fair, Verbeek would still find this incomplete. His approach pushes toward an analysis through mediation, and therefore toward how the technology interacts with a person or a social group. The analysis in the paper does take up the notion that things can be, and are, moralizing agents; it treats the technology as something that constrains or allows certain kinds of experiences. I find both ways of proceeding incredibly helpful, though each opens up a different kind of analysis.