Ben Adida wrote an interesting piece about trust and responsibility concerning software engineers, which also sparked a discussion on Hacker News. Being a philosopher and all, I would like to put in my two cents. One thing he mentions is that we, as a society, put a lot of trust into a very small group of people, namely the people who develop, maintain, and construct the software we all rely on.

Adida compares this trust to the one we put in our physicians:

The trust society places in us is growing so rapidly that the only thing that looks even remotely similar is the trust placed in doctors. Except, most people have a pretty good idea of the trust they’re placing in their doctor, while they have almost no idea that every time they install an app, enter some personal data, or share a private thought in a private electronic conversation, they’re trusting a set of software engineers who have very little in the form of ethical guidelines.

There is no way around this trust. We as people need to trust in software as a whole. The same applies, for example, to every peaceful society. Although we don't do it explicitly, we need to trust that the person passing us on the pavement will not attack us just because she could; otherwise we wouldn't be able to create a society at all. Political philosophers call the state in which people don't have that kind of trust the original condition, or state of nature. (Fellow philosophers, please excuse my inaccuracy here.) Note that this is a special kind of trust. It is not the same as entrusting your personal secrets to your best friend. It is an abstract form of trust. You don't have to (and can't) trust every single developer, admin, or IT supporter, but you need to believe that software, in its entirety, is here to help you and that your information will not be used against you.

Think of all the times you entered your credit card information. If you don't think that this information will be safe, you don't enter it. But that also means that if you don't trust in the safety of your information, you can't use credit cards online. This then goes on: sellers wouldn't be able to sell you things unless they came up with some different kind of payment option, but that would also require you to trust… It goes on and on. In short: payment on the internet would not be possible without a certain kind of trust from society towards the systems that store its information.

In a kind of original condition, we as a society wouldn't be able to use software at all. As long as we can't trust, in this abstract sense, the people (as a whole) who create software (I'm not talking about trusting Developer X and his software here), we can't use software.

In the following paragraphs I wanna talk about two aspects of trust and software engineering. The first concerns end user license agreements and how they relate to the user; here I will make the same point I already made: without trust, we can't use the software. The second relates trust to responsibility. Software engineers have a huge responsibility that most users can't even begin to understand. It is of paramount importance that software engineers use their knowledge and expertise in ethical ways, otherwise our whole system is going to collapse.

End User License Agreements and Trust

A lot has already been said about End User License Agreements (EULAs from now on). I don't intend to dive into those waters again. I want to link them to the abstract form of trust I already talked about. I bet 99% of end users don't read more than two lines of the EULA they're presented with. This, I think, has to do with three different factors: one is sheer laziness, another is an inability to understand the agreement, and the third is that you usually don't have any option other than to accept the EULA. If I want to use my iPhone, I have to accept whatever conditions Apple presents to me. This certainly doesn't have much to do with the usual definition of an agreement, but it is what it is. If I accept, I am then free (not really) to use the device. If I were to actually read through any kind of EULA, Terms of Service, or Policy, I am pretty sure that I wouldn't use the device, sign up for the service, or purchase the app.

What happens here is an exchange of rights: I agree to grant the application, service, or company certain rights in order for them to grant me the right to use their software. This exchange will always be unequal. The basic principle of supply and demand applies: if you want something from someone, it is up to them to decide under what terms they will give it to you.

I don't want to bash software companies, providers, and the like here. First and foremost, most EULAs serve the purpose of ensuring that the licenser will not be liable for things the licensee might do with their product. They are also meant to protect the intellectual property (man, that's a term) of certain individuals and/or companies. The rights taken from the user by the licenser are usually not intended to be used, but they can be. And some companies did use them.

Maybe you too remember the public outcry last year after news surfaced that Facebook had experimented on their users for research purposes. Soon discussions emerged about whether that was an ethical thing to do. Rights-wise there wasn't much of a problem: all users had agreed to being subject to research in the agreement they clicked through when they signed up for the company's services. (Well, actually most didn't. Facebook changed the terms of service later on. But that is a different story.)

What upset most people here was that Facebook actually used the rights the users had granted them. It was a breach of trust. Not an illegal one, since all users had accepted the terms, but most users either didn't know what they had accepted or never thought that a company like Facebook would do such a thing as make them subjects of research. But it did. Legally speaking, Facebook was within its rights (suppose, for the sake of the argument, that this is completely true), but it undermined the trust of its users by acting this way.

The whole endeavour showed us users that our data is neither really safe nor sacred, and it opened up a Pandora's box of trust issues.

Let me conclude and make my first point: EULAs are an exchange of rights. The user gives up a lot of rights because he trusts the licenser not to use every right he entrusts them with. Companies need trusting users, otherwise they wouldn't have any. Users have no choice but to accept every agreement that comes their way if they want to use the service, software, or website. That is in itself not a bad thing, but as soon as trust is lost and users begin to see behind the curtain, the software cannot be used again unless the agreement is changed. And if the agreement is changed in a way that would make the user trust again, chances are the licenser could in some form be held liable or become subject to copyright infringement. That is also not what we want. Or do we?

Responsibility and Trust

I have already argued that software usage presupposes trust in the small group of people that develops, maintains, and creates software. If that is valid, it would be interesting to see how this kind of trust can be maintained (I am assuming that we want to keep on using software as a society).

That's why I would like to relate the issue of trust to responsibility. If my argument holds, we need trust to keep using software, which also means that software engineers have a responsibility towards our societal trust.

At the moment, liability is completely on the user's side. If you're lucky, you may get some kind of guarantee or something like that, but most likely the agreements you signed will hold that the company is not liable for illegal use of their product, not liable if it doesn't work, not subject to restrictions in how they change their terms… the list goes on. Companies wanna be safe. That's okay. Until you start thinking about it.

Most people don't know shit about software. They don't know how it works, who built it, or who maintains it, and honestly, I don't think they give a fuck. But they trust it. They trust that the software won't use the data it collects against them, that their passwords and payment information are safe, and that nobody will know about their dubious browser history.

Yeah, that's stupid. It is also why software engineers have a huge responsibility. People trust them to get things right. The one's responsibility feeds the other's trust. Their responsibility is not to seriously damage the trust people put in them or in their products, respectively.

The responsibility of software engineers is twofold: the first part is not to abuse the licensees of their products (which means not to mistreat the rights they are granted by their users), and the other is to build good and trustworthy software.

I have talked a lot about the first point, but I have neglected the second one. Software engineers have unique skill sets and expertise that actually make them the real 1%. I guess I will write a follow-up on this.

My argument here is simple: as long as the trust is not seriously breached, nothing will happen. Should more trust issues come up, though, the software system will collapse.

Software is such an integral part of who we are and where we are as a global society that we can't imagine it not working or not existing anymore.

Closing remarks

What I wrote here is not as detailed or as fine-grained as it should be, but I hope to have made my point clear. We need to trust software as a system, otherwise it will collapse.

As always, I am happy to hear from you. You can either send me an email or hit me up on Twitter. I'm looking forward to hearing from you and seeing what you think about these issues and problems. Thank you for reading. If you enjoyed it, feel free to like and share below.