From a game-theoretic point of view, commons represent both an opportunity and a problem. Game theory is a mathematical tool for analyzing decision-making in situations with social interdependencies, such as sports, where a team has to decide whether to play defensively or offensively against another team, or economic situations, where companies have to decide whether to enter into partnerships or do business on their own. From this perspective, commons provide win-win opportunities. On the other hand, these opportunities may disappear if the players try to maximize their own narrow self-interest. In the latter case, the common welfare is not achieved; the gain of a single actor is allowed to thwart potentially greater gains for the entire community. This tension helps explain why commons are often called a social dilemma.
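To make this tension concrete, the sketch below works through a standard public goods game, a common game-theoretic model of the commons dilemma. The group size, endowment, and multiplier are illustrative assumptions of mine, not figures from the text: each player keeps whatever she does not contribute, while contributions are multiplied and shared equally, so the group as a whole is best off when everyone contributes, yet each individual earns more by contributing nothing.

```python
# Minimal public goods game with four players; the endowment and multiplier
# are illustrative assumptions, not figures from the text.
ENDOWMENT = 10      # tokens each player starts with
MULTIPLIER = 1.6    # contributions to the common pool are multiplied by this factor

def payoffs(contributions):
    """Each player's payoff: kept tokens plus an equal share of the multiplied pool."""
    pool = sum(contributions) * MULTIPLIER
    share = pool / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

# Everyone cooperates: each player ends up with 16 tokens.
print(payoffs([10, 10, 10, 10]))   # [16.0, 16.0, 16.0, 16.0]

# One player defects: she earns 22 while the cooperators earn only 12,
# illustrating the individual incentive that threatens the commons.
print(payoffs([0, 10, 10, 10]))    # [22.0, 12.0, 12.0, 12.0]

# Everyone defects: all keep just their original 10 tokens,
# worse for everybody than mutual cooperation.
print(payoffs([0, 0, 0, 0]))       # [10.0, 10.0, 10.0, 10.0]
```

In this toy setting, every token a player keeps returns one token to her, while every token she contributes returns only 0.4 tokens to her (1.6 divided among four players), which is why defection pays individually even though it impoverishes the group.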
Trust is an important component of cooperation in the commons because commons are extremely vulnerable. As Elinor Ostrom put it in her 2009 Nobel Prize lecture: “The updated theoretical assumptions of learning and norm-adopting individuals can be used as the foundation for understanding how individuals may gain increased levels of trust in others, leading to more cooperation and higher benefits with feedback mechanisms that reinforce positive or negative learning. It is not only that individuals adopt norms but also that the structure of the situation generates sufficient information about the likely behavior of others to be trustworthy reciprocators who will bear their share of the costs of overcoming a dilemma.”
Trust can often be achieved in small groups, where people know each other, because informal norms (and feelings of guilt and shame in cases of defection) are sufficient to stabilize the common welfare. But what about cases in which groups grow so large that people no longer know each other?
In such cases, institutional structures are extremely important for the maintenance of trust. The history of trade gives some examples of institutions that were invented over the centuries in order to enable investors to benefit while preventing trickery and piracy. In many cases, however, institutions are used to establish and consolidate the power of private interests that do not care about the collective interest of all. It can well be argued that many steps of this long history of trade are mirrored in modern commons and online platforms like eBay (which initially was a commons). Many modern commons not only mirror or reinvent traditional commons, they pioneer new institutional arrangements for building trust and cooperation. Open source software projects and Wikipedia are prominent examples.
In modern digital commons, a small community often starts out enthusiastically with great trust among its members. But after some time and considerable growth, instances of criminality or vandalism may occur. When the first defections (or misunderstandings about presumed defections) arise, the whole system is endangered. Ways of handling defections have to be found. On one hand, sanctions have to be strong enough to credibly threaten those who intend to defect; on the other hand, they cannot be so strong as to signal general mistrust of the whole community, which has succeeded in cooperating at least some of the time. It is not easy to find solutions that are both benevolent and strong, which is why many commons fail.
However, commons can be successfully stabilized if they adhere to key design principles, according to Elinor Ostrom’s landmark research (1990). A closer look at seven of her eight design principles shows that they fulfill the two requirements mentioned earlier: strength and benevolence. Ostrom identified commons as needing (among other things) clear boundaries that separate members from non-members; the adaptation of rules to local conditions and needs; the involvement of members in decision-making; and effective monitoring by members along with graduated sanctions against rule-breakers.
The design principles for effective self-governance of a commons are somewhat dialectical: they recognize the necessity of institutional strength and authority while also recognizing the harm of top-down directives and outside intervention. The same dialectic can be found with respect to benevolence in a commons. The starting point is the conviction that people are willing to cooperate, but this conviction is not applied blindly. Instead, the commons recognizes that people may defect, and appropriate safeguards are adopted.
These considerations may seem to be simple at first glance. However, trust is not only a psychological phenomenon between persons; it is equally an institutional phenomenon. Well-designed institutions foster personal trust, and inadequate institutions may stifle or even destroy personal trust. The “interface” between the social psychology of commons and the functionality of institutions is therefore extremely important. It is analogous to the human-computer interface, which may be more or less ergonomic.
Poorly designed institutions may be functional in principle, but the assertion of top-down authority may provoke resistance to perceived threats to freedom (“reactance” sensu Brehm, 1966). Conversely, too little structural authority and too much benevolence may allow conflicts and defections to escalate, ultimately destroying the commons. Ostrom gives examples of village commons that failed because they did not have clear rules for distributing crops from the commons to individual offspring (as opposed to distribution to families).
To put it bluntly as a first key insight: controls and sanctions are necessary components to protect the integrity of the commons.
From my point of view, it is important that the members of a commons have structural or systemic insight into its game-theoretic social dilemma. They should be aware that there is a potentially tremendous win-win situation, but they should also recognize that any win-win situation is extremely fragile and requires protection against defections.
The history of many failed commons demonstrates that stakeholders often do not see the potential win-win of collective action. They fail to appreciate scientific evidence and political analyses that say, “Take less and you will have more.” They perceive that they will be better off if they defect – and indeed, that a failure to defect could jeopardize their survival, as in cases of very poor fishermen who desperately need food.
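The “take less and you will have more” logic can be made tangible with a toy calculation. The sketch below assumes a fish stock that regrows logistically; the growth rate, carrying capacity, starting stock, and harvest levels are hypothetical numbers chosen only for illustration. A restrained yearly harvest keeps the stock productive and yields roughly twice as much over twenty years as a greedy harvest that collapses the stock within a few seasons.

```python
# Toy fishery with logistic stock growth; all parameters are hypothetical,
# chosen only to illustrate "take less and you will have more."
GROWTH_RATE = 0.5     # intrinsic yearly growth rate of the fish stock
CAPACITY = 1000.0     # carrying capacity of the fishing ground

def total_catch(stock, yearly_harvest, years=20):
    """Total amount caught over the given horizon under a fixed yearly harvest."""
    caught_overall = 0.0
    for _ in range(years):
        stock += GROWTH_RATE * stock * (1.0 - stock / CAPACITY)  # natural regrowth
        caught = min(yearly_harvest, stock)                      # cannot take more than exists
        stock -= caught
        caught_overall += caught
    return caught_overall

# Restrained harvesting keeps the stock productive: roughly 2,000 units in total.
print(total_catch(stock=500.0, yearly_harvest=100.0))

# Greedy harvesting collapses the stock within a few years: roughly 1,000 units in total.
print(total_catch(stock=500.0, yearly_harvest=200.0))
```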
This leads to a second key insight: Commons can be successfully maintained only if stakeholders have substantial insight into a potential win-win constellation.
This is not only a cognitive problem, but also a problem of emotionally feeling the relevance of commoning as a solution for their circumstances.
The history of many modern commons, like open source projects, started with fascination about the win-win that the community achieves through collaboration. Wikipedia, for instance, started with the enthusiasm that “we,” i.e., anybody in the world who wants to participate, “write our encyclopedia.” In this case there was no lack of psychological commitment to a potential win-win, but there was a reluctance to accept certain institutional structures or to impose sanctions and controls. This reluctance gradually disappeared as vandalism of the website and cheating in editorial submissions, e.g., self-serving content, increased. At first glance, the idea of rules and sanctions may seem to contradict the idea of an open source project. In fact, such things are what guarantee its survival. Controls and sanctions are a necessary component of successful commons.
The point of rules, sanctions and member participation is to engender trust, a social phenomenon that is both psychological and institutional. Understanding and designing successful commons requires a keen consideration of the interplay between psychology and institutions, or what might be called “institutional ergonomics.”
Martin Beckenkamp (Germany) is an environmental economics psychologist. He teaches at Cologne University and the BiTS Iserlohn and does his research at the Max Planck Institute for Research on Collective Goods in Bonn, currently on a project about biodiversity viewed as a social dilemma. He lives in Bonn.
REFERENCES
- Brehm, Jack W. 1966. A Theory of Psychological Reactance. Academic Press, New York.
- Ostrom, Elinor. 1990. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, Cambridge, U.K.
- —————. 2009. “Beyond Markets and States: Polycentric Governance of Complex Economic Systems,” Nobel Prize lecture, available at http://www.nobelprize.org/nobel_prizes/economics/laureates/2009/ostrom-l….
First published in Wealthofthecommons.org
Licensed under a Creative Commons Attribution 3.0 License