In (Stochastic) Search of a Fairer Alife - A Digest

Authors: Dmitriy Volinskiy, Lana Cuthbertson, Omid Ardakanian 

Dmitriy and Lana are, respectively, Data Scientist and Director of Customer Experience Strategy with ATB Financial; Omid is Assistant Professor in the Department of Computing Science at the University of Alberta. A more formal version of this article was presented at the NIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services: the Impact of Fairness, Explainability, Accuracy, and Privacy, held in Montréal on December 7, 2018.

Once confined to labs and campuses, Artificial Intelligence (AI) is now all but mainstream thanks to the arrival of the age of dirt-cheap, high-performance computing. Chances are that you’re sitting three feet away from the stainless steel beauty of your new AI-enabled fridge, or that you no longer adjust your thermostat manually because machine learning does an admirable job controlling it for you. Or that your espresso maker’s chatbot has just tweeted that it hates the coffee blend you recently bought... Okay, this last one was a bit of a stretch.

When a major disruptive technology like AI rolls into the collective consciousness, that consciousness tends to re-balance itself. This may reinvigorate the debate about the true meaning and scope of fundamental values in view of the innovation. Can or should machine learning learn to be empathetic? Can a decision affecting human life be entrusted to alife, a piece of life artificial? Is there a mathematical foundation for fairness?

Ironically, making social values the subject of scientific inquiry has been a challenge primarily in the social sciences, particularly in economics. Most utility-theoretic, risk, and non-cooperative game models assume self-interest only. The pioneering behavioral economics research of the 1980s helped economists broaden their horizons: cooperative reciprocity was introduced, and various decision criteria featuring social considerations were proposed. Yet, unlike AI, fairness still has a long way to go to become mainstream.

Economics or not, a conventional study would look into the properties of a particular fairness-inducing decision criterion: a value function, an allocative mechanism, a twist to a classical game, or similar, through the lens of algebra. But more general questions that a policy-maker or a business leader may ask would remain unanswered. Would a society of a certain fair design grow and prosper? Would adopting or eliminating a certain business practice help us preempt future regulation? Enter the realm of Agent-based Computational Economics (ACE).

ACE is the computational study of economic processes modeled as dynamic systems of interacting alife agents. The agents need not literally be individuals or businesses: an agent is primarily a behavioral rule, so agents can be biological entities, physical entities, hierarchical structures, or even nodes in a decision process.
As rich and flexible an interpretive platform as it is, ACE has not yet been widely adopted for studies of social system features. So let’s use it to try and help fill the gap:

Let’s set up a simple alife society with some fairness features to be investigated, and then use the mechanics of evolutionary computation — an evolution-inspired trial-and-error type of search — to get some practical insight into what living in such a society would look like. 
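For concreteness, here is a bare-bones sketch of that kind of search. Everything in it is illustrative rather than our exact implementation: simulate_society is a hypothetical stand-in for running one candidate society forward and scoring how it fared, and the mutate-and-select loop is the generic evolutionary pattern.

```python
import random

def evolve(simulate_society, init_params, generations=100, pop_size=20, sigma=0.1):
    """Evolution-inspired trial-and-error search over society parameters."""
    population = [list(init_params) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the quarter of candidate parameter sets that score best.
        ranked = sorted(population, key=simulate_society, reverse=True)
        parents = ranked[: max(1, pop_size // 4)]
        # Variation: refill the population with Gaussian-mutated copies of parents.
        population = [
            [g + random.gauss(0.0, sigma) for g in random.choice(parents)]
            for _ in range(pop_size)
        ]
    return max(population, key=simulate_society)
```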

So we do. The idea of a fair alife society was born circa 2010. It, and the hefty chunk of code accompanying it, had since then been living a nomad’s life, moving between spare time and the back burner. But the time has come, and we now have a society of agents. The agents derive utility, a quantifiable pleasure in life, from their own consumption and leisure time, as well as those of their offspring. The parents decide how many children to have; the children are generally modelled after their parents. All agents are mortal, and the more an agent works (that is, the less leisure time they accumulate over their lifetime), the higher their chance of dying becomes.
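A minimal sketch of such an agent, under assumptions of our own, might look as follows; the Cobb-Douglas utility form, the one-unit time budget, and the hazard-rate constants are illustrative choices, not the study’s exact specification.

```python
import random

class Agent:
    def __init__(self, consumption_weight: float):
        self.alpha = consumption_weight      # taste for consumption vs. leisure
        self.lifetime_leisure = 0.0          # leisure accumulated over the lifetime
        self.periods_lived = 0

    def utility(self, consumption: float, leisure: float) -> float:
        # Pleasure in life from own consumption and leisure; in the full model,
        # the consumption and leisure of one's offspring would enter as well.
        return consumption ** self.alpha * leisure ** (1 - self.alpha)

    def live_one_period(self, labor: float) -> bool:
        """Spend `labor` of a one-unit time budget working; False means death."""
        leisure = 1.0 - labor
        self.lifetime_leisure += leisure
        self.periods_lived += 1
        # The less leisure accumulated on average, the higher the death hazard.
        avg_leisure = self.lifetime_leisure / self.periods_lived
        death_prob = 0.05 + 0.2 * (1.0 - avg_leisure)   # illustrative constants
        return random.random() > death_prob
```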

Let’s now define four growth strategies for the newly created society, each representing a certain fairness feature in either production or allocation, or both (a schematic of this design space follows the list):

  • {Strategy 0} Each agent maximizes their own utility, assuming a myopic contribution to the society's production function. This is the least “socialist” strategy and the baseline.
  • {Strategy A} We optimize the utility of a representative agent; however, the production gets optimized globally, by a “central planner” who determines everyone's contribution to the society's production function.
  • {Strategy B} Each agent assumes a myopic contribution to the society's production function; however, we choose to maximize the minimum utility in the society.
  • {Strategy Ab} We maximize the minimum utility in the society; production gets optimized by the central planner, who chooses everyone's input. This is the most “socialist” strategy of them all.
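The four strategies form a 2×2 design: who sets the labor inputs (each agent myopically, or a central planner) crossed with what gets maximized (individual or representative utility, or the minimum utility). A compact sketch, with names of our own choosing and the individual/representative objective collapsed into a mean for brevity:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Strategy:
    planner: bool   # True: a central planner sets every agent's labor input
    maximin: bool   # True: maximize the minimum utility in the society

STRATEGIES = {
    "0":  Strategy(planner=False, maximin=False),  # myopic inputs, own utility
    "A":  Strategy(planner=True,  maximin=False),  # planned inputs, representative utility
    "B":  Strategy(planner=False, maximin=True),   # myopic inputs, maximin utility
    "Ab": Strategy(planner=True,  maximin=True),   # planned inputs, maximin utility
}

def objective(strategy: Strategy, utilities: list) -> float:
    """The quantity a candidate allocation is scored on under each strategy."""
    return min(utilities) if strategy.maximin else sum(utilities) / len(utilities)
```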

So how did our toy societies do? First and foremost, the least socially-oriented Strategy 0 has produced societies featuring modest economic growth of 2.5% and by far the best level of societal well-being. The growth has been very sustainable, with recessions almost non-existent and very few failed societies (i.e., societies that failed to produce another generation of agents before the simulation limit was reached). Population has tended to grow linearly.

Strategy A has shown results reminiscent of the Asian Tigers of the past century. Economic growth has certainly been at the Tiger level of 6%, and many societies have experienced population booms. However, the quality of life under Strategy A is considerably lower than under Strategy 0. The biggest contrast is mortality, which has increased by 56%. The cause is the consistently higher labor input demanded by the more optimal “central planner”: by construction, a lifelong lack of leisure time increases an agent’s mortality, and the 56% increase is a testimony to that. Also noteworthy are the 14% increase in the variability of consumption and the higher chances of recession and society failure.

In contrast, Strategies B and Ab have both had a very poor showing: growth nil or negative, agent mortality high, recessions and society failures abundant. Why does this happen, given that maximizing the minimum utility in a society seems like a reasonable way to make the society better off? Sustaining a society under these strategies requires an extraordinary amount of labor input to the economy from all of the agents, in order to raise the utility of a relatively few members who place very little weight on consumption and derive most of their utility from leisure. Producing unnecessarily high amounts of output drives agent mortality up; the surviving agents tend to come mostly from the low-utility group, and the mating of like with like to create offspring then plays its role. The next generation is already more like the low-utility group, and the process thus perpetuates itself.
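The arithmetic behind this trap is worth making explicit. Under a Cobb-Douglas-style utility (again an assumed form, not necessarily the study’s exact one), an agent with consumption weight alpha sees utility scale as consumption raised to the power alpha, so lifting a low-weight agent’s utility through extra output alone is brutally expensive:

```python
# With utility u = c**alpha (leisure held fixed), multiplying u by a factor k
# requires multiplying consumption by k**(1/alpha). Alpha values are illustrative.
for alpha in (0.7, 0.3, 0.1):
    factor = 2 ** (1 / alpha)
    print(f"alpha={alpha}: doubling utility needs {factor:,.0f}x the consumption")
# alpha=0.7: doubling utility needs 3x the consumption
# alpha=0.3: doubling utility needs 10x the consumption
# alpha=0.1: doubling utility needs 1,024x the consumption
```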

Curiously, even though this study was never meant to be an exercise in political economy, the present discussion of results certainly looks like one. A critic may note, fairly, that the way the experiment was set up may have biased it against the presumably fairer societies, and that the findings are only valid for the specific setup and cannot be generalized in principle. We wholeheartedly agree, noting in turn that providing a neoliberal exposé on socialist planning has never been the purpose, or even a purpose, of this study.

Economic and other social interactions are far too complex and diverse for a single model to cover every situation or context. Models can help us understand the nature of general trends in society; models can help investigate specific aspects of human behavior and cognition. But in between these extremes lies a wide plateau of problems for which there is normally neither theory nor empirical data. As we already alluded to in the introduction, ACE-enabled tools are not a mere academic curiosity: ACE has good practical uses in both business and government. Considering building a municipal pool? Designing a new lending facility? Upgrading a wireless network or a network of retail locations? Test-driving the options in an artificial environment, even one as simple and basic as the one we used, can provide the much-needed quantitative analysis and help uncover potentially costly “surprises”. This is what our little excursion has aimed to demonstrate.
