“The mind may be compared to a pan of water. If you place the pan on a level and do not jar it, then the heavy sediment will settle to the bottom and the clear water will collect on top, so that you can see your beard and eyebrows in it and examine the lines of your face.” - Xunzi
In Freud's letter to Einstein in 1932, he said that human instincts are of two kinds: those that conserve and unify, and those that destroy and kill: the well-known Love and Hate, attraction and repulsion. But he was quick to add that they are not good and evil, as "each is as indispensable as its opposite, and all the phenomena of life derive from their activity, whether they work in concert or in opposition." He went on: "If the propensity for war be due to the destructive instinct, we have always its counter-agent... All that produces ties of sentiment between man and man must serve us as war's antidote... All that brings out the significant resemblances between men calls into play this feeling of community, identification, whereon is founded, in large measure, the whole edifice of human society."
I'm reflecting on these polar tendencies in the context of computational social science, the result of the work of people like Alan Turing, Judea Pearl, Alex Pentland, Molly Crockett, and Radhika Nagpal (to name but a few). Pentland, for example, focused on social networks in his paper on promoting cooperation. One might say that these researchers are trying to map out the "social landscape" (to contrast with Sam Harris's book The Moral Landscape). In our society today, despite the social media services available, there are many people with few meaningful connections to others, people who replace social ties with unhealthy addictions, and those who find the thrill of tearing down more appealing than the work of building up.
In a society with many broken parts, we need to chart a clearer course toward healthy relationships. Freud made contributions in this direction (arguably some better than others). And researchers like Elinor Ostrom demonstrated that it takes a great deal of documentation from history and ethnography, not just formal models. Combining detailed data with new analytic tools has now created the potential to extend that effort still further. For example, Iyad Rahwan has described an algorithmic social contract.
In "Rise of the Machines: A Cybernetic History," author Thomas Rid devotes a chapter to one of the interesting and less well-known results of the social upheaval of the '60s and '70s. The psychedelic counterculture incorporated cybernetic ideas, which profoundly affected people like Kevin Kelly and Stewart Brand (of the Whole Earth Catalog) and influenced many who later became Silicon Valley innovators. Is there any similarity between John Lennon's idealistic lyrics in "Imagine" and a hypothetical algorithm for world peace, or, as Freud said in his letter, "a formula for an indirect method of eliminating war"? They all draw their influences from the same cultural trends.
Cultural Genesis and Evolution
Melvin Kranzberg's first law of technology states: "Technology is neither good nor bad; nor is it neutral." I propose a parallel law: "Culture is neither good nor bad; nor is it neutral." There isn't perfect symmetry here, though, since culture includes all of technology but technology doesn't include all of culture. The relationship between the two is nonetheless very strong, and will only grow stronger. Technology has a growing influence on society, some aspects of which are obvious, others nearly imperceptible, but all contribute to shaping our choices and actions. This much is generally well recognized. The implication is that if you control these influences, you can steer the future course of events in the direction of your choosing. This is not as well appreciated, but the potential for direct and often invisible influence at the level of entire societies isn't new. Religious and political propaganda has been around for a very long time, steering cultural evolution in new directions. It was new technology that enabled the first agricultural revolution about 12,500 years ago, directly precipitating large demographic and cultural shifts. What cultural changes will our new cognitive tools precipitate?
We need to reopen the conversation about what it means to create our culture. Who shapes it? Who benefits from it? Why? How could this change? As Yuval Noah Harari observed in his book Homo Deus, our accelerating ability to shape our environment, ourselves, and our societies brings with it a responsibility to do so intelligently. We are cultural engineers, whether we like it or not. We debate this today in the religious and political arenas, but there's more to it.
Cultural evolution is open-ended. Where could it go? If we wanted to take it somewhere, how would we do that? We know there are many wrong ways to intervene. The more interesting question is then: Is there a right way to intervene, to take our culture in the direction that leads toward human flourishing? It would have to be transparent, democratic, and meet the ethical standards society agrees upon. If such intervention is possible, what forms might it take? This is an exercise in generating scenarios, and we've done that. What we haven't done is 1) generate all the possible scenarios, or 2) generate any single scenario in the fine-grained detail needed to show how it would impact the daily activities of any given individual. For example, if I asked, "How would your scenario for a sustainable future impact my life?" would I get a broad-brush response, or would I get the details I'm most interested in? Would it reflect a new culture, and would I be receptive to making changes if I knew what they were? For example, what are the socially sanctioned "default behaviors" when confronted by stress? What options does our culture first suggest: find a distraction, consume food or entertainment, or respond constructively? A change in default behavior patterns toward a more constructive approach to stress and problem solving would require a society configured to support and encourage these changes, not just in its media and public discourse, but in its social and material culture as well.
While it is true that many possible scenarios may lead to further entrenchment of bad policies, there remain open avenues to a more prosperous future. A lot of this comes down to communication and information availability, and so far, persuasion tactics based on the truth haven't been nearly as successful as those based on fictions. People need information that communicates the reality of their situation, and we haven't always succeeded there. From a social epidemiologist's point of view, many people are living in unhealthy conditions, in the broadest sense of that term. We need to move beyond the standard measurements of social outcomes and into measuring individual outcomes. The first question any voter asks is "How will this affect me?" That being the case, we should be able to tell them, and in as much detail as they want. If we can do that, then maybe we can also tell them how it will affect their community, region, state, international relations, and planetary health (within certain confidence levels). That's the direction in which we are headed. This is the potential of fine-grained data paired with counterfactual thinking: to connect the dots in a new way, and to reveal a more attractive future than current local policies, operating procedures, and cultural context permit.
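To make concrete what an individual-level, counterfactual answer to "How will this affect me?" might look like, here is a minimal sketch in Python. It follows the abduction-action-prediction recipe from Pearl's work on counterfactuals, but the structural equation, its coefficients, and the "transit subsidy" policy are all invented for illustration, not taken from any real study.

```python
# Toy structural causal model, in the spirit of Pearl's counterfactuals.
# Every coefficient and variable here is a hypothetical illustration.

def observed_outcome(policy, income, noise):
    """Structural equation: monthly commute cost (in dollars) as a function
    of a transit-subsidy policy level, personal income, and unobserved factors."""
    return 300 - 40 * policy + 0.002 * income + noise

def counterfactual_outcome(individual, new_policy):
    """Answer 'How would this policy affect *me*?' for one person.
    Abduction: recover the unobserved noise term from the factual observation.
    Action: set the policy to its new value.
    Prediction: re-run the structural equation with the recovered noise."""
    noise = individual["observed_cost"] - observed_outcome(
        individual["policy"], individual["income"], 0.0)
    return observed_outcome(new_policy, individual["income"], noise)

me = {"policy": 0, "income": 42_000, "observed_cost": 401.0}
print(counterfactual_outcome(me, new_policy=2))  # projected cost under subsidy level 2
```

The same three-step pattern scales up, in principle, from one person's commute to community- and region-level questions; what changes is the richness of the data and the model behind the structural equations.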
A Beautiful Landscape: "taking no action contrary to nature," or "why wu wei?" (life, culture, aesthetics)
“Broad ways are extremely even, but people are fond of bypaths.” - TTC 53
Peter Corning said that "life is a process with a purpose" (teleonomy), and that purpose is our collective survival enterprise. Does our aesthetic sense mediate our relationship to that purpose? If Barry Lord was right that the kind of art you make and value reflects your culture, then will our culture reflect the kind of art we make and value? Seigen Ishin implied there are two kinds of people who see "mountains as mountains, and waters as waters," the naïve and the wise. And Karl Schroeder said "any sufficiently advanced technology is indistinguishable from nature." All of this, taken together, would seem to reinforce the view that we should take nature as our inspiration.
It is interesting to note that, in the Mūlamadhyamakakārikā, Nagarjuna uses the theory of "dependent arising" (pratitya-samutpada) to demonstrate the futility of metaphysical speculations, that things are neither empty nor not empty. Contrast this with the concept of non-action (wu wei), which is used to demonstrate that one should act without undue effort and let nature take its course. This illustrates the characteristic difference in emphasis between early philosophical thought in India and China, the first focusing on the nature of reality and knowledge, and the second concerning the nature of embodied processes and subjective awareness. As we consider our unfolding cultural evolution, indirectly influencing factors such as these will play a role. The aesthetic qualities rooted in our early past can help guide our future course toward a culture that prioritizes health and adaptability.
Natural, Artifactual, Coevolutionary: Algorithmic art, a narrow application of natural aesthetics to factors influencing our social landscape and evolution
“The action just exhibited is at once thetic, antithetic, and synthetic.” - Johann Fichte
Architecture is considered a form of art. Why? After all, a building needs to provide some kind of shelter from the elements, but it doesn't need to be especially beautiful. Yet we all admit that some buildings are better looking than others. In like manner, I want to suggest we should view the code behind computer algorithms the way we view architecture. In their basic design, algorithms are just functional scaffolding, but because of the social settings in which they perform their function, and the results they produce, there are several examples that I believe can be considered genuinely beautiful.
The first example is Origamizer, an algorithm created by Erik Demaine and Tomohiro Tachi that can tell you how to fold a two-dimensional sheet of paper into any three-dimensional origami polyhedron. You read that right: ANY polyhedron. Want a paper rabbit? No problem. And the algorithm is now freely available as computer software.
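The algorithm itself is intricate, but one of its constraints is easy to compute: paper can fold but not stretch, so the starting sheet must have at least the surface area of the target shape (in practice Origamizer needs considerably more, to hide material in tucked folds). A small Python sketch of that lower bound, which is not Origamizer's own code:

```python
# Illustrative only -- not the Origamizer algorithm. Folding cannot stretch
# paper, so the sheet must be at least as large as the target's surface area.
import numpy as np

def mesh_surface_area(vertices, triangles):
    """Total surface area of a triangle mesh, given vertex coordinates and
    faces as triples of vertex indices."""
    v = np.asarray(vertices, dtype=float)
    return sum(0.5 * np.linalg.norm(np.cross(v[b] - v[a], v[c] - v[a]))
               for a, b, c in triangles)

# Target shape: a regular tetrahedron (4 vertices, 4 triangular faces).
verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

area = mesh_surface_area(verts, faces)
print(area)           # ~13.86 -- total surface area to be covered
print(area ** 0.5)    # ~3.72  -- minimum side length of a square sheet
```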
The second example is Impartial Automatic Redistricting, an algorithm created by Brian Olson, a software engineer in Massachusetts, that can create "optimally compact" congressional districts in each state. Algorithm-based districts make so much intuitive sense that there are now many ways to generate them, like Wendy Cho's algorithm, which can do more or less the same thing.
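Olson's stated criterion is, roughly, to minimize the average distance between each person and the center of their district. As a loose illustration of that idea only, and ignoring the equal-population and contiguity requirements any legal plan must satisfy, a population-weighted k-means loop over block coordinates produces compact clusters:

```python
# A rough illustration of compactness-driven districting: assign blocks to the
# nearest district center, move each center to its population-weighted centroid,
# repeat (essentially weighted k-means). Real plans must also balance population
# and keep districts contiguous, which this toy loop ignores.
import numpy as np

def compact_districts(block_xy, block_pop, n_districts, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = block_xy[rng.choice(len(block_xy), n_districts, replace=False)]
    for _ in range(iters):
        # Distance from every block to every current district center.
        dists = np.linalg.norm(block_xy[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for k in range(n_districts):
            members = assign == k
            if members.any():
                centers[k] = np.average(block_xy[members], axis=0,
                                        weights=block_pop[members])
    return assign, centers

# Toy data: 500 random "blocks" with random populations, split into 3 districts.
rng = np.random.default_rng(1)
xy = rng.random((500, 2))
pop = rng.integers(50, 500, size=500)
assignments, centers = compact_districts(xy, pop, n_districts=3)
print(np.bincount(assignments))   # blocks per district (not population-balanced)
```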
Neither of these examples may be as iconic as the Taj Mahal in India or the Sagrada Família in Barcelona, but I think they qualify as the computer-code equivalent of beautiful architecture. And they deserve wider recognition. Perhaps someday we'll have a phylogeny of algorithms, displayed in museum collections or the equivalent of zoological gardens. Or maybe, in some sense (per Yuval Noah Harari), we already do.
Digital alter egos and the interpersonal utility comparison problem
"Behavior is the mirror in which everyone shows their image." - Johann Wolfgang von GoetheA digital alter ego exists for each one of us, and these are engaged in countless virtual evaluations, product pairings, and scenarios to optimize our engagement with other people, service providers, etc. These digital models can also aid in confronting global issues. Here's how they can help us realize our values and aesthetic sense. Also referred to as a "digital double," they explore many "what if" scenarios that we are prevented from investigating in full due to limited time and resources. Often only one of these scenarios can ever become real. The overwhelming majority of them are destined to become unrealized counterfactual possibilities. Think of them as the latent potential that we only have fleeting glimpses of in the normal course of our lives. Some of this potential is the ability to realize, more fully than we do today, an environmentally and ethically responsible way of life that conforms to our aesthetic aspirations.
At this point I hear several objections. "Since we already have an understanding of our goals for social and environmental health, why do we need a digital alter ego to suggest scenarios we already want? And furthermore, if our so-called 'leader/decision-makers' can't hear the smartest among us (scientists), why would they listen to digital doubles?" Because, as Pedro Domingos and other AI researchers suggest, a digital double would know you better than anyone else. At the upper limit, it would know you better than you know yourself. Consequently, if it can't persuade you, nobody can. By comparison, a scientist is hardly persuasive to those who are opposed, on ideological or tribalistic grounds, to the basic assumptions or conclusions offered. This is the big obstacle to effective science communication: bridging the empathy gap, solving the interpersonal utility comparison problem. A digital double could be the perfect device for finding a common language to unite groups at opposing ends of the political divide. By exploring all counterfactual possibilities, a better consensus can be reached, and in less time. With further iterations, these benefits compound. Here's Domingos describing this future trend that we are already witnessing:
"The next decade is going to be one of accelerating change. Today each company has a little model of you based on just the sliver of your data that it has access to. Netflix has a model to predict your movie tastes based on your movie ratings. Amazon has a model to predict what you're going to buy based on what you've done on their site and so on. But all these little models are quickly coalescing into bigger and bigger ones and soon you'll have a complete 360 degree model of you that learns from all your data and assists you with everything that you're doing in your life, from buying things and making appointments to finding a job or a mate. Our "digital alter egos" will be even more indispensable to us than our smartphones, and the world economy will revolve around them."Our society will become a society of models. Everyone's models will be continually collaborating, competing, and negotiating in cyberspace to determine what happens in the real world. You click on the "find me a job" button on LinkedIn and your model instantly interviews for all the open positions that match your specs at the same time. Another copy of your model can be looking for a car for you, exhaustively researching all the options and haggling with the car dealer so you don't have to. If you're looking for a date your model will go on millions of dates with thousands of other people's models and select the most promising ones to try out in the real world."But your data, and your model, have to be under your control, not owned by some third party that may have a conflict of interest. Sergey Brin says that Google wants to be the third half of your brain, but do you really want part of your brain constantly trying to show you ads? Probably not. We need something different, maybe something like data banks that store your data and use it on your behalf in the same way that regular banks store and invest your money, or maybe we need data unions to even the balance of power between us and large companies in the same way that labor unions even the balance of power between workers and their bosses. And you need to be able to interact with your model, setting its goals, asking it to justify suggestions, telling it where it went wrong and why. All very different from the black boxes that we have today. And finally, as a society we're going to have to decide what kind of society of models we want to have. What's allowed, what's not, how do we make sure that everyone benefits, how do we smooth the transition? There is lots to figure out. If we do there's a bright future where our lives will be happier and more productive. If we don't it'll be a huge missed opportunity. It's in our hands."
Partial list of terms:
無常 (Japanese: mujō, Pali: anicca) meaning: transience, impermanence.
物の哀れ (Japanese: mononoaware) meaning: the pathos of things, an awareness of ephemerality. (Compare with lacrimae rerum, weltschmerz)
侘寂 (Japanese: wabisabi) meaning: traditional Japanese aesthetics - beauty that is "imperfect, impermanent, and incomplete."
無為 (Chinese: wúwéi, Japanese: mui) meaning: non-action
無為而無不為 or 无为而无不为 (Chinese: wúwéi ér wúbù wéi) meaning: "No action is undertaken, and yet nothing is left undone." Tao Te Ching, chapter 37 and 48.
無用の用 (Japanese: muyonoyo) meaning: "without use's use." Zhuangzi's short parables about the use of the useless (Carpenter Shih, Crippled Shu)
もったいない (Japanese: mottainai) meaning: "Do not destroy (or lay waste to) that which is worthy."
道 (Chinese: tao) meaning: As a way of life, it denotes simplicity, spontaneity, tranquility, weakness, and non-action (wúwéi). “Reversion is the action of Tao.”
Additional reading:
Semi-artificial Photosynthesis
Get Ready for your Digital Model by Pedro Domingos