Digitalisation of emotion?

By Roland Benedikter, December 2017

In January 2017, the digitalisation of emotion was discussed at Stanford University, a global leader in information technology. Researchers presented the imminent amalgamation of emotion with technology as unavoidable – if not necessarily desirable.


Companies and researchers are working with all their might, on the one hand, on developing computers with emotions and, on the other, computerising human feelings. Both developments are intended to reinforce one another and, in the best case, combine. Billions are being invested to short-circuit the technical, economic and human futures and thus to achieve so-called convergence between humans and machines. Jonathan Gratch, director for virtual human research at the Institute for Creative Technologies of the University of Southern California, said the following about the new human-technology hybrid field of “affective computing”: “Can a machine understand human emotions? For what purpose? And can a machine itself ‘have’ emotions? What effect would that have on the humans that interact with it?”

The applied focus of these questions is unmistakable, since they arise “in the context of very different domains, including medicine, health services, economic decision-making and the training of interpersonal skills.” The same applied, he said, to the practical implications of (still to be developed) “human computers”, of computer-mediated interaction and of human-robot interaction. In total, according to Gratch, this necessitated an interdisciplinary partnership between the social, human and computer sciences.

Like many others by now, Gratch attempts to develop computer models of cognitive and social processes as well as of the social human being. The long-term goal is to give computers emotions and, above all, conversely to computerise human emotions – in order to be able to store, investigate, copy and, finally, sell qualitative experiences. Imagine, these researchers say, that we could store the inner qualitative experience of emotion in a virtual, non-biological or hybrid-biological substrate – by means of brain implants or other direct connections between computers and human brains, such as brain-computer interfaces (BCIs) or brain-machine interfaces (BMIs), which are already becoming standard in many fields of application – and pass it on to others! That would be the transaction of our lives, in the full sense of the word.

The goal of such research is not primarily to deepen our understanding of humans by obtaining an insight into their emotions but to merge human feelings with artificial, mainly technological “agents” into “multi-agent systems”. Here the experience of ego-hood, which empirically precedes every emotion in humans in the process of reality, is significantly neglected or its importance with regard to the overall event of human emotion is wholly ignored.

Pervasive computing

The Affective Computing group at the MIT Media Lab in Cambridge, Massachusetts, led by Rosalind Picard, is considered the leader in the associated dual project of “computerising emotion” and “developing computer emotions”. In the digitalisation of emotion, Picard is expressly concerned with separating emotion from thinking. In her opinion, “just because every living intelligent system we know of has emotion does not mean that intelligence requires emotion. Although people are the most intelligent systems we know, and people’s emotion appears to play a vital role in regulating and guiding intelligence, it does not mean there might not be a better way to implement some of these goals in machines. [...] There may exist a kind of alien intelligent living system, something we have never encountered, which achieves its intelligence without having anything like emotion. Although humans are the most marvellous example of intelligence we have, and we wish to build systems that are natural for humans to understand, these reasons for building human-like systems should not limit us to thinking only of human abilities.”

In other words, the computerisation of emotion is not primarily intended for humans. And Picard’s second statement: human attributes are to be deployed and implemented in non-human form as well. This crossing point, at which human attributes are mechanised, is called “pervasive computing” in current science. The anti-humanistic character of the aim to digitalise emotion is clearly evident here: Picard transfers human attributes into a non-human setting by seeking to separate thinking from emotion – and thereby dehumanises both.

When Madhumita Murgia – like many other superficial current commentators – takes it for granted in the London newspaper The Telegraph of 15 January 2016 that “’emotional machines’ are about to take over our lives”, this does not mean that the emotions of these machines are human emotions. Nor does it rule out that they are a mere pretext or appearance, designed to give humans the “feeling” that they continue to live in a “natural” world. As conceived by researchers such as Picard, they are rather part of an “interaction design”. The aim is to replace human communication with a machine logic which “takes over” the public space and the world we live in.

The actual human element, which consists precisely in the unity of thinking, feeling and will (and in the capacity of this unity, in the ego process, for emotional thoughts and “thinking feelings”), is ignored here and artificially picked apart. The negatively tinged character of this project as a whole is also revealed in the much-quoted study Beyond the basic emotions: What should affective computing compute? by Sidney D’Mello and Rafael Calvo. Reflecting many others who today are professionally engaged in “researching the emotions”, they write: “One of the primary goals of Affective Computing (AC) is to develop computer interfaces that automatically detect and respond to users' emotions. Despite significant progress, ‘basic emotions’ (e.g., anger, disgust, sadness) have been emphasized in AC at the expense of other non-basic emotions. [...] The results indicate that engagement, boredom, confusion, and frustration (all non-basic emotions) occurred at five times the rate of basic emotions after generalizing across tasks, interfaces, and methodologies. Affective Computing will have to take account of that.” It is striking that the “basic” and “non-basic” emotions listed here are all negative emotions. There is not a trace of positive ones such as affection, selflessness, the voice of conscience or, indeed, love. These would, after all, demand the activity of the I.
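What “automatic detection” typically amounts to in practice can be made concrete with a deliberately crude sketch: a classifier that labels observable surface signals – here, words in a text – with emotion categories, without any access to the inner experience discussed above. The word lists and function below are purely illustrative assumptions, not the method of any real affective-computing system.

```python
# Toy illustration of surface-level "emotion detection" by keyword matching.
# Hypothetical sketch only: it labels observable signals (words) and never
# touches the first-person experience of feeling.

EMOTION_KEYWORDS = {
    # "basic" emotions emphasised in early affective computing
    "anger":       {"furious", "angry", "outraged"},
    "sadness":     {"sad", "miserable", "tearful"},
    # "non-basic" emotions D'Mello and Calvo found far more frequent
    "confusion":   {"confused", "unclear", "lost"},
    "frustration": {"stuck", "frustrated", "annoyed"},
    "boredom":     {"bored", "tedious", "dull"},
}

def detect_emotions(text: str) -> list[str]:
    """Return every emotion label whose keywords appear in the text."""
    words = set(text.lower().split())
    return sorted(label for label, keywords in EMOTION_KEYWORDS.items()
                  if words & keywords)

print(detect_emotions("I am so confused and frustrated by this interface"))
```

However sophisticated real systems are (statistical models over faces, voices or physiology rather than word lists), the structure is the same: an external signal is mapped onto a finite set of labels – which is precisely the reduction the essay criticises.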

Human emotions – target for hackers?

This development, seen as a whole, gives little cause for optimism. For it opens up an incalculable number not only of (commercially) wanted application opportunities but also of unwanted ones, such as new forms of hacking of human emotions. This threat is no fiction but apparently very concrete. Thus Betsy Cooper, executive director, and Steve Weber, faculty director, of the Berkeley Centre for Long-Term Cybersecurity (CLTC) at the University of California, also presented their predictions at Stanford in January: “What will be the state of digital security in five and 10 years? Will it be a ‘Wild West’ where every person and organisation must fight to protect their own personal data? Will the Internet of Things advance so much into our homes and cities that everyone – at all times – is under surveillance? Are sensors going to be smart enough to determine and predict human feelings – opening the door to cybercriminals hacking human emotion?” In the seminar both researchers affirmed this outlook almost without qualification. Note that, according to Cooper and Weber, it is no longer just a case of sensors identifying human feelings using “learning” mathematical algorithms: they can predict and thus anticipate them. The “anticipation” of feelings is one of the most worrying developments in the attempts to digitalise human emotion.

The trend to merge humans and machines in such a way that both form a “higher” unity – the idea being that current humans are to be taken beyond themselves towards a “higher” state – is called “transhumanism”. The outlook of current “transhumanist” human-machine transformation consists mainly in the development of artificial intelligence: interacting “intelligent” machines designed to be self-learning, and a new kind of economic mechanism for exploiting basic human characteristics. “Digitalised emotions” humanise the machines, according to the propaganda of the researchers seeking funding for this work. In reality, however, they dehumanise humans, because they turn them into guinea pigs and into providers of resources and materials for machines.

Today’s global strategists and transhumanist scientists have long moved beyond the question of whether emotions can be “digitalised” at all, or whether that should happen. The main concern is the subsequent benefit: whether and, if so, in what way emotions can be implemented in applications in digitalised form. The research field which today – with great political and social significance for the future – is called affective computing itself apparently has no emotions, for it fails completely to notice their human value.

The digitalisation of feelings is not just a side-effect but a central building block of the transhumanist revolution – in other words, of the campaign by international groups from technology, business and politics to move beyond human beings in their current form. But if that project succeeds, what will the societal, the individual and, above all, the human consequences be?

Underestimation of emotion as a basic human characteristic

The full implications of this development only become apparent if the approach to digitalise emotion is linked with other “transhumanist” developments which are increasingly characterising our time and some of which have already shaped it. Among the most important trends, the futurist Gray Scott in his contribution Seven Emerging Technologies That Will Change Our World Forever also cites alongside transhumanism the development of many different kinds of implant to artificially “feed” the human brain with “desired” experiences.

Implanted feelings, in so far as they are technically feasible, would then stand alongside the digitalisation of health, the planned widespread introduction of robotics in education and healthcare, and also the rapid progress in the development of machines which can see and hear through self-learning. The concept of “learning” is prudently never precisely defined here; it remains extremely broad and vague in order to avoid entanglement in contradictions.

Even today many users of so-called “smartphones” already speak with “Siri”, an artificial-intelligence assistant, as if she were a “real” counterpart. But that is only one side of the coin. The other is the trend to connect the human nervous and sensory system directly with machines, such as by means of cochlear implants, visual prosthetic devices and bionic hands with “feeling”, which, according to leading scientific journals, have achieved a breakthrough in recent years.

No wonder that, as reported in a BBC news item on 20 March 2017 quoting the Edinburgh University professor of robotics, Sethu Vijayakumar, we are at the beginning of a process which will transform how we live and work over the next two decades. "All of this confluence of robotics, AI, social network systems and knowledge sharing is driving a huge, new revolution," says Vijayakumar. “First came the Internet, then the Internet of Things. You can think of this research as giving those [immaterial] things arms and legs. [...] We have to invest in that [...] because if we don't do it here somebody else will do it and we'll be playing catch up."

The same old story: if we don’t do it, someone else will. Many of us thought that such “arguments” had long since been exposed as primitive and thus no longer acceptable, but clearly this is not the case. For those driving this alleged “great revolution”, the emotions are the field in which the transhumanist idea of overcoming humans as they are now – through extending and “exceeding” them, up to and including machine-human hybrid beings – is proving most promising: very pragmatic, but with potentially profound effects on our current understanding of the nature of humans and society.

What is often called human-machine interaction in Germany (among others by the Innovation Dialogue of the German government under Chancellor Angela Merkel, actually a very balanced and thoughtful initiative) is thus – whether intentionally or not – heading, in the greater international context, for human-machine convergence: from interaction between human and machine to their merger in hybrid forms, of which no one yet knows what they will be or in which direction they could develop. Research into the digitalisation of human emotion serves as the peg and motor for this.

In all of this, the actual expertise in and insight into the subject of emotion is, however, paradoxically left out in the cold. Normally emotions are experienced by their subjects as something that belongs to the most intimate part of their own I – although, strictly speaking, they are subordinate to our sense of I (that is, how we experience egohood).

Within the very general and diffuse talk of “emotions” in today’s discourse, at least four different aspects can be distinguished, which at the same time represent a qualitative gradation in relation to the I: emotion (from outside), sentiment (from inside), feeling (I-quality) and higher perception (the subjective-objective experience of the most individual as the most general). All four are “feelings”, but very different in their reality and effects; they manifest essential qualitative, if fluid, dimensions of “feeling”, and differentiating between them is crucial for understanding the humanity in the human being.

Yet today’s researchers appear to be unfamiliar even with such a simple, well-known and, above all, ever-present differentiation in experience. Primarily, however, it is the now globalised societal mechanism of technological, economic and politico-cultural interests informing the life of these researchers which lacks such familiarity – or else such differentiation is regarded as completely unimportant in the growing global business with the human being.

In today’s research on the digitalisation of the feelings, everything is reduced to the lowest level: emotion. This threatens to dehumanise the feelings – both outside humans, through false feelings in machines and computers, and in humans themselves, through devaluation and virtualisation: their transformation into objects of barter and purchase. If, under the influence of this research, young people – such as many Stanford students today – are starting to see their feelings as objects of barter and as artificial artefacts which can be generated just as well by computers and machines, then it is the privilege, honour and duty of education to counter this – and prove the opposite.

What is feeling?

The good news in all of this is that most of the researchers in this field clearly do not know what feeling is and what they are trying to “transform” into transhuman containers through technology. That protects the sphere of human feelings at stake here, at least to some degree. The question remains, however, for how long – and, above all, whether such lack of knowledge won’t have all the more damaging consequences through intrusive technologies such as brain implants, the less the inner dimensions of humans are understood.

Where is the sense of perspective? In the years to come, an art of education for our time will have to focus particularly on human feeling in view of the advances of the natural sciences and technology. What is it? How is it connected with the individual self? And why is it – like the I which supports, pervades and overarches it – an inviolable part of the dignity of the human being (which is totally ignored by researchers today, thereby also undermining the basis of the international liberal political order)? How is human feeling different from the possible rational “self-reference” of future “rational”, partially self-learning machines, which transhumanist circles anticipate will develop a kind of “singularity” in 2045-2050 – that is to say, a kind of self-reference and thus allegedly also a kind of “self-awareness”, in the sense of a combination of memory with anticipatory behaviour?

These are questions which we will have to tackle in the coming years. In this context it is not just a matter of defending a humanistic art of education which takes the human being as its measure but possibly also of a historic opportunity self-confidently to confront the new “human technologies” and the “machine human” they aspire to and thus – taking the detour of negativity, the power which “always wills the evil yet always works the good” (Goethe, Faust I) – help a more profound image of the human being to break through.

About the author: Dr. Roland Benedikter is a global futures scholar at the European Academy for Applied Research in Bozen/Bolzano, research professor of multi-disciplinary political analysis at the Willy Brandt Centre of the University of Wroclaw, research affiliate at the Global Studies Division of Stanford University (SGS) and affiliate scholar at the Institute for Ethics and Emerging Technologies, Hartford, Connecticut.

More: Rosalind Picard, Affective Computing (1995).