Emotions in Game AI

In the course of my graduate studies, I looked into how game AI might be augmented with emotions to enhance player engagement with virtual characters. To summarize my results, I wrote a paper that briefly reviews previous research in the field and presents my own concept for an architecture to model an emotion system.
There is no reference implementation yet, so the concept remains hypothetical.

The paper

Preview of the paper | Download

Submission to CIG 2017

The initial version was written in September 2016 as part of one of my courses. In early 2017, I revised the paper and decided to submit it to a journal or conference to see what the community had to say about it. In May, I submitted the paper to be considered for the 2017 Conference on Computational Intelligence and Games (CIG). Although the paper was not accepted, getting feedback from experts in the field was very helpful. Below you can read what the assigned reviewers had to say about its content.


The review was single-blind: the reviewers were anonymized, but they knew my name. Personally, I was expecting a double-blind review. Although some effects based on the author's prestige have been observed in single-blind reviews, I guess there are arguments to be made for either system. To reach a decision after the reviews are in, each evaluation is given a score: negative towards rejection, positive towards acceptance. My scores summed to a total of zero.

Reviewer #1 (Score: -2)

This paper describes an agent architecture that essentially is a large conglomerate of components like many known agent architectures (although missing components like a learner, relying instead only on memory without analysing it). There is no report on any implementation of this architecture and therefore naturally also no evaluation of its usefulness.

I have many problems with the architecture and the paper. First, the number of citations of non-peer-reviewed "sources" is the largest I have ever seen in a paper (and this includes full papers). Just looking at the introduction reveals viewpoints that for sure are not shared by the AI community (at least by anyone who ever looked at automated theorem proving) and, rather frankly, made me question whether I should continue reading this paper. Since it is my duty to continue, I did, and was "rewarded" by many other pieces that were never seriously evaluated (like [23] and [24]). On the other side, there are many (real) papers that would have been of relevance that are not mentioned. An example is

Donaldson T., Park A., Lin I.L. (2004) Emotional Pathfinding. In: Tawfik A.Y., Goodwin S.D. (eds) Advances in Artificial Intelligence. AI 2004. Lecture Notes in Computer Science, vol 3060. Springer, Berlin, Heidelberg.

which represents a much more reasonable way to integrate emotions into the decision making of an agent.

This highlights another big problem of the paper: the title claims that the paper is about moral behavior, but it addresses morality only as a component of the architecture that is mentioned in one sentence. There is much more fuss about emotions (hence my selection of the above citation). I also do not see a precise description of the utility system (by the way, utility is usually at the core of rational decision making, but there are many different functions that can be used to measure utility, so a precise definition is always needed when this term is used). And, naturally, there was no exploring of anything going on in this paper. So, the title is highly misleading.

If we add to that the obvious problems with citations in the text (often the placeholder "Author" is used), then this is a paper that should not be accepted, even as a poster.

Reviewer #2 (Score: +2)

This paper outlines a framework to create behaviour, particularly moral behaviour, with a model that takes context, experience, emotions and relationships into account. The work makes reasonable references back to appropriate literature, and provides a sketch of how such a framework should be set up. Figure 1 looks understandable, but the brevity of the paper results in a lack of implementation details (which is to be expected from a short paper).

It is a bit unclear if this framework is a design proposal, or has actually been implemented. It would definitely be nice to see something like that in action, or to interact with it. It would be interesting to hear a talk about this.

I am a bit doubtful, though, if this kind of approach can overcome the framing problem. There is a lot of talk about context here, and emotions and relationships, but are these actually generically grounded in experience, or are they all based on predefined rules? In other words, is there a list of situations where the agent feels anger, a list of criteria, or is this something that somehow derives from its perception? The question basically comes back to how this approach would deal with previously unseen experience, or an interaction not anticipated by the game designer. But this is more a comment than a critique of the current paper, as this seems to be a rather general problem.

Reviewer #3 (Score: +1)

This is a solid conference paper. The paper is well-written. The work is rather incremental. Comparison against other similar works is limited. The paper is borderline to weak accept and thus I propose weak accept.

Reviewer #4 (Score: -1)

This paper presents a very generic agent architecture that includes an affective layer. The paper clearly attempts to do a whole lot of different things: the related work, while extensive for a short paper, covers a number of different areas. It is not entirely clear what specific problem this architecture attempts to solve, or which new capability it tries to provide that is not covered by existing architectures. On top of this, it seems that the architecture is not implemented yet, and certainly not evaluated. While short papers can report on work that is not quite finished yet, it would be expected that the work is a bit better motivated and more specific. As it is, I don't think this paper is of publishable quality.