
Towards The Gamification Of Software Engineering

Our goal is to introduce gamification to real-world software engineering practice so as to enhance the performance of the software engineer in the team setting. A key challenge in introducing gamification to the professional engineering setting is that it is generally not feasible to introduce explicit gamification strategies that overlay day-to-day processes. As a consequence, strategies must be built over such data as is already collectable from the context, and must provide behavioral cues to the software engineer that map to established individual and team-oriented engineering goals. In some sense, the game already exists, and the role of the gamification researcher is to elucidate the engineering process, simplifying its presentation to make it tractable to all stakeholders, in order to focus the engineer on how individual action impacts the common good. By careful use of gamification techniques in this presentation, we may hope to influence behavior positively. The challenge, then, is to model the complexity of the modern engineering process, monitor it in real time, and present individual and team impact in ways that reflect a causal relationship to meaningful performance goals. In this paper we set out our methodology for addressing this question, report on our progress to date, and outline our future work.

A Platform Approach to SE Process Monitoring

How, then, does one approach the gamification of the software engineering process in practice? We base our approach on the field of software engineering intelligence, which seeks to measure the performance of software engineering teams in the delivery of the software codebase. As software teams work, they collaborate using a range of engineering tools, both technical and social, that provide detailed and valuable data regarding the process of software construction as it happens. Such measures range from metrics that capture code quality on a commit-by-commit basis, to measures that record the pace of individual and team work, collaboration in problem solving, and many other aspects of the complex engineering process.
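As a minimal illustration of such commit-by-commit measures, the sketch below aggregates per-commit churn (lines added plus removed) by author over a synthetic history. The `Commit` record and its field names are illustrative assumptions, not our platform's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Commit:
    sha: str
    author: str
    lines_added: int
    lines_removed: int

def churn_by_author(commits):
    """Aggregate per-commit churn (added + removed lines) by author."""
    totals = {}
    for c in commits:
        totals[c.author] = totals.get(c.author, 0) + c.lines_added + c.lines_removed
    return totals

# Synthetic history standing in for data mined from a real repository.
history = [
    Commit("a1", "alice", 120, 10),
    Commit("b2", "bob", 30, 5),
    Commit("a3", "alice", 15, 40),
]
print(churn_by_author(history))  # {'alice': 185, 'bob': 35}
```

In practice such aggregates would be computed incrementally as commits arrive, rather than over a complete history.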

By combining these data with longitudinal measures of changing codebase quality, we seek to explain and link engineering practice to outcomes in terms of individual and team performance. However, the volume of data to be considered is extensive, and the computational analysis required is resource intensive. A key challenge in delivering this approach, therefore, is to process these data sets efficiently, so as to achieve the near real-time response required to keep up with engineering teams as they perform their daily work; otherwise gamification is all but impossible. Over the past few years we have developed a highly scalable methodology and working platform for gathering and processing these data sets. Our systems can presently process and profile tens of thousands of software repositories and their developer teams, yielding fine-grained quantitative and social network data regarding all aspects of the software engineering process. This is important because, in practice, most software engineering code metrics have limited value unless considered comparatively against statistically significant numbers of other software engineering team efforts.
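The comparative use of metrics described above can be sketched as a simple percentile rank of one team's score against a profiled population. The defect-density figures here are invented for illustration:

```python
from bisect import bisect_left

def percentile_rank(value, population):
    """Fraction of the population scoring strictly below `value`."""
    ordered = sorted(population)
    return bisect_left(ordered, value) / len(ordered)

# Hypothetical defect-density scores from many profiled repositories.
population = [0.8, 1.2, 1.9, 2.4, 3.1, 3.3, 4.0, 5.6]
print(percentile_rank(2.4, population))  # 0.375
```

A raw score of 2.4 means little on its own; the percentile against tens of thousands of profiled repositories is what gives it comparative meaning.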

A further practical challenge to the use of software process monitoring technology has been concern regarding the unauthorized use and misuse of fine-grained measurements. As monitoring proceeds from coarse-grained data, such as task completion, to fine-grained data, such as keystrokes, concerns regarding data sovereignty and usage quickly arise for software developers. As Philip Johnson [1] puts it, 'fine-grained data that provides the most compelling analytics about development is also the largest obstacle to industrial adoption.' Our work with engineering teams and students suggests to us that an equally important concern in practice is the accuracy of analysis performed using such data sets. How then do we persuade engineers to share their data?

We believe that gamification offers a possible practical model by which data use and sovereignty concerns can be resolved, whilst meaningful assessment data is shared amongst stakeholders. A little-considered but nonetheless potentially important benefit of applying gamification strategy to process monitoring is the fuzzification [2] of the underlying behavioral data sets on which reward models are based. By reducing complex data to that required to support (and only support) a simplified gamification model of the underlying process, the privacy of input data can be conserved. Our work to date with software engineering teams suggests that processing fine-grained data sets in this way can be acceptable to engineers in practice, making available crucial data sets that the majority of working engineers would otherwise decline to share.
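One way to picture this reduction is a client-side step that maps fine-grained event counts to a coarse band, so that only the band, never the raw counts, feeds the reward model. The event names and thresholds below are hypothetical:

```python
def coarse_activity_band(event_counts, thresholds=(10, 50)):
    """Map fine-grained per-session event counts to a coarse band,
    discarding the raw counts so only the band is shared onward."""
    low, high = thresholds
    total = sum(event_counts.values())
    if total < low:
        return "quiet"
    if total < high:
        return "steady"
    return "intense"

# Fine-grained data (e.g. per-file edit events) stays on the client;
# only the coarse band reaches the gamification reward model.
session = {"edits": 34, "test_runs": 6, "reviews": 2}
print(coarse_activity_band(session))  # "steady"
```

The band supports the simplified game model while revealing far less than the underlying keystroke- or edit-level record.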

Automating SE Appraisal

The key next step in our research agenda is to develop efficient methods for the real-time social analysis and gamification of these data sets in order to deliver actionable insights. We take the mentor relationship between engineer and line manager as the model by which a gamification strategy based on performance appraisal can be built. The game dynamic is to demonstrate how engineer actions contribute to team performance, seeking to inculcate a matching conceptual model of this relationship in the mind of the supervised engineer. In practice this amounts to the development of an expert system capable of software engineering performance appraisal.

Our approach to this is to consider software engineering as a social network process that generates code as the primary artifact. This makes it possible to apply a variety of behavioral and social analysis frameworks, such as Pentland's social physics model of influence, social learning, and peer pressure between individuals, to understand the development process [3]. Moreover, it allows for the development of an expert-systems view of software engineering practice, encoded as evidence-based argumentation schemes, that can be used to differentiate the observed behavior of development teams, and thereby trace and attribute behavioral impact on process efficiency and output quality. By combining an analysis of group dynamics with a fine-grained, code-centric analysis of the individual's behavior and performance, qualitative questions regarding software engineering practice can be addressed in an automated way, grounded in empirical data.
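A small sketch of this social-network view: building an undirected collaboration graph from pairwise interaction events (such as co-review or co-commit) and computing normalized degree centrality. The developer names and events are invented:

```python
from collections import defaultdict

def degree_centrality(interactions):
    """Build an undirected collaboration graph from pairwise interaction
    events and return each developer's normalized degree centrality."""
    neighbours = defaultdict(set)
    for a, b in interactions:
        neighbours[a].add(b)
        neighbours[b].add(a)
    n = len(neighbours)
    return {dev: len(peers) / (n - 1) for dev, peers in neighbours.items()}

events = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("alice", "dave")]
print(degree_centrality(events))  # alice is the most central collaborator
```

Centrality is only the simplest such measure; influence and peer-pressure models in the social physics tradition build richer dynamics on the same graph substrate.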

Our methodology is broadly inferential, leading to the development of knowledge representations that are executable over the digital footprint of software engineering activity. We begin by systematically and comprehensively gathering all relevant field data regarding the software engineering processes under study, and then analyze the patterns within these data in ex post facto studies. Relevant data sets include: longitudinal measurement of source code change and quality; repository metadata from toolsets such as git and subversion; measures of social network activity within development ecosystem toolsets such as bug tracking systems, chatrooms, private messaging in tools such as Slack, email, and so forth; data derived from the instrumentation of software development toolsets [4]; and sociometric data that records the environmental and social context.

Informed both by these data sets and by a knowledge representation of software engineering process management, social computing, and gamification, we seek to construct a set of models and frameworks that capture the range of relevant and plausible human judgement and reasoning regarding observable software engineering processes, and that enhance the efficiency and performance of software engineering. Broadly, the goal is to capture the reasoning that a skilled software development manager might present, if he or she were in a position to consider the totality of performance evidence. These models are abstract and necessarily non-monotonic [5], in that they capture a network of interacting facts and arguments, structured as conditional statements in predicate logic regarding qualitative concepts in the software engineering domain, abstracted from any particular engineering scenario or evidence set.
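The non-monotonic character of such models can be illustrated with a toy defeasible rule set, in which adding evidence can retract an earlier conclusion. The predicate names and thresholds are hypothetical, chosen only to show the retraction behavior:

```python
def appraise(evidence):
    """Toy defeasible appraisal: a default conclusion holds unless
    later evidence defeats it (non-monotonic reasoning)."""
    conclusions = set()
    # Default rule: frequent commits suggest healthy progress.
    if evidence.get("commits_per_day", 0) >= 5:
        conclusions.add("healthy_progress")
    # Defeater: repeated build failures undercut that conclusion.
    if evidence.get("build_failures", 0) > 3:
        conclusions.discard("healthy_progress")
        conclusions.add("progress_at_risk")
    return conclusions

print(appraise({"commits_per_day": 7}))                       # {'healthy_progress'}
print(appraise({"commits_per_day": 7, "build_failures": 5}))  # {'progress_at_risk'}
```

A classical monotonic system could only accumulate conclusions; here the larger evidence set yields a strictly different appraisal, which is the property the argumentation frameworks of [5] formalize.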

These abstract models are then instrumented through a process of context-specific argument selection, elaboration, and probabilistic grounding in measurable data sets derived from specific engineering scenarios. This work is necessarily situated, and to this end we are engaged in a collaborative research strategy with real-world software engineering teams. In working with such teams we have found strong and encouraging interest in the insights the approach can yield.

With an encoding of human judgment built, the resulting instrumented models can then be executed over the data gathered, delivering a computational framework for comparative qualitative analysis over quantitative data. We believe that there are universal aspects to software engineering practice that can be better understood by considering their application and impact in the large, across a wide set of contexts. To this end, our goal is to incorporate particular insights developed in specific real-world contexts into our platform, so as to deliver a comparative analytics capability.

Gamification of Performance

With a capacity to perform detailed appraisal of the performance of the software engineer in the context of team development, based on real-world data that matches behavioral inputs to performance outcomes, we next seek to demonstrate to the engineer how specific actions lead to specific impacts. This capacity to relate cause to effect provides us with the basis for gamifying those aspects of the engineering process that are vital to performance enhancement.

We do not believe there is one single appropriate strategy for the gamification of software engineering, but we do believe, drawing from the theory and practice of software engineering, that there are collections of behaviors and practices that are more or less appropriate in any particular engineering context. We thus envision a configurable method by which particular behavior sets can be selected for, with appropriate reward. Such a method must allow for behavior sets that include potentially conflicting behaviors, such as the goals of refactoring code while also adding functionality, for example. Conflicts in engineering practice occur when goal sets come up against resource constraints, and the goal of any gamification of software engineering can thus be seen as the selection of optimal behaviors that maximize adherence to goal priorities under resource constraint. This is the subject of our ongoing research.
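The selection of optimal behaviors under a resource constraint can be sketched as a small 0/1 knapsack over a hypothetical behavior catalogue, with effort costs and reward values standing in for real goal priorities:

```python
def select_behaviors(behaviors, budget):
    """0/1 knapsack: pick the behavior set with the highest total
    reward value within the available effort budget."""
    best = {0: (0, [])}  # effort spent -> (best value, chosen behaviors)
    for name, effort, value in behaviors:
        for spent, (val, chosen) in list(best.items()):
            s, v = spent + effort, val + value
            if s <= budget and v > best.get(s, (-1, None))[0]:
                best[s] = (v, chosen + [name])
    return max(best.values())

# Hypothetical behaviors: (name, effort cost, reward value).
catalogue = [("refactor", 4, 7), ("add_feature", 5, 8), ("write_tests", 3, 6)]
value, chosen = select_behaviors(catalogue, budget=8)
print(value, chosen)  # 14 ['add_feature', 'write_tests']
```

Note that refactoring and adding the feature together exceed the budget, so the optimizer trades one off against the other, which is exactly the conflict-under-constraint situation described above.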

Related Work

Gamification has been used in areas such as education, online social communities, and business. Many researchers seek ways to increase efficiency, and gamification may be a good choice as it has positive psychological and behavioral effects. Ten types of gamification feature have been identified [6], and rewards are the feature researchers most commonly use to stimulate user motivation [7]. In the business domain, researchers have adopted different gamification frameworks, such as the 6D framework, the MDA framework, the Octalysis framework, and frameworks based on user-centered design (UCD) and model-driven architecture (MDA) [8]. One of the main tasks of software engineering is to enhance software efficiency [8]. Based on previous research, we wish to increase the motivation of developers, and thus to apply gamification in the software engineering domain. However, past attempts have often failed due to a poor understanding of the gamification design process [9], and there is no clear gamification framework for software engineering. Current gamification methods for software engineering are based on cognitive principles such as self-determination theory (SDT) [10] and flow and group flow theory [11]. However, these methods are not detailed, and they do not relate gamification features to gamification frameworks. Therefore, we aim to synthesize current gamification design methods for software engineering with detailed methods for engineering gamified software, and to introduce new methods and models for our topic.

References

  • [1] Philip M. Johnson, "Searching under the Streetlight for Useful Software Analytics", IEEE Software, vol. 30, no. 4, pp. 57-63, July-Aug. 2013.
  • [2] D. Sinha and E. R. Dougherty, "Fuzzification of set inclusion: theory and applications", Fuzzy Sets and Systems, vol. 55, no. 1, pp. 15-42, 1993.
  • [3] W. Pan, W. Dong, M. Cebrian, T. Kim, J. H. Fowler and A. S. Pentland, "Modeling Dynamical Influence in Human Interaction: Using data to make better inferences about influence within social systems", IEEE Signal Processing Magazine, vol. 29, no. 2, pp. 77-86, March 2012.
  • [4] P. M. Johnson, H. Kou, J. Agustin, C. Chan, C. Moore, J. Miglani, S. Zhen and W. E. Doane, "Beyond the Personal Software Process: Metrics Collection and Analysis for the Differently Disciplined", Proc. 25th Int'l Conf. Software Eng. (ICSE 03), IEEE CS, 2003, pp. 641-646.
  • [5] H. Li, N. Oren and T. J. Norman, "Probabilistic Argumentation Frameworks", in S. Modgil, N. Oren and F. Toni (eds), Theories and Applications of Formal Argumentation (TAFA 2011), Lecture Notes in Computer Science, vol. 7132, Springer, Berlin, Heidelberg, 2012.
  • [6] J. Hamari, J. Koivisto and H. Sarsa, "Does Gamification Work? – A Literature Review of Empirical Studies on Gamification", Proc. 47th Hawaii International Conference on System Sciences, Hawaii, USA, January 6-9, 2014.
  • [7] J. Majuri, J. Koivisto and J. Hamari, "Gamification of education and learning: A review of empirical literature", Proc. 2nd International GamiFIN Conference (GamiFIN 2018), pp. 11-19, 2018.
  • [8] A. Mora, D. Riera, C. González et al., "Gamification: a systematic review of design frameworks", Journal of Computing in Higher Education, vol. 29, p. 516, 2017.
  • [9] B. Morschheuser, L. Hassan, K. Werder and J. Hamari, "How to design gamification? A method for engineering gamified software", Information and Software Technology, vol. 95, pp. 219-237, 2018.
  • [10] S. Deterding, "Situated motivational affordances of game elements: A conceptual model", Proc. Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, May 7-12, 2011.
  • [11] N. Unkelos-Shpigel and I. Hadar, "Gamifying Software Development Environments Using Cognitive Principles", Proc. 27th International Conference on Advanced Information Systems Engineering, Stockholm, Sweden, June 8-12, 2015.
