Integrating Pedagogical and HCI Principles in the Design of Game-Based Learning Environments
In this position paper, we propose a framework that consolidates constructivist pedagogical principles of learning with HCI design principles that account for perceptual traits intrinsic to the presentation of multimodal information. The framework is intended for environments that utilise a combination of sensory modalities (such as games) during the course of teaching and learning. Indeed, game-based learning is increasingly becoming an integral part of many classroom environments, along with other educational technologies such as virtual learning environments (VLEs), interactive whiteboards (IWBs), and educational web apps, all of which utilise multiple sensory stimuli within the scaffold of education. As the Irish primary and post-primary education systems move to align with the Digital Learning Framework as envisaged under the Digital Strategy for Schools 2015-2020, technology-focused teaching and learning will require effective, efficient, and pedagogically sound methods and applications. The authors therefore submit this framework as a possible foundation for grounding the design of educational games and human-computer interaction in robust pedagogical principles.
The concept of integrating primitive perceptual stages with higher cognitive processes in learning is recognised as a useful approach for underpinning our understanding of the complexities involved in learning through a variety of technological media – for example, Moreno’s (2006) cognitive theory of learning with media (CTLM) framework. Pedagogical principles tend to rely heavily on more abstract concepts derived from higher-cognitive processes, but lower-cognitive influences feed these processes and are therefore key elements in any pedagogical framework. Indeed, Gevins (2000) identifies working memory as a key factor in higher cognitive processing, acting as a real-time bridge between incoming sensory information and higher-level contextualisation and organisation. The constraints (and therefore the upward filtering of information) that are tied to working memory have been regularly quantified in various modalities (Cowan, 2010; Baddeley, 2004). In addition, destructive interference at the working-memory stage (irrelevant background noise, for example) has a profound effect on attention mechanisms during primary tasks, as described by the changing-state hypothesis (Jones & Macken, 1993). That said, the underlying mechanisms of working memory as modelled by Baddeley and Hitch (1974), and as expanded upon by Cowan (1999), are still debated (Chein & Fiez, 2010), revealing that, even at the more primitive levels of cognition, more research is necessary before fully robust learning technologies can be implemented.
The basis of the framework is a dual-model approach, encapsulated as a macro-model with embedded micro-models. The macro-model depicts larger-scale cognitive factors that influence processes of learning, such as learning styles, cultural nuances, personal experiences, and personal motivations. Such personalised, and inherently complex, characteristics require advanced software implementations in their application, and would likely depend on robust machine learning (ML) approaches. The embedded micro-models exhibit more generic, quantifiable traits of interaction associated with lower-level perceptual processes. However, these too would require a degree of plasticity during applied implementation so as to adjust to subtle variations in how learners take in and process sensory information. At this micro-level of interaction, the primary concern is to be ‘compatible’ with inherent perceptual constraints. These micro-models incorporate primitive, but universal, perceptual traits associated with:
- Working memory constraints;
- Auditory stream segregation;
- Cross-modal interactions between visual and auditory perception;
- Attention mechanisms;
- The formation of perceptual objects and scenes.
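To illustrate how such micro-models might be modularised in software, the following Python sketch defines a minimal common interface through which micro-models filter information streams before they reach the macro-model. All names (`MicroModel`, `WorkingMemoryModel`, `capacity`, and so on) are hypothetical illustrations, not part of the framework itself; the default capacity of four follows Cowan’s (2010) estimate of roughly four chunks, with the parameter left adjustable to model the plasticity the framework calls for.

```python
from abc import ABC, abstractmethod

class MicroModel(ABC):
    """A lower-level perceptual constraint, applied to candidate
    information streams before they reach higher cognition."""

    @abstractmethod
    def filter(self, streams: list[str]) -> list[str]:
        ...

class WorkingMemoryModel(MicroModel):
    """Caps concurrent streams at a working-memory span.

    The default of 4 is an assumption based on Cowan's (2010)
    capacity estimate; `capacity` is adjustable to model the
    'plasticity' discussed in the text."""

    def __init__(self, capacity: int = 4):
        self.capacity = capacity

    def filter(self, streams: list[str]) -> list[str]:
        # Streams beyond capacity are dropped, i.e. they never reach
        # higher-level contextualisation and organisation.
        return streams[: self.capacity]

class MacroModel:
    """Higher-cognitive stage into which all micro-models feed."""

    def __init__(self, micro_models: list[MicroModel]):
        self.micro_models = micro_models

    def present(self, streams: list[str]) -> list[str]:
        # Apply each micro-model's filtering in turn.
        for model in self.micro_models:
            streams = model.filter(streams)
        return streams

macro = MacroModel([WorkingMemoryModel(capacity=4)])
surviving = macro.present(["speech", "text", "graphic", "music", "ambience"])
print(surviving)  # only the first four streams survive the capacity filter
```

Further micro-models (stream segregation, cross-modal interaction, attention) would subclass the same interface, which is what permits the phased, modularised testing described later in the paper.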
The following summarises the ‘flow of information’ depicted in the framework:
- A virtual scene is presented to the learner, incorporating several modalities and modes of interaction. The primary presentation and interaction modes most commonly include visual/graphic, text, speech, non-speech audio, and occasionally tactile. An additional consideration in game-based learning environments (and related technologies such as VR/AR/MR) is the spatialisation of virtual scenes. This feature has both advantages and disadvantages. For example, spatialisation allows designers to utilise additional dimensions to present extra information cues (such as context), and to segregate information streams in order to mitigate perceptual disruption. However, without appropriately informed implementation, the spatialisation of information streams can magnify distraction from primary task engagement or cause information streams to become disjointed.
- Peripheral sensory systems parse and organise the dimensional features of the presented scene. This is modelled by incorporating established theories such as auditory scene analysis (Bregman & Ahad, 1996).
- The resultant segregated streams are then readied for further sensory filtering along the perceptual system, whereby influencers such as working-memory constraints (Baddeley, 2004) and attention mechanisms play key roles. Initially, the framework still isolates the sensory modalities, given the specific within-modality criteria that must first be catered for; this is what necessitates several micro-models at the outset. However, it must be acknowledged that cross-modal interaction does occur at these points, and the framework signifies this by marking the top-down influence of schema theory, which involves full-scene predictors based on the learner’s experience of prior holistic scenes (all sensory experiences accumulated to form a sense of the entire scene).
- The various micro-models at this point transition to the macro-model, as the schema databases involving prior learning experiences interface with larger, higher-cognitive learning influences. Incorporated in the macro-model, into which all micro-models feed, are pedagogical concepts such as pedagogical beliefs; perspectives around technology and learning spaces; pedagogical vision around using technology-rich spaces; curriculum; and assessment agendas.
- Drilling further down, pedagogical principles can be extrapolated from more defined criteria such as learner adaptiveness; language/subject dominance; and learning attitudes.
- All of these elements are shown in the framework as influencers of measurable pedagogical constraints, specifically self-efficacy, valence, and goal salience (Tremblay & Gardner, 1995; Steel & Andrews, 2012).
- Finally, the macro-model depicts the functional aspects of knowledge acquisition (motivational intensity; persistence; attention), which culminates in an overall metric of achievement.
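The flow above can be read as a staged pipeline in which each bullet is a transformation applied to the set of presented information streams. The sketch below is illustrative only: the stage functions, the crude four-stream cut-off at the working-memory stage, and the schema-based reordering are all assumptions made for the example, not part of the framework specification.

```python
# Illustrative pipeline for the 'flow of information' described above.
# All function names and the example scene are hypothetical.

def present_scene():
    # Stage 1: a multimodal virtual scene, as (modality, content) pairs.
    return [("speech", "narration"), ("text", "caption"),
            ("graphic", "diagram"), ("audio", "earcon"),
            ("audio", "ambience")]

def segregate(streams):
    # Stage 2: peripheral sensory parsing / stream segregation
    # (cf. auditory scene analysis); a no-op placeholder here.
    return streams

def filter_working_memory(streams, capacity=4):
    # Stage 3: working-memory and attention constraints; streams
    # beyond the assumed capacity are not passed upward.
    return streams[:capacity]

def apply_schema(streams, prior_scenes):
    # Stage 4: top-down schema influence; streams matching prior
    # holistic scenes are contextualised first (stable sort).
    return sorted(streams, key=lambda s: s not in prior_scenes)

def macro_model(streams):
    # Stages 5-7: higher-cognitive learning influences, culminating
    # in an (unspecified) achievement metric; placeholder here.
    return {"contextualised": streams}

prior = [("speech", "narration")]
out = macro_model(apply_schema(
    filter_working_memory(segregate(present_scene())), prior))
print(out["contextualised"][0])  # the previously-seen stream comes first
```

The value of framing the flow this way is that each stage is independently replaceable, mirroring the modularised, phased testing strategy described below.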
Although the macro- and embedded micro-models are modularised in the authors’ proposed framework to allow for phased empirical testing, they are not mutually exclusive. User-testing strategies would need to incorporate evaluation methodologies that cross-reference the micro-model under examination with overall macro-model implications.
A series of evaluations will need to be performed on the proposed model. These will be numerous, in order to cover the many elements encompassed by the larger-scale model. The following summarises a number of the projected user studies:
- An examination of behavioural responses to the concurrent presentation of simple auditory and visual stimuli (some of which will need to be evaluated at a sonic dimension level – e.g. isolating timbre, meter, pitch, loudness). On a broader level, the design of stimulus material will be based on auditory scene analysis and stream-segregation principles (including spatialised auditory stimuli);
- An analysis of working-memory constraints and attention mechanisms based on responses to tasks that incorporate stimulus distractors. The system of analysis would comprise a quantitative evaluation of task performance under various loads and various degrees of cross-modality, alongside an evaluation of perceived workload using the NASA-TLX;
- Higher-cognitive evaluations will require the collaboration of applied psychology colleagues to establish appropriate experimental design, tools, and processes for evaluation;
- A series of qualitative studies will be required to evaluate the larger macro-model. Valid quantitative analysis, in terms of establishing the influence of various temperaments on learning processes, will also be conducted, as well as a series of evaluations based on interview and case-study formats to inform methodological triangulation;
- Other qualitative evaluations, focused on factors such as the influence of attitudinal and adaptiveness traits on the process of learning, will also need to be established; some of this work has already been carried out by one of the authors of this paper within the context of initial teacher training.
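For the workload evaluations mentioned above, the classic weighted NASA-TLX score combines six subscale ratings (0-100) with weights derived from fifteen pairwise comparisons between the subscales (each weight is the number of comparisons that subscale “wins”, so the weights sum to 15). The scoring itself is standard; the example ratings and weights below are invented for illustration.

```python
# NASA-TLX overall workload, weighted variant.
# Subscale names are standard; the sample data is made up.

SUBSCALES = ["mental", "physical", "temporal",
             "performance", "effort", "frustration"]

def tlx_score(ratings, weights):
    """ratings: subscale -> 0..100; weights: subscale -> number of
    wins across the 15 pairwise comparisons (weights sum to 15)."""
    assert set(ratings) == set(weights) == set(SUBSCALES)
    assert sum(weights.values()) == 15
    # Weighted mean: sum of rating * weight, divided by 15.
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

ratings = {"mental": 70, "physical": 10, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 30}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(tlx_score(ratings, weights))  # 59.0
```

Comparing such scores across load and cross-modality conditions would give the perceived-workload axis of the quantitative analysis sketched in the second bullet.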
The framework proposed in this paper is based on the argument that compatibility with human perceptual and cognitive processes, at both primitive and higher-cognitive levels, is central to designing any learning system. This is especially the case for educational technologies that incorporate several modalities when presenting dynamically changing information, and it becomes more pertinent still when considering that learners are increasingly exposed to abstracted digital environments. While technological advancement often outpaces the establishment of fundamental design frameworks, pedagogy and technology researchers each have at their disposal the required tools and understanding within their respective disciplines. It is therefore possible to merge the principles embodied in both pedagogy and HCI into robust frameworks that inform education-technology developers. Through such frameworks, developers can encode effective learning opportunities that allow for the efficient attainment of learning objectives by reducing design bottlenecks that impinge on basic perceptual and cognitive information flow.
Adapting core constructivist pedagogical theories into a software design framework requires the employment of modularised, controlled test scenarios that map the technological environment to learner motivations, expectations, and achievement goals. An interesting parallel is the work of Shin and Kim (2008), who link extrinsic and intrinsic motivations to learners’ attitudes and intentions within the context of social online technologies. Through an adaptation of the Technology Acceptance Model (Davis, 1989; Venkatesh & Davis, 2000), Shin and Kim identified perceived synchronicity, perceived involvement, and the user’s ‘flow experience’ as key metrics for encouraging user engagement with new technology. These are similarly important factors for learning processes, especially when engaging with technology as the primary educational vehicle. Therefore, it is the authors’ opinion that frameworks such as the one proposed in this paper will play a defining role in how game-based learning environments are designed, measured, and mediated in a modernising educational system.
- Baddeley, A. D. (2004). Your memory: A user’s guide. New York, NY: Carlton Books.
- Baddeley, A. D., & Hitch, G. (1974). Working memory. Psychology of Learning and Motivation, 8, 47-89.
- Bregman, A., & Ahad, P. (1996). Demonstrations of auditory scene analysis. MIT Press.
- Chein, J. M., & Fiez, J. A. (2010). Evaluating models of working memory through the effects of concurrent irrelevant information. Journal of Experimental Psychology: General, 139(1), 117.
- Cowan, N. (2010). The magical mystery four: How is working memory capacity limited, and why? Current Directions in Psychological Science, 19(1), 51-57.
- Cowan, N. (1999). An embedded-processes model of working memory. In A. Miyake & P. Shah (Eds.), Models of Working Memory: Mechanisms of Active Maintenance and Executive Control. Cambridge, UK: Cambridge University Press.
- Davis, F. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319.
- Gevins, A. (2000). Neurophysiological measures of working memory and individual differences in cognitive ability and cognitive style. Cerebral Cortex, 10(9), 829-839.
- Jones, D. M., & Macken, W. J. (1993). Irrelevant tones produce an irrelevant speech effect: Implications for phonological coding in working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19(2), 369.
- Moreno, R. (2006). Learning in high-tech and multimedia environments. Current Directions in Psychological Science, 15(2), 63-67.
- Shin, D. H., & Kim, W. Y. (2008). Applying the Technology Acceptance Model and flow theory to Cyworld user behavior: Implications of the Web 2.0 user acceptance. CyberPsychology & Behavior, 11(3), 378-382.
- Steel, C., & Andrews, T. (2012). Re-imagining teaching for technology-enriched learning spaces. In M. Keppell, K. Souter & M. Riddle (Eds.), Physical and Virtual Learning Spaces in Higher Education: Concepts for the Modern Learning Environment. Hershey, PA: IGI Global.
- Tremblay, P. F., & Gardner, R. C. (1995). Expanding the motivation construct in language learning. Modern Language Journal, 79, 505-520.
- Venkatesh, V., & Davis, F. (2000). A theoretical extension of the Technology Acceptance Model: Four longitudinal field studies. Management Science, 46(2), 186-204.