
South African Computer Journal

On-line version ISSN 2313-7835
Print version ISSN 1015-7999

SACJ vol. 30 no. 2, Grahamstown, Dec. 2018

http://dx.doi.org/10.18489/sacj.v30i2.481 

RESEARCH ARTICLE

 

Process model for Differentiated Instruction using learning analytics

 

 

Ronald G. Leppan (I); Johan F. van Niekerk (II, III); Reinhardt A. Botha (IV)

(I) School of ICT, Nelson Mandela University, South Africa. ronald.leppan@mandela.ac.za
(II) School of ICT, Nelson Mandela University, South Africa. johan.vanniekerk@noroff.no
(III) Noroff University College, Kristiansand, Norway
(IV) School of ICT, Nelson Mandela University, South Africa. reinhardta.botha@mandela.ac.za

 

 


ABSTRACT

Higher education institutions seem to have a haphazard approach to harnessing the ubiquitous data that learners generate on online educational platforms, despite the promising opportunities offered by this data. Several learning analytics process models have been proposed to optimise the learning environment based on this learner data. The model proposed in this paper addresses deficiencies in existing learning analytics models, which frequently emphasise only the technical aspects of data collection, analysis and intervention, yet remain silent on the ethical issues inherent in collecting and analysing student data and on pedagogy-based approaches to interventions. The proposed model describes how differentiated instruction can be provided based on a dynamic learner profile built through an ethical learning analytics process. Differentiated instruction optimises online learning by recommending learning objects tailored to the learner attributes stored in a learner profile. The proposed model provides a systematic and comprehensive abstraction of a differentiated learning design process informed by learning analytics. The model emerged by synthesising the steps of a tried-and-tested web analytics process with educational theory, an ethical learning analytics code of practice, principles of adaptive education systems and a layered abstraction of online learning design.
Categories:
Applied computing ~ Computer-assisted instruction

Keywords: learning analytics, web design, differentiated instruction, learning design


 

 

1 INTRODUCTION

In response to the #FeesMustFall protests that halted classroom instruction in 2015 and 2016, many South African institutions adopted a blended learning approach to complete the academic year. The increase in the number of lecturers moving their courses online follows a global trend, causing the data generated by online student activity to escalate exponentially (Luna, Castro & Romero, 2017). The measurement, collection, analysis and reporting of data about learners and their contexts is called learning analytics (Siemens, 2013). Data is collected and analysed to optimise learning and the environment in which learning occurs. Exploring the challenges inherent in, and opportunities offered by, learning analytics at two mega open distance learning institutions, Prinsloo, Slade and Galpin (2012) argue for a unified and holistic approach to learning analytics that involves all higher education stakeholders. Unlocking the potential of untapped data requires higher education institutions to embed a systematic, student-centric and ethical learning analytics process. One area where the potential of learning analytics can be harnessed is the provision of tailored instruction in response to a dynamic learner profile. Teachers who adapt pedagogy towards their learners' needs do so out of a belief that a strategy that benefits one group of learners may frustrate another (Brusilovsky, Wade & Conlan, 2007). Employing diverse teaching strategies, whether matched or mismatched to learner needs, could keep learners adequately engaged or suitably challenged (Manning, Stanford & Reeves, 2010). There are several levels of tailored instruction that share the same goal of modifying pedagogy but differ in how the profile is built and how the learning design is modified.

Differentiated Instruction is a teaching approach that tailors pedagogy towards the diverse needs of individuals or groups sharing similar characteristics (Tomlinson et al., 2003). In online learning, differentiated instruction can be achieved through proactively modifying and sequencing learning objects along preset pathways towards the same learning outcome. Learners are grouped according to shared attributes stored in a learner profile and guided to appropriate learning objects.

While not the primary focus of this paper, related terms need disambiguation since the proposed model incorporates some elements of each of the following levels of tailored instruction:

  • Adaptive learning also tailors content and provides individualised pathways. However, unlike differentiated instruction, the profile is built and pedagogy adjusted in real-time through adaptation rules that conditionally include, hide or annotate learning objects (De Bra et al., 2003).

  • Personalised learning, like adaptive learning, provides real-time profile building and adaptation but achieves a higher level of personalisation through incorporating initial diagnostic tests and providing learners direct control over their learning environment (Halim, Ali & Yahaya, 2011).

  • Individualised learning is a teaching approach that allows learners to dictate their own pace and often set their own learning agenda (Kop & Fournier, 2010), unlike the previous three levels of tailored instruction, which generally work towards the same learning outcomes.

In the face-to-face classroom, a lecturer can tacitly identify a student's personal needs and adapt accordingly. It is accepted that students are more engaged with the learning material if the learning environment is matched to their attributes (Manning et al., 2010). However, attempting to cater for individual characteristics poses a challenge in face-to-face instruction, especially in large classes. This challenge becomes more manageable in the online learning environment. Still, lecturers do not directly interact with individual students in an online learning environment, so they need data to make a judgement call regarding student needs. The abundant data provided by Learning Management Systems offers an opportunity to create a learner profile of relevant learner attributes (Luna et al., 2017).

This paper proposes a process that can be used by lecturers who wish to capitalise on students' data generated through their online learning activities. Towards this aim, certain concepts need to be unpacked through a focused literature review (Section 2). Section 3 discusses the research approach followed to design the model. Section 4 synthesises all these related concepts into a comprehensive, systematic and data-driven model for the provision of differentiated instruction based on a dynamic learner profile. Section 5 provides recommendations for practice and future research.

 

2 BACKGROUND AND LITERATURE

Section 2.1 examines several learning analytics models to identify the typical steps of the learning analytics process. In response to deficiencies identified in existing process models, Section 2.2 describes a layered approach to differentiated learning design as an example of a pedagogy-based approach to learning analytics interventions, and Section 2.3 describes an ethical learning analytics code of practice. Section 2.4 introduces the steps of a web analytics process model as an alternative to drive learning analytics interventions.

2.1 Existing learning analytics process models

The Learning Analytics research community uses Educational Data Mining techniques to understand and improve learning processes and learning environments (Siemens, 2013). Educational Data Mining is concerned with developing methods to explore complex data from educational contexts (Romero & Ventura, 2010). The vision of Learning Analytics researchers is modest incremental interventions to complex educational problems (Merceron, Blikstein & Siemens, 2015). Several cyclical models have been proposed to abstract the steps in a typical learning analytics process.

In Chatti, Dyckhoff, Schroeder and Thüs (2012), the process is described as three steps: data collection and pre-processing, analytics and action, and post-processing (Figure 1). Data is gathered and aggregated from various educational platforms. This data is transformed into input for analysis using pre-processing techniques from the field of data mining. Learning analytics techniques are used to gain insight into strategies employed by learners navigating through online courses. The discovered knowledge about learners is used as a basis to inform suitable interventions and make informed recommendations. The final post-processing step is used to improve the analytics process.

In Clow (2012), learning analytics is described as a cycle that starts with learners participating in formal or informal online learning activities (Figure 2). Through their actions, learners generate large amounts of data that gets logged on online learning platforms. Raw data is processed into knowledge (metrics) about learning processes that can inform appropriate interventions.

In (Hundhausen, Olivares & Carter, 2017) a learning analytics process model is used to design an Integrated Development Environment (IDE) capable of collecting data on learning strategies while programming and intervening where necessary. The process describes four steps (Figure 3): collecting data from the IDE, analysing the data to discover programming behaviours, designing the intervention and establishing an automated response to scaffold learners while learning how to code.

The cyclic model of four stages in (Verbert, Duval, Klerkx, Govaerts & Santos, 2013) focuses on the provision of a dashboard for learners to gain insight into their learning strategies (Figure 4). At the first stage, a dashboard presents data visually to the learner, who can interrogate the data for self-reflection. After gaining a deeper understanding of their learning processes, learners can decide whether it is in their best interest to act upon this new insight.

Learning analytics processes can also be used to turn raw data stored in Learning Management Systems into actionable information that can be used to enhance learning (Romero & Ventura, 2013; Romero, Ventura & García, 2008). Usage data of learners completing courses presented on a learning management system is stored in a database (Figure 5). This data needs to undergo a pre-processing phase to transform it into a format suitable for analysis. Data mining algorithms are used on the pre-processed data to create a learner model. Knowledge represented in the learner model can be interpreted and used to make improvements to the learning environment.

The consensus from the learning analytics process models described above is that all of them are represented as a cyclic process that includes a data collection phase, a data analysis phase and a phase where action is taken based on the results of the data analysis. What is not explicitly mentioned in these models is:

1. An initial goal setting phase linked explicitly to educational theory

2. The form of the pedagogical intervention that can be taken based on analysis of the results

3. An explicit reflection on an ethical learning analytics process

The first two shortcomings are echoed by Tsai and Gasevic (2017), who also identified a lack of a pedagogy-based approach to learning analytics interventions. The third deficiency in the above list concurs with a concern raised by Viberg, Hatakka, Bälter and Mavroudi (2018), who found that only 18% of 252 papers on learning analytics in higher education published from 2012 to 2018 reflected on the issue of ethics. Section 2.2 is aimed at addressing the first two concerns, i.e., the lack of an explicitly named pedagogical goal to initiate and conclude a learning analytics initiative, while Section 2.3 describes the relevant issues towards addressing the third concern (ethics).

2.2 Differentiated online learning design

At the core of this study is a belief that online Learning Design should be informed by behavioural patterns exhibited by learners as they navigate through the course material.

These behavioural patterns reveal a learner's cognitive processes (Sabine Graf & Kinshuk, 2008) and affective states (Desmarais & Baker, 2011) that influence the learning process. The learning design in an adaptive learning system that adapts to a learner profile is abstracted in a layered model (Atif, 2010). This type of layered abstraction makes it easier to define differentiation goals during learning design (Figure 6).

At the base is the domain layer that represents content knowledge as an ontology of relevant concepts and semantic relationships between these concepts. The domain model can be represented as a conceptual graph with nodes representing concepts and edges representing relationships between concepts (Melia & Pahl, 2009). Domain experts are responsible for preparing and structuring learning outcomes and related content. The domain layer should be pedagogically neutral.

The next layer, the goal and constraint layer, overlays required competencies and instructional and pedagogical constraints by applying prerequisites and postconditions in the form of learning rules to the domain ontology. Instructional constraints lead to the sequencing of concepts based on whether knowledge of one concept is needed before the learner can move on to another concept. Pedagogical constraints are defined for learners grouped by prior knowledge and learning goals. Individual learner preferences described in learning style theories are also defined in the goal and constraint layer (Melia & Pahl, 2009).
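
To make the layered abstraction concrete, the following is a minimal sketch, in Python, of a domain ontology with prerequisite constraints overlaid on it. The concepts, relationships and rules are hypothetical illustrations, not examples from the paper.

    # Domain layer: pedagogically neutral concepts and semantic relationships.
    # All concept names and rules below are hypothetical.
    domain = {
        "variables": {"part_of": "programming-basics"},
        "loops": {"part_of": "programming-basics"},
        "functions": {"part_of": "programming-basics"},
    }

    # Goal and constraint layer: sequencing rules stating which concepts must
    # be mastered before another concept may be attempted.
    prerequisites = {
        "loops": ["variables"],
        "functions": ["variables", "loops"],
    }

    def available_concepts(mastered):
        """Concepts a learner may attempt next, given the mastered concepts."""
        return [
            concept
            for concept in domain
            if concept not in mastered
            and all(p in mastered for p in prerequisites.get(concept, []))
        ]

    print(available_concepts({"variables"}))  # -> ['loops']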

The learner model layer represents the learner profile. The learner profile can be built explicitly, by asking the learner relevant questions, or implicitly, by inferring relevant characteristics from an analysis of learner behaviour (Graf, Kinshuk & Liu, 2008). The learner model can capture knowledge progression before, during and after instruction. The learner model can also record learner goals, needs and preferences. The learner model is built while learners work through the course material. In order to optimise online learning design, instructional designers need to build and maintain a dynamic learner profile that is used as a basis for learning interventions. Example categories of learning analytics-based interventions include (Baker & Yacef, 2009):

1. Predictive modelling to model something that cannot be directly observed

2. Structure discovery to find patterns in data that are not obvious

3. Relationship mining to discover or confirm meaningful connections between variables that affect learning

4. Distillation and preparation of data into meaningful information that teachers and learners can use to make informed decisions

Within the above taxonomy, one can identify several techniques within each category and call upon an extensive collection of algorithms to convert raw data into meaningful information. For example, clustering is a common technique categorised as structure discovery. Algorithms used for clustering include K-Means, Mean-Shift and DBSCAN, among others. With so many algorithms available to analyse data, the learning analytics process needs to cater for the eventuality that different algorithms may be required if new hypotheses arise after the initial analysis.
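
As an illustration of the structure discovery category, the sketch below clusters learners by behavioural metrics using K-Means and evaluates the grouping with a silhouette score. The metric names and values are hypothetical, and scikit-learn is assumed to be available; Mean-Shift or DBSCAN could be swapped in should a new hypothesis call for a different algorithm.

    # Grouping learners by hypothetical online behaviour metrics.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    # Rows: learners. Columns (hypothetical): forum posts, video views, quiz retries.
    behaviour = np.array([
        [2, 40, 1], [35, 5, 0], [3, 38, 2],
        [30, 8, 1], [1, 45, 3], [33, 6, 0],
    ])

    X = StandardScaler().fit_transform(behaviour)  # put metrics on comparable scales
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    print("cluster per learner:", kmeans.labels_)
    # Evaluate cluster quality; a poor score may prompt a different algorithm.
    print("silhouette:", silhouette_score(X, kmeans.labels_))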

The choice of analysis technique is just the technical aspect of building a learner profile. The process of building and maintaining a learner profile must also be conducted within ethical constraints. Some of the fundamental principles of an ethical code of practice for learning analytics are informed consent, transparency and trust (Section 2.3). Informed consent, transparency and trust will be achieved if the reason for the learning analytics initiative is clearly defined from the outset and the learner is made aware of these goals.

The resource layer focuses on identifying, repurposing or constructing learning objects that represent the learning content. These learning objects are tagged with metadata based on a standard specification such as IEEE LOM (Atif, 2010). The resource model is, therefore, the layer where the basis is set for instructional design tailored towards the characteristics defined in the learner model. The focus of the adaptation is on the content and presentation of the learning object.

In the course layer, learning objects are sequenced based on the characteristics defined in the learner model. The learner's knowledge, goals, needs and preferences will ultimately dictate how the learner will traverse through the coursework as represented by the domain and goal and constraint models (Melia & Pahl, 2009).

The validation layer is used to examine the instruction design before course delivery (Melia & Pahl, 2009). Validation ensures that learning objects are logically constructed and appropriately sequenced. Validation criteria linked to educational theories must be applied to each layer of the learning design model. An explicit goal setting phase based on pedagogy is, therefore, necessary at the start of any learning analytics initiative.

2.3 Ethical learning analytics code of practice

Since the publication of the Belmont Report (NCPHS, 1979), higher education institutions have established review committees to ensure research involving human subjects is carried out ethically (Willis, Slade & Prinsloo, 2016). The principles of ethical research upheld by these review committees include respect for persons, beneficence and justice.

Respect for persons is shown when the individual is given adequate information and can make informed judgements based on this information. Special care needs to be taken to protect individuals with diminished capacity from harm. Informed consent by autonomous individuals or their legally authorised guardians should be sought for any ICT-related research (Bailey, Dittrich, Kenneally & Maughan, 2012b). The principle of beneficence compels researchers to minimise the risks associated with their research and maximise the potential benefits. Invasion of privacy is one of the major ethical dilemmas associated with learning analytics (Griffiths et al., 2016; Steiner, Kickmeier-Rust & Albert, 2016). For any intervention based on learner data, the potential benefits must be weighed against the privacy concerns of the learners. The issue of privacy as it relates to learning analytics is further explored in Section 2.3.1. To ensure the principle of justice, all human subjects should have an equal chance to be selected as participants and receive equal benefits. The issue of equity as it relates to learning analytics is further explored in Section 2.3.2.

2.3.1 Privacy

To eliminate resistance to learning analytics interventions, custodians of data have an ethical and legal obligation to protect the privacy of learners (Hoel & Chen, 2016). Learners' privacy concerns, though, should not prohibit these data-driven initiatives. Indeed, Slade and Prinsloo (2013) argue that it would be irresponsible to ignore the potential benefits of learning analytics to gain insight into complex learning processes. The issue of data privacy, therefore, deserves careful consideration to ensure acceptance of learning analytics.

In the information age, data protection has become a critical issue related to informational privacy (Griffiths et al., 2016). This sentiment is echoed by Steiner et al. (2016) in the development of LEA's BOX, a learning analytics toolbox that addresses privacy concerns associated with data-driven learner interventions. The LEA's BOX privacy and data protection framework proposes eight principles that act as best practice guidelines for learning analytics research:

  • Consent: Resistance to provide informed consent can be overcome when learners are provided with relevant information presented unambiguously (Drachsler & Greller, 2012). Necessary information includes, but is not limited to, assurance that their data will be protected, a description of the type of data collected and the purpose for analysing the data.

  • Data protection: Learners need reassurance that their data will be protected from abuse. Implemented strategies, such as anonymisation of data and the use of up-to-date encryption standards, as well as privacy policies, should be communicated to learners.

  • Purpose and data ownership: The reason for collecting and analysing data should be published. Data ownership and access rights should be clearly defined and displayed throughout the entire learning analytics process.

  • Transparency and trust: Transparency in learning analytics fosters trust in the process and inspires informed consent. An Open Learner Model as presented in (Bull & Kay, 2010) has the potential to build the trust necessary to acquire informed consent.

  • Access and control: While transparency of Open Learner Models affords learners an opportunity to view their data and the inferences made from this data, they should also be allowed an opportunity to modify the data where feasible.

  • Accountability and assessment: Stakeholders initiating learning analytics endeavours should have clearly defined roles and accountabilities throughout the process. Assigning accountabilities is done to ensure data sources and the analysis techniques are appropriate for the goal.

  • Data quality: Data collected about the learner must be timely, precise, appropriate and consistent with the goal. While data quality alone will not guarantee accurate conclusions, poor data quality may undoubtedly contribute to incorrect inferences. All stakeholders have a responsibility to ensure the quality of the raw data collected and inferences made on the data.

  • Data management and security: Policies for data management and security must be established at managerial and technical levels.

To minimise risks and maximise the benefits to be gained from learning analytics, these eight data privacy guidelines should underpin all data-driven initiatives. Adhering to these guidelines will support the principle of beneficence proposed in the Belmont Report (NCPHS, 1979).

2.3.2 Equity

To uphold the principle of justice, learning analytics must be applied fairly and equitably (Bailey, Dittrich, Kenneally & Maughan, 2012a). Unless there is a compelling reason, no learner or group of learners should be included (or excluded) from participating in data-driven interventions above others. Furthermore, if there are conflicts of interest between the educator and learner, these must be ethically managed.

The actions taken as a result of the data analysis should be applied consistently to all participants (Roberts, Howell, Seaman & Gibson, 2016). To this end, special care needs to be taken to ensure models developed through learning analytics are validated. Any potential for bias must be accounted for in the development of the learner profiles. For example, if facial recognition data is analysed, data from male and female learners must be used to create the model. Models rarely have 100% accuracy, so automated interventions must be dealt with in a sensible way (Roberts et al., 2016). One possible solution to avoid mislabelling a learner through inaccurate models is the use of an open learner model as proposed in (Bull & Kay, 2010). Open learner models allow learners to identify potential misinterpretations made in the analysis process.
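
As a minimal illustration of such validation, the sketch below computes a model's accuracy separately per learner group, using hypothetical predictions and group labels; a marked accuracy gap between groups would signal potential bias to be addressed before automated interventions are rolled out.

    # Per-group accuracy check on hypothetical model output.
    from collections import defaultdict

    def accuracy_by_group(y_true, y_pred, groups):
        hits, totals = defaultdict(int), defaultdict(int)
        for t, p, g in zip(y_true, y_pred, groups):
            totals[g] += 1
            hits[g] += int(t == p)
        return {g: hits[g] / totals[g] for g in totals}

    y_true = [1, 0, 1, 1, 0, 1]            # hypothetical true labels
    y_pred = [1, 0, 1, 0, 0, 0]            # hypothetical model predictions
    groups = ["A", "A", "A", "B", "B", "B"]
    print(accuracy_by_group(y_true, y_pred, groups))  # e.g. {'A': 1.0, 'B': 0.33}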

The ethical, privacy and equity restrictions placed on learning analytics should not deter educators from using learner data towards optimising the learning environment (Slade & Prinsloo, 2013). Instead, the learning analytics process should be accompanied by a carefully crafted code of practice to ensure buy-in from all stakeholders involved in the process.

2.4 Web analytics process model

The requirements of informed consent and beneficence imposed by an ethical learning analytics process call for "goal setting" to be an explicit step in any comprehensive learning analytics process model. "Learning optimisation" as a generic goal of learning analytics initiatives is too vague and inadequate for building trust in a learner whose informed consent is required. The learning analytics process models described in Section 2.1 are mostly silent on the need for an explicit goal setting phase. With an overemphasis on data collection as the start of the process, we run the risk of taking a haphazard approach to learning analytics initiatives.

Furthermore, the extensive choice of analysis techniques and associated educational data mining algorithms that can be used to extract meaningful information from learner data needs to be acknowledged in a comprehensive learning analytics model. New hypotheses may emerge after the initial analysis, and while the initial goal remains unchanged, these hypotheses need to be tested before action is taken based on the results of the analysis. The existing learning analytics processes (Section 2.1) also fail to acknowledge this intermediate step.

In online learning, and in particular if learning is delivered through Learning Management Systems, learners receive instruction in a web-based environment. These Learning Management Systems would typically record all learner interactions, thereby providing data that could potentially help us understand learners, their cognitive strategies or affective states. A systematic, comprehensive and student-centric learning analytics process is needed to avoid a hit-or-miss approach to harnessing this untapped learner data.

Learning Analytics and Web Analytics share the same generic goal of using data collection and analysis to understand users' online behaviour in order to optimise the websites with which they interact. It may, therefore, be worthwhile to examine web analytics models used in e-commerce.

Waisberg (2015) proposed a process of six steps that commercial website designers can use to optimise e-commerce websites under their control. An examination of this model reveals not only steps similar to those prescribed in the learning analytics process models described in Section 2.1, but also steps to overcome some of the limitations identified in existing models.

As can be seen from Figure 7, the web analytics process model is also represented as a cycle, and the following steps are congruent with the learning analytics process models described in Section 2.1:

  • Step 3: Collect data,

  • Step 4: Analyse data,

  • Step 6: Implement insights

Having previously established the need for an explicit goal setting phase and an ability to evaluate alternative hypotheses post analysis, the web analytics model makes these steps explicit through the addition of Step 1 (Define Goal) and Step 5 (Test Alternatives).

These steps will be described in the context of the provision of a dynamic learner profile for differentiated instruction in Section 4. The next section discusses the research approach and knowledge contribution of this paper in more detail.

 

3 RESEARCH APPROACH

Simon (1996) distinguishes research in the natural sciences from research in the "sciences of the artificial". The focus of research in natural science is on describing and explaining how objects in nature or society behave and interact, while research into human-made objects focuses on how they are designed to meet predefined goals.

Building on the ideas of Simon (1996) and design research in other fields, Hevner, March, Park and Ram (2004) developed guidelines for conducting, evaluating and presenting design science research in the Information Systems discipline. Design Science Research produces technological artefacts as relevant solutions to problems identified in a specific context. These artefacts can take the form of a construct, model, method or instantiation. The artefact contributed in this study is the proposed model synthesised in Section 4. The model represents an abstracted process to optimise online learning environments through the provision of differentiated instruction based on a dynamic profile. The framework for a Design Science Research contribution (Gregor & Hevner, 2013) classifies artefacts according to solution and application domain maturity (Figure 8). A routine design exercise, in which known solutions are applied to known problems, makes no knowledge contribution and is therefore not suitable as a research inquiry. Based on this maturity model, knowledge contributions in design science can be classified as improvement, exaptation or invention. A knowledge contribution is classified as an invention if a new solution is developed for a previously unknown problem. An invention is a rare form of knowledge contribution, and examples in the literature are scarce (Gregor & Hevner, 2013). A knowledge contribution is classified as an improvement if it is a new solution for a known problem, and as an exaptation if it extends known solutions from other fields to new problems.

When classifying a contribution on sliding scales from specific to abstract, limited to complete and less mature to more mature, three levels can be identified (Gregor & Hevner, 2013):

  • Level 1: Situated implementation of the artefact, e.g. instantiation of a software product or application of a process to develop and evaluate the product.

  • Level 2: Emerging design theory in the form of prescriptive knowledge, e.g., constructs, methods, models, design principles and technological rules.

  • Level 3: Complete mid-range or grand design theories about embedded phenomena.

The process model, described in Section 4, was derived from an established web analytics process model used in business. As such, it is a Level 2 contribution of the exaptation type, since the proposed model extends a known solution customarily used in a business context (web analytics) to a problem in online learning design.

 

4 A MODEL FOR DIFFERENTIATED ONLINE INSTRUCTION BASED ON A DYNAMIC LEARNER PROFILE

This section synthesises a model for Differentiated Instruction based on a dynamic learner profile. The model is derived by integrating:

  • the abstracted differentiated learning design layers to initiate and tailor the online course (Section 2.2),

  • principles of an ethical learning analytics code of conduct (Section 2.3),

  • the steps from the web analytics process to build a learner profile (Section 2.4).

The aim of building a learner profile to provide tailored instruction is shared by researchers who create automated adaptive education systems (AES). Two core phases of a typical AES are the learner modelling phase, during which the learner profile is built, and the adaptive learning design phase, during which instruction is personalised based on the unique learner profiles (Brusilovsky & Millán, 2007). The proposed model consists of three phases (Figure 9):

  • Preliminary Goal Setting Phase

  • Learning Design Phase consisting of two distinct subphases

- Initial Learning Design before learner modelling

- Differentiated Learning Design after learner modelling

  • Learner Modelling Phase

4.1 Preliminary goal setting phase

The preliminary goal setting phase in Figure 9 stems from steps 1 and 2 of the Web Analytics Process model (Figure 7). One of the abstracted learning design layers is the validation layer, which proposes that any learning design choice should be backed by recognised educational theories. One such theory, or group of theories, is the identification of learning styles and the tailoring of instruction based on unique learner attributes associated with the learning style model. The validation layer, therefore, maps onto the preliminary goal setting phase, since both aim to initiate and conclude the learning analytics initiative based on pedagogy. This paper proposes a pragmatic approach that identifies relevant attributes from multiple learning style theories.

4.1.1 Identify goal and set Key Performance Indicators

The general goal proposed in the model (Figure 9) is enabling and optimising differentiated learning design based on learning style theories. Since differentiated instruction shares the common phases of tailored learning design and learner modelling with learning style based adaptive education systems, two sub-goals are identified:

  • Correctly identifying relevant learner attributes from learners' online behaviours

  • Appropriately tailoring instruction based on the identified learner attributes

With the goal identified as enabling differentiated instruction and optimising the learning design based on learner profiles, the Key Performance Indicators are linked to the two sub-objectives of the learner modelling phase and the learning design phase. Since the outcome of the learner modelling phase is a learner profile of attributes from selected learning style theories, a generic KPI for a successful learner modelling exercise can be "Attribute X is identified in Learner A". Similarly, the generic KPI to measure a successful learning design phase can be "Learning design is optimised for a learner with Attribute X". A model for evaluating the impact of the learning design is beyond the scope of this paper, but the step is included as part of the goal-setting phase of the proposed model (See the "Test Impact" shape on Figure 9). This impact study is necessary to measure whether the changes made to the learning design had the desired effect on learning. The results of the impact study will feed into further goals for optimising the learning environment and initiate a new cycle.

4.1.2 Select learner attributes and describe online behavioural patterns

One of the biggest challenges when integrating learning styles into adaptive learning systems is the selection of an appropriate learning style theory. Mounting criticism from dissenting voices (Coffield, Moseley, Hall & Ecclestone, 2004; Cook, 2012; Kirschner, 2016) points to theoretical incoherence, conceptual confusion, a lack of scientific basis and a seemingly never-ending, overlapping characterisation of learner attributes. Further criticism is levelled at the questionnaires used to determine student attributes. This paper proposes that, instead of focusing on the model of one particular theorist, we focus on the student attributes defined in various learning style theories. By limiting the content of the learner profile to only one learning style theory, we may be missing out on other attributes with an equally significant impact on teaching and learning. The following criteria should be applied to the selection of suitable attributes (Popescu, 2008):

  • The learner attributes must influence the learning process in some way, based on an educational theory

  • The learner attributes must have implications for differentiated learning design

  • It should be possible to infer the learner attributes from metrics that represent online logged behaviours

The focus on collecting and analysing patterns of students' online behaviours to build a learner profile dynamically is precisely a response to the criticism against the use of questionnaires to determine student attributes. When using implicit learner modelling techniques, relevant metrics must be identified that describe the online behaviour of the learner. These metrics must be mapped onto the chosen learner attributes validated by existing educational theory.
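
A minimal sketch of this mapping step follows. The metrics, thresholds and attribute names are hypothetical placeholders; in practice, each mapping rule would be validated against the educational theory from which the attribute is drawn.

    # Mapping logged behavioural metrics onto hypothetical learner attributes.
    def infer_attributes(metrics):
        attributes = set()
        # e.g. frequent forum posting as an indicator of an 'active' preference
        if metrics.get("forum_posts", 0) > 10:
            attributes.add("active")
        # e.g. favouring video over text as an indicator of a 'visual' preference
        if metrics.get("video_views", 0) > metrics.get("text_views", 0):
            attributes.add("visual")
        return attributes

    print(infer_attributes({"forum_posts": 14, "video_views": 20, "text_views": 5}))
    # e.g. {'active', 'visual'}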

4.2 Learning design phase

The learning design phase consists of two subsections, one performed before learner modelling (initial learning design) and one initiated in response to changes in the learner profile (differentiated learning design).

4.2.1 Initial learning design

During the initial learning design phase, the focus is on the domain layer and the resource layer.

For the domain layer, a theoretically sound online instructional design process should be followed to create a significant student-centric learning experience. Module outcomes need to be defined and matched with suitable content. At this stage, the content will be described and later instantiated when the focus shifts to the resource layer. The initial learning design can be represented in the form of a domain ontology.

The input for the resource layer is the learner attributes defined in the goal setting phase. The learning objects that will be presented to the students in the online environment should be linked to the stated module outcomes and be based on the pedagogic needs associated with the selected attributes. These learning objects must be tagged with educational metadata to record the teachers' pedagogic intention. The IEEE LOM standard provides a suitable vocabulary for educational metadata (IEEE 1484.12.1, 2002).
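
The following sketch shows one way a learning object could be tagged. The field names are taken from the Educational category of IEEE LOM (e.g. learningResourceType, interactivityType, difficulty), but the identifiers and values are illustrative assumptions.

    # A learning object tagged with IEEE LOM Educational metadata (illustrative).
    learning_object = {
        "id": "lo-042",                              # hypothetical identifier
        "outcome": "Explain the concept of a loop",  # link to a module outcome
        "lom_educational": {
            "learningResourceType": "simulation",
            "interactivityType": "active",
            "interactivityLevel": "high",
            "difficulty": "medium",
        },
    }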

4.2.2 Differentiated learning design

While learners navigate the course material, the learner modelling phase will continuously update a learner profile. This profile provides the input into the differentiated learning design subsection. During differentiated learning design the focus is on the goal and constraint layer and the course layer.

Learning rules are created in the goal and constraint layer. Pre- and post-conditions based on the learner profile are overlaid onto the domain ontology. These rules influence the sequencing, content and presentation of learning objects. Learning objects are differentiated based on pre-requisite knowledge, learner goals, cognitive and affective needs contained within the learner profile.

Rules for differentiated learning design based on learner attributes can be represented using IF statements of the format proposed in Popescu (2008):

IF attribute THEN Action Object Value, where

  • Action = Sort | Dim | Hide | Highlight | Trigger | Show

  • Object = Metadata tag of Learning Object | UI element

  • Value = Value of Metadata tag

The metadata tags of learning objects and their associated values are linked to the fields and values from the Educational category of the IEEE LOM standard (IEEE 1484.12.1, 2002). The list of actions suggested above is not exhaustive; those mentioned illustrate the typical techniques used in adaptive education systems to tailor learning objects (Popescu, 2008), and a brief sketch of a rule interpreter follows the list:

  • Sort represents the sequencing of LOs or UI elements

  • Dim represents greying out or disabling an LO or UI element such as a button or hyperlink

  • Hide represents the removal of an LO or UI element

  • Highlight represents a recommendation of a particular LO or UI element

  • Trigger represents an action such as the sending of an automated message

  • Show represents displaying an LO or UI element such as a table of content or annotation
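
A minimal interpreter for rules of this format might look as follows. The profile, rule and learning objects are hypothetical, and only the Highlight action is exercised; the remaining actions would be handled analogously.

    # Applying IF attribute THEN Action Object Value rules to tagged objects.
    rules = [
        # IF visual THEN Highlight objects whose learningResourceType is 'diagram'
        {"attribute": "visual", "action": "Highlight",
         "object": "learningResourceType", "value": "diagram"},
    ]

    def apply_rules(profile, learning_objects, rules):
        decisions = []
        for rule in rules:
            if rule["attribute"] not in profile:
                continue  # rule does not apply to this learner
            for lo in learning_objects:
                if lo["lom_educational"].get(rule["object"]) == rule["value"]:
                    decisions.append((rule["action"], lo["id"]))
        return decisions

    los = [
        {"id": "lo-001", "lom_educational": {"learningResourceType": "diagram"}},
        {"id": "lo-002", "lom_educational": {"learningResourceType": "narrative text"}},
    ]
    print(apply_rules({"visual"}, los, rules))  # -> [('Highlight', 'lo-001')]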

The learning rules designed in the goal and constraint layer are implemented in the course layer. The learning objects from the resource layer are tailored according to the rules defined in the goal and constraint layer. The learning objects can be differentiated on their sequence (Action: Sort), content (Actions: Dim, Hide, Highlight, Trigger, Show) or the presentation UI. The chosen educational theory will determine the form of the actions to be taken based on the learner attribute. Any tailored learning object must still guide the learners towards the same learning outcomes defined in the domain layer.

As can be seen from the IF statement, the identified attribute will be the trigger to inform the differentiated learning design choices. In the learner modelling phase, the online behaviour of learners will be used to infer relevant attributes to add to the learner profile. This learner modelling phase is described next.

4.3 Learner modelling phase

The steps in the learner modelling phase are based on steps 3-6 of the web analytics process model (Figure 7) and the learning analytics code of ethical practice described in Section 2.3. Also incorporated into the learner modelling phase are activities and techniques associated with learning style based adaptive education systems and educational data mining.

4.3.1 Review ethical requirements

Any learning analytics initiative must be conducted ethically, and practitioners must carefully address privacy (Section 2.3.1) and equity (Section 2.3.2) concerns. To ensure buy-in from learners, their privacy must be guaranteed during data collection, and they must be convinced that the benefits that will accrue from the data analysis outweigh potential risks. A learning analytics code of practice must be drafted and used to acquire informed consent from all participants whose data will be analysed and used for changes to the learning design. This code of practice must incorporate principles of ethical research, i.e., respect for persons, beneficence and justice (NCPHS, 1979).

4.3.2 Data collection

During the data collection step, metrics identified during the goal setting phase must be collected. All potential data sources that may supply these metrics need to be identified. In implicit modelling, these metrics represent learner cognitive and affective behaviours linked to learner attributes associated with educational theories. In explicit modelling, data can be elicited directly from learners responding to questions. During data collection, all privacy measures as drafted in the learning analytics code of practice must be implemented.
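
As a sketch of implicit data collection, the following derives a per-learner metric table from an LMS event log. The column and event names assume a generic Moodle-style export and would need to be adapted to the schema of the actual platform; pandas is assumed to be available.

    # Turning a raw event log into per-learner behavioural metrics.
    import pandas as pd

    log = pd.DataFrame({
        "userid": [7, 7, 7, 9, 9],  # hypothetical learner ids
        "eventname": ["course_viewed", "discussion_created",
                      "course_viewed", "course_viewed", "quiz_attempted"],
    })

    # One row per learner, one column per event type: the raw metric table
    # that feeds the analysis step.
    metrics = pd.crosstab(log["userid"], log["eventname"])
    print(metrics)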

4.3.3 Data analysis

Learner attributes as identified during the goal setting phase are inferred during the data analysis step. The goal and the nature of the raw data collected in the previous step determine the sequence of activities in the data analysis step. It may be possible, for example, to use simple inferential statistics if inferences and predictions are to be made on a small dataset. More complex goals and large datasets may require more advanced educational data mining techniques, such as those listed below (Baker & Yacef, 2009), with a brief predictive-modelling sketch after the list:

  • Predictive modelling to model something that cannot be directly observed by using readily available features as input (e.g., Classification, Latent Knowledge Estimation, Regression)

  • Structure discovery to find patterns in data that are not obvious (e.g. Clustering, Factor Analysis, Social Network Analysis)

  • Relationship mining to discover or confirm meaningful connections between variables that affect learning (e.g., Association Rule Mining, Correlation Mining, Sequential Pattern Mining, Causal Data Mining)

  • Distillation and preparation of data into meaningful information that teachers and learners can use to make informed decisions (e.g., Data Visualisation, Text Mining)
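
The sketch below illustrates the predictive modelling category with a decision tree that flags at-risk learners from two behavioural features. The features, training data and prediction are hypothetical; scikit-learn is assumed.

    # Predicting an unobservable label (at-risk) from observable behaviour.
    from sklearn.tree import DecisionTreeClassifier

    # Features (hypothetical): [logins per week, assignments submitted].
    X = [[1, 0], [2, 1], [8, 5], [9, 6], [1, 1], [7, 5]]
    y = [1, 1, 0, 0, 1, 0]  # 1 = at risk, 0 = not at risk

    model = DecisionTreeClassifier(random_state=0).fit(X, y)
    print(model.predict([[2, 0]]))  # -> [1]: flag this learner for intervention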

Large datasets from disjoint sources may require pre-processing to prime data for analysis. Pre-processing can include data cleaning, integration, reduction or transformation. It is beyond the scope of this paper to report on all possible pre-processing techniques, but the following serve as an illustration of the strategies commonly applied to data mining, with a short pandas sketch after the list:

  • Data cleaning is responsible for removing inconsistencies and errors in the data. For example, there may be missing values, noisy (i.e., meaningless or unstructured) data, outliers or inconsistent data.

  • Data integration is responsible for consolidating data from multiple disjoint data sources. Learners frequently need to consult resources outside of the learning environment or perform offline activities. Biometric data may, for example, need to be integrated with online behavioural metrics in order to measure affect. Metrics may, therefore, come from several sources and need to be combined sensibly.

  • Data reduction focuses on deciding which data features to include or exclude for analysis. Data reduction aims to find a smaller dataset that can produce similar analytical results. Data reduction can be performed through several techniques such as:

- Aggregation: combining two or more attributes

- Sampling: selecting a subset from the population

- Feature subset reduction: removing redundant or irrelevant features

  • Data transformation converts data into a different format. Conventional techniques to transform data include:

- Normalisation: scaling values into a predetermined range

- Smoothing: the removal of outliers

- Aggregation: preparing data into a summarised format

- Generalisation: substituting data points into hierarchical layers
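
The following pandas sketch mirrors several of the techniques above (cleaning, reduction and transformation) on a small hypothetical metric table; column names and values are illustrative only.

    # Pre-processing a raw metric table before analysis.
    import pandas as pd

    raw = pd.DataFrame({
        "quiz_score": [55.0, None, 61.0, 300.0],  # a missing value and an outlier
        "video_views": [12, 30, 11, 29],
        "video_views_copy": [12, 30, 11, 29],     # a redundant feature
    })

    clean = raw.copy()
    # Cleaning: fill the missing value and clip the outlier into a valid range.
    clean["quiz_score"] = clean["quiz_score"].fillna(clean["quiz_score"].median())
    clean["quiz_score"] = clean["quiz_score"].clip(0, 100)
    # Reduction: drop the redundant feature.
    clean = clean.drop(columns=["video_views_copy"])
    # Transformation: normalise all values into the range [0, 1].
    clean = (clean - clean.min()) / (clean.max() - clean.min())
    print(clean)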

When the data is ready, analysis can proceed through a suitable educational data mining technique. Data pre-processing and analysis conclude with an evaluation of the results. Evaluation methods depend on the data mining technique used and are necessary to measure the quality of the learner model that results from the data analysis.

4.3.4 Test alternatives and implement insights

The educational data mining step may reveal unexpected results that need further investigation. The proposed model allows an optional step to generate new hypotheses that may require:

  • Exploration of different data sources

  • Addition of new attributes/features

  • Application of different educational data mining techniques, for example

- Trying different algorithms

- Tweaking clusters

- Using the results of one analysis technique as input into another

  • Applying negotiated learner modelling to seek the learner's approval of the conclusions made in the data analysis step

  • Making a quick change to the learning design and conducting a small-scale pilot study to measure the effect of the change

Once satisfactory results are achieved, the necessary action can be taken ("Implement Insights"). This action involves a two-part process:

  • Updating the learner profile with inferred information

  • Initiating the differentiated learning design in response to the changes in the learner profile (Section 4.2.2)

Evaluation of the impact of the differentiated learning design on learner satisfaction, learning effectiveness and efficiency closes the process model loop. This step is represented in the model as an off-page reference, since it is yet to be modelled as part of future work.

 

5 CONCLUSION AND FURTHER WORK

Existing learning analytics process models suffer from an overly narrow focus on the data collection and analysis steps of learning interventions. This myopic view of the technical aspects of learning analytics often results in interventions lacking pedagogical validation and ethical reflection. When the first step of the learning analytics process is data collection, there is likely to be a lack of clarity on the goal of the intervention. An ethical learning analytics code of practice requires participants to be explicitly made aware of the goal of the data collection, analysis and intervention. A learning analytics process model also needs to acknowledge that more questions may arise after the initial analysis is done. There is, therefore, a need for a more comprehensive abstraction of the learning analytics process.

Regarding Design Science Research, the knowledge contribution made in this paper is that of an emerging model that addresses limitations in existing learning analytics models. The proposed solution can be classified as an exaptation: a tried-and-tested model used in e-commerce, applied to the online learning application domain.

The process model proposed in this paper emerged by incorporating the steps of an established web analytics process with educational theory, an ethical learning analytics code of practice and a layered abstraction of online learning design. The pedagogical aspects of the model are derived from the concept of differentiated instruction, a teaching approach that prescribes modifying instruction based on the diverse needs of individuals or groups sharing similar attributes. The online learning design is abstracted through several layers that systematically guide instructional designers through the process of designing and developing tailored learning objects to satisfy a range of diverse learner needs. The learner modelling phase prescribes a review of ethical requirements, the drafting of an ethical code of practice and the implementation of mechanisms to ensure principles of data privacy and equity are upheld throughout data collection, analysis and intervention. The learner modelling phase also provides an optional step to test new hypotheses should they arise after the initial analysis.

Many educational institutions have adopted Learning Management Systems as their online learning environment. However, Learning Management Systems mostly suit a one-size-fits-all approach to teaching. Future work includes instantiating the learning design phase and learner modelling phase in a Learning Management System to determine whether it is possible to provide differentiated instruction and maintain a dynamic learner profile based on the data logged by the system. The ultimate goal of the proposed model is to enable the discovery of relevant learner cognitive and affective attributes that influence online learning behaviours. While the contribution of this paper is on how learning analytics can inform learning design, a model to measure the impact of the changes to the learning design remains future work.

 

References

Atif, Y. (2010). An architectural specification for a system to adapt to learning patterns. Education and Information Technologies, 16(3), 259-279. 10.1007/s10639-010-9125-9

Bailey, M., Dittrich, D., Kenneally, E. & Maughan, D. (2012a). Applying ethical principles to information and communication technology research: A companion to the Department of Homeland Security Menlo report. US Department of Homeland Security.

Bailey, M., Dittrich, D., Kenneally, E. & Maughan, D. (2012b). The Menlo report. IEEE Security & Privacy, 10(2), 71-75. 10.1109/MSP.2012.52

Baker, R. S. & Yacef, K. (2009). The state of educational data mining in 2009: A review and future visions. Journal of Educational Data Mining, 1(1), 3-16. Last accessed 23 Oct 2018. Retrieved from https://jedm.educationaldatamining.org/index.php/JEDM/article/view/8

Brusilovsky, P. & Millán, E. (2007). User models for adaptive hypermedia and adaptive educational systems. In P. Brusilovsky, A. Kobsa & W. Nejdl (Eds.), The adaptive web. 10.1007/978-3-540-72079-9_1

Brusilovsky, P., Wade, V. & Conlan, O. (2007). From learning objects to adaptive content services for e-learning. In C. Pahl (Ed.), Architecture solutions for e-learning systems (pp. 243-261). IGI Global.

Bull, S. & Kay, J. (2010). Open learner models. In R. Nkambou, J. Bourdeau & R. Mizoguchi (Eds.), Studies in computational intelligence (Vol. 308). 10.1007/978-3-642-14363-2_15

Chatti, M. A., Dyckhoff, A. L., Schroeder, U. & Thüs, H. (2012). A reference model for learning analytics. International Journal of Technology Enhanced Learning, 4(5/6), 318-331. 10.1504/ijtel.2012.051815

Clow, D. (2012). The learning analytics cycle. In Proceedings of the 2nd international conference on learning analytics and knowledge (pp. 134-138). 10.1145/2330601.2330636

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Centre London.

Cook, D. A. (2012). Revisiting cognitive and learning styles in computer-assisted instruction. Academic Medicine, 87(6), 778-784. 10.1097/ACM.0b013e3182541286

De Bra, P., Aerts, A., Berden, B., De Lange, B., Rousseau, B., Santic, T., ... Stash, N. (2003). AHA! The adaptive hypermedia architecture. In Proceedings of the fourteenth ACM conference on hypertext and hypermedia (pp. 81-84). 10.1145/900065.900068

Desmarais, M. C. & Baker, R. (2011). A review of recent advances in learner and skill modeling in intelligent learning environments. User Modeling and User-Adapted Interaction, 22(1-2), 9-38. 10.1007/s11257-011-9106-8

Drachsler, H. & Greller, W. (2012). The pulse of learning analytics: Understandings and expectations from the stakeholders. In Proceedings of the 2nd international conference on learning analytics and knowledge (pp. 120-129). 10.1145/2330601.2330634

Graf, S., Kinshuk & Liu, T.-C. (2008). Identifying learning styles in learning management systems by using indications from students' behaviour. In 2008 Eighth IEEE international conference on advanced learning technologies (pp. 482-486). 10.1109/ICALT.2008.84

Graf, S. [Sabine] & Kinshuk. (2008). Analysing the behaviour of students in learning management systems with respect to learning styles. In M. Wallace, M. Angelides & P. Mylonas (Eds.), Advances in semantic media adaptation and personalization. 10.1007/978-3-540-76361_3

Gregor, S. & Hevner, A. R. (2013). Positioning and presenting design science research for maximum impact. MIS Quarterly, 37(2), 337-355. 10.25300/misq/2013/37.2.01

Griffiths, D., Drachsler, H., Kickmeier-Rust, M., Steiner, C., Hoel, T. & Greller, W. (2016). Is privacy a show-stopper for learning analytics? A review of current issues and solutions. Last accessed 23 Oct 2018. Retrieved from http://www.laceproject.eu/learning-analytics-review/files/2016/04/LACE-review-6_privacy-show-stopper.pdf

Halim, N. D. A., Ali, M. B. & Yahaya, N. (2011). Personalized learning environment: Accommodating individual differences in online learning. In 2011 international conference on social science and humanity (Vol. 5, pp. 398-400). Last accessed 23 Oct 2018. Retrieved from http://www.ipedr.com/vol5/no2/88-H10220.pdf

Hevner, A., March, S., Park, J. & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75-105. https://doi.org/10.2307/25148625

Hoel, T. & Chen, W. (2016). Privacy-driven design of learning analytics applications: Exploring the design space of solutions for data sharing and interoperability. Journal of Learning Analytics, 3(1), 139-158. 10.18608/jla.2016.31.9

Hundhausen, C., Olivares, D. & Carter, A. (2017). IDE-based learning analytics for computing education: A process model, critical review and research agenda. ACM Transactions on Computing Education, 17(3), 1-26. 10.1145/3105759

IEEE 1484.12.1. (2002). IEEE standard for learning object metadata. Learning Technology Standards Committee of the IEEE, 1484.12.1-(July), 1-44. 10.1109/IEEESTD.2002.94128

Kirschner, P. A. (2016). Stop propagating the learning styles myth. Computers & Education, 106, 166-171. 10.1016/j.compedu.2016.12.006

Kop, R. & Fournier, H. (2010). New dimensions to self-directed learning in an open networked learning environment. International Journal of Self-Directed Learning, 7(2), 1-18.

Luna, J. M., Castro, C. & Romero, C. (2017). MDM tool: A data mining framework integrated into Moodle. Computer Applications in Engineering Education, 25(1), 90-102. 10.1002/cae.21782

Manning, S., Stanford, B. & Reeves, S. (2010). Valuing the advanced learner: Differentiating up. The Clearing House, 83(4), 145-149. 10.1080/00098651003774851

Melia, M. & Pahl, C. (2009). Constraint-based validation of adaptive e-learning courseware. IEEE Transactions on Learning Technologies, 2(1), 37-49. 10.1109/TLT.2009.7

Merceron, A., Blikstein, P. & Siemens, G. (2015). Learning analytics: From big data to meaningful data. Journal of Learning Analytics, 2(3), 4-8. 10.18608/jla.2015.23.2

NCPHS. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. OPRR Reports, 44(76), 1-5.

Popescu, E. (2008). Dynamic adaptive hypermedia systems for e-learning (Doctoral dissertation, University of Technology of Compiègne).

Prinsloo, P., Slade, S. & Galpin, F. (2012). Learning analytics: Challenges, paradoxes and opportunities for mega open distance learning institutions. Proceedings of the 2nd International Conference on Learning Analytics and Knowledge - LAK '12, 130-133. 10.1145/2330601.2330635

Roberts, L. D., Howell, J. A., Seaman, K. & Gibson, D. C. (2016). Student attitudes toward learning analytics in higher education: "The Fitbit version of the learning world". Frontiers in Psychology, 7, 1-11. 10.3389/fpsyg.2016.01959

Romero, C. & Ventura, S. (2010). Educational data mining: A review of the state of the art. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 40(6), 601-618. 10.1109/TSMCC.2010.2053532

Romero, C. & Ventura, S. (2013). Data mining in education. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 3(1), 12-27. 10.1002/widm.1075

Romero, C., Ventura, S. & García, E. (2008). Data mining in course management systems: Moodle case study and tutorial. Computers & Education, 51, 368-384.

Siemens, G. (2013). Learning analytics: The emergence of a discipline. American Behavioral Scientist, 57(10), 1380-1400. 10.1177/0002764213498851

Simon, H. A. (1996). The sciences of the artificial. Computers & Mathematics with Applications, 33(5). 10.1016/S0898-1221(97)82941-0

Slade, S. & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510-1529. 10.1177/0002764213479366

Steiner, C. M., Kickmeier-Rust, M. D. & Albert, D. (2016). LEA in private: A privacy and data protection framework for a learning analytics toolbox. Journal of Learning Analytics, 3(1), 66-90. 10.18608/jla.2016.31.5

Tomlinson, C. A., Brighton, C., Hertberg, H., Callahan, C. M., Moon, T. R., Brimijoin, K., ... Reynolds, T. (2003). Differentiating instruction in response to student readiness, interest, and learning profile in academically diverse classrooms: A review of literature. Journal for the Education of the Gifted, 27(2-3), 119-145. 10.1177/016235320302700203

Tsai, Y.-S. & Gasevic, D. (2017). Learning analytics in higher education - challenges and policies. Proceedings of the Seventh International Learning Analytics & Knowledge Conference - LAK '17, 233-242. 10.1145/3027385.3027400

Verbert, K., Duval, E., Klerkx, J., Govaerts, S. & Santos, J. L. (2013). Learning analytics dashboard applications. American Behavioral Scientist, 57(10), 1500-1509. 10.1177/0002764213479363

Viberg, O., Hatakka, M., Bälter, O. & Mavroudi, A. (2018). The current landscape of learning analytics in higher education. Computers in Human Behavior, 89, 98-110. 10.1016/j.chb.2018.07.027

Waisberg, D. (2015). Google analytics integrations. John Wiley & Sons.

Willis, J. E., Slade, S. & Prinsloo, P. (2016). Ethical oversight of student data in learning analytics: A typology derived from a cross-continental, cross-institutional perspective. Educational Technology Research and Development, 64(5), 881-901. 10.1007/s11423-016-9463-4

 

 

Received: 01 Mar 2017

All the content of this journal, except where otherwise noted, is licensed under a Creative Commons License.