Introduction

How can we test gamification and social learning in online writing environments? This paper describes an online writing environment, GWrit (Game of Writing), in which students can comment on each other’s writing and earn rewards for on-task activity (gamification). GWrit was developed by the Arts Resource Centre team at the University of Alberta over the past four years and has been used with over 1000 students.

Gamification has been used increasingly in education in recent years. Deterding et al. (2011) define gamification as the use of game elements and game-design techniques in non-game contexts to engage people in solving problems. Points, levels, badges, leaderboards, rewards, progress bars, storylines, and feedback are the game design elements most often used in educational contexts (Nah et al. 2014). Most papers report that gamified environments employ several of these mechanisms to encourage students to engage with them (see Barata et al. 2016; Brewer et al. 2013; Fitz-Walter, Tjondronegoro and Wyeth 2012), and students’ motivation for learning and lecture participation have also been positively influenced (Barata et al. 2016; Brewer et al. 2013; Gibson et al. 2015; Fitz-Walter, Tjondronegoro and Wyeth 2012; Kapp 2012). However, game studies scholar Elizabeth Lawley (2012) warns that reducing the complexity of well-designed games to these surface elements may fail to engage; the deeper mechanics behind the surface elements deserve more emphasis.

Feedback, one of those deeper game mechanics, is reported to play a significant role in facilitating learning (Brookhart 2017). It is already a key element of traditional, non-gamified teaching, but it can be magnified in a gamified environment (Kapp 2012). Beyond progress bars and other built-in visual cues, comments from other players (students) also provide learners with continual feedback. Social networking, which allows students to build connections during the learning process, encourages students to give feedback on each other’s work and has been shown to have a positive influence on students’ academic learning (Tian et al. 2011). Feedback, in turn, has been found to have a significant influence on learner performance (Cho et al. 2007; De-Jorge-Moreno 2012). In other words, the more a student gets involved in the network, the better they are likely to perform.

Gamified analytics and feedback, together with social media-inspired peer commenting, are the key pedagogical innovations behind GWrit. GWrit combines gamification elements with a social network in which students can comment on each other’s drafts and thereby become better critics of their own writing. Our aim is to create an environment that is more engaging and motivating for learning how to write in academic contexts.

This paper provides an overview of our research on GWrit from the following perspectives:

  1. key features of GWrit;

  2. an evaluation of GWrit using Cognitive Walkthrough and Heuristic Evaluation approaches;

  3. data on user behaviour based on Google Analytics, including comparisons to the behaviour we expected and desired;

  4. a discussion of what preliminary course statistics tell us about the effectiveness of GWrit, including the role that the task completion structure played in motivating learning.

The research subjects were undergraduate students from a wide variety of programs, primarily in Writing Studies (WRS) 102, a 200-student (per term) academic writing course using an online, blended, and gamified learning environment. These students interacted extensively over the duration of the 13-week course, posting in excess of 8000 peer comments, reading thousands of drafts of each other’s work, and habitually using various feedback mechanisms built into the system. While about sixty percent of those students were in their first year, the remainder were from upper years, including about ten percent who were fourth year students. As part of the gamification of the academic writing course, students were given a choice to complete one of three assignments for each of the four modules of the course (i.e. they wrote a total of 4 assignments from a choice of 12). Assignments were grouped into three streams: Arts, Social Science, and Science. Students could choose a science-related topic, for example, or an arts or social science topic. They could pick which genre of assignment they preferred to write, as well. By giving students choices (a somewhat unusual feature in writing courses), we allowed them to play to their strengths as writers and/or to choose a context for their writing with which they were familiar. The gamification of the course went beyond the surface features of badges and leaderboards to draw upon strategies from each student’s background and each student’s strengths.

Major features of GWrit

Gamification

Gamification involves turning a ‘tedious’ task, like writing an essay or washing the dishes, into a game. Jane McGonigal (2011), in Reality is Broken, describes a variety of situations where participants can be motivated by using gaming principles to enliven an otherwise tedious task. An example is Chore Wars (http://www.chorewars.com/), where you define the chores you don’t want to do and then earn points for doing them, as if the chores were quests in a fantasy role-playing game. In her advocacy of gamification, McGonigal contrasts game life with “real” life: “Compared with games, reality is too easy. Games challenge us with voluntary obstacles and help us put our personal strengths to better use” (McGonigal 2011, 4). As the title of her book suggests, she believes that we can moderate the tedium and other problems, the “brokenness,” of everyday reality by thinking of it as a game and, where possible, designing game-like obstacles and rewards into it. This philosophy of gamification can be valuable to anyone trying to enhance motivation in learning. We take a different view: gamification involves providing feedback based on analytics of one’s activity. Whether you get a word count at the bottom of your window in a status bar or a badge when you hit a target, in both cases you are getting feedback based on analytics.

We have built GWrit so that it can be a platform for analyzing and representing information about a user’s writing back to them in different forms, specifically in the form of:

  • Feedback from another person like a comment from a fellow student or TA, or

  • Feedback in the form of gamification components, or

  • Feedback in the form of an analytical panel that might, for example, visualize activity.

Our working hypothesis is that users want information about what they are doing, and that gamification can be a playful way of representing that information back to users so that they can make decisions and perhaps be motivated differently. Gamification, for the purposes of this paper, is a process of re-presenting information in a different rhetorical mode. Instead of simply stating information about progress on a project (as in “you have finished 3 out of 5 tasks in 9 days”), gamification experiments with presenting this information playfully. GWrit is a platform into which we can plug and track a variety of analytics that gather information about writing; a variety of “serious” and “gamified” representations of that information; and tools for capturing information and comments about the writing we are trying to encourage through gamified analytics.

GWrit was initially designed to provide a place for users to define their writing tasks and overcome procrastination by getting points as they finish self-imposed tasks. In an early prototype we built a challenge mode where people could challenge each other to finish tasks.

With the opportunity to adapt it for university writing courses, GWrit was changed to provide an environment where instructors can design writing tasks with documented milestones for students to complete. The system keeps track of different types of behaviour, especially the completion of assignments, and uses rules to then assign badges based on activity (Figure 1).

Figure 1

Badge rewards in GWrit.

The most important feature we built was a peer commenting system so that students could be asked to comment on each other’s writing. As will be demonstrated, this feature was heavily used, partly because its use was required in the course, but many students used it more than was required. One reason for use beyond the requirement may be that the attention of others is a motivating form of feedback.

One last word on the gamification built into GWrit. The current limitation of this experimental system is that the rules for awarding badges have to be hard coded (Figure 2). One consequence of hard coding the badges is that, to save development time, we limited the number of badges students could collect. Students collect the badges as they progress through the course (Figure 3). We have done the preliminary design work to imagine an environment in which an instructor could define which activity leads to which badge.

Figure 2

Example of rules for badges.
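Because the badge rules are hard coded, each one amounts to a simple threshold check over a student’s recorded activity. The following minimal Python sketch illustrates the general idea; the badge names, activity fields, and thresholds are hypothetical examples, not the actual rules shown in Figure 2.

```python
# Hypothetical sketch of hard-coded badge rules: each rule pairs a badge name
# with a predicate over a student's recorded activity counts.

activity = {
    "tasks_completed": 4,    # assignments submitted for grading
    "comments_posted": 12,   # peer comments written
    "drafts_submitted": 5,   # drafts submitted for review
}

# (badge, predicate) pairs; the names and thresholds are illustrative only.
BADGE_RULES = [
    ("First Draft",   lambda a: a["drafts_submitted"] >= 1),
    ("Peer Reviewer", lambda a: a["comments_posted"] >= 10),
    ("Task Master",   lambda a: a["tasks_completed"] >= 4),
]

def award_badges(activity):
    """Return the badges whose rule is satisfied by this activity record."""
    return [badge for badge, rule in BADGE_RULES if rule(activity)]

print(award_badges(activity))  # ['First Draft', 'Peer Reviewer', 'Task Master']
```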

Figure 3

User profile with badges.

Analytics

One of the key digital humanities features of GWrit is analytics. The current version of GWrit has two analytical tools: Word Cloud and Concordance. In GWrit, the Word Cloud provides a visual summary of a written text based on word frequency; it gives greater prominence to words that appear more frequently in the text. For student users, it is an engaging way to visualize their writing and offers a quick view of the theme and vocabulary of a piece. For instructors, this visualization helps them provide feedback efficiently.
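Under the hood, a word cloud of this kind is driven by nothing more than word-frequency counts over the draft. A minimal sketch of that underlying computation (the tokenization and stopword choices here are illustrative assumptions, not GWrit’s implementation):

```python
import re
from collections import Counter

def word_frequencies(text, stopwords=frozenset({"the", "a", "an", "of", "and", "to", "in"})):
    """Count how often each word appears, ignoring case and a few stopwords."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stopwords)

draft = ("Body language shapes how an audience reads a speaker; "
         "body language can even contradict the speaker's words.")
freqs = word_frequencies(draft)

# The most frequent words would be rendered largest in the word cloud.
print(freqs.most_common(5))
```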

The concordance tool provides an interactive keyword-in-context (KWIC) visualization. Corpus and concordance resources for students have been described as one of the most promising ideas in computer-assisted language learning since the 1980s (Johns 1986; Leech and Candlin 1986) and have become key topics in language teaching. Concordance analysis in GWrit shows the context of a given word in the writing. For example, in Figure 4 the given word is “some” and the context range is three words on each side: the analysis shows every appearance of “some” with the three words before and after it.

Figure 4

Concordance analytic tool in GWrit.
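A keyword-in-context listing like the one in Figure 4 can be produced by scanning the tokenized text for the keyword and keeping a fixed number of words on either side. Below is a minimal sketch using the same three-word window as the example above; the tokenization is an illustrative assumption rather than GWrit’s actual code.

```python
import re

def kwic(text, keyword, window=3):
    """Return (left context, keyword, right context) for each occurrence of keyword."""
    tokens = re.findall(r"\w+", text)
    hits = []
    for i, token in enumerate(tokens):
        if token.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append((left, token, right))
    return hits

sample = "Some researchers argue that some gestures carry meaning, and some do not."
for left, kw, right in kwic(sample, "some"):
    print(f"{left:>25} | {kw} | {right}")
```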

For student users, concordance analysis has the potential to reveal their usual grammar patterns, phrase patterns and sentence patterns. Making these patterns obvious may help some students self-correct their writing for some incorrect linguistic patterns.

With Google Analytics, we are able to track the usage of the word cloud tool and other key events in GWrit. Figure 5 depicts submit comment events, checkout events, and word cloud events from January 2016 to April 2016, comprising a total of 16,865 events. A checkout event happens when a user checks out a writing task; a submit comment event happens when a user posts a comment on a piece of writing. The number of word cloud events followed the checkout events closely at the beginning of the winter 2016 term, so we can deduce that most users used the word cloud tool after they checked out their tasks. Since users can edit their writing many times after checking out a task, and since they can also run the word cloud on other people’s writing, the count of word cloud events exceeds the number of checkout events.

Figure 5

Usage profile of three unique events: Submit Comment, Word Cloud, and Checkout.

However, compared with the submit comment event, the number of word cloud events fell off rapidly after the initial assignment, and the majority of users never picked the tool up again. One reason might be that the submit comment event is tied to a grade while the word cloud is not. The lack of activity could also reflect users not seeing value in the visual representation.

Usability evaluation of GWrit

We evaluated GWrit using two methods: Cognitive Walkthrough and Heuristic Analysis. Through these evaluations, both design and programming issues were identified to improve usability.

Cognitive Walkthrough evaluation on GWrit

Cognitive Walkthrough (CW) is an evaluation method that aims to assess the usability of a system, and it focuses on ease of learning (Polson et al. 1992). This method is often used in the early stages of the design of a system and allows designers to anticipate some learning problems before implementation (Polson et al. 1992).

To conduct a CW test on GWrit, we assumed that GWrit users were new to GWrit and had experience with computers and the internet, but little experience with online writing systems. These four tasks were completed to evaluate the key features of GWrit:

  • Task 1: Login and view courses;

  • Task 2: Check out a writing task and submit after completion;

  • Task 3: Use analysis tools to analyze writing;

  • Task 4: Post comments on writings of other users.

Table 1 lists the correct action sequence required to complete each task. Failure stories are presented to help identify the problems.

Table 1

Correct action sequence to complete each task.

Task 1:
  1. Open GWrit webpage
  2. Click anywhere on screen to login
  3. Select and click a course shown in the course panel
  4. Click a project or task in the course to view details

Task 2:
  1. Click the “check out” button under a specific task
  2. Write a paragraph in the text area under the submission tab
  3. Submit the draft for review

Task 3:
  1. Open an existing draft
  2. Click the “Analytics” tab in the right column
  3. Choose one analysis tool to analyze the writing

Task 4:
  1. Find someone else’s draft
  2. Click the “Comment” tab in the right column
  3. Type a comment in the text area of the comment window that opens, then submit it

Task 2:

Failure story on action 2:

Criteria: Will the user try to achieve the right effect?

Problem noticed. There are two default tabs on the writing page: notes and submission. Users may be confused about the difference between the two tabs and may not know where to write.

Task 3:

Failure story on action 3:

Criteria: Will the user associate the correct action with the effect he or she is trying to achieve?

Problem noticed. Users may find the analytic tools easily, including the word cloud and concordance, but they may be confused about the purpose of these tools.

Task 4:

Failure story on action 1:

Criteria: Will the user know that the correct action is available?

Problem noticed. There is no obvious button or link to other users’ submissions, so users may find it hard to locate them.

Generally, CW evaluation identified both design and programming issues to be addressed to make GWrit a user-centred writing environment.

Heuristic evaluation of GWrit

Heuristic evaluation assesses an interface against an established set of usability goals and criteria; it is, in effect, an audit of the basic features of an interface. Through heuristic evaluation, the design team uncovered a few usability issues. The heuristic checklist used on GWrit was developed at Xerox for use on a variety of websites and focuses primarily on user accessibility and website design (Pierotti 2018).

A major feature of GWrit is that many of its conventions are borrowed from word processors, so the site feels familiar to users with word-processing experience; users without that experience are at a significant disadvantage when learning to use the website. The usability issues that remain are few but significant for the user. For example, users can lose their writing if they navigate away from a page without saving, and there is no integrated Help feature.

In terms of visibility, the system provides a good amount of status information to the user at all times; situations in which the user is not aware of the status of the system are fairly uncommon. The interface is standardized for the most part, and there are relatively few layouts for the user to learn and adapt to in order to understand the system. While the majority of features give users control and protect their work, it is possible to navigate away from a page without saving, causing all work since the last save to be lost. Apart from that issue, the degree of user freedom is high: it is easy to switch quickly between screens and tabs, access different menu levels, and change and edit information. Overall, GWrit is effectively designed and meets or exceeds the goals of the heuristic review. Though there are some issues, as noted above, the website is user-friendly and should be familiar to even occasional computer users.

User habit analysis with Google Analytics

To gain a general view of user preferences and habits, we used Google Analytics (GA) to collect data generated by user behaviour. Previous studies of website analysis suggest that GA is not only user-friendly but also a useful tool for analyzing and building a user-centred website (Plaza 2011; Hasan, Morris and Probets 2009). GA provides rich data with which to study website usage, as well as customized visual reports tailored to different research aims. All of the collected data can be exported as CSV and TSV files, which facilitates deeper analysis.
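As one illustration of the deeper analysis such an export makes possible, the following sketch loads a hypothetical GA event export with pandas and tallies each event type per day. The file name and column names are assumptions for illustration, not the actual export schema we used.

```python
import pandas as pd

# Hypothetical export: one row per event, with a date, an event action
# (e.g. "Checkout", "Submit Comment", "Word Cloud"), and an event count.
events = pd.read_csv("ga_events_winter2016.csv", parse_dates=["date"])

# Total events of each type per day, mirroring the comparison in Figure 5.
daily = (
    events.groupby(["date", "event_action"])["total_events"]
    .sum()
    .unstack(fill_value=0)
)
print(daily.head())

# Days with the heaviest commenting activity (e.g. around draft deadlines).
print(daily["Submit Comment"].nlargest(5))
```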

The following section shows findings based on analysis of data collected from courses that used GWrit during the winter term 2016 at the University of Alberta.

General Information

Figure 6 shows information from the GA dashboard. Since GWrit is only used at the University of Alberta and has not been opened to the public, only authorized users can log in to the GWrit system; therefore, most GWrit users are returning visitors.

Figure 6

General information from the GA dashboard.

The average session duration is 12 minutes and 12 seconds, meaning that most users spend about 12 minutes on the GWrit website at a time. This 12-minute time frame suggests that while users are not using GWrit as a composing environment (because, presumably, an assignment is written in increments of time longer than 12 minutes) they are also not using it only as an assignment submission system. They likely spend time reading or commenting on others’ work. It also suggests that the mutual study environment and commenting system, two major features designed to improve writing skills, are recognized by users.

Figure 7 shows the relationship between task deadlines and the usage profile of GWrit. Writing Studies (WRS) 102 instituted different task deadlines in the fall 2015 term (September 1, 2015–December 10, 2015) and the winter 2016 term (January 1, 2016–April 10, 2016): in fall 2015, assignments simply had a grading deadline; in winter 2016, assignments had a draft deadline one week before the grading deadline and then the grading deadline itself. Figure 7 compares the usage during these two semesters. The blue curve shows fall semester 2015, and the orange curve shows winter semester 2016. We tagged the task deadlines for WRS 102 in Figure 7: points marked with a red circle are deadlines in fall semester 2015; points marked with a black circle are grading deadlines in winter semester 2016; points marked with a green circle are draft deadlines in winter semester 2016. Comparing the data from the two semesters, we can see that in fall semester 2015 the usage curve follows the deadlines closely and reaches a peak at each deadline. In winter semester 2016, the curve is more complex than in fall 2015.

Figure 7

Session comparisons in fall semester 2015 and winter semester 2016.

There are two possible reasons for this:

  1. There were two other courses using GWrit in the winter semester 2016, and at this stage of our research it is not possible to remove their usage data in GA. Therefore, we cannot exclude the possibility that some of these peaks were generated by those two additional courses.

  2. A second possible reason that the peaks in the winter semester 2016 are more intensive is the setting of the draft deadlines. The usage reached a peak not only at the grading deadline but also at the draft deadline.

To clarify the relationship between the deadline settings and usage profiles, we collected the data on assignment status including submit for review and submit for grade. In this way, we gathered the numbers of assignments submitted for review and submitted for grades from the different courses, and we extracted the data for WRS 102. In Figures 8 and 9, the blue line shows the curve of assignments submitted for review, and the orange line shows the curve of assignments submitted for grade. We can clearly see the difference between the curves during these two semesters: the assignments submitted for review reach a peak about one week before assignments were to be submitted for a grade in winter semester 2016. In fall semester 2015, the shape of the blue and orange curves is similar: they reach a peak on the same date.

Figure 8

Assignments submitted in WRS 102 in fall semester 2015.

Figure 9

Assignments submitted in WRS 102 in winter semester 2016.

In addition, we found a relationship between the user profile activity and the 3-week module schedule for WRS 102. In winter semester 2016, after the first week, WRS 102 students spent three weeks each on Module 2, Module 3, Module 4, and Module 5. After comparing the curve and the time duration of each module, we found four peaks that follow the four-module duration closely, and the assignments submitted reached a peak at the end of each module.

In general, this comparison of the usage profiles in the two semesters demonstrates that the GWrit usage profile largely depends on the deadline settings and the study module settings. In other words, GWrit is a deadline- and module-driven study environment in its current usage.

Traffic source and device, browser and operating system preference

GA provides a report on how users access GWrit and on their device and operating system preferences. From these reports, we have the following findings. Most users (79.23%) visit GWrit through a referral website, the most common being accounts.google.com. This referral appears because University of Alberta email accounts require users to log in through accounts.google.com to confirm their authorization. The second most common source is search engines, where the most frequent search keywords are “gwrit,” “game of writing,” “gwrit ualberta,” and the URL of the GWrit webpage (see Figure 10).

Figure 10

Traffic source for GWrit website.

Chrome, Safari and Firefox are the top three browsers used, vastly exceeding the alternatives (see Figure 11). Macintosh and Windows are the top two operating systems, with a combined share of almost 90%, while iOS and Android together account for 6.2% of the share. Future development needs to focus on the compatibility of GWrit with the most commonly used browsers and operating systems (see Figure 12).

Figure 11

Browsers used to visit GWrit.

Figure 12

Operating System used to visit GWrit.

By analyzing session numbers and average session duration for the three commonly used device categories (desktop, mobile, and tablet), we find that most sessions happened on a desktop (see Figures 13 and 14). The average session duration, however, was similar across the three device categories, and slightly longer when users visited GWrit on a mobile phone or tablet. Nevertheless, given the development cost and the level of tablet usage, we have decided against developing a version for tablet users at this stage.

Figure 13

Session numbers using desktop computers, mobile, tablet.

Figure 14

Average session duration on desktop, mobile, tablet.

Common route after logging into GWrit

GA also provides a report on how users engage with the website. Figure 15 presents a behaviour flow report that visualizes the path users travelled from one page or event to the next; it shows the traffic route after users had logged in. We generated a flow chart (Figure 16) that illustrates this route more directly. The data show that, after logging in, most users go to the course panel, which lists all the projects in that course. Clicking a course name leads users to the project panel, where tasks are listed. The traffic flow then divides into two branches, “task submitted” and “task panel.” Users following the first branch view others’ submissions, where they can post or reply to comments about a draft. Users following the second branch usually finish their own task and use interactive tools such as the word cloud and comment posting.

Figure 15

General user behaviour flow.

Figure 16

Flowchart of user behaviour.

In addition to the high traffic route that shows the most commonly used pathways in GWrit, we also found that 7% of users went directly to the “task submitted” panel after logging in, where they viewed others’ submissions, selected one to view, and posted comments or replied to comments. This pathway might be explained by a second user group of teaching assistants and paid peer tutors, who usually go directly to the submission list to view and comment on students’ posted assignments; this group likely constituted most of the 7% who went directly to the submitted panel. However, it is not currently possible to separate the data generated by these different user groups, so further analysis of their behaviour would require tagging the different user types in GA.

Task based structures

Task completion structures: Fuel gauge, task list, deadlines

GWrit incorporates three task completion structures to help students. The first, a course completion fuel gauge, allows students to track their progress through the course and provides visual feedback on the relationship between work completed and work outstanding. The second, a task list, describes a process to follow to complete the selected assignment. The third, assignment deadlines, gives students due dates for submitting drafts of the assignments. All three of these task completion structures were used in WRS 102 in the winter 2016 term; the first and the last were used in the fall 2015 term. In a notable exception to this pattern, in fall 2015 students were told that the deadlines for drafts of assignments (one week before the marking due date) were optional. Consequently, many students did not post drafts of their work in time to get formative feedback.

Course completion fuel gauge

One motivational feature of GWrit that helps students track their course progress is the course completion fuel gauge (see Figure 17). The fuel gauge highlights the percentage of the coursework that a student has completed and what percentage remains to be done. One problem that emerged with the fuel gauge was that it was insufficiently fine-grained to reflect student progress on individual assignments since it is summative rather than stage-based. In the case of some students who did not submit task-related activities (see next section), the fuel gauge never showed the course as complete, even after all formal assignments were submitted and the course ended. A possible change would be to have the gauge show “Achievement” when an assignment is turned in for grading.

Figure 17

The assignment completion fuel gauge.
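The gauge itself reduces to a single completion percentage, which is what makes it summative rather than stage-based. A minimal sketch of that calculation, and of how unsubmitted task activities keep the gauge from ever reaching 100%, assuming completion is simply the share of tasks marked done (the field names and task mix are hypothetical):

```python
def course_completion(tasks):
    """Percentage of course tasks marked complete (summative, not stage-based)."""
    if not tasks:
        return 0.0
    done = sum(1 for task in tasks if task["completed"])
    return 100.0 * done / len(tasks)

# Four graded assignments submitted, but the optional task-list activities were
# never checked off, so the gauge never reports the course as complete.
tasks = (
    [{"name": f"Assignment {i}", "completed": True} for i in range(1, 5)]
    + [{"name": f"Activity {i}", "completed": False} for i in range(1, 9)]
)
print(f"{course_completion(tasks):.0f}% complete")  # 33% complete
```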

Task lists

Each writing assignment in WRS 102 contains a list of activities that outlines a process to complete it. Students are expected to undertake these steps as they work on the assignment. Since WRS 102 is a first-year writing course and many first-year post-secondary students must expand their study skills beyond what was successful in high school if they want to succeed in tertiary education, the list of activities has a pedagogical goal: articulating clearly for students the stages of a successful planning, researching, and drafting process. Figure 18 presents the list of activities for one assignment choice in Module 2. The list appears as a series of tasks that students check out and perform if they have selected this assignment. This task list was used in the fall 2015 iteration of the course.

Figure 18

Activities related to completing an assignment catalog a series of tasks.

Task 1 suggests that students focus their topic by selecting a YouTube video of Calgary Mayor Naheed Nenshi being interviewed. To complete this step, students open a browser tab to visit YouTube and review the range of videos currently available on this topic. From a pedagogical perspective, it requires students to search YouTube using effective keywords to locate an appropriate video (i.e., the mayor being interviewed, not just talking on a news clip), and select one of the options they find.

Task 2 directs students to write a detailed description of what they observe about Nenshi’s body language during a short segment (maximum 3 minutes) of the interview. Students may open a “Notes” window in GWrit to create this description, or a drafting window, or more often a word processing file to complete this work. This task prompts students to observe closely and record details related to their visual perceptions. If they are unsure of how to describe something, they can consult resources on the course Google site or in the textbook to help them identify the key features of a good description.

Task 3 requires students to conduct scholarly research on the topic of body language. Here they can visit the library or conduct online searches using university databases of academic sources to locate reputable information on human body language. Resources exist in the textbook and on the course resources website to help students learn the research process. They can visit the library on campus to talk to a librarian or visit the University of Alberta libraries website and work through tutorials posted online to assist them in finding and evaluating information sources. An important focus of WRS 102 is finding, evaluating, and using scholarly research to support the students’ critical engagement with the topic, a process that is essential to most post-secondary coursework, regardless of disciplinary major.

Task 4 asks students to apply their research on body language to the segment of interview with Mayor Nenshi to interpret his actions through the lens of reputable scholars on this topic. This task encompasses the bulk of assignment 2B where students think critically about how their research informs and illuminates the excerpt they have chosen. In addition, students have to figure out how to refer correctly to their sources in in-text citations and the references section of the paper.

These tasks, then, were intended to lead students through the extended process of university-level secondary research (i.e., research with sources). By accomplishing each task, students move towards completing the assignment. In fact, in fall 2015, few students checked out and visibly completed this series of tasks (at least not in a way that registered in the GWrit system). Several explanations are possible for the limited uptake of the task activities:

  1. No marks were assigned to task list completion, so students saw no material benefit to completing them, at least formally.

  2. Students completed the tasks informally but were unaware of or confused as to how to formally submit the task activities so they would register in the system or contribute towards the course completion fuel gauge.

  3. Students in fall 2015 had adequate learning strategies, so they knew intuitively how to complete the assignment without following the suggested process (about 40% of the students were in second year or above).

Assignment deadlines

The third task completion structure was assignment deadlines. In fall 2015, only one assignment deadline was used, a due date for the final draft. The single due date resulted in few students posting drafts of their assignments in time to receive formative feedback. In winter 2016, two assignment deadlines were used: a draft of each module assignment was due one week before the grading deadline, and the final draft was due at the grading deadline (January 20, in Figure 19). At the first deadline, students submitted a complete but still-in-progress draft, and they read and commented on other students’ submitted drafts. For the draft deadline, students gave feedback to others, but they also received feedback on their own paper draft from their classmates, as well as members of the instructional team. They received formative feedback but no grade. Figure 19 represents the three-week module structure of the course. It indicates the deadline for drafts, one week before the grading deadline. For the grading deadline, students revised their drafts using peer and instructor feedback and their own ideas for improvement. When they submitted the assignment for the second time, they received summative feedback on a rubric and a grade.

Figure 19

The three-week module structure of the course: draft deadlines were one week before the grading deadline.

Figure 20 captures the GWrit site usage over the winter 2016 term. The high level of usage shows double peaks at the dual assignment due dates for each module. The relatively low usage in February 2016 reflects Reading Week. This graph suggests that, of the three task completion structures used in the GWrit system, the assignment due dates were the most effective for increasing usage over the term.

Figure 20

Usage activity reflecting task completion structure of assignment due dates.

Implications of task completion structures

Of the three task completion structures available for use in GWrit, the assignment deadlines seem to have been most effective in motivating students to complete their work in the course. This is hardly surprising, given the reward structure of education and the mindset of contemporary post-secondary students. While they may have been entertained by the changing colour of the fuel gauge over the term as they completed more and more work, it did not seem to drive them onward. The task lists may have been largely ignored as activities because they did not clearly link to course-related rewards. The Game of Writing as it is used in the academic writing course constitutes a serious game where students learn how to excel by becoming proficient writers of academic prose. Their decisions and choices are driven by time-constraints and whether or not an activity will help them reach their goals. When deadlines are tied to marks, students make time to submit a draft to receive feedback; when they receive and act upon useful feedback, they are able to improve the quality of their assignment drafts.

Learning academic writing

The improvement in the quality of student ideas and expression over the term suggests that the commenting function in GWrit assists students in learning the core principles of the course and exhibiting those principles in practice. In a survey of students over four years in the academic writing course that used GWrit, we asked the students to rate the following statement on a scale of 1–5 (with 5 = strongly agree): “This course has improved my understanding of key concepts about academic writing” (see Table 2). These data show that students feel they improve their ability to write in academic contexts, and they suggest that using the gamification system to teach academic writing had positive results.

Table 2

Descriptive Statistical Data by term on the item “This course has improved my understanding of key concepts about academic writing.”

Term n m SD Mdn IQR
Fall 2014 31 4.16 .93 4.00 1.00
Fall 2015 42 3.67 1.12 4.00 1.00
Fall 2016 56 3.91 1.07 4.00 1.75
Fall 2017 69 3.68 1.01 4.00 1.00
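The values in Table 2 are standard descriptive statistics over the 1–5 Likert responses: sample size (n), mean (m), standard deviation (SD), median (Mdn), and interquartile range (IQR). A minimal sketch of how such statistics can be computed for one term’s responses (the response vector is invented for illustration):

```python
import numpy as np

def describe(responses):
    """n, mean, sample SD, median, and interquartile range of Likert scores."""
    x = np.asarray(responses, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    return {
        "n": x.size,
        "m": round(float(x.mean()), 2),
        "SD": round(float(x.std(ddof=1)), 2),
        "Mdn": float(np.median(x)),
        "IQR": float(q3 - q1),
    }

# Invented example responses on the 1-5 agreement scale.
print(describe([5, 4, 4, 5, 3, 4, 2, 5, 4, 4]))
```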

Discussion

Our results give much more specific detail than has usually been reported for using technologies to teach academic writing. Liu and Sadler (2003) note that students who are less vocally active, ESL students in particular, participate more in online peer feedback writing than in traditional classes and face-to-face interaction. Gamification of academic work has begun to attract attention from researchers as a way to frame online learning experiences (Kim and Joo 2017; Hanus and Fox 2015; Lister 2015; Hamari, Koivisto and Sarsa 2014). Kim and Joo (2017) highlight fun and engagement as the two important features associated with gamified learning environments. While we did not collect data on the “fun” aspect of GWrit, our high participation rates for commenting are one metric that points towards high engagement. We also found that with an in-person class of 200 students, in-class oral participation opportunities are extremely limited. The high rates of engagement with commenting provided a structured way for students to participate in the course.

Much recent research on the use of games in educational contexts has established that they have the potential to produce good learning outcomes and more fully engaged student learners (see Barata et al. 2016 for an overview). One recent study in the secondary school context found that students improved their argumentative writing skills significantly when immersed in a blended approach to learning similar to the blended approach we used (Lam, Hew and Chiu 2017). These researchers’ blended approach entailed peer comments posted online that were then responded to. A subset of the students in that study also used three kinds of gamification components, similar to the ones we used, to enhance their participation and engagement: points, a leaderboard, and role-playing. Lam, Hew and Chiu (2017) also found that students using a gamification approach similar to ours posted fewer off-topic comments to their peers and more relevant comments generally. In a survey of students who used GWrit, we found that students reported high levels of learning of academic writing. These similarities, while not conclusive, suggest that more research on gamifying writing might find strong connections between gamification techniques and student learning outcomes.

Chen (2016) reported on a meta-analysis of 20 studies of peer feedback in ESL/EFL writing classes from 1990–2010, noting that over the last 20-plus years many technology solutions have been tried out in language learning classrooms. Ultimately the results are unclear simply because there are too many variables across that many studies to reach any kind of consensus about the advantages of technology-enabled peer feedback. We agree with Chen (2016) and note that the variables affecting the successful implementation of gamification are numerous, and small changes (such as user interface design choices) can have large effects. Our implementation of gamification showed positive learning outcomes for academic writing knowledge and skill, but we share Chen’s (2016) perspective that the details of implementation are important and vary tremendously across the studies reported in the academic literature.

Conclusion

GWrit offers a different way of teaching writing by including gamification components and a mutual study environment. It provides a platform for experimenting with different types of features. Features tied to grades were used more frequently, and assignment deadlines, as one of the task completion structures applied in GWrit, played an effective role in motivating learning. Of course, much work needs to be done to improve the system from programming, design, and assessment perspectives. While students surveyed over a four-year period did report positive learning outcomes, many factors might have accounted for those gains. The evidence suggests that the system did make a difference, despite the work that needs to be done to fine-tune it, but a more refined study to assess this learning might help us make a stronger connection between the design of the system and the learning that happened.

Such a future study might involve comparing the academic progress of students who took the course with that of students who did not. For future research and development, an open version will be designed and implemented: one not limited to University of Alberta students but open to the public as an educational environment. Data generated by a larger group of users will help enrich research in the digital humanities, especially the study of different uses of analytics and gamification. We would also like to collect and directly assess the learning outcomes for student writing in the course. We might also compare the skill levels of students who participated heavily in the commenting function of the course with those of students who did not participate beyond the minimum required. This kind of study would give us insight into the value of the interactive commenting feature and its contribution to the overall skill development of students.

Ethics and Consent

Our research was approved by the Research Ethics Board 1 at the University of Alberta (Pro00052265).

Competing Interests

The authors have no competing interests to declare.

Author Contributions

Conceptualization: GR; RG; HG; KR; MM; JZ

Methodology: GR; JZ

Software: KR; MM

Writing – Original Draft Preparation: JZ; GR; RG; HG

Writing – Review and Editing: JZ; RG

References

Barata, Gabriel, Sandra Gama, Joaquim Jorge, and Daniel Gonçalves. 2016. “Studying Student Differentiation in Gamified Education: A Long-term Study.” Computers In Human Behavior 71: 550–85. DOI:  http://doi.org/10.1016/j.chb.2016.08.049

Brewer, Robin, Lisa Anthony, Quincy Brown, Germaine Irwin, Jaye Nias, and Berthel Tate. 2013. “Using Gamification to Motivate Children to Complete Empirical Studies in Lab Environments.” In Proceedings of 12th International Conference on Interaction Design and Children, edited by Juan Pablo Hourcade, Ellen A. Miller, and Anna Egeland, 388–91. New York: ACM. DOI:  http://doi.org/10.1145/2485760.2485816

Brookhart, Susan. 2017. How to give Effective Feedback to your Students. Alexandria, VA: ASCD.

Chen, Tsuiping. 2016. “Technology-supported Peer Feedback in ESL/EFL Writing Classes: A Research Synthesis.” Computer Assisted Language Learning 29: 365–97. DOI:  http://doi.org/10.1080/09588221.2014.960942

Cho, Hichang, Geri Gay, Barry Davidson, and Anthony Ingraffea. 2007. “Social Networks, Communication Styles, and Learning Performance in a CSCL Community.” Computers & Education 49: 309–29. DOI:  http://doi.org/10.1016/j.compedu.2005.07.003

De Jorge Moreno, Justo. 2012. “Using Social Network and Dropbox in Blended Learning: An Application to University Education.” Business, Management and Education 10: 220–31. DOI:  http://doi.org/10.3846/bme.2012.16

Deterding, Sebastian, Dan Dixon, Rilla Khaled, and Lennart Nacke. 2011. “From Game Design Elements to Gamefulness: Defining ‘Gamification’.” In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, edited by Artur Lugmayr, Heljä Franssila, Christian Safran, and Imed Hammouda, 9–15. New York: ACM. DOI:  http://doi.org/10.1145/2181037.2181040

Fitz-Walter, Zachary, Dian Tjondronegoro, and Peta Wyeth. 2012. “A Gamified Mobile Application for Engaging New Students at University Orientation.” In Proceedings of the 24th Australian Computer-Human Interaction Conference, edited by Vivienne Farrell, Graham Farrell, Caslon Chua, Weidong Huang, Raj Vasa, and Clinton Woodward, 138–41. New York: ACM. DOI:  http://doi.org/10.1145/2414536.2414560

Gibson, David, Nathaniel Ostashewski, Kim Flintoff, Sheryl Grant, and Erin Knight. 2015. “Digital Badges in Education.” Education and Information Technologies 20(2): 403–10. DOI:  http://doi.org/10.1007/s10639-013-9291-7

Hamari, Juho, Jonna Koivisto, and Harri Sarsa. 2014. “Does Gamification Work?: A Literature Review of Empirical Studies on Gamification.” In Proceedings of 47th Hawaii International Conference on System Sciences, edited by Ralph H. Sprague Jr., 3025–34. Piscataway: The Institute of Electrical and Electronics Engineers, Inc. DOI:  http://doi.org/10.1109/HICSS.2014.377

Hanus, Michael D., and Jesse Fox. 2015. “Assessing the Effects of Gamification in the Classroom: A Longitudinal Study on Intrinsic Motivation, Social Comparison, Satisfaction, Effort, and Academic Performance.” Computers & Education 80: 152–61. DOI:  http://doi.org/10.1016/j.compedu.2014.08.019

Hasan, Layla, Anne Morris, and Steve Probets. 2009. “Using Google Analytics to Evaluate the Usability of E-Commerce Sites.” In Proceedings of International Conference on Human Centered Design, edited by Masaaki Kurosu, 697–706. Berlin: Springer. DOI:  http://doi.org/10.1007/978-3-642-02806-9_81

Johns, Tim. 1986. “Micro-concord: A Language Learner’s Research Tool.” System 14(2): 151–62. DOI:  http://doi.org/10.1016/0346-251X(86)90004-7

Kapp, Karl. 2012. “Games, Gamification, and the Quest for Learner Engagement.” Training and Development 66(6): 64–8.

Kim, Kyongseok, and Sun Joo (Grace). 2017. “The Role of Gamification in Enhancing Intrinsic Motivation to Use a Loyalty Program.” Journal of Interactive Marketing 40: 41–51. DOI:  http://doi.org/10.1016/j.intmar.2017.07.001

Lam, Yau Wai, Khe Foon Hew, and Kin Fung Chiu. 2017. “Improving Argumentative Writing: Effects of a Blended Learning Approach and Gamification.” Language Learning & Technology 22(1): 97–118. http://hdl.handle.net/10125/44583

Lawley, Elizabeth. 2012. “Games as an Alternate Lens for Design.” Interactions 19(4): 16–7.

Leech, N. Geoffrey, and Christopher N. Candlin. (eds.) 1986. Computers in English Language Teaching and Research. London: Longman.

Lister, Meaghan C. 2015. “Gamification: The Effect on Student Motivation and Performance at the Post-secondary Level.” Issues and Trends in Educational Technology 3(2): 1–22. DOI:  http://doi.org/10.2458/azu_itet_v3i2_Lister

Liu, Jun, and Randall W. Sadler. 2003. “The Effect and Affect of Peer Review in Electronic versus Traditional Modes on L2 Writing.” Journal of English for Academic Purposes 2: 193–227. DOI:  http://doi.org/10.1016/S1475-1585(03)00025-0

McGonigal, Jane, and Julia Whelan. 2011. Reality is Broken: Why Games Make Us Better and How They Can Change the World. New York: Penguin.

Nah, Fiona Fui-Hoon, Qing Zeng, Venkata Rajasekhar Telaprolu, Abhishek Padmanabhuni Ayyappa, and Brenda Eschenbrenner. 2014. “Gamification of Education: A Review of Literature.” In Proceedings of International Conference on HCI in Business, edited by Fiona Fui-Hoon Nah, 401–9. Cham: Springer. DOI:  http://doi.org/10.1007/978-3-319-07293-7_39

Pierotti, Deniese. 2018. “Heuristic Evaluation, A System Checklist.” Accessed August 6. http://eitidaten.fh-pforzheim.de/daten/mitarbeiter/blankenbach/vorlesungen/GUI/Heuristic_Evaluation_Checklist_stcsig_org.pdf.

Plaza, Beatriz. 2011. “Google Analytics for Measuring Website Performance.” Tourism Management 32(3): 477–81. DOI:  http://doi.org/10.1016/j.tourman.2010.03.015

Polson, Peter G., Clayton Lewis, John Rieman, and Cathleen Wharton. 1992. “Cognitive Walkthroughs: A Method for Theory-based Evaluation of User Interfaces.” International Journal of Man-Machine Studies 36(5): 741–73. DOI:  http://doi.org/10.1016/0020-7373(92)90039-N

Tian, Stella Wen, Angela Yan Yu, Douglas Vogel, and Ron Chi-Wai Kwok. 2011. “The Impact of Online Social Networking on Learning: A Social Integration Perspective.” International Journal of Networking and Virtual Organisations 8: 264–80. DOI:  http://doi.org/10.1504/IJNVO.2011.039999