Yay or Nay is a mobile game for teaching privacy
The game incorporates a short feedback loop to train children and teenagers on common scenarios they may face online. Its overall goal is to help teens develop an understanding of privacy and make logical decisions based on the reasoning they build through time spent on Yay or Nay. The game was developed for MediaSmarts, a Canadian non-profit organization that focuses on media literacy programs.
On a macro level, Yay or Nay will focus on a product that provides logical privacy training while incorporating gamification. On a micro level, the focus will be on micro-interactions, aesthetics, design trends, badges, game logic, question and feedback sets.
Pedagogical Goals and Target Market
Our target demographic is children and teenagers between the ages of 11 and 16. At this stage of cognitive development, members of this age group are in the formal operational stage: they are more consistently capable of abstract and logical thinking, so they can analyze scenarios presented on cards quickly and effectively. They can also use deductive reasoning and weigh options logically, so they will be able to deduce correct answers after some practice and internalize the new information provided to them. This age group is also capable of using more complicated technologies, and thus will be able to use Yay or Nay on iPhone- or iPad-type devices. In fact, this demographic has high internet adoption rates and is in the early stages of its online life. Because users in this demographic are young, it is important to highlight the importance of privacy early in their lives. Children and teenagers would greatly benefit from understanding what they control, what is exposed, and how they can make sure their privacy is protected.
Learning about privacy is integral in our ever-evolving society, which becomes more connected every day. The Internet of Things links more and more appliances and gadgets, making our lives easier and more streamlined, but with every new connection we also open new vulnerabilities. For this reason we must make young people aware of the interaction between privacy and functionality in everyday life, and help them identify how to react to the scenarios they are presented with.
Yay or Nay strives to show that privacy questions aren’t always black and white, using thought-provoking questions that make the user weigh functionality against privacy and highlight how the two are related in real life.
Creating the prototype
Initial round of feedback
We took the concept to our media partners to see what features they wanted and what they wanted the most attention paid to. It was important to them that the product:
- Categorized the information into groups (ex. Information privacy, Social privacy)
- Had ambiguous questions that talk about trade offs and scenarios (to promote discussion and critical thinking)
- Focused on the questions (content is king)
With these points in mind we created the first iteration of the product.
For each of our playtesting sessions, we focused on three essential components: the educational content and the structure of the questions, the swiping interaction with that content, and the feedback acknowledging incorrect or successful interactions.
With regard to educational content, we specifically needed to understand whether the style of questioning was clear and provided enough actionable content for users to properly engage with the app. For instance, users needed to be able to read the scenario presented in the app, judge it as a good thing (Yay) or a bad thing (Nay), and understand the action they needed to take to record that judgement.
For the swiping interaction, users needed to understand that they were looking at movable cards, meant to be moved right or left according to the associated answers. For the test, users could only manually swipe the cards, and the icons were not interactive, so this association and the implied interaction needed to be communicated clearly within the design. Lastly, users needed to understand the feedback they received as they answered questions correctly or incorrectly. For instance, they needed to recognize that a red feedback screen indicated their answer was incorrect and that a green one indicated it was correct. We needed to test whether this association was clear, and whether users could properly learn from their answers.
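The swipe-to-answer mapping described above can be sketched in a few lines. This is a minimal illustration, not the app's actual code; all names (Card, answerFromSwipe, feedbackColour) are hypothetical. It encodes the prototype's convention as tested: right-swipe for "Yay", left-swipe for "Nay", with green or red feedback for correct or incorrect answers.

```typescript
type SwipeDirection = "left" | "right";
type Answer = "yay" | "nay";

// Illustrative card shape: a scenario plus the answer we consider correct.
interface Card {
  scenario: string;
  correctAnswer: Answer;
}

// In the prototype tested here, swiping right meant "Yay" and left meant "Nay".
function answerFromSwipe(direction: SwipeDirection): Answer {
  return direction === "right" ? "yay" : "nay";
}

// The feedback state shown after a swipe: green for correct, red for incorrect.
function feedbackColour(card: Card, direction: SwipeDirection): "green" | "red" {
  return answerFromSwipe(direction) === card.correctAnswer ? "green" : "red";
}
```

As the testing results below show, this green/red convention was one of the elements users found confusing, which is what motivated the later changes to the feedback states.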
Our testing strategy used three primary research methods: surveying, observation, and casual interviewing.
We began our test with a basic survey of user demographics and contextualizing information to ultimately help us understand who our users were and apply that knowledge to our inferences. We asked questions about age, occupation, and how comfortable the user was with app based learning. We asked age and occupation so that we could understand who our user was, what they did with their time, and what they knew how to do well.
We asked about how comfortable the user was with app based learning to understand how much of their difficulty had to do with our application, and how much of it had to do with our designed experience. We also ended our user test with a survey that asked users to reflect on the user experience of our application.
We asked questions that focused on the aesthetic design, how easy the app was to learn and use, the level of difficulty of the questions, how fun the game was, and how they would rate their overall experience.
Aesthetic design was important because a user's engagement with an application relies in part on how visually pleasing they find the design. We asked how easy it was to learn the application because our app needs to be self-explanatory and provide a first-use experience with little to no pain points or confusion, so that adoption is smooth.
The level of difficulty of the questions was important because we needed to understand whether the questions were too obvious, too difficult to answer on a rolling basis, or unclear in their wording. How fun the app was and the overall experience were important because these would retain users and ultimately define whether or not our game was successful.
Observation and Interview
We defined expectations by explaining that there was no pressure and that there were no failed behaviors as our goal was to document feedback and pain points overall. Users understood that the ultimate goal of this testing session was to weed out issues and thus they felt compelled to be more critical in order to help us more effectively.
We defined certain tasks for the user to complete, and asked them to speak out loud while they engaged with the app while we documented their comments and their experience. We asked prompting questions as needed, and pulled as much information from users as was possible.
All feedback was recorded in a table that specified the tasks, the feedback, and also allowed for commentary and immediate inferences for us as testers to fill out at the moment of the test or at the end.
Lastly, our team needed to test the content of our game and the formulation of questions that would be asked as if all question cards were already integrated. We printed out card sized papers with printed questions on top in order to allow users to sift through questions and provide feedback accordingly. Our feedback was recorded according to verbal commentary and ease of answering.
For our first half of testing we examined the swipe interaction and the questions themselves. The swipe was tested on a physical phone loaded with a very rough prototype of our app; the purpose was to see how users would instinctively interact with the interface. The results were very consistent: we found that all users needed an initial onboarding process from our team. 16% of users were confused about entering the application and wanted some choice of categories or levels, and 66% of users wanted some diversity in gameplay to make the app more exciting. 50% of users found the green and red states for correct and incorrect answers very confusing; trying to figure out their meaning interrupted their thinking and gameplay. 16% of users mistook static elements for icons, and 50% felt the positioning of the right/wrong swipes and icons (Nay on the left and Yay on the right) did not match the order of “Yay or Nay” in the title. Additionally, most users (83%) did not notice the feedback from wrong swipes, and 16% thought the confetti took too long and would become annoying over long periods of gameplay.
For our second half of testing, we focused on the questions within our game. The questions themselves were well formulated, but some had no call to action, and 83% of users indicated that the questions were confusing. Because each question was simply a sentence with the scenario and action already filled in, users were unsure what the question itself was asking. We traced the issue largely to the missing action statement.
Most users thought the design of the app was simple and clear, but wanted to see more variety in the design. When asked conversationally about the content of the app and whether they could see themselves downloading it, most agreed they would if it were a school assignment, but most would not consider using it on their own time or organically.
The Pivot Strategy
Due to the very specific results gathered from user testing, it was relatively easy to identify and address the features of our app that need to be improved moving into the development phase of the project. Our pivot strategy therefore covers four main areas that showed overarching issues across more than three quarters of tests: interactions (I), feedback (II), content (III), and categories (IV).
In terms of interactions, the overwhelming majority of users experienced frustration with the main screen. The major features to address moving forward are the onboarding issue, static elements that were perceived to be touchpoints, and reversing the swipe directions for “Yay” and “Nay” responses. To improve the experience, we will need to implement a separate onboarding screen or, more ideally, very effective indicators to guide the user through the first screen. This will let us guide the user more effectively, and also address the icons that caused confusion by making them interactable. Once this is achieved, the swiping direction will need to be reversed to match the icons at the bottom of the screen (which were originally moved from this position after feedback from MediaSmarts).
In addition, we received overwhelmingly similar results on the feedback states that need to be addressed moving forward. Originally, we had included a colour change in the cards while swiping, green and red for “Yay” and “Nay” respectively. However, this confused users because they expected the colours to mean right and wrong answers. Instead, we will remove the colour change in favour of a more neutral state while swiping. Furthermore, we will make the answer feedback more prominent in order to attract the user's attention. In some cases, users said they weren't aware of the feedback when they answered a question incorrectly. This is a major concern that will need to be addressed: the app is meant to serve as educational material, and if the feedback for wrong answers is not noticeable, it defeats the purpose. To address this, we will convert this feedback from a banner at the top of the screen to an overlay that must be interacted with to progress. This change guarantees the user will read and learn why their choice was wrong, so that they can improve and learn from an incorrect answer.
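The revised feedback flow, where a wrong answer produces a blocking overlay rather than a banner, can be sketched as a small state transition. The names here (GameState, advance, dismissOverlay) are illustrative assumptions, not the app's real code.

```typescript
// Illustrative game state: which card is showing, and whether a blocking
// explanation overlay is currently displayed.
interface GameState {
  cardIndex: number;
  overlay: string | null; // explanation text shown when an answer is wrong
}

function advance(state: GameState, correct: boolean, explanation: string): GameState {
  if (!correct) {
    // Wrong answer: show the explanation as an overlay and do NOT
    // move to the next card yet, so the feedback cannot be missed.
    return { ...state, overlay: explanation };
  }
  return { cardIndex: state.cardIndex + 1, overlay: null };
}

function dismissOverlay(state: GameState): GameState {
  // Progression only resumes once the user interacts with the overlay.
  return { cardIndex: state.cardIndex + 1, overlay: null };
}
```

The key design choice is that the overlay gates progression: unlike the old banner, it cannot be scrolled past or ignored.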
We will be addressing the content, specifically the formatting of the questions. Users said during testing that they didn’t understand the wording of some questions or what they were being asked to do. To fix this, we will reformat the questions to include a distinct call to action in each one. The call to action will vary by question, but will be some variation of “Do you do this?” This will give users a distinct action or problem to think about when answering, and will also let us retain the same difficulty level in our questions.
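The reformatted question card can be sketched as a simple record pairing each scenario with an explicit call to action. The field names and the sample card below are illustrative, not actual content from the app.

```typescript
// Illustrative shape for a reformatted question card: the scenario is now
// always followed by an explicit call to action for the user to answer.
interface QuestionCard {
  scenario: string;
  callToAction: string; // some variation of "Do you do this?"
  answer: "yay" | "nay";
  explanation: string; // shown in the feedback overlay on a wrong answer
}

// Hypothetical example card, for illustration only.
const example: QuestionCard = {
  scenario: "A new game asks for access to your contact list before you can play.",
  callToAction: "Do you grant it access?",
  answer: "nay",
  explanation: "Games rarely need your contacts; sharing them also exposes your friends' information.",
};
```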
Based on feedback about making the app more exciting and allowing some choice within gameplay, we plan to divide our cards into levels or categories. These categories may focus on specific privacy-related topics, and may be divided by difficulty. Ultimately, the goal is to add dimension to the game and enable users to choose and explore based on their interests.
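Dividing the card deck into categories amounts to grouping cards by a category tag. The sketch below assumes hypothetical names; the category labels are examples drawn from the partner feedback earlier in this report ("Information privacy", "Social privacy").

```typescript
// Illustrative card shape with a category tag for grouping.
interface CategorizedCard {
  question: string;
  category: string; // e.g. "Information privacy", "Social privacy"
}

// Group the full deck into per-category decks for level/category selection.
function byCategory(cards: CategorizedCard[]): Map<string, CategorizedCard[]> {
  const groups = new Map<string, CategorizedCard[]>();
  for (const card of cards) {
    const bucket = groups.get(card.category) ?? [];
    bucket.push(card);
    groups.set(card.category, bucket);
  }
  return groups;
}
```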