
Ethics, Bias and Trolls... Oh My!


Ethical Considerations of Content Analysis Coding

Despite concerted attempts to remain objective, coding the data required interpretation, and at times a degree of subjective decision making. I worked hard to find documentary proof of the board game designers’ racial and gender identification, but in some instances this required a significant amount of Internet research, as some designers had a very limited footprint online. Regarding the cover art analysis, I resisted letting my previous knowledge of a game influence my coding decisions. For example, with Sekigahara: The Unification of Japan (2011), I coded 175 artwork figures as ‘gender and race undetermined’ despite the reasonable assumption that the figures depicted on the battlefield might be understood to be male and Japanese. In this case, the figures were too stylized and small to make definitive judgements. In making such decisions, I tried to imagine how a consumer with no knowledge of the game would ‘read’ the artwork on the covers. To check that my coding assessments were accurate and consistent, I conducted a coding verification step based upon the model of the Williams, Martins, Consalvo, and Ivory (2009) and the Smith, Choueiti and Pieper (2015) studies. I had two paid coders replicate a cross section of my coding, each taking 20 random board game covers and 20 different game designers to code. Each coder was trained in advance, provided the materials, and given clear directions on each of the coding categories to ensure there was no definitional drift or differing understanding of what was meant by white, male, female, BIPOC, non-binary, etc. In this process, I found that the coders’ findings and my results had a Cohen’s kappa of 0.97. Cohen’s kappa is a statistical coefficient that measures the level of agreement between qualitative coders while correcting for agreement expected by chance; values of 0.81–1.00 are considered to represent a near-perfect level of agreement.
This verification process helped to demonstrate the accuracy of the findings and ensure readers can have strong confidence in the coding process.
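For readers curious about the mechanics of the statistic, the calculation can be sketched in a few lines of Python. The labels below are hypothetical illustrations, not my actual coding data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical labels (illustrative sketch)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical gender codings of 10 cover figures by two coders:
a = ["male", "male", "female", "undetermined", "male",
     "female", "male", "undetermined", "female", "male"]
b = ["male", "male", "female", "undetermined", "male",
     "female", "female", "undetermined", "female", "male"]
print(round(cohens_kappa(a, b), 2))  # prints 0.84
```

Note that kappa is lower than the raw 90 percent agreement in this toy example, because the statistic discounts the agreement the two coders would reach by chance alone.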

Assumptions, Delimitations and Limitations

First, given my data sources and my methods of data collection, I had to assume that the information board game designers were sharing about themselves was accurate. Further, while I verified what I could through my own checks, I had to assume that the wider information about the top 400 BGG-ranked games was accurate in each of the 400 respective BGG entries. As with any crowdsourced, open, user-driven fan portal, there was a chance that some of the information might be outdated or incorrect. I did, in fact, notice errors and omissions in some of the board game listings, which I simply corrected in my own database. In some cases, when I did not have access to the actual game boxes themselves, I needed to rely on images of the games posted to BGG. Thus, if any errors were made in the original data and images posted to BGG, the old adage of data mining and analysis might apply: ‘garbage in, garbage out.’ Overall, I acknowledge that the data I was using from BGG to make coding decisions might be incomplete in the case of the racial and gender representation of the game designers and the board game artwork. Some of my field observations required that I make a qualitative call on how each board game designer identified in terms of gender, race and ethnicity, using outside sources to validate my decisions. Each decision was grounded in significant investigation into each listed game designer.

Some might argue that the sample of 400 board games for the game designer analysis, and the top 200 games in the case of the artwork analyses, were too small to serve as representative samples from which to observe and infer wider patterns in the hobby board game industry. As such, my research might be challenged on sample representation grounds, as a small sample (however defined) carries a higher potential for deviating from, or failing to represent, the wider population (Merrigan & Huston, 2009, p. 150). Overall, quantitative assessments are, by their very nature, limited. They are simply bottom-line numbers that need to be contextualized with additional qualitative data to create meaning. Taken alone, quantitative analyses do not tell the whole story; this was the rationale for blending this work with other research measures such as the survey and the qualitative interviews.

A final and important point of delimitation is the selection of BGG as a structured data set in the first place. A question I often get is: ‘why not look at mass market games, and/or raw sales or revenue numbers for mass market games instead?’ First, I needed a structured data source that gave me insight into the current state of hobby praxis. BGG consistently ranks in the top four websites for tabletop games, and offers a unique crowdsourced view into hobbyist perspectives. When designing the study, I had to consider my own independent and unfunded capacity to study this expansive hobby and sector. Having conducted large-scale, mass-market research with the backing of corporate clients, I well understood the scope and scale of investigating mass-market household behaviours, and quickly realized I would not be able to obtain a clear picture of mass market games with any degree of certainty or accuracy. Given my limited resources, a focus on BGG, with all of its imperfections as noted throughout this dissertation, was the best choice to obtain a snapshot of the hobby games community. BGG, like other websites, demonstrates the 1 percent rule of Internet culture: the observation that only around 1 percent of active users dominate a website’s content and discourses, while the remaining 99 percent take a more passive, lurking role on the site. This 1 percent rule tells us that BGG, like any other Internet site, is shaped and skewed by these active users, who curate a space that privileges their worldviews, preferences and objectives. It should be clearly understood that this dissertation does not purport to address the entirety of the board game hobby nor the sector itself. As outlined in chapter two, there are many segments, sections and permutations of board games and board gaming communities.

Surveying the Table: Qualitative Online Survey

The online survey was designed to gather perspectives and experiences from women, non-binary, BIPOC and LGBTQIA+ members of the hobby who were 19 years old or older, and the promotional materials explicitly included the inclusion criteria (see Appendix A-3, Appendix A-4). The survey, designed and launched in the Ryerson University instance of Google Forms, was promoted to active members of the board game community from August 2020 through to October 2020. To ensure I was able to obtain respondents who matched my inclusion criteria, I promoted the survey on fora aimed specifically at analog game players and industry professionals, including the BoardGameGeek Women and Gaming and Rainbow BGG fora, and region-specific BGG fora in Alberta, BC, Ontario, Yukon, Manitoba, Quebec, PEI, and Nova Scotia. I also posted the survey links to Twitter and to the Reddit forum r/SampleSize, a discussion board focused on market research surveys, as well as r/Boardgames, but received limited engagement on those two platforms.

The survey’s questions were informed and influenced by a variety of sources. Questions about duration of time with the hobby and levels of engagement were informed by the Modified User Engagement Scale (UES) (O’Brien et al., 2018). Used in human computer interaction studies, the traditional User Engagement Scale (UES) is an assessment tool developed to measure user engagement (UE). Often used to test video games and simulations, the original tool consists of 31 items that focus on six dimensions of engagement: aesthetic appeal, focused attention, novelty, perceived usability, felt involvement, and endurability (O’Brien et al., 2018). A modification of the engagement scale designed by O’Brien et al. (2018), called the User Engagement Scale Long Form (UES-LF), attempts to provide a consistent and usable short- and long-form version for use in media analysis projects. While I did not use the scales themselves, these measures provided some catalysts for framing levels of engagement with a hobby. I also drew on elements of several assessment frameworks from Reeves and Read (2009) that assessed the relative level of commitment to the gaming hobby in terms of weekly, monthly and yearly frequency of game play. The Davis (2013) study used extensive in-person interviews to create a complex picture of how women engaged with digital and analog games, placing an emphasis on how the intersections of relationship status, childcare, work responsibilities and other competing activities shaped participants’ ability to engage in gaming. Davis (2013) also explored how representation impacted gaming behaviours and delved into whether board gaming was male dominated in terms of the respondents’ perceptions.
Using all of these sources as inspiration, the result was a survey segmented into sections covering key categories of questions for the respondents: gaming habits, access to gaming spaces, participation in gaming, discrimination and representation in gaming, as well as a final section asking for demographic information. The questions and their answers will be explored in greater detail in another podcast.

For the majority of the questionnaire, I opted for five-point Likert scale questions, with the options of strongly agree, agree, neutral (I don’t know), disagree and strongly disagree available to respondents. Many of the questions on the survey were about assessing thoughts, feelings and other qualitative personal perspectives. The Likert scale is a widely used psychometric scale that helps researchers gauge participant opinions and their relative levels of intensity. It can help gauge feelings of “motivation, anxiety and self-confidence” in a way that allows large numbers of respondents to answer quickly, in a format widely used in market research studies and therefore possibly familiar to some participants (Nemoto & Beglar, 2014, n.p.). The Likert scale can also be used to inform other qualitative measures such as in-person or online interviews (Nemoto & Beglar, 2014, n.p.). The survey was a very lengthy one at 127 questions, and I believed the Likert scale to be a more efficient means of obtaining a larger volume of responses from a larger number of people than written-response questions.
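For readers interested in how such responses are typically prepared for analysis, a minimal sketch of the common practice of mapping the five Likert options onto a 1-5 numeric scale; the responses shown are hypothetical, not survey data:

```python
# Map five-point Likert responses onto a 1-5 numeric scale (hypothetical data).
SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]
scores = [SCALE[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(mean_score)  # (4 + 5 + 3 + 4 + 2) / 5 = 3.6
```

A mean computed this way treats the scale as interval data, which is a simplification; ordinal-aware summaries such as the median or response-frequency tables are often reported alongside it.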

Participants

Participants were invited to complete the online survey. I asked that participants be members of the board game hobby (casual player, game collector, industry member, content creator, designer, etc.). My justification for the focus on BIPOC, women and LGBTQIA+ respondents was that these populations were understudied and marginalized in the board gaming space. It was important to the research and the anonymity of the participants that they need not contact me directly in any way. This step was taken to ensure participants felt safe and free to share their honest perspectives about the board game hobby and industry. Although the survey came in a little under my prior expectations for responses, it still generated a strong response, with 320 unique participants completing the online surveys fully. I did offer an incentive to participate in the survey, in this case a chance to win a $100 (Cdn) gift card to an online board game warehouse. Participants interested in being entered in the drawing for the gift card were directed to a separate Google Form. In this way, their email addresses were completely disconnected from their online survey responses, ensuring the anonymity of participants’ responses. While gender identity, race, ethnicity, age range, sexual orientation, relationship status and board gaming habits were requested in the online survey, no other personally identifying information such as names, addresses, phone numbers or e-mails was requested. The e-mail addresses of participants who entered the random drawing were collected in a spreadsheet stored securely on a password-protected Google Drive, and the gift card winner was randomly drawn using a random number generator that I constructed in Google Sheets. The spreadsheet with e-mail addresses was permanently deleted after the winning participant received the $100 digital gift card.
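The winner-selection step can be sketched in a few lines; the entrant addresses below are hypothetical, and the seed is fixed only so the sketch is reproducible, which a real draw would not do:

```python
import random

# Hypothetical entrant e-mail addresses collected in the separate draw form.
entrants = ["entrant1@example.com", "entrant2@example.com", "entrant3@example.com"]

# Fixed seed for reproducibility of this sketch only; a real draw would omit it.
rng = random.Random(42)
winner = rng.choice(entrants)
print(winner)
```

The same uniform-random selection is what a generated random index in Google Sheets accomplishes: each entrant has an equal probability of being drawn.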

It should be noted again that the online survey was lengthy, requiring at least 35 to 40 minutes to complete. Testing feedback I received prior to the survey launch rightly noted that the survey’s length might pose a problem for people with busy work, school and/or family lives. As such, it should be understood by the reader that those participants who took the time to fill out a survey of this length had the privilege of 35-40 minutes of dedicated time during their day, a privilege not available to many during a time of gig-economy jobs, employment precarity, record unemployment, and all of the challenges of life during the 2020 global COVID-19 pandemic. Further, all respondents were given an opportunity to write long-form answers to the question, “Are there any other thoughts you'd like to share about your experiences in board gaming?” I was surprised by the level of engagement from the respondents, and their generosity with both their time and the depth of their responses. It was gratifying to see how many of the respondents took the time to seriously engage with the questions and provide lengthy and thoughtful written responses, some of which were genuinely moving. This question alone generated a rich trove of qualitative perspectives that I will combine with the quantitative findings to add additional clarification and colour. The overall results demonstrated a diversity of respondents, a mix of identities and backgrounds from around the world, something that I will explore in greater detail in chapter four.

The promotional strategy of posting the survey in several different places and making multiple copies of the Google Form was as much about ensuring I obtained the maximum number of research participants who conformed to my inclusion criteria as it was about risk mitigation. These precautions were taken in light of some of the online harassment and threats I received after the publication of my 2018 analysis of the game designers and artwork of the top 200 and top 100 BGG games (Pobuda, 2018). My strategy was designed to mitigate the impact of any online backlash, hacking attempts or other malicious activity. By posting multiple copies of my survey across multiple forums, I was able to carefully track and monitor any attempts to engage in griefing or trollish behaviours, such as repeated attempts to complete the survey or the entry of gibberish or offensive responses. In the design of my Google Forms-based survey, I also ensured that the survey could only be completed once per participant, with multiple attempts blocked by Google Forms based on the participant’s e-mail address. From August 2020 to October 2020, I engaged in daily monitoring of the surveys and their responses, and observed no attempts to troll the surveys nor any apparent effort to fill the completed surveys with nonsense, offensive content, or threats.
