“In order to encourage more physical activity & social exchange in a team, a user can create a 'workout of the week' that everyone should complete - five squats before lunch, etc"
Client: Bright Skills
Role: UX research & design
Deliverables: User flows, Figma prototype & graphic assets
Create a gamified social wellness app that takes into account the different motivations people have for engaging.
This was commissioned as an in-house project at Bright Skills, where I interned. It was intended as a proof of concept for the junior developers learning the OutSystems low-code platform, so in addition to the application brief I also had some technical requirements: a simple design with no animations, no SVG or other built-in graphics (PNG only), and AI had to be used in some capacity.
An interactive Figma prototype delivered on time to the devs along with graphic assets.
What do you call two buttons that individually act as booleans, but also affect each other? Had the team agreed on this earlier, we could have avoided some dead ends caused by confusion.
I had two weeks to deliver a finished design, with a wireframe delivery halfway through. In order to get started quickly – and since this app was an exploratory MVP rather than building on an existing hypothesis – I did a competitor analysis to gauge the mechanics and value-adds of the most popular gamified and task-oriented apps. This approach is fraught with confirmation bias, since gamification often targets people with a certain behaviour who are not necessarily representative of the people in a typical office – which was our target audience.
Nonetheless I could identify a few useful trends:
• Friendly tone of voice & playful colours
• Sincere and personable challenges
• Minimal shaming of non-participants
Many of the successful approaches were out of reach for us – fancy graphics, sound design, game elements, advanced progression ladders, etc – and we had no incentive to upsell participants on microtransactions or rewards, so I could safely ignore such features.
Since the setting for our app was corporate, I wanted to allow both “positive” and “negative” participation. There’s a risk that initiatives like these encourage “toxic positivity”, which is counterproductive to the goal of building team spirit, and letting someone participate on their own terms seemed like an important aspect – for example, you could set difficult or obtuse challenges, allowing you to “vent” while still taking part in the social side of the app.
Concurrently, I was outlining the requirements for a minimal user flow – my goal was to fit as many meaningful interactions as possible into as few screens as possible, so that we could develop an MVP faster, and to force myself to focus on clear IxA and UX copy.
I also dipped into academic research. For example, the paper “Gamification in Apps and Technologies for Improving Mental Health and Well-Being: Systematic Review” (Cheng et al.) identifies 18 different gamification mechanics in apps promoting mental wellbeing. There wasn’t enough time to evaluate the results of my app, but this paper and others offered good ideas for how to break down what motivates participants.
Sketching, interactive wireframes and flowcharts complement each other – where one has to be rigorous or pedagogical, another can just drop in a “cool animation icon here” placeholder without bothering with the details or effort of making it. The gaps get filled in during discussions with the PO and the dev team.
Wellmate presents all users with the same challenge in any given week, and resets on Monday morning. As a user I can choose to skip this week’s challenge, I can rate it good/neutral/bad, and I can suggest my own challenge to the team, which is added to the pool of challenges; one challenge is selected randomly from the pool each week.
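To make the mechanic concrete, here is a minimal sketch of how the weekly rotation could work. The real implementation lived in OutSystems and stored the randomly picked challenge server-side; this TypeScript version is purely illustrative (all names are mine) and uses a deterministic pick based on the ISO week number – which starts on Monday, matching the reset – so every client would agree on the same challenge.

```typescript
// Illustrative sketch of the weekly rotation; not the production OutSystems logic.
interface Challenge {
  id: string;
  text: string;
  suggestedBy?: string; // set when the challenge came from a team member's suggestion
}

// ISO week number, so every client agrees on when "this week" starts (Monday).
function isoWeek(date: Date): number {
  const d = new Date(Date.UTC(date.getFullYear(), date.getMonth(), date.getDate()));
  const day = d.getUTCDay() || 7;          // Monday = 1 ... Sunday = 7
  d.setUTCDate(d.getUTCDate() + 4 - day);  // move to the Thursday of this ISO week
  const yearStart = new Date(Date.UTC(d.getUTCFullYear(), 0, 1));
  return Math.ceil(((d.getTime() - yearStart.getTime()) / 86400000 + 1) / 7);
}

// One challenge per week: same pool + same week = same challenge for everyone.
function challengeOfTheWeek(pool: Challenge[], today = new Date()): Challenge {
  if (pool.length === 0) throw new Error("The challenge pool is empty");
  return pool[isoWeek(today) % pool.length];
}
```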
The social and gamified elements of Wellmate are intended to be obvious but not perceived as manipulative: complete and rate challenges, and suggest challenges of your own. The challenges are open-ended enough to allow different “gaming styles” as well as both extrinsically and intrinsically motivated participation.
Using AI was a requirement in the process, and adding it to the challenge generation feature seemed like a good fit. The developers integrated ChatGPT 3.5 and wrote an agent with instructions to generate wellness-focused challenges within character limits, etc.
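I didn’t write the integration myself, but the shape of it was roughly as follows. This is an illustrative sketch using the OpenAI Node SDK; the model name, prompt wording and limits are my assumptions, not the devs’ actual code.

```typescript
import OpenAI from "openai";

// Hypothetical sketch of the challenge-generation agent; the real one was built in OutSystems.
const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const SYSTEM_PROMPT = `
You generate one weekly wellness challenge for an office team.
Rules:
- Maximum 80 characters.
- Must be completable during a normal work week, without equipment.
- Friendly, concrete and specific ("five squats before lunch"), never preachy.
`;

async function generateChallenge(): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: "Suggest this week's challenge." },
    ],
    max_tokens: 60,
    temperature: 0.9, // some variety between weeks
  });
  return response.choices[0].message.content?.trim() ?? "";
}
```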
This proved technically feasible for the most part, but of limited value for the core mechanic of the app – quite often the challenges were generic and difficult to complete (“eat healthy breakfasts this week!”), which risked lowering participation. Who wants to obey a Pollyanna-ish AI just because HR told them to?
There are obvious uses for AI in gamified settings such as these, but the agents need to be closely tuned and evaluated as a core part of the application, rather than just being tacked on – especially in cases where the goal is “human interaction”.
Research into how LLMs affect social interactions has only just begun, but “Artificial intelligence in communication impacts language and social relationships” (Hohenstein et al.) suggests that even though many people are sceptical of AI-mediated social interactions (automatic messaging, for example), this scepticism doesn’t carry over once users are better acquainted with how the system works, as long as the system is transparent and offers an improvement.
The app was delivered on time to the dev team, and in the subsequent weeks I helped out with some minor UI tweaks, re-exported icons in different sizes (no SVGs allowed) and changed some copy around. There was some feature creep as the scope expanded (account creation, a dedicated achievements page, etc.) but nothing major.
The disproportionately biggest hurdle for the devs was my design of the “rate the challenge” control – two buttons with two states each, used to represent three states: null, positive, negative. Each button toggles between its own two states, but also turns the other one off if it is currently on.
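In code the intended behaviour is only a few lines. This TypeScript sketch is mine and hypothetical – the devs built the real thing in OutSystems – but it shows how small the logic is compared to the confusion it caused.

```typescript
// Sketch of the intended "rate the challenge" behaviour: two buttons, three states.
type Rating = "positive" | "negative" | null;

// Pressing a button toggles it off if it was already active;
// pressing the other button switches the rating, so both are never "on" at once.
function nextRating(current: Rating, pressed: "positive" | "negative"): Rating {
  return current === pressed ? null : pressed;
}

// Derived button states for rendering.
const isPositiveActive = (r: Rating) => r === "positive";
const isNegativeActive = (r: Rating) => r === "negative";
```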
I should simply have illustrated the three states, along with some words and arrows to explain the interaction. Because it’s difficult to infer all behaviours from an interactive prototype – and Figma doesn’t always map 1:1 to intent – any ambiguity should be explained in a way that suits all participants.
Lesson learned.