Theories on behavior change, health psychology and intrinsic motivation are often used to inform the design of interactive applications, in particular in an e-Health context. However, these theories have been developed to predict user intentions, not to assist the design of interactive applications. In this inaugural lecture, I will present the challenges of translating such theories into design, drawing on the second law of thermodynamics.
A couple of years ago, a colleague from the Faculty of Sports Sciences came to us at the e-Media Lab. He asked whether we could help build an application to motivate people with a sedentary lifestyle to stand up every 30 minutes. After all, sitting is the new smoking.
Of course, we were keen on developing such a health app, which involved signal processing of data coming from an acceleration sensor attached to the thigh. It also involved some fine Android programming, to make sure that the app on the smartphone worked and could be paired to the sensor via Bluetooth. [As you can see, I am not alone in working on such systems; this is joint work with my colleague Luc Geurts.] Finally, it also involved some good User Experience design. That is where I come in. After all, users should find the app easy to use and engaging. However… that wasn’t enough!
My colleague from the Faculty of Sports Sciences was actually an expert in sports psychology. He informed us that the app should also be designed according to SDT. Now, SDT, for those of you who are not familiar with the human sciences, does not refer to a long-forbidden pesticide, but rather to one of the most popular theories at the moment: self-determination theory. It predicts that for people to be intrinsically motivated to perform any behavior, that behavior should cater to universal needs for competence, relatedness and autonomy. So, with our health app, we had to provide our users with a sense of competence or mastery, make them feel related or connected to others, and, finally, provide a sense of autonomy, of free will and choice.
Now… how do you design an app that fosters, for example, relatedness? There are myriad ways in which people can connect to each other. Should we add a forum to build a community? Should we allow for chatting with friends? Or a feature to find a virtual workout buddy? Or a leaderboard where you can compare yourself to how your friends are doing? Unfortunately, SDT does not say much about how it should be operationalized and embedded in a smartphone app, and neither did the colleague from sports psychology. He didn’t say which operationalization is most successful.
And SDT is only one of many theories on behavior change. When researching theories on behavior change, health behavior and motivational psychology to inform the design of health apps, there are myriad. There is, for example, the Theory of Planned Behavior, but there is also the Health Belief Model, the Transtheoretical Model of change, and we can even use good old Skinner to reinforce behavior.
Different traditions in science often have very different theories, different constructs, different factors, which in the end can be operationalized in many different ways as different features in a health application.
However, with apps aiming at changing or supporting health behaviors, there needs to be that moment where clinical ambitions are translated into specific features, into specific user interface components; for better or for worse. Part of my research is to stay on top of, and to investigate, the many different ways in which theoretical constructs related to behavior change for health can be embedded in health apps. Here you see recent research where we investigated the many ways in which persuasive principles can be embedded in health apps for chronic arthritis patients.
To encapsulate this knowledge, we have also built tools for other app designers and health researchers, to inform and inspire them about the different motivational features that exist. Here, for example, is a website showcasing the bridge between what we like to call motivational design features and health theories.
Note that some like to call this persuasive system design. Some motivational design features are also labeled as gamification, as these features include elements from games in a non-gaming context.
However, such theories or taxonomies can and should never be used in a prescriptive manner. By this I mean that adding a chat, a challenge and a leaderboard does not guarantee that users will find the health application intrinsically motivating. If only it were that simple… So my work is not limited to simply charting the many ways in which motivational design features can manifest themselves.
Rather, my work is to grasp the context of use and to understand the different operationalizations of motivational designs. Therefore, it is important to apply a user-centered process. Hence, we interview and observe users to better understand what they desire and think about certain mHealth features. Here I present a figure from a recent study by my PhD student Jonas Geuens, where we investigated how chronic arthritis patients themselves perceive and value different motivational designs.
As it turns out, they are not that keen on any features that foster relatedness. Now, I do not want to proclaim that SDT has got it wrong, and that Relatedness does not matter to chronic arthritis patients. But we found that these patients do not want to centralize the disease in their interactions with others. Rather, they like to keep the disease out of that interaction, and also out of any app to support them.
Much of my research is geared towards researching and then evaluating different motivational designs for different users and patient groups… and physicians… When talking about e-Health apps with clinical experts, we quickly end up with the issue of adherence. When patients are prescribed a therapy, they need to adhere to the regimen. They need to comply with what is prescribed.
Adherence, or rather the lack of adherence, is well known and well studied in the clinical realm. When eHealth becomes part of the regimen, adherence seems to become even more problematic. Sensors are to be attached, Bluetooth needs pairing, phones need charging, apps need updates. Here you see a graph from a seminal study by Eysenbach, testifying to this law of attrition in eHealth.
How can motivational features have their effect when the mHealth app is not used? Should we maybe design motivational features to make sure that the app is actually opened at all? By the way, this lack of user engagement with apps is well known beyond an academic context. But let’s stick to the academic discourse.
So yes, we try to improve and promote adherence. Here you see a study in which postdoc Robin evaluated different gamification techniques and their impact on adherence.
And again, our research is not limited to systematic reviews; we can also go out and ask patients themselves what they would like to see designed in their apps. Here you see a screenshot of gamification principles used in a study with young pectus patients. Pectus surgery is about correcting certain chest deformities. After surgery, these pectus patients receive a telemonitoring kit, including a health app and sensors. In this manner, they can be sent home right away. Hospital beds are expensive. With the app and sensors, the physicians can follow up on their health from a distance. So again, we asked them what they think of certain gamification principles. As it turns out, they are not much in favor of social interaction through the app either.
But with this focus on motivational design to promote compliance, I find myself in an awkward space where I have been asked to design an application that aspires to motivate but at the same time is used to discipline.
Now, I promised you the second law of thermodynamics. But in order to analyze this tension underlying much of my recent work, I need real science… I need philosophy. In particular, Foucault, the French philosopher, has given me a lens to investigate this conflict. Foucault wrote many works on the relationship between power and knowledge, and on how science and technology can be seen as an extension of existing power structures and as a means for governing people.
Foucault speaks of governmentality, namely “the way governments try to produce the citizen best suited to fulfill those governments’ policies”. Let’s take a couple of minutes to explain this. What does Foucault mean by governmentality, and what does it have to do with health apps? To better explain this, Foucault uses the image of the Panopticon.
The Panopticon is a type of prison building and a system of control designed by Jeremy Bentham, the founder of utilitarianism, in the late 18th century. The building is conceived in such a manner that it allows all inmates to be observed by a single guard, without the inmates themselves being able to tell whether or not they are being watched. Although it is physically impossible for the single guard to observe all the inmates’ cells at once, the fact that the inmates cannot know when they are being watched means that they are motivated to act as though they are being watched at all times. As a result, inmates will regulate their own behavior. Foucault would call this ‘internalized surveillance’. Foucault viewed the panopticon as the ultimate symbol of a disciplinary society of surveillance.
Health apps intend to empower patients, to allow them to contribute to their own health and enable a better self-management of health. However, these technologies also expand the ‘medical gaze’ beyond the confines of the hospital into everyday life and the home and even the body and mind of the patient.
Health apps can sense and track users, send data to central servers where physicians, but equally other staff and researchers, can access the data, and make patients internalize surveillance. The panopticon metaphor should be clear by now.
Moreover, this health technology promoting active ‘patienthood’ through continuous self-monitoring has strong normative connotations. We not only internalize this surveillance, but equally the notion that we are now responsible for our own health. This is becoming a new norm!
Now, imagine this app, with a sensor on your thigh, that measures how long you have been sitting and then sounds a disturbing alarm if you fail to stand up after 30 minutes.
OK, so that app never made it beyond the intervention study. But imagine another app designed for young pectus patients who are sent home right after surgery. They are asked to do some easy measurements, wear sensors, and fill out questionnaires daily. If a pectus patient does not comply, he or she is frowned upon by the physician. Of course, the physician can then not provide the care that is necessary. The pectus patient should have used the health app appropriately.
This may sound like a far-off reality, a Black Mirror episode, but surely not something that our society would really allow. Or would it?
Maggie De Block, the current minister of health, funded 24 research projects in 2016 to investigate whether apps can help physicians follow up on patients. Hmm… one of these projects involves a pain clinic and a research group building health apps for young pectus patients. Luckily, the projects Maggie De Block funded are still research projects. And luckily, through our user observations, we have come to find that patients and care personnel find myriad ways to escape the normalizing power that comes with such technologies. Now, I cannot show images from our own studies, for various reasons. But this movie, made by the artist collective Superflux, says it all.
Now, if you were asked to redesign and improve this app, you could look for ways to prevent cheating, to tighten the net and enforce certain usages. I want to argue for the opposite. We need to design motivational health apps that cater to disorderly conduct, accommodating behaviors that escape normalizing power. Disorderly interfaces are those interfaces that allow users to gracefully sidestep the normative powers embedded in present-day monitoring technology.
I want to stress that I conceive of disorderly behavior as behavior that deviates from the norm, not as messy or irrational behavior. Much disorderly behavior is actually very rational or logical within the specific situation the user is facing. [And I am looking forward to exploring disorderly interfaces with my colleague Prof. De Vleminck, residing at LUCA, the faculty of Arts.]
Now, I still need to bring in the second law of thermodynamics, and I have been talking for 15 minutes already. Back to the easy science… let’s finally talk about thermodynamics. And let’s bring in at least one formula. After all, I am presenting to an audience of engineers and computer scientists.
The second law of thermodynamics states that in a closed system, entropy can only increase. Now, there are many interpretations of entropy, interpretations that you probably know better than I do. In this lecture I want to talk about entropy as a measure of the amount of disorder in a system. But as I already emphasized, disorder should not be misunderstood as messiness, but rather as the number of different configurations a system can be in. At a microscopic level, the number of microstates a system can be in is enormously large, to such an extent that deterministic models no longer help explain the effects observed at this level. That is why physicists turned to statistical physics and probabilistic ways of thinking about the world, embracing uncertainty.
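For the promised formula, we can take Boltzmann’s famous expression for entropy, which makes this count of configurations precise (a standard textbook statement, given here as a sketch):

```latex
% Boltzmann's entropy formula: entropy S grows with the number of
% microstates (configurations) W a system can be in; k_B is the
% Boltzmann constant.
S = k_B \ln W

% The second law for a closed system: entropy never decreases.
\Delta S \geq 0
```

The more configurations a system can be in, the larger its entropy; and left to itself, a closed system drifts towards the states with the most configurations.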
I would like to argue that, when dealing with the ecosystem of health care, we are now equally dealing with an enormous number of microstates. When designing current interactions with healthcare apps, of course we have to take into account the patient, his or her medical condition and individual demographics. We also have to factor in specific dispositions and preferences, and the context of use. But that context of use, acknowledging that health apps have moved beyond the clinical confines and are now woven into the fabric of everyday life, has expanded exponentially.
Designers now need to anticipate usage scenarios where the patient is stuck in traffic on his daily commute and hence late in filling out his daily questionnaire. Or where the patient’s dog mistook the bright sensor for a nice chew toy, messing up the measurements.
As designers, we need to embrace the fact that human interactions with health apps can no longer be predicted through deterministic models with predetermined interfaces. We, too, need to abandon deterministic ways of designing and embrace techniques and methods to think of interactions in probabilistic terms.
That is why I am also very keen on collaborating with my colleagues Professor Kathrin Gerling and Professor Verbert, an expert in recommender systems. Recommender systems may be thought of as the type of algorithms that tell you what your next favourite movie is, as on Netflix. But we attempt to combine recommender algorithms with motivational designs in healthcare, catering to different microstates with different motivational designs, or different interfaces if you like. I am very much looking forward to this continued collaboration on investigating how to leave behind deterministic models of interaction and move towards stochastic models in motivational design.
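To make this idea of stochastic motivational design concrete, here is a minimal sketch of an epsilon-greedy bandit that picks which motivational feature to show a user, exploring at random some of the time and otherwise exploiting the feature with the highest observed engagement. The feature names and the `EpsilonGreedyRecommender` class are illustrative assumptions of mine, not the actual system built in this collaboration:

```python
import random

# Hypothetical motivational features to choose between.
FEATURES = ["challenge", "leaderboard", "chat", "progress_bar"]

class EpsilonGreedyRecommender:
    """Explore a random feature with probability epsilon; otherwise
    exploit the feature with the highest observed engagement rate."""

    def __init__(self, features, epsilon=0.1, seed=None):
        self.features = list(features)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {f: 0 for f in self.features}      # times shown
        self.rewards = {f: 0.0 for f in self.features}   # total engagement

    def recommend(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.features)        # explore
        # exploit: highest average engagement observed so far
        return max(self.features, key=self._avg)

    def feedback(self, feature, engaged):
        """Record whether the user engaged (1.0) or not (0.0)."""
        self.counts[feature] += 1
        self.rewards[feature] += engaged

    def _avg(self, f):
        return self.rewards[f] / self.counts[f] if self.counts[f] else 0.0
```

In a fuller, contextual version, the choice would also condition on the user’s current “microstate” (time of day, location, recent adherence), which is where recommender-systems expertise comes in.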
To conclude: disorderly interfaces not only accommodate deviance from normality. Disorderly interfaces equally embrace the natural tendency towards ever-increasing complexity in healthcare and human interactions.
Thank you!
And many thanks to the PhD students and colleagues not mentioned here, but whose help was indispensable.