Emily Yorgey ADT podcast cover
Episode: 114

Emily Yorgey - Intentional UX in the age of persuasive & manipulative tech

Posted on: 30 Nov 2023

Emily Yorgey is a UX/UI designer at Modulous, a modern technology platform which aids in streamlining construction projects.

In this episode, we dive into the importance of intentional UX design in a digital age dominated by persuasive and manipulative tech, drawing key insights from Emily's great article "How to build intentional UX in an era of persuasive technology", which has been published both by UX Collective and by Fast Company.

 

Links & mentions:

Transcript

"People building the products sometimes have opposing needs to people using the products, but I think it's become very evident that our cognitive fitness can be shaped by habits that we form online. So more often than not, the products baked in persuasion models are making us less intentional and deliberate."

Intro:
Welcome to the Agile Digital Transformation Podcast, where we explore different aspects of digital transformation and digital experience with your host, Tim Butara, content and community manager at Agiledrop.

Tim Butara: Hello, everyone. Thanks for tuning in. I'm joined today by Emily Yorgey, UX and UI designer at Modulous, a modern technology platform that aids in streamlining construction projects. Emily has just recently published an excellent piece on Medium about the importance of designing intentional user experiences in the era of persuasive or manipulative technology that we're in today.

And we figured that this would be a great topic to also discuss here on our podcast. So here we are. Emily, welcome to our show. It's really great to have you here. And I'm really excited about discussing this with you today. Would you like to add anything before we get started?

Emily Yorgey: No, that's perfect. Thanks, Tim. Yeah, I'm flattered that you've invited me on the podcast. 

Tim Butara: Yeah. As soon as I read your piece, I was like, oh man, we have to have a chat about this, and why not just record it and do it on our podcast so that our listeners can also benefit from it. Because from our intro call and from your article, we already uncovered a lot of really great insights that you don't really encounter in the usual content about user experience and stuff like that. So I really thought, you know, that we should get you on the podcast and have a chat with you about that.

Emily Yorgey: Yeah, no, it sounds great. Something that I'm very passionate about. So very willing. 

Tim Butara: So Emily, first off, I mentioned in the intro that this current era that we're in right now is driven by kind of manipulative or persuasive tech. So, the first thing that I want to ask you is: what makes a user experience or a piece of technology manipulative as opposed to just "persuasive", in quotation marks?

Emily Yorgey: Well, I think there's a spectrum of persuasive techniques that are leveraged by designers to retain users, and the persuasive models have a common theme of habit-forming tools. So there's always some degree of a cue, an action, and a reward. This model can be used with the intention of making users informed and motivated, but it can also be directed more towards company objectives. So that's boosting engagement and designing for immediate gratification.

So, I guess on one side, you've got a product that could potentially be waiting to be used. It's in most cases a B2B product, so it's paid for by the user, and we as the designers would pepper in some techniques to promote engagement, but it's not necessarily about exploiting human vulnerabilities. It's all about architecting the choices and making sure that we're promoting decision hygiene and making sure the user experience is a bit more purposeful.

And then I believe there's the other side of the spectrum, the more manipulative user experience, and that's in most cases B2C, so you're potentially not paying for the product, and as a user, you're in a vulnerable position. 

So, I guess the product is aiming to not only use those habit-forming models to target you and your objectives, but also to use your psychology against you. So your activity, like shares and likes, is considered a currency, and they aim to kind of manipulate your data footprint in the process. So I think there's healthy persuasion and there's also manipulation.

Tim Butara: So persuasion would be using a user's data to benefit them, and then manipulation would be using that same user's data to only benefit the business.

Emily Yorgey: Yeah, completely. I think in this competitive landscape, we're always seeking ways to kind of gain that long term engagement, but at the end of the day, I don't think persuasive techniques are necessarily as malicious as they seem. We need to keep users engaged, and yeah, that can be done in a way that makes them think critically about what they're interacting with and makes them a bit more literate in the tools that they're using.

Tim Butara: So, we should move away from the phrase "don't make your users think", which I think it was Steve Krug who said. And instead we should design experiences that can help improve the cognitive capabilities of users rather than, you know, have them stagnate or maybe even deteriorate.

Emily Yorgey: Yeah, I think Steve's original book was from 2000. And I think it's a classic, it's a resource that helps with navigation tactics and navigating stakeholders. But equally, I think because tech is becoming way more pervasive, it didn't really account for the fact that our own willpower as humans can't compete with the psychological hacks designers and tech giants employ in order to retain users through the user experience they craft. So it's not that it's outdated, it's just not acknowledging the fact that we're now in an information and attention economy, and we've got a lot of competition for our attention. Yeah.

Tim Butara: When was it that he said or wrote that? Do you remember?

Emily Yorgey: The book was titled Don't Make Me Think. And, yeah, it was originally published in 2000, but I think he's made a few more iterations to make it a bit more relevant to technology advancement. 

Tim Butara: Yeah, right now we're definitely living in a totally different world than what was true in 2000, and I think it constantly keeps changing. So as you said, it's not that it's outdated. It was just written in a different world, for a different world, for a different society, basically.

So, what would you say are the biggest risks of this drive for engagement, which is kind of at the bottom of everything that we're discussing today, so both looking at it short term as well as long term? 

Emily Yorgey: I think for the short term, it's the fact that we are baking habits within the products that we design. So we are leveraging those associative learning processes like classical and operant conditioning. So that means that we are taking those rewards and when a user completes a task, we're kind of exploiting their dopamine levels and basically trying to encourage that cycle of completing that task again.

So I think, short term, we may start to become a bit more shallow in thinking if we don't create those intentional spaces for critical thinking. So, essentially our cognitive fitness could start to deteriorate. But I think, like you said, the long term effects probably are going to be more neurological in that it could be a battle for our brain.

So, we have an ongoing struggle between individuals and big entities like the tech companies and advertisers, who are essentially aiming to manipulate our attention and our cognitive processes. So, I think the scary thought is that these manipulation tactics could infiltrate our brain activity, and yeah, it's something to be aware of. I guess that cognitive liberty could be at risk, EEG headsets are becoming more commercially available, and yeah, I guess the innovation hasn't stopped and we just don't know where it will head.

Tim Butara: Well, and you mentioned the changes to our neurological development and capabilities, and that's, you know, a scary thing for adults who already have their brains pretty much as developed as they will be throughout their life, but we're seeing more and more tech, not just tech overuse, but in a lot of cases also tech addiction among children whose brains haven't yet fully developed. So, you know, the kind of default human processes could then instead get replaced by these dopamine and engagement oriented processes that get ingrained that much more in a brain that's still developing... that's also one of the scariest long term things for me as well.

Emily Yorgey: Completely. I think one of the scariest things, especially when kids are still developing their sense of the world, is pattern recognition. So if they are constantly exposed to echo chambers, and they're kind of bombarded with sensationalized content, their ability to recognize truth in all of that is going to become more difficult. So, yeah, that's where the responsibility lies for the designer, in order to craft that kind of intentional space.

Tim Butara: Well, and also we're moving into a lot of heavy topics right now. Like, one thing that comes to mind is just the role that algorithms play in how children perceive that. And maybe we're not talking about children who like have the capacity to check out videos on their own, but like, you know, I've heard really, really like almost scary stories, almost horror stories of like really small children whose parents maybe just put on some kids video on their iPad and they enable autoplay and then a few videos down the cycle, you just get these AI generated really crazy and like obscene videos that are driven by stuff that the algorithm predicts will get a lot of engagement. And then you have small children who don't even have the capacity in like any sense of comprehending what they're seeing on screen, but it's going to leave like a long lasting impact on them. That's super, super scary. 

Emily Yorgey: No, I'm completely with you. I think the algorithmic power, especially if we leave it in the hands of the companies that drive that product... yeah, it's something that, if the designers don't apply general usability heuristics, like error prevention and allowing users to escape those bubbles, it could be quite detrimental.

I know that about a week ago, the European Union was considering a proposal to make apps less addictive. So they're hoping to address, like you said, you know, the infinite scroll, the notifications, the extraneous badges that kind of lure you back in and give you a sense of urgency, and even video and audio autoplay, things that could potentially prevent kids from being brought down that echo chamber.

Tim Butara: It's still such a new thing, even though we've been experiencing it for some time, but we've only been really truly aware of the negative impacts of all of this for the past few years. I guess I'd say, like, on the general public level, we've been aware of it ever since The Social Dilemma, the documentary. So we probably still have a long way to go in this regard.

And so you mentioned these manipulative patterns such as, you know, infinite scroll, but we have some that are even more manipulative, like, you know, obviously dark patterns. And a lot of people in UX and UI design listening right now, or just, you know, interested in that, will know what dark patterns are, but let's still break them down a little bit. So what are dark patterns in UX, and also which are the ones that tick you off the most and why?

Emily Yorgey: So dark patterns are typically UX tactics that have crossed into a dark status. So that means that they're not mistakes, there's a company motive behind them, and it's generally when a design feature is subtly nudging you to take a particular action. So, I guess, the difference between... like, I myself, I've only worked on B2B products, in the sports industry and now in construction, and we don't have any motive to use dark patterns because we have a unique value proposition and the product kind of sells itself.

So we're more about nudging to promote a healthier kind of decision making experience rather than integrating those dark patterns. But I think a lot of corporates have leveraged those black hat strategies, and it's very much all about looking at a problem objectively, kind of not considering the user, and also removing the noise. I think, for companies that leverage dark patterns, it's because the competitive landscape is so vast, they do want to make sure that you are within the ecosystem, and that you kind of stay there.

So a good example is, Elon Musk recently announced that he wants to reduce traffic to other sites, so Twitter announced a few days ago that they have removed automatically generated headlines from links to external websites. So there's obviously a clear business oriented decision there, and not only is it aiming to make sure that users are contained within the ecosystem, it's also creating another hurdle for web accessibility. So it's potentially isolating users with disabilities, and yeah, it's a scary thought that those sorts of dark patterns are being leveraged from a business point of view.

Tim Butara: It's interesting, because this example that you gave made me think about, like, the distinction between overt and covert dark patterns, right?

Because this would be an example of like the company CEO basically stating like, look, this is what we're going to do. Whereas with LinkedIn, you basically have the same feature. Like, I mean, it's not the same feature, but it's like, posts that link to external sites will be much less favored by the LinkedIn algorithm than those that don't.

So it's a very similar outcome, but one of them, like, you have to discover it through posting and through seeing that, okay, some posts have very little engagement even if it's something really interesting, and that the ones with links tend to have very, very poor engagement, whereas the ones without external links tend to have much better engagement. So it's another interesting distinction that I thought of while you were giving this example.

Emily Yorgey: Yeah, completely. I'm definitely with you. I'm still learning about the tactics of LinkedIn and the fact that they are providing an environment to create and publish articles. So it is, like, strategically, as a company or as an individual, you're better off publishing directly on LinkedIn because the algorithm is in your favor.

But yeah, it's really interesting. And same goes for the covert dark patterns that you mentioned as well. So the fact that you could potentially be masking an action with colors or anything that's kind of making sure that they're either not kind of taking an action or it's just a kind of linear process that they're orchestrating.

Tim Butara: Yeah. So, you mentioned before that it's either nudging you to take a desirable action, or it's preventing you from taking an undesirable action.

Emily Yorgey: Definitely. 

Tim Butara: So we already started talking about algorithms and stuff like that. So maybe we can take a little bit of time to talk about AI and AI innovation and over innovation and what kind of risks this poses in the context of manipulative and persuasive technology.

Emily Yorgey: I think AI is a huge topic, especially with the emergence of ChatGPT. It's something I've even leaned on a few times, but as it's getting more complex, I think the bots are going to become a bit more primary in our lives. Since ChatGPT is so disposable and available, I'm already seeing people lean on it for publications and things. So I think the scary thought is that there's a lack of safety researchers and it's been deployed quite early, so there really are some unknown capabilities, and yeah, it has a scary potential, but that's where we're at now. I think from a manipulative UX perspective, my hypothesis is that it's going to be deployed and begin to become a bit more influential in everyday decision making.

I love this quote from Daniel Kahneman: "The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little." So that's essentially saying that if ChatGPT becomes more coherent, more eloquent, and more conversational, it could become more believable. And that means that users won't necessarily start critically examining or triangulating their results with other sources. So they'd kind of be treating ChatGPT as their main source of knowledge, which is a scary thought.

And that generally attacks our metacognitive skills. So that's cutting into our working and long term memory, the fact that AI can be seen as an external hard drive for our memories. So I think people see Google that way, I think it's called the Google effect, but I can see AI fitting into that sort of mental model, where people see these tools as their own cognitive tool set, and they start to undermine the fact that they actually need to learn and encode things in their own memories, because they've just got these tools that are so disposable. So, I think, for me, that's the scariest thing about this new kind of development.

Tim Butara: So this would essentially take us back to "don't make your users think".

Emily Yorgey: Yes, exactly. Yeah. I think it's definitely leveraging the fact that the queries can be so simple and, yeah, you're exposed to so much information that the way it presents itself does in a way become believable and there's limitless friction, well, there's no friction at all. I think that's a scary thought. 

Tim Butara: And for me, just personally, I'm someone who really values human connection and communication and, sometimes I get really bothered when, you know, I ask somebody a question and they're like, Oh, yeah, just Google it, you know, yeah, why don't you just Google it, you have that at your disposal.

And it's like, yeah, but I value like your take on it. I value your input. I value like the way that you explain it because I'm interested in what you have to say about it. And also it's just like, you know, it's kind of eroding some of the basic and most fundamental aspects of humanity for the sake of convenience, whether that's convenience for the user, convenience for the business. But I just like, I personally, I don't think that things should be too convenient, right? It is the same thing as like, yeah, you know, don't make your users think too much but you know, don't make them stop thinking altogether. 

Emily Yorgey: Yeah. No, I'm completely with you. It's heading into a territory where we might get groupthink, where we're kind of submissive to an opinion and don't necessarily vocalize our own unique perspective, so creativity is definitely at risk. But equally, AI is a funny one in that I'm convinced it's not going to necessarily take our jobs or something like that. It's just a matter of, it's making us more employable if we know how to use it. It's not raising the roof, but it's raised the ceiling in that respect.

Tim Butara: Well, Emily, with everything that we discussed so far in mind, how would you define intentional UX and what are like the most important things here?

Emily Yorgey: So that was kind of the crux of my article, and for me, intentional UX is making sure that designers take their responsibility and account for architecting the critical path so that users are mindful. And I'll preface that by saying I know how much user experience can impact revenue growth.

So, you know, people building the products sometimes have opposing needs to people using the products. But I think it's become very evident that our cognitive fitness can be shaped by habits that we form online. So more often than not, the persuasion models baked into products are making us less intentional and deliberate. So it's about leveraging those tools that make sure we're strengthening the right kind of critical thinking aspects, and accounting for vulnerability as well, making sure that we're not exploiting the way our brains are wired, because products do strengthen new neural pathways as well.

Tim Butara: So it has a lot to do with like ethical design and stuff like that also.

Emily Yorgey: Yeah, completely. So, when I was studying psychology, I had a big passion around problem solving and decision fatigue, and I think because tech is becoming more pervasive, it's definitely something to consider. It's not necessarily a self efficacy problem, so it's not the confidence or the motivation from the user. It's more like how we craft the design. 

Tim Butara: Well, Emily, I'm really glad that we got to discuss all this with you today. We'll definitely also link your Medium article in the show notes for anybody who wants to check it out and learn more about everything. But if anybody wants to connect with you or learn more about you somewhere else, what's the best way to reach out and connect with you?

Emily Yorgey: I think LinkedIn would be great, Emily Yorgey. I don't have any other resource in mind. I really appreciate it. Thank you.

Tim Butara: Yeah. It was great having you on, Emily, as I said, and we really appreciate you joining us. And to our listeners, that's all for this episode. Have a great day, everybody, and stay safe.

Outro:
Thanks for tuning in. If you'd like to check out our other episodes, you can find all of them at agiledrop.com/podcast, as well as on all the most popular podcasting platforms. Make sure to subscribe so you don't miss any new episodes and don't forget to share the podcast with your friends and colleagues.