Be Useful, Not Sticky
- Puneet Seth
- Apr 28
- 4 min read
In digital health, “engagement” is often the holy grail. The more people use your product, the more “sticky” it is. And in a world driven by quarterly earnings and user metrics, stickiness can easily become the end goal itself.
But as a practicing physician and founder building an AI-native health platform, I’ve come to believe something else: in healthcare, stickiness without responsibility is dangerous.
A new study, How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use, sheds light on this risk. Conducted over four weeks, it explored how different types of chatbot design (text vs. voice, personal vs. non-personal conversations) affect loneliness, social interaction, emotional dependence, and problematic use.
The findings? Eye-opening.
At first glance, voice-based chatbots (especially with “engaging” voices) seemed to reduce loneliness and emotional dependence. But over time, these effects faded. In fact, high daily usage of any chatbot—regardless of modality—was associated with increased loneliness, decreased human social interaction, and greater emotional dependence.
Even more concerning: personal conversations lowered dependence but paradoxically increased loneliness. Non-personal conversations, by contrast, increased dependence—especially among heavy users.
In other words, the very design choices that make a chatbot more engaging can unintentionally reinforce the user’s reliance on it, nudging them away from real human connection. That’s not just sticky—it’s manipulative.
Designing for usefulness, instead of just use
At nymble, this study hit home. We’re building AI agents that support people on obesity treatments, whether medications (such as GLP-1s) or beyond, via existing messaging channels people already use, such as SMS and WhatsApp. There’s no app to download, no hoops to jump through.
We’re not here to replace human connection or create the illusion of friendship. We’re here to be useful when needed, offering people a new category of relationship.
That means helping people navigate their treatment: giving them the confidence that they’re doing the right things, that they know what to expect, and that they know when to reach out for help. It does not mean subtly encouraging daily check-ins to fuel engagement metrics.
It’s a quieter, more respectful kind of design philosophy. One that assumes people want their health tools to work for them, not on them.
With engagement comes responsibility
There’s a lesson here for all of us building AI in healthcare: stickiness can’t come at the cost of psychosocial well-being.
We need to think about responsible engagement that protects and serves users. That includes:
- Avoiding emotional manipulation: Not every chatbot needs a human face or voice. In fact, that can do harm. Let’s design for trust, not attachment.
- Encouraging real-world interaction: If a patient is checking in daily with a chatbot for social interaction, maybe what they need isn’t more AI but a real human connection. Your product should know the difference.
- Designing for episodic use: Not every tool should be used every day. In healthcare, frequency of use isn’t always the right KPI; impact is.
- Being transparent about limits: The best bots know what they can’t do, and make that clear to the user.
We’re lucky (and grateful) to be able to build differently
I’ll be honest: we’re fortunate here at nymble.
We decided to take a different road and self-fund our journey to date (in part thanks to support from provincial and federal funding programs here in Canada). That has meant independence from external demands: we don’t have to bend our design around user acquisition or engagement curves. It has given us the rare freedom to ask not “how do we keep users coming back?” but rather “how do we make this helpful, and then get out of the way? How do we build something that intrinsically serves the people we’re building it for?” If we do choose an investment partner in the future, alignment with our thinking here is crucial, and having returned from two months in Silicon Valley earlier this year, I’m not sure how many investors are meaningfully aligned with us beyond lip service.
To founders building AI-native health companies: make sure you’ve recalibrated your thinking around growth and fundraising to reflect the reality of building with AI. It’s not like it was even two or three years ago. Massive teams are counterproductive to growth, which changes the dynamic with investors. “Headcount is the new technical debt.”
A final thought
There’s something powerful about a tool that is available but not addictive. It respects people’s autonomy. It meets them where they are, without overstaying its welcome.
That’s the exact kind of relationship we’re trying to build at nymble.
A relationship with users in which our guiding principle is simple: we will always design first for usefulness, not stickiness. Success, then, is people returning because of utility and trust, not because of dependence.
And I believe it’s the kind of relationship healthcare AI needs, if we want it to truly serve people—not the other way around.
Let’s design better. Let’s be useful.
Not just sticky.