Context grounds our understanding of a person’s situation; it is what lets advice fit, support become useful, guidance hold up, and empathy meet the individual rather than a type.
This is true in most human enquiry, and no less so of financial services. Every institution, practitioner and tool built to help a person with their money, whether a banker, an adviser, a coach, a budgeting tool or an AI agent, depends on what it understands of the person on the other side. What it can do is bounded by what it knows.
Two people with identical income and identical spending can be in completely different situations. One is saving for a wedding; the other is trying to move out of a shared house. One has a parent who needs support; the other is debt-free and puts every spare dollar into an ETF. One has a stable salary and a mortgage; the other is a contractor anxious about next month.
The transactions don’t distinguish them. The context does.
Which raises a question that has always sat behind the work of helping people with their money: how is the context actually established?
Lucie Money is being built to help people with their money. What follows is about how context must be established for a system with that purpose to be capable of it.
Establishing context
The answers have fallen, broadly, into three categories.
Knowing it from the relationship
The oldest approach is to know the context personally, through a human relationship maintained over time. The traditional bank manager, the contemporary business banker, the private banking relationship manager: the model is the same. Context is established by knowing the customer: conversation over years, an understanding of family circumstances, career direction and aspirations, observation of how the relationship unfolds. When the customer comes in wanting to buy a car, apply for a home loan or restructure their affairs, the context is already there. Advice, support and guidance follow from it.
This has always worked when it has been available. Its limit is one of scale; as financial services went mass-market and digital, the industry had to look for ways to provide some of what the relationship manager knew, without the relationship manager.
Access to this kind of relationship has always been stratified: by creditworthiness, class, gender and the institution’s judgement of who was worth serving. The nostalgia can only be selective.
Inferring it from data
The now-dominant approach is to infer the context from observation. Build a profile of the customer from everything the institution already holds about them. Transactions. Balances. Payments history. CRM records of contacts, complaints and product holdings. External data bought in for enrichment: market-research segments, demographic overlays, life-stage models. All of it combined and run through analytics to derive insights, segment the customer, identify life events and predict next-best-offers.
Banks and financial institutions have been refining this approach for decades. It is genuinely sophisticated, and it has real results in fraud detection, credit assessment, retention analytics and personalised marketing. That infrastructure was built to serve commercial purposes, and it serves them well.
What follows is not a critique of inference itself, but of inference as a basis for the different job of establishing the context needed to help an individual well. For that job, inference has structural limits:
- It answers the wrong question. Inference can predict what the customer is likely to do next, but it can’t tell you why they’re doing it or what they’re trying to achieve. A $40 transaction at the supermarket is a $40 transaction at the supermarket, whether it was the weekly shop or the ingredients for someone’s birthday dinner, whether the customer felt in control or guilty, whether they’d welcome advice or resent it. The data doesn’t say.
- It lags behind change. Profiles are most accurate for customers in stable patterns and least accurate for customers in transition. A job change, a new relationship, a break-up, a move, a redundancy, a health event: the profile keeps describing the customer’s previous circumstances until enough new data accumulates to overwrite it. And money context matters most precisely in those transitional moments.
- It cold-starts poorly and is opaque to the user. A new customer has no profile. And if the system has decided you are a cautious saver when actually you’ve just been between jobs, you have no way to correct it.
And it remains, at root, an observational, inductive relationship. The system watches; the user is watched.
Constructing it from preferences and settings
Configuration has two distinct uses that are often conflated.
The first is control: the user decides what they want and sets a rule the system should enforce. A spending limit on a category. An alert when a bill approaches. A block on certain kinds of transaction. These are deliberate instructions, not descriptions of the user.
The second is context-capture: trying to establish what matters to the user by having them translate their situation into fields. The actions look the same: pick categories, set budgets, declare savings goals, choose alert thresholds. But the purpose is different. The parameters are being treated as descriptions of who the user is and what they are trying to achieve. The user’s settings become the system’s model of the person. This is how every budgeting app and most banking apps try to know their users.
As a way of establishing context, this runs into three structural limits:
- It demands translation work. Categories and parameters can’t accurately represent a situation. “Saving for a holiday” fits in a goal field; “saving for a holiday that might be a year off if my partner’s contract doesn’t come through” does not.
- It loses fidelity. Categories and parameters can’t hold texture or ambivalence. They can’t hold “I know I should be doing this but I haven’t yet”. The constructed profile loses this context.
- It decays. Configurations are commonly set once and rarely revisited. But life, of course, changes, and the settings that fit in March may not fit in July. Keeping the model current takes sustained effort.
This ‘translation-and-maintenance’ work is exactly the kind of deliberate cognitive effort most people’s attention can’t reliably sustain. And so the context-capture project, already contrived, falls apart in practice.
Both inference and configuration are attempts to substitute, at scale, for the first approach. Each captures something of what the relationship model delivered; neither captures the whole. The question is whether there is another way to derive the benefits of a personal relationship at a scale the traditional bank manager could not reach.
Scalable conversation as context
Conversation as context is an attempt to achieve this.
It means that the user’s own account of their situation sits at the organising centre: what they say they are trying to do, what matters to them, what is changing. Inference and parameters sharpen and detail this picture, but the understanding is organised around what the user has said, not only around what the data shows or the settings hold.
Conversation as context imposes a set of demands on any system built around it. The user’s context has to be held across every interaction, not re-asked each time. It has to be treated as current until the user says otherwise, and checked rather than assumed when circumstances suggest it may have moved. It has to be kept distinct from what has been inferred about the user, because the two carry different weight and warrant different treatment. And what is known has to be filtered to what is worth surfacing in conversation, and when.
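The demands above can be sketched as a minimal data model. Everything here — the class names, the fields, the staleness window — is a hypothetical illustration of the design constraints, not Lucie’s actual implementation. The key moves are that stated context is held persistently, treated as current until revised, kept distinct from inferred material, and flagged for checking rather than silently overwritten.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ContextEntry:
    """One piece of the picture of the user's situation."""
    statement: str        # e.g. "saving for a wedding"
    recorded_at: datetime
    source: str           # "user" (said) or "system" (inferred)

@dataclass
class UserContext:
    stated: list[ContextEntry] = field(default_factory=list)    # the organising centre
    inferred: list[ContextEntry] = field(default_factory=list)  # sharpens, never replaces

    def current(self, now: datetime, stale_after: timedelta) -> list[ContextEntry]:
        """Stated context treated as current until the user says otherwise."""
        return [e for e in self.stated if now - e.recorded_at <= stale_after]

    def to_check(self, now: datetime, stale_after: timedelta) -> list[ContextEntry]:
        """Older statements are checked with the user, not dropped or overwritten."""
        return [e for e in self.stated if now - e.recorded_at > stale_after]
```

The one structural commitment worth noting is the separation of `stated` from `inferred`: the two carry different weight, so they are never merged into a single profile.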
Objections
Three objections may be levelled at this approach.
Isn’t this just a chatbot on top of a budgeting app?
In most products that blend chat with a financial app, the chat is a skin: a conversational interface laid over the same budgeting-and-tracking mechanics that were already there. The underlying model of the user is still the configuration they set, the transactions the system captured and the charts that follow. The chat is a way of talking to that.
The conversation as context approach is different. The conversation is not an interface over an underlying model. It is itself the underlying model: what the system holds about the user, built and maintained through what they have said. Features sit on top of that, not under it.
Can a system really do this kind of context-holding?
A particular combination of capabilities now makes this possible: continuity across every interaction, memory that persists and is drawn on rather than re-requested, and observation running in the background to surface what warrants conversation. None of these on its own would be enough; together they amount to a viable substrate for conversation as context.
What if the user misrepresents?
Transactions are hard fact; statements are not. Might inference be the more reliable basis after all? Two responses.
First, conversation as context makes deliberate misrepresentation self-defeating. The user is not reporting to an authority; they are telling the system what matters to them in order to get help shaped to it. Misrepresenting produces worse help, not better. This holds provided user and system interests are aligned, which is a commitment of the institution rather than an outcome of the design.
Second, unintentional misrepresentation is real (self-deception, aspiration, minimisation), and this is precisely why observation continues to matter. A mismatch between what the user says and what the data shows is a cue for the system to ask, not a reason to override.
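The principle in these two responses can be made concrete. The sketch below is purely illustrative — the function name, the shortfall rule and the tolerance are assumptions, not a described mechanism — but it shows the shape of the commitment: a mismatch between what the user said and what the data shows produces a question, and the stated context is left intact.

```python
def reconcile(stated_goal: str, observed_monthly: float,
              expected_monthly: float, tolerance: float = 0.5) -> dict:
    """Compare the user's account with what the data shows.

    A shortfall is a cue to ask, never a reason to overwrite the
    user's stated context with an inferred one.
    """
    if observed_monthly >= expected_monthly * tolerance:
        return {"action": "none"}
    # Mismatch detected: surface a question; the stated goal stands.
    return {
        "action": "ask",
        "question": (
            f"You mentioned you're {stated_goal}, but recent savings "
            f"look lower than expected. Has anything changed?"
        ),
    }
```

The design choice is in what the function does not do: there is no branch that replaces `stated_goal` with an inferred alternative.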
Being heard, not merely observed
Something in this matters beyond the design argument.
Configuration and inference are different in almost every other respect, but they share one thing: they both put the user primarily in the position of being observed. The system watches what you fill in, or watches what you do, and builds its picture of you from that. The user is the object of observation.
Conversation puts the user primarily in a different position. The user declares what matters. The system takes its understanding from what you say about your life, and uses it to do the work.
That doesn’t mean observation stops: the transaction stream, the patterns, the anomalies and the derived insights all remain valuable. What changes is what that observation is in service of:
- In an observation-primary model, everything the system derives feeds a profile that stands in for knowing the user.
- In a conversation-primary model, everything the system observes feeds back into understanding based on conversation: conversation is how Lucie knows.
Both are modes of interaction. The first is reporting on someone. The second is closer to how people actually help each other with money: you talk about it; you’re understood; the other party notices things and brings them up; things get done.
That’s the engagement we’re designing for: a conversation built to help the person in it.