In this interview with IFEANYI IBEH, Nigerian content designer Ademola Adepoju shares how small language choices shape fairness, usability, and confidence in digital finance.
Can you tell us about your journey into content design and UX writing and what drew you to this field?
I found my way into content design (also referred to as UX writing) by paying attention to what people struggle with on screens. I began in marketing, but the turning point was at home. I bought someone close to me her first Android phone and spent days helping her set up banking and everyday apps. Her excitement quickly ran into confusion, and I saw how much the right words and structure matter when someone is trying to do something important online. That experience made inclusion through language feel like work I wanted to do for the long haul.
In industry, I moved fully into UX writing. I redesigned help content that reduced support requests, served as the sole content designer on a cross-border remittance app, and helped set team standards through a content style guide. Those projects taught me that clarity and care can scale when teams share the same language, and that a simple, honest explanation can change how a product feels to someone who is new to digital finance or watching every naira.
Graduate study in technical communication at NC State has given me the research tools to go deeper. I am studying how content, information architecture, and usability come together to make digital services more accessible, especially for people who are new to technology, less formally educated, or older. My aim is to turn lived challenges into better patterns, better wording, and better outcomes for real communities.
How challenging was it and what kept you going?
It was hard in the way new work is always hard. Content design was still emerging in Nigeria, and there was no playbook to lean on like there is for engineering or product management. I had to explain what the role was while doing it, prove that the words on a screen could change outcomes, and build simple systems for tone, naming, and explanations from scratch. The stakes were real. People were using small phones, juggling data costs, and moving between languages. A vague sentence could slow a payment or make someone abandon a task. Getting the language right took patience and conviction.
What made a difference was where I worked and who I worked with. My employer at the time was one of the first African companies to treat content design as essential to the customer experience, not just something you add at the end. That gave me room to shape the practice. I was surrounded by designers, researchers, and engineers who cared about the experience and saw that great content was part of how we earned trust. We learned together, tested ideas quickly, and kept improving the small details that make a product feel fair and usable.
What kept me going was seeing the impact on people who are often overlooked by technology. When a clearer message helped someone complete a payment or understand a delay without panic, it felt worth the late nights and the advocacy. The team’s belief helped too. It is easier to keep pushing for clarity when the people around you value it and hold the same standard.
Today I am glad to see more content design roles across Nigeria. More companies recognize that clear, caring language is part of the product, not an afterthought. I am proud to have been among the first set of content designers in the country and to have helped lay some of the groundwork others can build on.
What did you learn from working in fintech that shaped how you think about user experience?
Working in fintech has taught me that user experience is really about context. People do not meet a product in a vacuum. They bring fears, past losses, and urgent needs. I learned to write so no one is left hanging, even on short screens. That means naming what is happening, why it matters, and what comes next in the moment someone needs it, from empty states to error messages. In practice, saying a lot does not mean writing a lot. It means focusing on what helps the person move forward with confidence.
I also learned that trust grows when we are honest and proactive. If a choice today has a consequence later, we should say so now and plan follow-ups that respect people’s time and money. Clear reminders before a charge or deadline are not just good service. They are how you turn wary first-time users into long-term customers, especially in communities where data is expensive, screens are small, and switching costs are real.
Another lesson is to keep context alive as products change. New versions and flows can unsettle people unless we guide them back in with explanations that match the update. Tooltips, short intros, and revised copy can re-orient users without slowing them down. That only happens when content designers are in the room early, asking questions in research, hearing the language users actually use, and shaping the product story end-to-end with engineers and designers. Trust is the currency in digital finance, and words carry a lot of that weight when money decisions are on the line.
How did your work at Flutterwave influence your views on inclusive design?
Working at Flutterwave shaped my view of inclusive design because the product serves people and businesses in multiple countries across Africa, as well as Europe, the UK, and the US. The same flow met very different realities, and my job as a content designer was to make the words and structure work in those contexts without losing clarity or respect for the user.
It meant paying attention to the basics that change meaning across borders. Currency, dates, number formats, and the names of common fields are not universal. Routing number is not the same as sort code or IBAN. BVN and NIN mean something in Nigeria that a US or UK customer would not recognize. My work involved writing and reviewing language so these differences were clear on screen, and so that people could complete a task without second guessing what a field was asking for.
Regulatory copy also needed to be precise and human. Strong authentication in the EU and UK added steps that needed explanation so people did not drop off. Privacy language had to align with local expectations and laws while still being easy to understand. In the US, certain financial terms carry specific meanings that the product has to respect. I learned to keep the story consistent across the app, email, and support so that nothing changed meaning between channels.
Economic context mattered just as much. In some markets people plan around data costs and use smaller phones, so the content has to be short, readable, and placed where it actually helps. In other places the concern was less about bandwidth and more about clear guidance through extra verification. Writing with both realities in mind taught me that inclusion is not one baseline. It is a commitment to context, to plain language, and to keeping people informed at the exact moment they need it.
That experience made inclusive design feel practical and measurable to me. If someone in Lagos, London, Nairobi, or New York can answer what is happening, why it matters, and what to do next, then the content is doing its job. If they cannot, the work continues.
What motivated you to start focusing on responsible AI in your graduate studies?
I turned toward responsible AI because I kept running into the same pain point in real life. People meet automated decisions on screens, and the words around those decisions often fail to help them make sense of what has happened. That feels avoidable. I wanted to understand the decision systems themselves and bring clearer, kinder communication into the design from the start.
During my graduate program I led an audit of a lending AI system. Working on the report was both revealing and inspiring. The model outputs were technical, but the moments that mattered were human. If a person could not tell why an outcome appeared or what to do next, the experience felt arbitrary even when the technology was working as intended. I presented this work at a symposium and saw how many people across disciplines were wrestling with the same gap between model logic and everyday understanding. That confirmed I was in the right place.
My focus since then has been practical. I am learning how to build explanations into interfaces at the moment they are needed, how to write reason statements that feel honest, and how to create review paths that respect people’s time and pride. The motivation is simple. I want families on tight budgets, newcomers to digital finance, and older users opening an app for the first time to feel they can understand and act. Responsible AI gives me a way to work on that goal with more rigor and a wider lens.
Why do you believe inclusive design is so important for the future of finance?
I believe inclusive design is what turns a financial product from “clever” into “useful.” Money decisions are loaded with emotion and risk. When language and flows make sense to a wide range of people, you give them the confidence to act, recover from setbacks, and protect what they’ve earned. When they don’t, small misunderstandings snowball into fees, delays, and lost trust.
The future of finance is more connected and more automated. That only works if people can follow what’s happening. Inclusive design keeps them oriented. It means speaking in everyday terms, showing consequences before a choice is made, and leaving room for questions. It also means taking context seriously. Names, IDs, addresses, payment norms, and compliance expectations differ across markets; so do economic realities. Products that acknowledge those differences—by using local terms, accepting valid alternatives where policy allows, and guiding people step by step—feel fair.
There’s a practical side. Clear content reduces complaints and churn, keeps support and product aligned, and meets growing expectations from regulators for transparency. But my reason is simpler. I want someone sending money to family or running a small business to feel calm, informed, and in control. Inclusive design makes that possible. It treats people with dignity and gives them a fair shot, which is exactly what the future of finance should promise.
What challenges have you seen underserved communities face when using fintech tools?
Two problems come up again and again. The first is language that does not match how people actually talk about money. Terms like preauthorisation, settlement window, or chargeback might be correct, but they leave first-time digital banking users, people reading in a second language, older adults, and customers managing very tight budgets unsure of what will happen. If a card is on hold, they need a plain sentence that explains what that means today and when the hold will lift. If a transfer is pending, they need to know whether that is minutes or hours. When we replace insider labels with everyday phrases and add the immediate consequence in the same breath, these groups stop guessing and start planning.
The second is decision wording that blurs what is firm and what is not. Prequalification is not approval, yet many interfaces make it feel the same, which creates false hope for first-time credit seekers, thin-file borrowers, gig workers with irregular income, and newcomers without local credit history. The same confusion shows up with declines and reversals. “Something went wrong” leaves people in the dark. A clear line that says what happened, why in simple terms, and what they can do next keeps them moving and restores a sense of control.
Can you share an example where better language or design improved trust or fairness for users?
Wise, an international money transfer company, built its cross-border product around upfront pricing and the real mid-market exchange rate, shown before you commit. There’s no hidden markup buried in small print; the total cost is laid out in plain view. That design choice sounds simple, but in remittances it changes outcomes because people can compare options and avoid surprise fees. Wise has made transparency a public promise, and its own research has documented how hidden FX markups distort what customers pay. This is a case where content and interface choices make pricing fairer and confidence higher.
What role do you think AI should play in making lending and financial services fairer?
Honestly, AI should make money feel less stressful, not more complicated. If a system approves, declines, or asks for more information, the screen should make sense the first time you read it. I want AI to help products speak clearly in the moment a decision lands. Tell me what happened in plain words, name the real factor that tipped the outcome, and show a next step I can actually take. If the tool can adapt that explanation to my context or reading level, even better, but a human still needs to check the wording so it stays honest.
Fairness is not a one-time audit. It is maintenance. Models need watching after launch, because real life is messy. Newcomers without local credit files, first-time borrowers, people with irregular income, or folks who have changed names can get squeezed by rules that weren’t written with them in mind. When a pattern like that shows up, the system should be adjusted and users should be told in simple language what changed. Silence erodes trust faster than almost anything.
Privacy and consent are part of fairness. If the product uses data beyond a bureau report, say where it came from, why it helps, how long it’s kept, and how to turn it off. Collect less, not more. Give people control without sending them through a maze. That clarity matters in Lagos, Abuja, London, and Atlanta. It is the same principle everywhere.
There are moments where automation should step back. If a decision could disrupt a household or a small business, a human review must be easy to reach and predictable in timing. AI can still help behind the scenes by organising the facts for an agent, spotting missing documents, and keeping the person updated so they aren’t left waiting without information.
So my answer is simple. AI’s role is to widen access, reduce avoidable harm, and keep people informed. If we use it to explain decisions, fix unfair drift, protect privacy, and bring a human in quickly when the stakes are high, finance starts to feel fairer. That is the direction I want to help push this industry, both at home and abroad.
How do you see UX writers and designers influencing responsible AI practices in the next few years?
I see UX writers and designers becoming the custodians of people’s rights inside AI products. Engineers will keep building models, but we decide how power shows up on the screen. If a system makes a call about someone’s money, we choose whether that person is informed, respected, and able to respond. That is a rights issue as much as a design issue.
Policy already points in this direction. In the United States, the White House's Blueprint for an AI Bill of Rights says people deserve notice and explanation. Our job is to turn that promise into everyday product reality. That means building explanation patterns that actually land with people: a clear statement of what happened, the real factor that influenced it, what it means right now, and an easy path to get help or ask for review.
Underserved communities will feel the difference first. If the product speaks in insider terms, or hides consequences until after a button is pressed, those who live closest to the edge pay the highest price. UX strategy can change that. We can set reading-level targets and plain-language standards, require that fees and timelines appear before commitment, and insist that appeals are visible and simple. We can design consent that is honest about what data is used, why, and how to turn it off later. These are small decisions that protect dignity and reduce harm.
I also expect our role to include ongoing accountability. After launch, we should track whether people can explain a decision back in their own words, whether help is found quickly, and whether outcomes are consistent across groups. When the numbers show a gap, we rewrite, we move the message earlier, or we simplify the steps. Content becomes a living system, not a one-time checklist.
Inside teams, we can build the guardrails that keep everyone honest. Explanation components in the design system. A single source of truth for high-risk messages so the app, email, and support all say the same thing. Review gates that stop confusing copy from shipping in the first place. Space for community advocates to react to real screens, and budget to pay them for that work.
So, what role should we play? We should be the people who make rights visible. The people who refuse to let a model’s decision arrive without context. If we do that work with care, AI stops feeling like a black box and starts feeling like a service that answers to the public it claims to serve.