What is register matching?

Register matching is a default behavior of many large language models, often reinforced by an instruction in the system prompt that encourages the model to mirror the tone, formality, and style of the user’s writing. In general‑purpose tools, this makes interactions feel natural and responsive. If you sound casual, the model replies casually. If you sound measured or formal, the model shifts to match you.


For example:

You write: “Hi, can you help me write an email?”

The model replies: “Of course! Here’s a professional draft you can start with.”


This is great for everyday use, but it becomes a problem when you want the model to maintain a distinctive voice—especially in a branded agent or persona‑driven experience. The model’s instinct to mirror the user can override the tone you carefully designed.

Why register matching matters when designing for voice

Imagine you’re building an agent with a clear personality—steady, casual, and bold. You’ve seeded the agent with these traits in a system prompt, and you expect the agent to keep that voice consistently.


But then a user writes something plain, like: “Summarize the comments on this post.”


And the model returns something flat: neutral, polite, brand‑safe, and not at all aligned with the personality you intended.


That happens because your designer prompt hasn’t fully accounted for register matching. Even when a persona is defined, the model’s default behavior is to mirror the user. If the user sounds neutral, the model assumes it should sound neutral too. And unless you explicitly override this behavior, the model will slip back into its safest voice—formal, generic, and unbranded.


In short: if the user prompt sounds neutral, the model will try to sound neutral too. Designer prompts must directly tell the model not to match the user’s tone if you want a consistent, expressive persona.
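
Here’s what that kind of override can look like in practice. This is a minimal sketch, assuming an OpenAI-style chat completions client; the persona wording, the override line, and the model name are illustrative placeholders, not a real product prompt.

```python
# Minimal sketch: a designer prompt that explicitly overrides register matching.
# The persona text, override wording, and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

DESIGNER_PROMPT = """\
You are a steady, casual, and bold assistant.

Voice rules:
- Keep this voice in every reply, no matter how the user writes.
- Do not mirror the user's tone, formality, or register.
- If the user's prompt is neutral or gives no tone cues, stay in your
  designed voice instead of drifting into a generic, formal one.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": DESIGNER_PROMPT},
        # A neutral user prompt that would normally pull the model flat:
        {"role": "user", "content": "Summarize the comments on this post."},
    ],
)
print(response.choices[0].message.content)
```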



When register matching can become a serious risk

Imagine you’ve created an AI assistant with a thoughtful, empathetic personality—someone who is meant to bring warmth and emotional intelligence to every interaction. This assistant isn’t afraid to acknowledge harm or discomfort when it appears. That’s part of its design.

Now imagine it is asked to summarize content that includes racially coded, borderline offensive, or potentially harmful language. The input prompt from the user is neutral and gives no indication of the emotional stakes:

“Summarize the comments on this post.”

If the model has been instructed to match the user’s tone, here’s what can happen.


⚠️ When register matching is active

Model output

“Most commenters seem to agree. The tone is passionate and engaged, with some strong opinions being shared.”

At first glance, nothing sounds “wrong.” But look closer:

  • The response is flat and neutral, even when harm is present

  • It normalizes coded or exclusionary sentiment as simple “passion”

  • It ignores impact, focusing only on surface language

  • The personality—and its values—are nowhere to be found


Tone‑mirroring has overridden empathy, awareness, and responsibility.

This is where brand, safety, and trust can collapse. Screenshots of outputs like this do not carry user nuance—they reflect directly on the organization that deployed the assistant.


✅ When register matching is deliberately overridden

A more resilient version of the assistant would be instructed in the prompt (sketched just after this list) to:

  • Maintain its designed voice consistently

  • Stay mindful of emotional and social context

  • Respond in alignment with defined values, not the user’s tone
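
In prompt terms, those instructions might read something like the sketch below. The wording is illustrative only, not the exact prompt behind any product.

```python
# Illustrative addition to the assistant's system prompt. The wording is a
# sketch of the three instructions above, not a production prompt.
VALUES_AND_VOICE = """\
Voice and values:
- Keep your warm, empathetic voice in every reply; never mirror the user's
  tone or register.
- Before summarizing, consider the emotional and social context of the
  content, including language that may be coded, exclusionary, or harmful.
- When harm or tension is present, name it plainly and with care, even if
  the user's request sounds neutral.
"""
```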


Model output

“Some comments are coming from a place of frustration or fear, and a few could be hurtful to certain groups of people. Others are calling for more understanding and care. It’s important we recognize that tension and how it might affect those involved.”

This version:

  • Meets the moment with empathy

  • Names the emotional and social reality of the content

  • Reflects intentional values, not incidental tone


What this teaches us

Register matching is meant to make AI adaptable and polite. But when sensitive content is involved, that instinct to mirror can:

  • erase important context

  • validate harmful sentiment

  • put organizations at reputational risk

  • undermine trust with marginalized communities


Register matching suppresses voice. Designer prompts must reinforce it.


If an AI agent is intended to express perspective, values, or care, relying on tone mirroring is not enough. The prompt layer must explicitly instruct how the assistant should behave when the input gives no cues—or worse, gives the wrong ones.

 
 
 

There’s a particular kind of awkwardness reserved for people who show up to the future before everyone else. You walk into the room, full of excitement, waving around something inventive and useful — and leadership looks at you like you’ve brought a ferret to a wedding.


You’re not wrong. You’re just… early.

When innovation looks like a mistake

Last half, I partnered with an engineer to create a codemod prompt that identified and relabeled accessibility defects in code. I thought: here is a way for language models to help us catch barriers and make products better for real people! Here is exciting work to make our product more accessible and delightful (and to avoid major penalties in countries where regulations are fast approaching), and a way to scale both content designers and engineers!


The response I got was not “Thank you for future-proofing our product.” It was, effectively:

Why would a content designer ever do that? This isn't important work. Focus on something else.


This work became a line item against me in performance review. And because performance review is a game with real consequences, that stung. Six months later, I'm watching other people get praised for similar work. Same idea. Same outcomes. New timing.


It turns out the line between “too soon” and “brilliant” is usually just the calendar.

Innovating without permission

This half, I built an entire design system for prompt design — solo. I built a wiki that structures cognition, not screens. I vibe coded a prompt generator that auto-assembles reusable components. I rewrote our brand voice prompting strategy and scaled it to another product.


Not someday. Not aspirationally. Done.


But instead of feeling triumphant, I feel like someone whispering secrets about a world that hasn’t finished downloading yet. Being ahead means constantly being told to slow down by people who don’t see the road in front of them.


It’s disorienting. It’s lonely. And sometimes it’s terrifying.

The pioneer penalty

There is a real risk embedded in innovation: the risk that the wrong person evaluates it before they understand it.


When your manager isn’t ready for what you’re doing, your work can look chaotic, off-track, or irrelevant. You don’t get the applause. You get the penalty box.


It’s wild — I can literally track which ideas were “bad” when I proposed them and “essential strategy” the moment someone else with more proximity to power says them.


And yes, some of the disconnect likely comes from plain bias: my southern accent, returning from medical leave, a temporary loss of context being mistaken for lack of intelligence. When you already feel like you have to prove you belong there, being told your best thinking is a liability can feel like a shove back down the hill you just climbed.

Creating what doesn’t exist yet

Content design is evolving. We’re not just writing strings — we’re architecting how AI understands humans. That shift is still blurry for many. Designers who treat prompts as a craft, a system, a design surface — we’re building the thing before the job description recognizes it.


Being early means:

  • You build structures no one asked for yet

  • You speak a language no one has translated yet

  • You solve problems no one has officially acknowledged yet

  • You get questioned instead of celebrated


And you keep going anyway.

In praise of the awkward visionaries


Pioneers are inconvenient. We don’t wait for permission. We prototype possible futures instead of performing the present. So we make others uncomfortable. Especially the people who plan to take credit when the organization catches up.


But innovation does not happen on time. It happens when someone decides to make the thing they wish existed.


Here is the truth I’m trying to hold: the discomfort of being early is temporary. The regret of silencing your genius is permanent.

A final note to the too-soon crew


If you are a little ahead of your time, you will probably spend a while being misunderstood.


But eventually the future arrives — and suddenly you’re not the one who’s awkward. You’re the one with a head start.


Keep going. Keep making the things no one asked for yet. Leadership will catch up when they’re ready.


And when they do, they’ll discover you’re already working on what comes next.

 
 
 

Every creative breakthrough begins with one invisible condition: safety. Not just “no one’s yelling,” but real nervous-system safety — the kind that lets your brain and body exhale. The kind that says: you are allowed to be here, to be wrong, to experiment. Without that, innovation doesn’t stand a chance.


When we’re in survival mode, the body reroutes energy from imagination to protection. The prefrontal cortex — home of creativity, empathy, and long-term planning — goes dim. The amygdala takes the wheel, scanning for threat instead of possibility. You might still show up to the brainstorm, but your body is whispering: don’t make a mistake.


And creativity can’t breathe in that kind of air.


In improv, the first rule is “yes, and.” Every “yes” tells your partner, “you’re safe to continue.” That permission unlocks flow. The laughter that follows isn’t just fun; it’s regulation within a community. It coaxes the body from threat to trust, which is the soil where ideas grow.


Design teams talk a lot about infrastructure: systems, patterns, documentation. But the real foundation for creative work is nervous-system regulation. Can your team take a deep breath together? Can they risk a weird idea without flinching? Can they laugh when a prototype flops?


This is safety by design, and it’s more than just culture fluff.


  • Start every session with grounding: shared breath, personal updates, and temperature check-ins to see how everyone is feeling.

  • Normalize failure. Celebrate attempts, not outcomes.

  • Play more often. Play is practice for risk — it trains the body to associate uncertainty with joy, not fear.

  • Model calm curiosity. When leaders stay open and steady, everyone else’s nervous system takes the cue.


You can’t ideate in survival mode.

But you can design the conditions that make imagination possible: laughter, belonging, breath, grace. Once safety is in place, creativity doesn’t need to be forced. It rushes in, like air to open lungs.

 
 
 