
While Koomen makes valid points about the limitations of current AI implementations like Gmail's assistant, I think even his analysis misses a more fundamental insight: we're in a transitional period that will eventually lead to a very different communication paradigm.

The article focuses on giving users control of their System Prompts to personalize AI outputs, but this approach still assumes a world where humans are both crafting and consuming messages directly. What's missing is consideration of how communication will evolve when AI agents exist on both sides of exchanges.

Consider these scenarios that exist simultaneously during this transition:

- Senders using AI, recipients who aren't

- Recipients using AI to process messages, senders who aren't

- Eventually: AI agents on both sides

In this final scenario, the actual transport format becomes less important. In fact, more formal, verbose messages with additional metadata might be preferable as they provide context for the receiving agent to process appropriately.
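As a rough illustration of that point (the schema, field names, and `AgentMessage` class here are all hypothetical, not an existing standard), a machine-oriented message might carry explicit metadata alongside the human-readable text:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AgentMessage:
    """Hypothetical envelope for agent-to-agent mail: the prose body
    is a fallback, and the metadata carries the machine-usable context."""
    intent: str        # e.g. "absence_notification"
    sender: str
    body: str          # human-readable fallback text
    metadata: dict = field(default_factory=dict)

    def to_wire(self) -> str:
        # Serialize deterministically for transport.
        return json.dumps(asdict(self), sort_keys=True)

msg = AgentMessage(
    intent="absence_notification",
    sender="mason@example.com",
    body="I won't be in today.",
    metadata={"date": "2025-05-12", "duration_days": 1,
              "affected_meetings": ["standup", "design-review"]},
)
wire = msg.to_wire()
```

The verbosity that would feel stilted to a human reader is exactly what lets the receiving agent act without guessing.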

Imagine a future where you simply tell your AI, "Let everyone know I won't be in today," and your agent determines:

1. Who needs to be told

2. What level of detail each recipient requires

3. What context from your calendar/activities is relevant

On the receiving end, the recipient's agent would:

1. Summarize the information based on relevance

2. Determine if follow-up is needed

3. Automatically reschedule affected meetings

Most importantly, these agents could negotiate with each other behind the scenes. If your message lacks critical information, the recipient's agent might query yours for details: "Is this a one-day absence or longer? Are there pending deliverables affected?" Your agent would then provide relevant details without bothering you.
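A toy sketch of that negotiation (every function and field name here is invented for illustration): the recipient's agent compares the message against what it needs, queries the sender's agent for the gaps, and only then acts.

```python
# Hypothetical sketch: a recipient agent fills information gaps by
# querying the sender's agent instead of interrupting either human.

REQUIRED_FIELDS = {"duration_days", "affected_meetings"}

def sender_agent_answer(field_name, private_context):
    """The sender's agent answers from context it already has
    (calendar, task tracker), without bothering its user."""
    return private_context.get(field_name)

def recipient_agent_process(message, ask_sender):
    """Identify missing fields, negotiate for them, return the result."""
    missing = REQUIRED_FIELDS - message["metadata"].keys()
    for field_name in missing:
        answer = ask_sender(field_name)
        if answer is not None:
            message["metadata"][field_name] = answer
    # With the gaps filled, the agent can act autonomously,
    # e.g. rescheduling the affected meetings.
    return message["metadata"]

sender_context = {"duration_days": 1, "affected_meetings": ["standup"]}
incoming = {"intent": "absence_notification", "metadata": {}}
resolved = recipient_agent_process(
    incoming, lambda f: sender_agent_answer(f, sender_context))
```

The interesting design question is the boundary of `private_context`: the sender's agent decides how much calendar detail each counterpart is entitled to see.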

This agent-to-agent negotiation seems far more likely than what Koomen proposes: users meticulously crafting System Prompts to personalize their communications. In practice, most people don't want to configure systems; they want systems that learn their preferences through observation and feedback.

Rather than focusing on making current AI implementations mirror human communication styles more precisely, perhaps we should be designing for the eventual world where AI mediates most routine communication, with detailed configuration being the exception rather than the rule.

The real "horseless carriage" thinking might be assuming humans will remain directly in the loop for routine communications at all.






Thinking on this more, I've realized there's an additional problem with Koomen's vision that might be even more significant than what I initially described.

It's actually potentially harmful that current AI email assistants try to impersonate their users. When these systems are designed to write as though they're you - using your voice, your style, your signature - they create a kind of social deception that undermines genuine human connection.

Consider what happens when you meet someone after your AI has "written as you" in an email exchange. They might reference specifics from "your" message that you have no actual knowledge of. You're supposedly continuing a conversation you never actually had. This creates an awkward disconnect that erodes the authenticity of human interaction.

In our rush to make AI communications feel natural by mimicking human writing styles, we risk losing something more valuable: genuine connection. The problem isn't just that current AI systems don't sound enough like us (Koomen's concern); it's that they're pretending to be us at all.

A more honest and ultimately more useful approach would be for AI agents to have their own distinct identities: "Sent on behalf of Mason" or "Read on behalf of Sarah." This transparency preserves both the utility value of efficient communication and the personal value of authentic human interaction.

For AI-mediated communication to truly succeed long-term, we need to separate:

1. Utility communications (scheduling, information sharing, routine updates)

2. Personal communications (relationship building, creative collaboration, emotional connection)

When we blur these lines by having AI impersonate us for utility communications, we risk devaluing the currency of genuine person-to-person exchanges. After all, part of what makes a personal message meaningful is knowing that another human took time specifically for you.

So while I still believe agent-to-agent negotiation is the future of routine communications, I think transparency about AI involvement is equally (if not more) important. The end state isn't AI that perfectly mimics our writing styles; it's a communication ecosystem where AI handles routine exchanges transparently, while preserving the special value of genuine human connection.



