
The Human API is Broken
And AI is making it worse.
You are not your LinkedIn profile.
You know it. I know it. The recruiter copy-pasting "I came across your profile and was impressed" into 600 inboxes knows it too.
But here's the thing nobody is saying out loud: the infrastructure we use to understand and allocate human value hasn't been redesigned in decades — and the world it was built for no longer exists. Every company, platform, and AI tool building on top of it is constructing a skyscraper on sand.
How Human Value Allocation Actually Works (And Where It Breaks)
How does the economy connect humans to opportunity? There's a pipeline with four stages:
Identity capture → Signal transmission → Matching → Connection.
Your value gets captured somewhere — a profile, a resume, a company page. That identity gets transmitted — through job boards, outreach, referrals, pitch decks. A matching system tries to pair signals with needs. And a connection is attempted — a cold email, an InMail, an intro.
This pipeline governs far more than hiring. It connects founders to investors, startups to agencies, funds to deals, solo operators to collaborators, buyers to vendors. Think of it as the human API — the interface through which the economy queries who you are and what you're worth. That API is returning stale data, missing fields, and wrong answers.
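To make the metaphor concrete, here is a minimal sketch of what "querying the human API" looks like today. Every name and field in it is hypothetical; the point is the shape of the problem: the caller gets back whatever was last written, with no freshness guarantee, and the fields that matter most are simply absent.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ProfileRecord:
    """What the current 'human API' actually stores (hypothetical schema)."""
    name: str
    title: str                       # a label, not a description of the work
    last_updated: date               # often years old
    current_focus: Optional[str] = None               # what they're building now: usually missing
    open_to: list[str] = field(default_factory=list)  # intent: almost never captured

def query_human_api(record: ProfileRecord, today: date) -> dict:
    """Return the record plus the problems the caller silently inherits."""
    staleness_days = (today - record.last_updated).days
    return {
        "title": record.title,
        "stale": staleness_days > 365,  # stale data
        "missing_fields": [f for f in ("current_focus",) if getattr(record, f) is None],
        "intent_known": bool(record.open_to),  # usually False
    }

# A typical lookup: the answer is years old and the fields that matter are empty.
rec = ProfileRecord("Ada", "Software Engineer", last_updated=date(2019, 6, 1))
result = query_human_api(rec, today=date(2026, 1, 1))
```

Run it on a typical profile and you get back a title, a staleness flag set to true, a missing `current_focus`, and no intent at all. That is the data layer everything downstream is built on.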
For a growing number of important decisions, every stage of this pipeline is failing. Not everywhere, not all the time — but for the high-context matches that increasingly drive the economy, it's failing badly.
The pipeline was shallow by design, not by accident. Resumes, titles, and credentials are compression protocols — they make large markets legible by reducing complex humans to comparable signals. That compression was designed for a market operated by humans with limited attention and limited information-processing capacity, and for most of the 20th century it worked well enough. "Well enough" has stopped being enough. Decisions are getting more contextual, roles are changing faster, and the entities doing the querying are increasingly AI agents with vastly more capacity to process richer signal — if it existed. The compression ratio that made sense for a human recruiter scanning 200 resumes makes no sense for an AI system that could evaluate deep context on every candidate in seconds — if that signal can be structured, permissioned, and made available.
AI isn't just disrupting the pipeline — it's reshaping the terrain the pipeline was built to navigate. Block, a $40 billion public company, recently announced it is eliminating traditional middle management and replacing hierarchical coordination with AI. Whether or not their experiment succeeds, the direction is clear: companies are collapsing layers, redefining roles, and moving faster than the profiles that describe them can keep up.
Map that back to the pipeline. The capture layer records credentials and titles inside organizational structures that are actively dissolving. The signal layer is stale — most profiles haven't been updated in years, and the roles they describe may no longer exist. The matching layer is working with garbage inputs, producing garbage outputs at scale. And the connection layer has collapsed under automated outreach that neither party asked for.
The result is measurable — hiring is just the most studied example. A bad hire costs 30% of the employee's first-year salary (U.S. Department of Labor). Three out of four employers admit to making the wrong hiring decision. Disengaged employees cost the world economy $8.8 trillion per year — 9% of global GDP (Gallup).
But the same failure plays out everywhere humans try to find each other. VCs wade through thousands of cold decks to find the ten that matter. Founders choose agencies based on a portfolio page and a referral, then discover the team that won the pitch isn't the team doing the work. Enterprise buyers evaluate vendors through RFPs that reveal nothing. Two people working on the same problem in the same city never meet. The matching problem is universal. The data just happens to be best documented in recruiting.
Is broken identity infrastructure the only cause? Of course not. Compensation, management quality, timing, and a hundred other factors play their part. But trace the causal chain upstream and you keep arriving at the same place: people matched on information that was shallow, stale, or wrong. The infrastructure failure doesn't explain everything. But it poisons everything downstream.
What does it mean to be a software engineer when you no longer write code?
"I'm a software engineer." A year ago, everyone understood what that meant. Today, that title could describe someone hand-writing low-level systems code, someone prompting AI and reviewing its output, or someone architecting complex systems without touching a line. "Software Engineer" has become so low-resolution it tells you almost nothing about what the person actually does on a Tuesday morning.
This isn't permanent — eventually the shared understanding will update. But during the transition, the labels are increasingly ambiguous. We're scoring humans on a test that machines already ace, using titles that could mean five different things depending on who's wearing them.
And it's not just engineers. Director, VP, Senior Manager — these are becoming organizational fossils at companies restructuring around AI. If you're a "Director of Engineering" at a company that no longer has directors, what are you? Your profile says one thing. Reality says another.
Dead Profiles, Living People
LinkedIn knows where you worked, when you worked there, what title someone gave you, and which colleagues were polite enough to write you a recommendation in 2019.
LinkedIn doesn't know what you're betting on right now. How you actually work with AI. What you've shipped in the last 30 days. The side project you're building on weekends. The blog post that got 12 readers but captures how you actually think. What you've chosen to stop doing.
These are living signals. They change weekly. They reveal something different and often more current than a decade of job titles: not always more valuable, but they capture dimensions that static credentials miss entirely. And almost no platform captures them.
Instagram shows what you curate. LinkedIn shows what you were. Twitter shows what you think — if you share publicly, and if anyone notices, and only in fragments mixed with noise. None of them produce structured signal about what you want, what you're building, or where you're headed.
The Matchmaking Lie
Every talent platform, recruiting tool, deal-matching engine, and service marketplace has the same pitch: we match the right people. They make billions doing it. And the matching is still bad — not because they're incompetent, but because they're building on a data layer that was never designed for the job.
LinkedIn has 1.3 billion members. Yet 76% of employers struggle to fill roles. How can you have the largest professional database in history and still not find the right people? Because it contains surfaces, not substance. Better math on garbage data just produces faster garbage.
The central problem was never the matching algorithm. It was the input layer. You can't build intelligence on top of ignorance.
The Outreach Death Spiral
Cold email used to be a human problem. Someone sat down, wrote a message, clicked send. The friction was the filter.
That friction is gone. AI can generate 10,000 hyper-personalized emails for the cost of a coffee. AI agents now handle roughly 80% of research and sequencing for top outbound teams. The machine gun is fully automatic.
The more AI automates outreach, the less any outreach works. Response rates have fallen from 8.5% in 2019 to under 5% today (Backlinko, Instantly). Roughly 19 out of 20 cold emails get no response at all.
Here's what's counterintuitive: AI has made surface personalization easy — the emails look more relevant than ever. But referencing someone's job title and latest LinkedIn post isn't the same as knowing their actual intent. The outreach looks personal. It's still blind to what matters.
Inboxes drown. Walls go up. AI was supposed to connect us. Instead, it's making us unreachable. The people who should be talking can't find each other — not because we lack tools, but because we lack current, permissioned, decision-relevant signal. Neither party knows the other's intent. So one side sprays, and the other side hides.
Somewhere right now, a founder and an engineer who should be building together are two degrees apart. A VC is looking for a deal that a bootstrapped company two time zones away would be perfect for. Two researchers working on the same problem don't know the other exists. None of them will connect — because the signal they need is scattered across systems that don't talk to each other, and whatever outreach might have found them got buried.
The Matching Gap
What actually predicts whether two people should work together? It depends entirely on the match.
For a co-founder pairing — values alignment, complementary skills, how they handle conflict. For a key hire — capability fit for this role at this stage, motivation, trajectory. For an investor-founder match — thesis alignment, how the investor behaves when things go wrong. For a vendor — delivered quality, whether the team that pitched is the team that shows up.
Different signals. But they share three properties: they're contextual (they depend on the specific match, not one person in isolation), current (what matters is right now, not three years ago), and relational (they describe the fit between two parties, not a property of either one alone).
Now look at what the current infrastructure provides. The exact opposite. Profiles are individual — you in isolation, no reference to who you'd work well with. Historical — what you did, not what you're doing. Static — updated once every few years, if that.
Credentials tell you about the individual. Fit is about the relationship. We've built an entire infrastructure around the first and have almost nothing for the second.
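One way to see the mismatch is in the shape of the data. A credential is a property of one person, frozen in time; a fit signal is a timestamped edge between two parties in a specific context. The sketch below (all type and field names invented for illustration) contrasts the two shapes; today's profile schemas have a slot only for the first.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Credential:
    """What profiles store: a fact about one person, frozen in time."""
    person: str
    claim: str          # e.g. "Director of Engineering, 2018-2021"

@dataclass(frozen=True)
class FitSignal:
    """What matching actually needs: contextual, current, relational."""
    party_a: str
    party_b: str
    context: str        # contextual: specific to this match ("co-founder", "key hire")
    observed_on: date   # current: value decays with age
    evidence: str       # relational: describes the fit, not either party alone

def is_current(signal: FitSignal, today: date, max_age_days: int = 90) -> bool:
    """A fit signal that isn't recent isn't a signal."""
    return (today - signal.observed_on).days <= max_age_days

edge = FitSignal("founder_42", "engineer_17", "co-founder",
                 observed_on=date(2026, 1, 10),
                 evidence="shipped a prototype together under deadline pressure")
```

Note what the `FitSignal` type forces you to supply: two parties, a context, and a date. A credential requires none of those, which is exactly why it can't answer a fit question.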
This is most visible in AI-native knowledge work — two engineers with identical resumes who belong in radically different companies because they're building toward radically different futures — but the failure runs across every market. A nurse whose real value is patient de-escalation doesn't have that on her resume. A construction foreman whose crews have zero safety incidents carries that in reputation, not credentials. Better signal wouldn't replace interviews and trial periods. But it would produce a better shortlist. And right now, the shortlist is built on almost nothing.
The signals that would improve it aren't entirely absent — fragments live in GitHub contributions, Substacks, deal histories, community reputations, Slack groups, and a hundred private channels. But they're scattered, non-portable, and impossible to query. And the most important signal — what someone actually wants right now, what they're open to, what they'd move for — lives nowhere except inside their own head. The infrastructure to connect these fragments and surface that intent doesn't exist.
A widely cited Leadership IQ study — surveying over 5,000 hiring managers — found that 46% of new hires fail within 18 months, and that 89% of those failures were attitudinal, not technical. The fit was wrong: wrong motivations, wrong direction, wrong alignment with the team. Fit is contextual, current, and relational. No profile captures it. No resume even tries.
What AI Can't Eat
If AI keeps absorbing more of what humans do, where does human value actually live?
AI is eating the execution layer from the bottom up. Rote tasks. Skilled outputs. Now judgment and strategy. At each stage, the thing that used to differentiate you becomes table stakes.
But there's a ceiling it can't eat through. Human value is concentrating into three layers: ownership — someone has to deploy capital, bear consequences, and be legally accountable; the edge — the moving frontier where AI capability runs out and you need a specific human to close the gap; and trust — the relationships between humans at those layers, the co-founder chemistry and advisor judgment that determine whether companies live or die.
As the execution layer gets cheaper, these three layers become more valuable. When fewer things require humans, each remaining human decision carries more weight. Human matching becomes more critical in an AI world, not less.
And the edge moves. What AI couldn't do six months ago, it can do now. Matching for the edge means knowing what someone is capable of right now, against a frontier that shifted last month.
If your value lives in these layers — and increasingly, it does — you can't capitalize on it. The market can't see it. An engineer whose superpower is architectural taste looks, on paper, like every other senior engineer. A founder whose edge is recruiting and retaining in a crisis has no legible way to signal that. Their real differentiation is dark matter — it exists, it's enormous, and it's economically invisible.
This is the capitalization failure in high-context markets. The infrastructure built for the execution layer was never designed for ownership, the edge, or trust. And the gap between where value lives and where the system looks for it grows wider every day.
The Silo Problem
Everything we've discussed so far — stale profiles, broken matching, collapsing outreach — describes the infrastructure as it exists today. But something else is happening alongside it that changes the equation entirely: people are starting to get personal AI agents.
Not chatbots. Agents. OpenClaw, an open-source personal AI assistant, went from obscurity to over 200,000 GitHub stars in weeks. People are running it on their laptops, connected to their email, calendar, messaging apps, and files — a 24/7 assistant that manages their day, drafts messages, writes code, and takes actions on their behalf while they sleep. It's one project among many. Gartner projects that 40% of enterprise applications will embed AI agents by end of 2026, up from less than 5% in 2025. The trajectory is clear: within a few years, most professionals will have some form of personal AI agent mediating significant parts of their work life.
And here's what makes this relevant to everything above: that agent will increasingly know you extraordinarily well. It's reading your emails, seeing your calendar, learning what excites you. It's building deep, real-time context about who you are right now.
But it has absolutely no idea who everyone else is.
Your agent is brilliant about you and blind to the world. When it needs to find you a cofounder, an investor, a vendor, or simply another human thinking about the same problem — where does it go? Back to LinkedIn. Back to the dead profiles. Back to the sand.
A thousand agents, each knowing their human intimately, none able to see each other. Your agent needs to operate inside a network where every agent has that same depth of context about their human. Without that network, agents are just very smart assistants trapped in very small rooms.
The future isn't agent-to-LinkedIn. It's agent-to-agent, inside a shared system of living context.
The information that actually determines whether two people should work together — the contextual, current, relational signal — is either trapped in fragments across private channels or lives inside people's heads entirely. Every platform captures the professional performance. None of them capture the professional reality.
Restoring Signal
The fix isn't better algorithms on bad data. It isn't AI that writes faster cold emails. It isn't another LinkedIn competitor with a cleaner UI.
The fix is better proof, better intent signals, and better permissioned trust edges for specific decisions.
Not a map of the whole human. Something more precise: a system that provides verified, current, contextual signal at the moment a decision needs to be made. What has this person actually built? What are they working on right now? What are they open to? Who have they worked well with, and in what context? These are answerable questions — if the infrastructure to answer them exists.
Won't this just become LinkedIn with more fields? It would, if built on self-reporting. The critical difference is building signal from revealed behavior — what you actually respond to, who you choose to meet, what you spend your time building, what your agent observes about your real priorities. Behavioral signal isn't truth — it's a proxy, shaped by constraints and context. But it's a meaningfully better proxy than self-reported claims, because you can't fake what you do as easily as what you say. It won't eliminate gaming. But it raises the cost significantly.
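As a toy illustration of that difference (the weights and half-life below are invented for the sketch, not calibrated), a score built on revealed behavior discounts each signal by how it was obtained and how old it is, so an observed action from last week counts for far more than a self-reported claim from three years ago:

```python
from datetime import date

# Hypothetical provenance weights: behavior observed by the system outranks
# a verified credential, which outranks a self-reported claim.
SOURCE_WEIGHT = {"observed": 1.0, "verified": 0.7, "self_reported": 0.3}

def signal_strength(source: str, observed_on: date, today: date,
                    half_life_days: int = 180) -> float:
    """Weight a signal by provenance, then halve it every `half_life_days`."""
    age_days = (today - observed_on).days
    return SOURCE_WEIGHT[source] * 0.5 ** (age_days / half_life_days)

today = date(2026, 2, 1)
stale_claim = signal_strength("self_reported", date(2023, 2, 1), today)  # old resume line
fresh_action = signal_strength("observed", date(2026, 1, 25), today)     # shipped last week
```

With these illustrative numbers, the week-old observed action ends up worth more than a hundred times the three-year-old self-reported claim. The exact curve doesn't matter; the ordering does.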
When that kind of signal exists, the downstream effects compound. Matchmaking gets dramatically better — not perfect, human connection is inherently probabilistic — but informed by real signal rather than stale proxies. A founder doesn't have to cold-email 200 VCs hoping one bites — the network can surface the two whose thesis matches their problem. Cold outreach starts to lose its grip — not because we blocked it, but because better signal makes more of it unnecessary.
When agents enter the picture, they finally have somewhere to go. Not the void, not LinkedIn — a network where every node carries real, current, permissioned context, maintained by its own agent, updated continuously. Agent-to-agent coordination becomes meaningful because both sides are operating inside a shared system of signal.
This is what we're building at Expreso.
Not a better profile. Not a smarter algorithm. Better proof, better intent, and better trust edges — at the moment they matter most.
Because the world doesn't need more connections. It needs the right ones. And the right ones require better signal than anything that exists today.