How Zach Meltzer is building VeryAI to prove who is real online

Zach Meltzer

The internet used to have a simple trust problem. Platforms needed to know whether a user had the right password, the right phone number, or access to the right email account. That was usually enough to let someone log in, open an account, make a purchase, or join a community.

That world is changing fast.

Today, a fake account can look polished. A bot can behave like a real user. A deepfake can copy someone’s face or voice. An AI agent can take actions across apps without looking like a traditional human user at all. For social platforms, fintech companies, crypto apps, marketplaces, and online communities, the question is no longer just whether someone can enter a code. The deeper question is whether there is a real person behind the action.

That is the problem Zach Meltzer is trying to solve with VeryAI.

As the founder and CEO of VeryAI, Zach Meltzer is building identity technology for a version of the internet where fake users, synthetic identities, and automated activity are becoming harder to spot. VeryAI calls its approach Proof of Reality, and the idea is direct but powerful. Platforms need a reliable way to prove that a real human is present online, without forcing people through clunky, outdated, or invasive verification steps.

Zach Meltzer and the problem VeryAI is trying to solve

The rise of AI has made online identity more complicated. In the past, platforms could often catch fake accounts through basic signals. Suspicious email patterns, repeated phone numbers, simple bot behavior, or low-quality profile details could reveal that something was wrong.

Now, fake activity can be much more convincing.

AI tools can generate human-like messages, realistic profile images, fake documents, synthetic content, and automated behavior that blends into normal platform activity. This creates a serious problem for companies that depend on trust. A bank needs to know whether a person opening an account is real. A crypto exchange needs to stop Sybil attacks and reward farming. A social platform needs to reduce fake profiles and bot networks. A marketplace needs to protect buyers and sellers from impersonation and fraud.

Older verification methods were not built for this kind of internet.

Passwords only prove that someone knows a secret. SMS codes only prove that someone has access to a phone number. Email verification only proves access to an inbox. Even face-based verification is under pressure as deepfake tools improve and synthetic media becomes easier to produce.

This is where VeryAI enters the picture. The company is not trying to add one more login step. It is trying to build a stronger trust layer for digital platforms.

For Zach Meltzer, the opportunity comes from recognizing that online identity is becoming one of the biggest infrastructure problems of the AI era. If people, platforms, and AI agents are all interacting in the same digital spaces, then trust needs to be rebuilt around something stronger than an account credential.

Why Zach Meltzer’s background matters

Zach Meltzer brings a useful background to this problem because his work has been closely connected to digital identity, crypto infrastructure, credential systems, and online communities.

Before building VeryAI, Meltzer worked in the broader Web3 and identity space, including work connected to Galxe, a platform known for credential infrastructure and large-scale community growth. That kind of environment exposed many identity problems early. Crypto platforms have had to deal with bots, fake wallets, Sybil attacks, token farming, duplicate accounts, and users trying to game incentive systems.

Those issues were once seen as mostly crypto problems. Now they are becoming internet problems.

The same patterns are showing up across finance, social media, gaming, online marketplaces, creator platforms, and AI-powered tools. When online activity has real financial, social, or business value, fake identities become more profitable. As AI makes those identities easier to create, platforms need a better way to separate real users from fake ones.

That is the larger context behind VeryAI. Meltzer is building from a place where identity is not just a compliance issue. It is a trust issue, a security issue, and increasingly, a product experience issue.

What VeryAI is building for the AI era

VeryAI describes its platform as a Proof of Reality layer. In simple terms, that means it helps platforms verify that a real human is connected to an online action.

This matters because many online systems still treat accounts as if they represent people. But an account can be bought. A phone number can be reused. A device can be automated. A profile can be generated. A face can be faked. A password can be stolen.

Proof of Reality is about going beyond those weak signals. It gives platforms a way to ask a more meaningful question: is there a real person behind this activity?

VeryAI’s first product focuses on palm biometric verification. The system is designed to work through a smartphone camera, without special hardware. Instead of relying on a separate scanner or expensive device, it uses the unique features of a person’s palm to create a verification signal.

That hardware-free approach is important. If identity technology is too difficult to use, platforms will avoid it or users will abandon the process. VeryAI appears to be aiming for a middle ground where verification can be stronger than passwords and SMS codes, but still simple enough to fit into everyday digital products.

How palm biometrics fit into VeryAI’s product

Palm verification is different from many familiar identity checks.

A password is something a person knows. A phone code is something a person receives. A face scan uses one of the most exposed parts of a person’s identity. A fingerprint often depends on specific device hardware.

A palm scan gives platforms another kind of biometric signal. The lines, creases, and structure of the palm can be used to verify human uniqueness, while avoiding some of the issues that come with face-based checks. VeryAI has also said its system does not store raw palm images. Instead, it turns palm data into irreversible feature representations, which are designed to support verification without keeping the original biometric image.
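The "irreversible feature representation" idea described above can be illustrated with a minimal sketch. Everything here is a generic placeholder, not VeryAI's actual pipeline: a real system would use a learned extractor that tolerates small scan-to-scan variation, while this stub simply derives a fixed-length vector by hashing. What the sketch does preserve is the key property being described, that only a template is stored and compared, and the raw image cannot be reconstructed from it.

```python
import hashlib
import math

def extract_features(palm_image: bytes, dims: int = 8) -> list[float]:
    # Placeholder for a real biometric feature extractor. A production
    # system would use a learned model tolerant of scan variation; this
    # stub just derives a fixed-length vector from a one-way hash, so
    # the original image cannot be recovered from the stored template.
    digest = hashlib.sha256(palm_image).digest()
    return [b / 255.0 for b in digest[:dims]]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def verify(stored_template: list[float], new_scan: bytes,
           threshold: float = 0.95) -> bool:
    # Only the stored template is compared against a freshly extracted
    # vector; the raw enrollment image is never retained.
    return cosine_similarity(stored_template, extract_features(new_scan)) >= threshold
```

The design choice being illustrated is separation of enrollment (store a template once) from verification (compare fresh scans against it), so a database breach exposes only non-invertible vectors rather than biometric images.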

That privacy point is central to the company’s positioning.

People are already cautious about handing over personal data. They do not want identity verification to become another risk. If a company is asking users to prove who they are, the process needs to feel secure, respectful, and limited to what is necessary. VeryAI’s pitch is that verification can be accurate and private at the same time.

This is also why the company’s work sits at the intersection of biometrics, AI fraud prevention, deepfake detection, identity infrastructure, and privacy-focused authentication.

How VeryAI can help platforms prove who is real online

The strongest use case for VeryAI is platform trust.

Every major platform has some version of the same problem. It wants real users, real engagement, real transactions, and real communities. But it also has to deal with fake accounts, bot activity, fraud rings, duplicate profiles, impersonators, and automated abuse.

VeryAI’s identity layer could be useful across several areas.

For fintech companies, it can support stronger onboarding and fraud prevention. Financial platforms need to know that users are real, especially when accounts are tied to money movement, lending, trading, or compliance requirements.

For crypto platforms, it can help reduce Sybil attacks, fake participation, and reward farming. In crypto ecosystems, a single person can create many wallets or accounts to exploit incentives. A stronger human verification layer can help platforms design fairer systems.

For social platforms and creator communities, it can help reduce fake accounts and impersonation. When users know they are interacting with real people, communities become healthier and more trustworthy.

For marketplaces, it can help protect buyers, sellers, and service providers from fraud. Identity verification becomes especially important when strangers are exchanging money, goods, access, or services.

For AI agent ecosystems, the problem becomes even more interesting. As AI agents begin to book, buy, post, trade, schedule, and communicate on behalf of people, platforms will need to know not only that an agent is acting, but also who is responsible for that agent’s actions.

That is where concepts like Know Your Agent become important. The future of identity may not only be about verifying humans. It may also be about verifying the agents that act for humans.

Helping platforms reduce fraud without hurting real users

One of the hardest parts of identity verification is balancing security with user experience.

If verification is too weak, fake users slip through. If it is too strict, real users get frustrated and leave. Many people already dislike long forms, repeated SMS codes, document uploads, confusing CAPTCHAs, and verification loops that fail without explanation.

A good identity product needs to protect the platform without making honest users feel punished.

That is one reason VeryAI is worth watching. Its hardware-free palm verification model is built around accessibility. A user should not need a special scanner. A platform should not need to redesign its entire product around a heavy verification process. If the system works smoothly, it can help companies add stronger identity checks at important moments, such as account creation, sensitive transactions, high-risk actions, or recovery flows.

This kind of approach can make verification feel less like a wall and more like a trust layer.

The best security tools often work that way. They do not interrupt every action. They appear when risk is higher, confirm what needs to be confirmed, and let real users move forward.
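The risk-gated pattern described above can be sketched in a few lines. The action names, transfer threshold, and function below are hypothetical illustrations of the pattern, not any real platform's API: verification is triggered only at high-risk moments, so routine activity stays frictionless.

```python
# Hypothetical set of actions that always require step-up verification.
HIGH_RISK_ACTIONS = {"account_creation", "large_transfer", "account_recovery"}

def requires_proof_of_reality(action: str, amount: float = 0.0,
                              transfer_limit: float = 1000.0) -> bool:
    # Gate verification on risk: always challenge sensitive actions,
    # challenge transfers only above an illustrative limit, and let
    # everything else through without interruption.
    if action in HIGH_RISK_ACTIONS:
        return True
    if action == "transfer" and amount >= transfer_limit:
        return True
    return False
```

In this sketch, a low-value transfer or an everyday action like posting never triggers a challenge, while account recovery or a large transfer always does, which is the "trust layer, not wall" behavior the section describes.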

Why Zach Meltzer’s timing matters

Zach Meltzer is building VeryAI at a moment when the internet is being reshaped by AI-generated activity.

This timing matters because identity problems tend to become urgent only after platforms feel the damage. Fake users can distort engagement. Bots can drain rewards. Deepfakes can damage trust. Synthetic identities can bypass weak onboarding. Automated agents can create confusion around responsibility and permission.

The more capable AI becomes, the less useful old identity assumptions become.

That is why VeryAI’s $10 million seed round attracted attention. The funding, led by Polychain Capital with participation from Berggruen Institute and Anagram, signals that investors see identity infrastructure as a major need in the AI era. The raise is not just a startup milestone. It reflects a broader shift in how companies think about trust, verification, and online authenticity.

VeryAI is also positioned in an area where several trends are coming together at once: biometric verification, deepfake resistance, AI agent accountability, privacy-preserving identity, and platform-level fraud prevention.

When those trends overlap, the opportunity becomes bigger than a single product feature.

The achievement behind Zach Meltzer’s VeryAI journey

The achievement in Zach Meltzer’s story is not only that he founded a company or raised capital. It is that he identified a trust problem that is becoming harder for platforms to ignore.

Many startups chase convenience. VeryAI is chasing confidence.

That difference matters. Online platforms cannot grow safely if users do not trust the people, accounts, agents, and transactions around them. At the same time, users do not want identity systems that feel invasive, slow, or risky. The challenge is to create verification that is strong enough for platforms and respectful enough for people.

That is the space VeryAI is trying to occupy.

By focusing on palm biometrics, privacy-preserving verification, and Proof of Reality, Meltzer is building toward a future where platforms can confirm real human presence without depending only on passwords, phone numbers, device signals, or facial checks.

It is a practical mission, but it also has a bigger meaning. The internet is moving into a period where reality itself needs stronger signals. People want to know whether a profile is real, whether a user is human, whether an action is authorized, and whether an AI agent is connected to a real person.

VeryAI is trying to answer that need with infrastructure instead of guesswork.

Building for platforms that need security and privacy

Security and privacy often feel like competing priorities. Platforms want more information so they can reduce fraud. Users want less exposure so they can protect themselves. The wrong identity system can create a new problem while trying to solve an old one.

This is why VeryAI’s privacy-focused design matters.

If the company can help platforms verify real people without storing raw biometric images or exposing personal identity details unnecessarily, it could offer a stronger model for digital trust. That kind of approach is especially relevant as biometric systems face more scrutiny.

Users are not against security. They are against security that feels careless with their data.

A better system needs to be clear, fast, and limited. It should help people prove they are real without asking them to give away more than they need to. That is the kind of balance Zach Meltzer appears to be building into VeryAI’s identity layer.

Why VeryAI could matter beyond one industry

Although VeryAI has clear relevance for crypto and fintech, its potential use cases go much wider.

Any platform that deals with user trust could need stronger human verification. That includes social networks, dating apps, gaming platforms, creator tools, ticketing platforms, gig marketplaces, financial apps, enterprise software, and future AI agent platforms.

The reason is simple. If fake users can create value or cause damage on a platform, identity matters.

A gaming platform may need to stop bots from farming rewards. A creator platform may need to protect real creators from impersonation. A fintech product may need to reduce synthetic identity fraud. A marketplace may need to prevent scammers from creating endless new accounts. An AI agent platform may need to connect automated actions to verified human control.

VeryAI’s long-term opportunity sits in that broader trust layer.

It is not just asking people to prove identity for compliance. It is helping platforms prove reality in digital spaces where fake activity is becoming cheaper, faster, and more convincing.

What readers can learn from Zach Meltzer’s founder story

There are a few clear lessons in Zach Meltzer’s path with VeryAI.

The first is that strong startups often begin with a shift in behavior. Meltzer is building around a major change in how the internet works. AI-generated users, deepfakes, synthetic identities, and autonomous agents are no longer edge cases. They are becoming part of everyday platform risk.

The second lesson is that important products often solve trust problems. VeryAI is not just making identity verification more modern. It is trying to help platforms keep real users safe, reduce fake activity, and build confidence into online actions.

The third lesson is that timing matters. Identity technology becomes most valuable when the old systems start breaking under new pressure. As AI makes fake digital activity easier to create, the demand for stronger verification is likely to grow.

That is why Zach Meltzer and VeryAI stand out. They are not building for the internet as it used to be. They are building for the internet that is already arriving.

Why Zach Meltzer and VeryAI matter in the future of online trust

The future of online identity will not be solved by one password, one code, one scan, or one account system. It will require layers of trust that can work across platforms, people, and AI agents.

Zach Meltzer is building VeryAI around that reality. The company’s Proof of Reality approach speaks to a growing need across the internet: platforms need to know when a real person is present, when an action is human-backed, and when digital activity can be trusted.

That mission gives VeryAI a clear place in the AI-era identity conversation. As bots become smarter, deepfakes become more convincing, and AI agents become more active, proving who is real online may become one of the most important challenges for digital platforms.

VeryAI is still early, but its direction is timely. By combining palm biometrics, privacy-focused verification, and platform-level identity infrastructure, Zach Meltzer is working on a problem that could shape how people and companies build trust on the internet in the years ahead.
