In the space of a few days, two real estate AI stories landed that tell you almost everything you need to know about what’s changing.

In Florida, a homeowner used ChatGPT to help sell his property without a listing agent. The house went under contract in five days.

At the other end of the market, Ryan Serhant says a $50 million deal nearly fell apart after both buyer and seller went to ChatGPT for reassurance and got exactly the opposite of what the deal needed.

Same tool. Two very different outcomes.

That actually is the story.

Not “AI is replacing agents”.
Not “AI is dangerous”.
Something more useful than either of those.

AI is very good at helping people do work. It is far less reliable when people ask it to make sense of a situation it cannot fully see.

The Florida sale everyone turned into a headline

You’ve probably seen the first story already.

Robert Levine, a father of three in Cooper City, Florida, decided to sell the home he’d lived in for 15 years. Instead of hiring a listing agent, he used ChatGPT to help him build the process.

It helped him map the timeline, work out what to repaint and declutter, draft the listing description, prepare open house materials, figure out MLS access, and create a contract template.

He also had a lawyer review the legal documents.

The home was listed on a Tuesday, reportedly at ChatGPT's suggestion, and within 72 hours he had five offers.

By Sunday morning, it was under contract.

That is a real story. It is also a story that got flattened into nonsense almost immediately.

Because “man uses ChatGPT to sell his home” makes it sound like AI took over the transaction. It didn’t.

Levine sold his home.

He did the work. He made the calls. He coordinated the showings. He handled the conversations. He made the judgment calls that actually move a sale from listed to sold.

What ChatGPT did was the work that many people confuse with expertise because it looks polished on the page. It handled planning, drafting, summarising, and organising, and it gave him a sequence to follow.

That matters. But it is not the same thing as replacing agency.

It is a very good example of AI taking a chunk of knowledge work that used to sit almost exclusively with professionals and making it accessible to a motivated consumer.

That should draw your agents’ attention.

The $50 million deal that almost died

The second story might have gotten more oxygen, but it is the one the industry should be sitting with.

Ryan Serhant shared that a $50 million deal, involving multiple agents and months of work, nearly came undone at the finish line because both sides went to ChatGPT separately.

The seller asked whether they should sell at that price.

ChatGPT looked at the comparables and said no, the property was worth more.

The buyer asked whether they were overpaying.

ChatGPT looked at the comparables and said yes, they were.

On paper, both answers were defensible. That’s the problem.

The model could see the data. It could not see the deal.

It could not see the motivations on both sides. It could not see the relationship history, the negotiating path, the trade-offs already made, the timing pressure, the off-market context, or the human realities that explain why a price can be right even when it doesn’t line up neatly with recent sales.

So it did what these systems do: in each case, it produced a confident answer from incomplete context.

And nearly blew the deal up.

Serhant says they recovered it.

He was also clear that this is not an anti-AI point. His company is building AI tools for agents. But that is exactly why the story matters. This isn't a Luddite warning.

It’s a field report from someone using the technology and seeing where it breaks.

The category error

This is the mistake I think a lot of people are making.

They are asking AI to settle questions that are not really information problems.

“Write this listing description” is an information problem.
“Build me a preparation checklist” is an information problem.
“Summarise these comparable sales” is an information problem.

“Should I accept this offer?” is not. That, my friends, is a judgment problem.

And judgment in real estate is rarely just about the visible facts. It is about timing, leverage, emotion, alternatives, appetite, risk, personality, trust, fatigue, urgency, and the thousand bits of context that never make it into the prompt.

Large language models are persuasive precisely because they are built to produce coherent answers. They do not pause and say, "I am missing the most important part of this situation." They just keep going.

That makes them useful. It also makes them dangerous in the hands of someone who mistakes fluency for wisdom.

What agents should actually take from this

The lazy industry response is to treat the Florida story as a threat and the Serhant story as a warning shot.

I think both interpretations are too shallow.

The Florida story says consumers can now do a lot more of the informational heavy lifting themselves. That is real. Agents who pretend otherwise are kidding themselves.

The Serhant story says that when the stakes rise, context becomes more valuable, not less. That is also real.

So the opportunity for agents is not to position themselves against AI. That argument is already lost.

The opportunity is to be the person who can explain what the machine cannot see.

That means pricing conversations need more depth. Recommendations need more narrative. Your client should not just hear the number. They should understand the reasoning, the trade-offs, the timing, the alternatives, and what happens if they wait.

Because if your advice can be knocked over by a single late-night prompt to ChatGPT, the problem is not that the client used AI.

The problem is that your case or your relationship was not strong enough before they opened the app.

The new part of the job

Whether agents like it or not, clients are now bringing AI into the transaction.

Sometimes openly. Sometimes quietly. Sometimes at 11 pm, when anxiety kicks in and they want a second opinion from a machine that sounds calm and certain.

That means part of the job now is pre-emptive.

Not telling clients not to use ChatGPT. That’s pointless.

The job is to explain your thinking so clearly that when they do ask the machine, they have enough context to recognise where the answer is thin.

To learn how to use AI as well as your clients do, or better.

To learn how to explain your use of AI.

That is the shift.

AI is not just changing how work gets done. It is changing how trust gets tested.

And the agents who do best in this environment will not be the ones who simply use AI more. They’ll be the ones who can show clients where AI is helpful, where it is shallow, and where human judgment still earns its keep.

Because the real risk here is not that ChatGPT replaces the agent.

It’s that the client starts using ChatGPT as a referee in moments where only context can produce a good decision.

And if that is happening, the agent’s job is no longer just to advise.

It is to make the deal’s invisible logic visible before the chatbot gets there first.