
SIOR Report Article


Artificial Intelligence or Data?

By: Michael Hoban

AI Provides Great Tools – but Needs Human Oversight

In the early days of computing, there was an expression, “Garbage in, Garbage out (GIGO),” which encapsulated the idea that in any system, the quality of output is determined by the quality of the input. Generative AI, even with all of its immense computing power, comes with its own massive GIGO problem. In their 2023 report for the Harvard Graduate School of Education’s Next Level Lab, Navigating A World of Generative AI: Suggestions for Educators, Lydia Cao and Chris Dede resurrect that old adage as they describe the inherent problem with relying on AI-generated data.

"AI is trained using existing data from the world wide web, which leads to the potential problem of 'garbage in, garbage out,' as well as pervasive issues of AI "hallucinations" where it generates responses that sound plausible but are factually incorrect, such as fabrication of citations of research articles that do not exist," they wrote.

One hilarious example of an AI hallucination was detailed last year by Indian journalist Satyen K Bordoloi in an article for syfy.com. He asked the popular AI chatbot ChatGPT, “What is the world record for crossing the English Channel entirely on foot?” It replied: “The world record for crossing the English Channel entirely on foot is held by Christof Wandratsch, who completed the crossing in 14 hours and 51 minutes on August 14, 2020. Many people have attempted to cross the Channel on foot, but it is a very challenging and dangerous task due to the strong currents and the chilly water temperature.”


While that example is observably absurd, many of the hallucinations are much more difficult to detect because AI-generated answers are typically well-stated in a matter-of-fact manner, can provide false data sources, and, in some cases, even generate manufactured quotes. AI is already proving to be a valuable tool for early adopters in commercial real estate – in content creation and marketing, property valuations and market analysis, predictive analytics, and risk assessment – but unquestioningly trusting the data can be risky.

“While AI offers numerous benefits for our industry, the pitfalls and challenges can be concerning,” says Ra’eesa Motala, SIOR, president of Evoke Partners. “AI models depend heavily on the quality of input data used for training models, and if it’s incomplete, it can lead to biased predictions/responses, which can lead to poor decision making and inaccurate assumptions, which could result in flawed investment and real estate transactions.”

Motala sees the lack of transparency with AI as an important issue for real estate and energy applications. The decision-making processes for AI models, particularly deep learning models (which use algorithms that can learn to recognize patterns in data), are like a “black box” and not easily interpretable, so stakeholders need to understand the rationale behind an AI-driven recommendation or decision before proceeding.

“It starts and ends with the quality of the data,” says Motala. “AI systems can process billions of operations per second, while the human brain can only process about 11 million bits of information per second. This means that AI systems can learn and adapt much faster than humans can. But do you have all the key factors inputted correctly to ensure your output will yield results with little room for error or miscalculation? Have you taken variables into account? That’s why you need to understand for yourself what went into that decision-making to avoid decisions that could hinder an investment or development transaction.”


Motala also has concerns about regulatory compliance. The real estate and energy industries already require the sharing of sensitive documents and confidential information, as well as the exchange of high dollar values. AI will bring additional regulations and security requirements to prevent data breaches, security issues, and other legal complications. “It’s going to make it harder and longer to get deals to the finish line with attorneys focused on adjusting to a new way of doing business,” she warns.


The potential dangers of relying on AI for legal research have already been experienced by a pair of New York attorneys in the case Mata v. Avianca. They were sanctioned last year by a judge for submitting “decisions” that had been fabricated by ChatGPT in support of their client. Although the legal profession has used AI tools to extract research data for some time, generative AI like ChatGPT can hallucinate new (and possibly incorrect) content, which in this case resulted in the outright fabrication of the cases the attorneys cited. One attorney stated that he was unaware that content generated by ChatGPT could be false, but the judge was not swayed.

Louis Archambault, SIOR, partner in the Miami office of Saul Ewing, LLP, says one of the issues from a legal standpoint for AI in commercial real estate is that if the information is wrong, who is ultimately responsible? “It’s a certification issue. If I get it wrong, I have my license behind it, and there are ramifications for me,” he says. “If an AI is wrong, who's responsible? Because I can almost guarantee you that an AI company is going to have a disclaimer stating that they're not legally responsible for the results.”

The problem for lawyers using AI is the lack of transparency – the “black box” effect that Motala referred to. “If an associate or another lawyer is giving me a brief with facts, I can see their rationale, or if there’s been a business discussion with a client, you can go back through the steps and see how someone got there,” says Archambault. “With AI, I don't have a way to effectively double-check all of the different data it went through to come to its conclusion.”

Another issue for lawyers is protecting client data, which must always remain confidential. Unless an AI model is built specifically for the law firm and only the law firm has access to that data, “then I really can’t use it. Because if outside AI is invading my data, I’ve got a problem,” says Archambault. At this stage of its development, he feels that in terms of utility for lawyers, AI essentially serves as a “wonderful Wikipedia.”


Kim Ford, SIOR, CEO of the Rise Agency Group in Pittsburgh, Pa., says that when using data generated by AI, “It’s just like everything else. You can’t believe everything you read. I think if you’re relying on it right now for data or facts or information, you need to double-check your work and get a few other sources, just like a reporter would.”

Ford thinks the biggest challenge for CRE professionals to use AI effectively will be to gain an understanding of how AI works. “The way that AI generates a response is based on how you ask the questions,” says Ford. ChatGPT provides a guide on how best to ask questions (also known as “prompts”) that will result in the most useful answers. These tips include:

  1. Be specific with your request.
  2. Provide context and background information.
  3. Use explicit constraints and guidelines.
  4. Experiment with various phrasings and approaches.

It also lists ways to avoid common pitfalls, such as:

  1. Being too vague or open-ended.
  2. Overloading the prompt with information.
  3. And perhaps the most crucial tip: Misinterpreting output as factual information.
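For readers who want to see the tips above in practice, here is a minimal, hypothetical sketch of a prompt built the structured way versus the vague way. The function name and the real estate example values are illustrative assumptions, not taken from ChatGPT’s guide:

```python
def build_prompt(task, context, constraints):
    """Compose a structured prompt: a specific task, background context,
    and explicit constraints, following the tips listed above."""
    lines = [f"Task: {task}", f"Context: {context}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# A vague prompt invites guesswork; a structured one narrows the answer.
vague = "Tell me about warehouse rents."

structured = build_prompt(
    task="Summarize average industrial lease rates per square foot.",
    context="Pittsburgh metro area, Class A warehouses, 2023 data only.",
    constraints=[
        "Cite the source for each figure.",
        "Flag any number you cannot verify rather than guessing.",
    ],
)
print(structured)
```

The last constraint reflects the most crucial tip above: asking the model to flag unverifiable claims, rather than treating its output as factual, is one practical guard against hallucinations.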

Ford says that one of the limits of AI for commercial real estate is the lack of publicly available information, especially on the lease side, where leasing data is only available for publicly traded companies in their SEC 10-K filings. There is public data for building sales because a sale is a recorded instrument, “but for leasing – which is what the majority of companies do – there's just no information,” says Ford. “The bigger challenge we have in commercial real estate is that we don't have a national or global platform with data.”


Despite the potential hazards of relying on the data generated by AI, exercising caution and providing human oversight can mitigate these risks and provide powerful tools for CRE professionals. It’s also important to remember that the technology is still in its early stages and will continue to evolve. “It’s like with the first iPhone,” says Motala. “It will get to a point where it's going to create better systems and processes over time and get smarter. Yes, there are significant concerns around its use, but I think it's going to benefit the average business owner and help optimize their operations.”



Media Contact
Alexis Fermanis, SIOR Director of Communications

Michael Hoban

Michael Hoban is a Boston-based commercial real estate and construction writer and founder of Hoban Communications, which provides media advisory services to CRE and AEC firms. Contact him at michaelhoban@comcast.net