Should you trust ChatGPT to write your real estate content?


11 May 2023

5 Mins Read

By Paula Shearer

 

Artificial intelligence and its impact on humanity have been hotly debated for decades – ever since the term was coined at a Dartmouth College conference in the United States in 1956.

 

But, until recently, most AI has remained in science fiction novels, university studies and tech labs. Now it’s publicly available – within reach, and free to use, for anyone who has access to a computer keyboard or smartphone.

 

But, is the real estate industry ready to use it – and is AI ready for the real estate industry? Although proptechs such as RiTA from AIRE (now part of CoreLogic) and Propic have introduced the idea of AI assistants to the real estate world, are expectations of a chatbot writing real estate blogs and listing materials actually realistic?

 

Seven years in the making, OpenAI – a tech research lab founded by Silicon Valley investors including Elon Musk, Reid Hoffman and Peter Thiel – has finally succeeded in launching to the public an intelligent language model that can write as though it were human.

The world went pretty crazy in response, with the new ChatGPT chatbot registering more than one million users within five days of its November 30, 2022 launch.

 

According to a Reuters article published two weeks later, OpenAI expects ChatGPT to generate $200 million in revenue this year and $1 billion by 2024. The article also quoted a source as saying “OpenAI was most recently valued at $20 billion in a secondary share sale”.

 

The ChatGPT launch was so successful that by mid-January the system began returning “ChatGPT is at capacity now” error messages, which infuriated plenty of users and generated numerous posts from the tech community offering workarounds and quick fixes.

 

PUBLIC TESTING

 

Hollywood actor Ryan Reynolds was an early adopter and has already used ChatGPT to create an online ad for his budget wireless service, Mint Mobile. Another user, security researcher and Fly.io software developer Thomas H. Ptacek, managed to get instructions written in Biblical verse on how to remove a peanut butter sandwich from a VCR (hilarious!).

 

But universities weren’t so happy, with many scrambling to ban the use of ChatGPT for student essay writing. There were also questions around copyright and plagiarism outside of academia.

Australian singer-songwriter Nick Cave was openly hostile to the new technology, describing AI as an “emerging horror” after multiple users asked ChatGPT to create dozens of songs in his style. “The apocalypse is well on its way. This song sucks,” he told readers of his newsletter, The Red Hand Files.

 

“What ChatGPT is, in this instance, is replication as travesty. ChatGPT may be able to write a speech or an essay or a sermon or an obituary but it cannot create a genuine song. Songs arrive out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer.”

All of which raises a fair question.

 

ChatGPT can produce human-like text instantly in response to user-generated prompts. But, how “human” are the results? Are they reliable, publishable – and of any benefit to real estate and the proptech industries?

 

THE FINE PRINT

 

The current version is free, signing up is quick, and it’s easy (and fun) to use for nonsensical questions such as getting Jesus to tell you how to remove a sandwich from your VCR. But there are drawbacks to jumping in and employing the technology “as is” for commercial purposes.

 

OpenAI itself has been clear in warning users to proceed with caution, noting that engaging ChatGPT in the areas of news, coaching, finance, legal, government and civil services, criminal justice, law enforcement, therapy and wellness “carries a greater risk of potential harm”.

 

“For these use-cases you must: Thoroughly test our models for accuracy in your use case and be transparent with your users about limitations. Ensure your team has domain expertise and understands/follows relevant laws,” OpenAI states in its usage policies.

 

Other standard conditions prohibit using the chatbot to create illegal or harmful content, misusing personal data, promoting dishonesty, deceiving or manipulating users or trying to influence politics. In its guide on moderation, OpenAI states it is continuing to improve its classification accuracy, especially in the areas of hate, self-harm, and violence/graphic content. However, its support for non-English languages is currently limited.
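For anyone building on the API rather than typing into the chat window, OpenAI exposes that classification layer as a moderation endpoint. The snippet below is an illustration only – a minimal sketch that assumes the pre-1.0 “openai” Python package and an OPENAI_API_KEY environment variable, not a production workflow – showing how a draft could be screened before it goes anywhere near a client.

```python
# Illustrative sketch only – not from the article. Assumes the pre-1.0
# "openai" Python package and an OPENAI_API_KEY environment variable.
import openai


def is_flagged(text: str) -> bool:
    """Ask OpenAI's moderation endpoint whether the text trips its classifiers."""
    result = openai.Moderation.create(input=text)
    outcome = result["results"][0]
    if outcome["flagged"]:
        # category_scores shows which classifiers fired (hate, self-harm,
        # violence/graphic and so on) so a human can decide what to do next.
        print(outcome["category_scores"])
    return outcome["flagged"]


if __name__ == "__main__":
    draft = "Sunny three-bedroom cottage, close to schools and transport."
    print("Flagged:", is_flagged(draft))
```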

 

It is wise to remember, then, that ChatGPT, although publicly available, is still in a beta-testing phase; OpenAI describes the current version as a free research preview.

 

LIMITATIONS

 

Company leaders, including OpenAI president and co-founder Greg Brockman (@gdb) and CEO Sam Altman (@sama), have also been pretty honest on their Twitter accounts about the system’s limitations.

 

In response to one Twitter user’s observation that ChatGPT “does amazing stuff but it has a high error rate”, Brockman said there was still a “demo gap – the distance between what looks like it’s working in a demo and what really works in practice. A deep issue that applies to humans too – eg someone who interviews well but can’t do the job. We are closing the gap but much to do”.

On December 12, 2022, Brockman said: “Love the community explorations of ChatGPT, from capabilities to limitations. No substitute for the collective power of the internet when it comes to plumbing the uncharted depths of a new deep-learning model”.

 

This followed his tweet the previous day: “We believe in shipping early and often, with the hope of learning how to make a really useful and reliable AI through real-world experience and feedback. Correspondingly important to realize we’re not there yet – ChatGPT not yet ready to be relied on for anything important!”

 

Similarly, Altman tweeted this on December 11: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness”.

 

Presenting “truthful” information does seem to be problematic at this stage. It has been widely reported and accepted that ChatGPT, like other forms of AI, is highly susceptible to “hallucinations”. This industry-coined term means that while a piece of generated content (officially known as a “text completion”) can appear authoritative, confident and reliable, it may have no basis in fact. Yes… the result you receive from your prompt could be totally false!

 

“The API (application programming interface) has a lot of knowledge that it’s learned from the data that it has been trained on. It also has the ability to provide responses that sound very real but are in fact made up. Wherever possible, we recommend having a human review outputs before they are used in practice,” the ChatGPT user guide states.

 

“Humans should be aware of the limitations of the system, and have access to any information needed to verify the outputs… From hallucinating inaccurate information, to offensive outputs, to bias, and much more, language models may not be suitable for every use case without significant modifications.”
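To make that advice concrete, here is a minimal human-in-the-loop sketch. It is illustrative only – it assumes the pre-1.0 “openai” Python package, an OPENAI_API_KEY environment variable and access to the gpt-3.5-turbo model, and it is not any vendor’s actual workflow – but it shows the shape of the safeguard: generate draft listing copy, then refuse to publish it without an explicit human sign-off.

```python
# Illustrative sketch only – a human-in-the-loop gate, not any vendor's workflow.
# Assumes the pre-1.0 "openai" Python package, an OPENAI_API_KEY environment
# variable and access to the gpt-3.5-turbo model.
import openai


def draft_listing_copy(property_brief: str) -> str:
    """Ask the model for draft copy; treat the output as a draft, not as fact."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You write draft real estate listing copy."},
            {"role": "user", "content": property_brief},
        ],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]


def approved_by_human(draft: str) -> bool:
    """Gate publication on an explicit human sign-off, as OpenAI's guide recommends."""
    print("---- DRAFT (verify every claim against the property file) ----")
    print(draft)
    return input("Approve for publication? [y/N] ").strip().lower() == "y"


if __name__ == "__main__":
    brief = "Three-bedroom weatherboard cottage on a 600 sqm block, walking distance to the station."
    copy = draft_listing_copy(brief)
    print("Published." if approved_by_human(copy) else "Held back for rewriting or fact-checking.")
```

None of this removes the fact-checking burden; it simply makes the human step impossible to skip.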

 

So without reliable fact-checking in place before any generated content is published or distributed to your clients, there is a very real possibility that you may look, at worst, like a complete idiot or, at best, totally unreliable and unprofessional.

 

TRAINED INTELLIGENCE

 

Make no mistake. ChatGPT is also not a search engine – it does not trawl the internet to answer your prompt.

 

Instead, ChatGPT is a Generative Pre-trained Transformer. OpenAI “trained” the system on millions of documents and billions of words – a dataset that cuts off in 2021 – using a supercomputer developed in partnership with Microsoft. (This means ChatGPT has no knowledge of events after 2021.)

 

On January 23, 2023, the two companies announced their partnership would continue, with Microsoft agreeing to a further multi-year, multi-billion-dollar investment in OpenAI.

 

In a company statement, OpenAI said that the Microsoft partnership had been instrumental in its progress. “We’ve worked together to build multiple supercomputing systems powered by Azure, which we use to train all of our models. Azure’s unique architecture design has been crucial in delivering best-in-class performance and scale for our AI training and inference workloads.

 

“Microsoft will increase their investment in these systems to accelerate our independent research and Azure will remain the exclusive cloud provider for all OpenAI workloads across our research, API and products.”

 

Microsoft’s statement on the new investment said it had been committed since 2016 to building Azure into an AI supercomputer for the world, “serving as the foundation for our vision to democratize AI as a platform”.

 

“Through our initial investment and collaboration, Microsoft and OpenAI pushed the frontier of cloud supercomputing technology, announcing our first top-5 supercomputer in 2020, and subsequently constructing multiple AI supercomputing systems at massive scale,” the Microsoft statement said. “OpenAI has used this infrastructure to train its breakthrough models, which are now deployed in Azure to power category-defining AI products like GitHub Copilot (code creator), DALL·E 2 (image creator) and ChatGPT.

 

“These innovations have captured imaginations and introduced large-scale AI as a powerful, general-purpose technology platform that we believe will create transformative impact at the magnitude of the personal computer, the internet, mobile devices and the cloud.”

 

So, given that ChatGPT is based on language training, just how “intelligent” is it? And what other limitations should users be wary of when employing any AI system?

 

COPYRIGHT LAWS

 

It would be fair to say that knowledge workers still have major concerns around copyright and plagiarism issues with any AI-created content – and with laws universally struggling to keep up with the technology, there are currently many blurred lines here.

 

However, numerous Australian lawyers have produced blogs, fact sheets and general information on the issue of artificial intelligence and copyright ownership – and there is general consensus on a number of points:

 

- Australian law is governed by the Copyright Act 1968, which has not kept pace with rapidly advancing AI technology.

- In Australia, copyright may only be owned by a “human creator”.

- Establishing copyright on an article or image you have created using AI platforms will be contingent on how much human intervention you had in the creative process.

- It is not currently clear whether the data used to train an AI model should have been subject to “source copyright” permissions.

 

The Arts Law Centre of Australia notes that concepts such as AI, data mining and machine learning are not dealt with in the Copyright Act “and there have not been many court cases to test how copyright applies” to AI content.

 

“Since this technology is so new, it is not clear that works created with the help of AI will be protected by copyright,” Arts Law advises in a fact sheet.

 

“As a general rule, a work can only be protected by copyright in Australia if there is a human author who contributed ‘independent intellectual effort’. Because of this, it is possible that works generated by AI which don’t have enough human input won’t be protected by copyright.”

 

Arts Law also advises that AI tools do not currently have a legal status and cannot own copyright. “It is the human contributor who would own copyright if a work was protected”.

 

This may also cause issues when it comes to plagiarism if content has been created by an AI.

Machines are generally trained on very large data sets, but it is also an infringement to digitally reproduce work without the copyright owner’s permission – so even storing the information used to train AI systems could be considered a copyright breach. Arts Law advises that it is currently not clear whether fair dealing exceptions “cover use of works as training data in machine learning”.

 

“There are currently no copyright exceptions in Australia specific to data mining or using works for machine learning,” it states.

 

FUTURE IN PROPTECH

 

So, while ChatGPT has burst onto the scene teasing a promise for the real estate industry that one day soon property listings, email content and news blogs will be written in seconds by a supercomputer, it may still be a while before a strong house can be constructed on the concrete slab currently being laid.

 

There are plenty of marketers and YouTube tutorials out there spruiking the convenience of using ChatGPT for everything from automating tasks to improving productivity and generating leads – particularly because the technology is currently free. But these generally fail to address the beta nature of the technology or its limitations, and there doesn’t appear to be any widespread applause coming from the real estate industry just yet.

 

OpenAI has also been fairly transparent in admitting a professional, paid version of ChatGPT is on its way.

 

“We will have to monetize it somehow at some point; the compute costs are eye-watering,” Altman tweeted, while Brockman said OpenAI was working on a professional version which would offer “higher limits and faster performance” – inviting his followers to join the waitlist.

 

While there are no indications of how quickly this technology will be integrated into proptech platforms, OpenAI’s Brockman is likely to be pretty accurate in his prediction that 2023 will make 2022 “look like a sleepy year for AI advancement and adoption”.

TAGS

Industry

PropTech
