Most people love ChatGPT, even though the app clearly warns that it may produce inaccurate information about people, places, or facts. We decided to put that warning to the test and share what we discovered.
After testing ChatGPT for several weeks, we’ve come to understand its strengths and limitations. We suspected that if you’re an expert in a subject, ChatGPT wouldn’t know more than you already do, and that suspicion was quickly confirmed: the AI’s knowledge was last updated in September 2021, or even earlier if you’re using the free version. A lot has happened since then, so in many cases, you probably know more than ChatGPT does.
The AI frequently provides incorrect information about various apps, which can be misleading and even harmful to those trying to navigate digital transformation. For example, we asked ChatGPT to provide a step-by-step guide on using the audio recording feature in Noteful (an app that launched after ChatGPT’s last update). Instead of admitting that it doesn’t have information on Noteful, the AI fabricated instructions. It told us to “tap on the microphone icon located at the bottom of the screen,” which isn’t where the microphone is located in Noteful. It also mentioned that “a waveform would appear on the screen,” which is not how audio recording works in this app. Clearly, the AI was pulling information from general knowledge of other audio recorders rather than from specific knowledge of Noteful.
This raises a major concern: imagine the inaccurate information it might provide on topics you don’t know or can’t verify.
The takeaway? Don’t rely on ChatGPT for learning new things or for information you can’t verify. If you have to double-check everything, it might be better to skip using the AI altogether.
Some people claim that ChatGPT can be used to create new content. But is that true? We asked the AI to write non-plagiarized posts on random topics in our niche, and both Grammarly and QuillBot flagged plagiarized sentences in almost every article ChatGPT generated. This makes us wonder: if these tools can detect plagiarism, what might Google be picking up?
It seems like ChatGPT might be recycling and paraphrasing the same information from a limited set of sources. Over time, there may be nothing new left to recycle. We’ve already noticed some patterns, like the AI’s frequent use of the word “revolutionize” in almost every prompt. It seems to think adding that word makes any topic more interesting.
The bottom line? Always rewrite and test what you get from ChatGPT because not everything it generates is original. We’ve even started avoiding articles with “revolutionize” in the title because they’re likely AI-generated.
So, is ChatGPT completely useless? Definitely not. In areas like mathematics and pure sciences—subjects that have been thoroughly tested and refined over centuries—the AI excels. For instance, we asked it to calculate compound interest with percentage withdrawals, and ChatGPT completed in seconds what would have taken us over an hour to figure out. However, only a math professor could truly assess how accurate the AI is at math. Similarly, we found that ChatGPT performed reasonably well with medical science questions, suggesting it could potentially pass many medical school exams. But, of course, that doesn’t mean it would make a good doctor.
We’ve also found AI to be very useful for brainstorming. If you have an idea, ChatGPT can help you expand on it. You’ll still need to trim the excess information, but it’s a great tool for getting started.
However, it’s clear that ChatGPT is learning from us. Every time you like, dislike, or provide feedback on a response, you’re helping train the AI. Our conversations can be stored and used to improve future versions of the model, and given the billions of interactions it handles, just imagine what it has learned in the past few months.
So, should we be helping train AI, or do we have a moral obligation to resist it, like all those movies have taught us? Humans versus machines? We’re curious to hear about your experiences with ChatGPT. Maybe there’s something we’ve missed.