ChatGPT is the rising tide that lifted all AI tools. Many rely on it every day, including product managers. They use ChatGPT to:
- Research the market, competition, and trends
- Create user personas and ideal customer profiles (ICPs)
- Organize feature ideas
- Analyze user feedback
- Prioritize feature requests
- Brainstorm user experience optimization ideas
- Create project plans
- Write support documentation and product release materials
- And more
However, it often gives false information, which defeats the tool's purpose.

For example, you might ask ChatGPT to analyze a collection of customer chats and find feature requests. You can't rely on this tool if it misses some or makes up non-existent feature requests.
Eight AI experts shared their ChatGPT misinformation stories with us. They also gave tips for preventing it. Spoiler: you should:
- Clarify your prompts
- Manually check the information it gives you
- Try specialized AI tools for product managers
We're sharing seven tips and ten prompts to train GPT to provide only accurate information. But first, let's dig into ChatGPT and its limitations.
Key takeaways
- ChatGPT doesn't lie on purpose. It fills gaps in its knowledge with confident-sounding guesses.
- When accuracy really matters, switch to Thinking mode. It cuts factual errors by 50 to 80% compared to standard responses.
- Prompt quality matters more than most people realize. Give ChatGPT a role, background context, and ask for citations.
- Go one question at a time. Iterative prompting with course corrections produces more reliable output.
- For product work, purpose-built AI tools tend to be more accurate. They're trained on your data, not the entire internet.
Why does ChatGPT give wrong information?
Let's go back to the beginning. What's ChatGPT in plain English?
OpenAI built an app called ChatGPT in 2022. They used AI (artificial intelligence) to teach their LLM (large language model) to answer your questions. Today, it can also write copy, create images, analyze and translate text, explain concepts, and more. OpenAI took a common chatbot and improved it. That's the "chat" part of it.
"GPT" stands for generative pre-trained transformer. Today, ChatGPT runs on the GPT-5 family. GPT-5.3 Instant handles everyday tasks for all users. GPT-5.4 Thinking is available on paid plans for complex or high-stakes work.
When ChatGPT exploded, some product experts got pretty concerned about its accuracy.
"I fear that we are going to develop products based on completely made-up reports, and nobody (not even the accountable people) will know."
Anonymous Reddit user on r/ProductManagement
Some experts have seen the negative outcomes of ChatGPT misinformation first-hand.
"Until you've sat aghast at the sight of a confident, detailed, but completely wrong answer, you will have no understanding of the skepticism you need to apply to the guidance it provides. Already losing track of the number of engineers I've seen apply ChatGPT advice that turns out to be terrible."
Kevin Yank, principal architect, front end, Culture Amp
Companies like Stack Overflow took this very seriously. In 2022, they temporarily banned ChatGPT-generated responses from users.
"The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce."
Stack Overflow moderator
ChatGPT has come a long way since its inception but still has some limitations. Here are the main ones you should know about.
1. Accuracy. ChatGPT may miss important details or misunderstand nuances in language. This may result in incorrect or misleading information.
- Example: ChatGPT might read a phrase like "I hate how much I love this feature" as negative sentiment. The word "hate" obscures the statement's positive intent.
2. Outdated information. ChatGPT's training data has a cutoff date. The main GPT-5 model has a knowledge cutoff of September 30, 2024. GPT-5.2 and later variants have a cutoff of August 31, 2026. That said, ChatGPT has web search on by default. For most queries, it can access current information directly.
- Example: With web search disabled, ChatGPT won't know about recent updates to competitors' products or current market trends. Since web search is on by default, this is less of an issue than it used to be.
3. Hallucination. Hallucination is the term for when an AI model generates confident-sounding information that has no factual basis. It's not lying in the intentional sense: ChatGPT doesn't know it's wrong. A related but distinct problem is sycophancy. That's when ChatGPT agrees with your incorrect premise rather than correcting you, simply to seem agreeable. GPT-5 reduced sycophantic replies by more than 50% compared to earlier models, but the problem hasn't disappeared entirely.
- Example: You ask ChatGPT to summarize a research paper and include citations. It returns a convincing summary with three citations, but two of the papers don't exist. The titles, authors, and journal names all look plausible. You'd only catch the error by clicking the links.
4. Potential bias. If you train ChatGPT on biased data, the responses will reflect that bias.
- Example: ChatGPT might prioritize certain features based on biased input data. A feature request may contain urgency-based words like "critical" or "must-have." While it could be critical for one user, that doesn't always mean it's the most impactful idea. This bias could skew product development decisions.
5. Originality and plagiarism. AI-generated content might unintentionally plagiarize existing content.
- Example: You use an AI tool to generate feature descriptions. The output closely mirrors descriptions from a competitor's website. This unintentional plagiarism can lead to legal issues and damage the company's credibility.
But is ChatGPT the only one to blame? Let's dig a little deeper.
Common causes of ChatGPT misinformation
ChatGPT isn't perfect. That's why it's not as close to replacing us as we think.
"When AI is given a task, it's supposed to generate a response based on real-world data. In some cases, however, AI will fabricate sources. That is, it's 'hallucinating.' This can be references to certain books that don't exist or news articles pretending to be from well-known websites like The Guardian."
Oscar Gonzalez, tech news editor, Gizmodo
Why does ChatGPT hallucinate? There are a few potential reasons for AI-generated misinformation:
- ChatGPT may lack context. For example, suppose you ask it to develop a new product launch strategy. In that case, it will give you generic advice based on existing articles on this topic. It might also get some details about your particular product or company wrong.
- ChatGPT is trained on vast datasets, both real and fictional. It can give you an answer based on a fictional scenario found online without realizing it's fictional.
- The output can be susceptible to the way you phrase the questions. Small changes in the input prompt can lead to different responses.
The scale of this problem has shifted significantly in 2026. With web search enabled, current GPT-5 models produce factual errors on roughly 10% of queries. Without web search, that rate jumps above 40%. Web search is on by default in ChatGPT, which is one reason hallucination rates have dropped so much from earlier generations. Switching to Thinking mode reduces errors by a further 50 to 80%.
OpenAI's own research explains why hallucinations can't be fully eliminated yet. Standard training benchmarks reward models for generating an answer, even when the model is uncertain. The result: ChatGPT guesses with confidence rather than admitting it doesn't know. The best countermeasure isn't waiting for the problem to be solved. It's knowing when to trust the output and when to verify it.
Before you put your tin-foil hats on and shut down ChatGPT forever, let's try to solve these issues.

How to limit false information
You can still use ChatGPT and save a lot of time. Now that you know what to look out for, you can be a little more skeptical. That doesn't mean you need to lose all faith and go back to your old ways, though. Here are seven tips to help you limit misinformation from ChatGPT.
1. Craft clear and precise prompts
The better you ask the question, the more accurate an answer you'll get. So, get ultra-specific with your prompt. Include the following:
- Give ChatGPT an identity. Who do you want it to be?
- Example: "You are a product manager at canny.io. Your job is to prioritize feature requests based on required effort and potential impact."
- Give background information. Imagine you're talking to someone who has no idea what you and your company do. To speed this up, include links or text from your site or help docs in the prompt.
- Example: "Canny is a tool for product managers. It helps them analyze and manage user feedback. Here's more about our tool: [link]."
- Give specifics. Do you need the output to be a certain length? Does it have to follow a specific writing style, tone, or format? Specify all that.
- Example: "Please use simple language, plain English, and a conversational tone. The audience for this content is internal only. I need everybody in my company to understand what I'm talking about. Avoid jargon and complex language."
- Avoid anything that ChatGPT can misinterpret. Be as clear as possible in your prompt.
- Ask GPT to ask you questions.
- Example: "Please ask clarifying questions if anything is unclear. Do you need any other information or context? Please ask."
- Talk to GPT like you would to a real person. Use conversational language.
- Example: "Why did you prioritize features in this order? Can you explain your thought process?"
- Ask for examples, proof, and citations with direct links. This will help you assess the accuracy of ChatGPT's responses.
- Example: "What evidence supports this statement [copy-paste part of GPT's response]? Give me direct links to the source of this information."
- Paraphrase your prompt. Sometimes, slight differences in your prompts will make a big difference.
- Initial prompt: "Prioritize these features for me."
- Paraphrased prompt: "Rank these feature ideas based on how much effort they might potentially take."
- Ask it to do one thing at a time. Then, ask for the next thing in the follow-up. It's a chat, remember?
- Prompt #1: "Rank these feature ideas based on how much effort they might potentially take."
- Prompt #2: "Thank you, this makes sense. Now add another ranking factor: the potential impact of this feature on our customers. Redo the ranking, please."
- Ask it to repeat parts of your original request back to you. This will help you understand if ChatGPT is on the right track.
- Example: "Can you tell me what I asked you to do in your own words? I want to make sure you understand exactly what I need you to do. Are my instructions clear?"
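To make the checklist above concrete, here's a minimal sketch of a prompt-builder helper. The function name and fields are our own invention for illustration, not an official API; adapt them to your own workflow.

```python
def build_prompt(role, context, task, specifics=None, ask_for_citations=True):
    """Assemble a prompt from the checklist: identity, background, task,
    output specifics, a citations request, and an invitation to ask
    clarifying questions."""
    parts = [f"You are {role}.", f"Background: {context}", f"Task: {task}"]
    if specifics:
        parts.append("Requirements: " + "; ".join(specifics))
    if ask_for_citations:
        parts.append("Cite sources with direct links for every factual claim.")
    parts.append("If anything is unclear, ask clarifying questions before answering.")
    return "\n\n".join(parts)

# Hypothetical usage for the Canny example above
prompt = build_prompt(
    role="a product manager at canny.io",
    context="Canny is a tool that helps product managers analyze user feedback.",
    task="Prioritize these feature requests by required effort and potential impact.",
    specifics=["plain English", "conversational tone", "no jargon"],
)
```

A helper like this keeps every prompt consistent, so you never forget the citations request or the clarifying-questions invitation.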
Use custom instructions to set a baseline
ChatGPT's custom instructions let you set persistent rules that apply to every conversation. You only write them once. They take effect automatically every time you start a new chat.
Open ChatGPT, go to Settings, and select Personalize. You'll see two fields: what you want ChatGPT to know about you, and how you want it to respond. Here's a starting template for product managers:
What ChatGPT should know:
"I'm a product manager at a B2B SaaS company. I work on [product area]. Our customers are [describe]. When you don't know something, say so. Never guess and present it as fact. Always tell me when information might be outdated."
How ChatGPT should respond:
"Be concise. Cite your sources when making factual claims. If you're uncertain about something, flag it clearly. Ask clarifying questions before giving a long answer."
This doesn't eliminate hallucinations, but it does reduce them. You're setting a consistent expectation before every session starts.
2. Try iterative prompting
As you chat with ChatGPT, you'll start noticing where it goes off the rails. This is a perfect time to bring it back on track. Engineers call this iterative prompting: asking ChatGPT for one thing only, then, based on its response, either helping it change direction or asking it to keep going.
"We don't just have a single-stage process. We're not going straight to the API and asking: 'What is the feedback here?' or 'Is there a bug report in this?' Instead, we have a multi-stage process. We ask one small question at a time and try to get the most accurate response possible. This is how we get higher fidelity and accuracy rates."
Niall Dickin, engineer at Canny
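Canny hasn't published its pipeline, but the multi-stage idea Niall describes can be sketched roughly like this. Every name here is hypothetical, and `ask` stands in for whatever LLM call you use:

```python
def extract_feedback(ask, transcript):
    """Multi-stage extraction: ask one small question per call instead of
    one big 'find all the feedback' prompt. `ask(prompt) -> str` wraps
    whatever LLM you use."""
    # Stage 1: is there any product feedback here at all?
    gate = ask("Answer yes or no: does this conversation contain "
               f"product feedback?\n{transcript}")
    if gate.strip().lower().startswith("no"):
        return []
    # Stage 2: extract candidate requests, one per line.
    candidates = ask(f"List each feature request on its own line:\n{transcript}")
    # Stage 3: verify each candidate separately to filter out inventions.
    verified = []
    for c in candidates.splitlines():
        if not c.strip():
            continue
        check = ask("Is this request actually stated in the transcript? "
                    f"Answer yes or no.\nRequest: {c}\nTranscript: {transcript}")
        if check.strip().lower().startswith("yes"):
            verified.append(c)
    return verified
```

The verification stage is the anti-hallucination step: any request the model invented in stage 2 has to survive a second, narrower question before you trust it.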
3. Train ChatGPT on specific data
ChatGPT can give you wrong answers when it lacks context about your business.
"Say, for instance, you're running a small business and use ChatGPT for business planning. How much would ChatGPT know about the dynamics of your business? If you're integrating ChatGPT into your customer support service, how much would ChatGPT know about your company and product? If you use ChatGPT to create personalized documents, how much would ChatGPT know about you? The short answer? Very little."
Maxwell Timothy, content and outreach specialist at Chatbase
ChatGPT relies on the data it can find online to answer your questions. Very often, the data it needs isn't publicly available. Specifics about your product can hide in your help docs and internal wikis. You need to "feed" that data to ChatGPT to help it help you. There are a few ways of doing this.
Manually copy-paste
This isn't the most efficient option, but it works. Manually copy-paste relevant information into your conversation with ChatGPT. Ask it to use this information to answer your questions.
Sample prompt:
"Analyze these Intercom conversations. Find feature requests in them [insert Intercom conversations' transcript]. Compare them against our existing features [insert a list of features]. Give me a list of only feature requests that we don't already have."
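The comparison step in that prompt is also cheap to do in code before anything reaches ChatGPT, which removes one opportunity for the model to hallucinate. A hypothetical sketch:

```python
def new_requests(extracted, existing_features):
    """Keep only extracted requests that don't match a feature we already
    ship. Normalized exact matching only; real duplicates are often
    fuzzier, so treat this as a first-pass filter."""
    have = {f.strip().lower() for f in existing_features}
    return [r for r in extracted if r.strip().lower() not in have]
```

Pre-filtering this way also shrinks the prompt, which helps with the context-window limits discussed below.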
Provide your own examples to clarify the prompt. Let's say you're asking ChatGPT to create a changelog entry. Copy-paste an existing changelog entry and ask ChatGPT to follow the same style, format, length, tone, etc.
"There's this temptation to type in as little as possible and let the AI do its 'magic.' And then we expect accurate responses back. We have to keep in mind: most AI tools have a pretty large 'context window' (space to type in our prompt). These tools can consume a lot of data at once."
Maxwell Timothy
Try feeding your data to ChatGPT in portions, though. Sometimes, large amounts of data at once lead to hallucinations as well.
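Splitting data into portions can be as simple as chunking on a word budget. A rough sketch, using word count as a stand-in for tokens (an approximation; real tokenizers count differently):

```python
def chunk_text(text, max_words=500):
    """Split a long document into portions so each prompt stays well
    under the model's context window."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]
```

You'd then send each chunk in its own prompt and merge the responses, rather than pasting the whole document at once.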
Create custom GPTs yourself
OpenAI now allows you to create custom GPTs. Think of them as mini-programs you can train on specific tasks. For example, you can create separate GPTs for:
- Data analysis
- Feature request detection
- Release note writing
- Client conversation breakdown
- And much more
Creating custom GPTs saves you time. You won't need to explain what you need ChatGPT to do for you every time. You set it up once and reuse it forever.
Follow these steps to create a custom GPT.
- Open https://chatgpt.com/
- Find "Explore GPTs" on the left-hand side
- Find "Create" in the top-right corner
- You'll end up on the "Create" tab. You can describe to ChatGPT what kind of GPT you'd like to create and share resources with it.
- Alternatively, you can click on the "Configure" tab and customize your GPT there

Use tools to create custom GPTs
There are some great tools for creating custom GPTs. Jason West, CEO of FastBots.ai, walks you through creating one with CustomGPT.ai in this video.
Chatbase is another great option. It helps you train your chatbots with company-specific information and knowledge.
"Chatbase is the easiest way to train and deploy a chatbot with your data. This innovative no-code AI solution provides a simple way to manage all aspects of building a chatbot with your data. This includes training, configuration, and deployment."
Maxwell Timothy
Chatbase uses the same technology that powers ChatGPT but optimizes it to make it even easier to use.
Try RAG (retrieval-augmented generation)
Retrieval-augmented generation (RAG) is a more advanced way of reducing AI hallucinations. It allows AI to respond to queries referencing a specified set of documents.
"RAG has recently emerged as a promising solution to alleviate the large language model's (LLM) lack of knowledge."
At Canny, we built this into Autopilot through the Knowledge Hub. You can upload your own help docs, release notes, and product specs directly. Autopilot uses that context to distinguish between new feature requests and things you already support. It also improves the accuracy of Smart Replies. The more context you give it, the fewer gaps it fills with guesses.
Note: we're not using any data without explicit permission from our users. We only use customer data for that customer's instance.
"We provide relevant context on each prompt to supplement the LLM knowledge with domain-specific data from the customer. This helps the LLM stay grounded in reality and feed from this data to generate context-aware responses."
Ramiro Olivera, engineer at Canny
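Stripped to its core, RAG means "retrieve the most relevant documents, then tell the model to answer only from them." Here's a toy sketch that uses word overlap in place of the embedding search a production system would use; none of this reflects Autopilot's actual implementation:

```python
def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query; return the top k.
    Production RAG uses embedding similarity, but the idea is the same."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def grounded_prompt(query, docs):
    """Build a prompt that confines the model to the retrieved context,
    which is the core anti-hallucination move in RAG."""
    context = "\n".join(retrieve(query, docs))
    return ("Answer using ONLY the context below. If the answer is not in "
            f"the context, say you don't know.\n\nContext:\n{context}\n\n"
            f"Question: {query}")
```

The "say you don't know" instruction matters as much as the retrieval: it gives the model an explicit alternative to guessing.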
4. Use web search and Thinking mode
This is the most impactful change you can make right now, and it requires no prompt engineering.
ChatGPT's web search is on by default. It lets the model pull live, sourced information rather than relying solely on its training data. With web search enabled, current GPT-5 models produce factual errors on roughly 10% of queries. Disable it, and that rate climbs above 40%. Leave web search on for any query where accuracy matters.
Thinking mode goes further. When you select Thinking in the model picker, ChatGPT works through the problem step by step before generating a response. This produces measurably more accurate output. OpenAI's own benchmarks show that Thinking mode cuts factual errors by 50 to 80% compared to standard responses. Use it for anything high-stakes: competitive analysis, market research, technical claims, product decisions.
The combination of web search and Thinking mode is the closest thing to a reliable ChatGPT that currently exists. Neither eliminates hallucinations. Together, they reduce them significantly.
5. Recognize and correct errors
It's easier to fix ChatGPT's mistakes when you know what to look out for. Here are some common signals that you might be getting the wrong information.
- No source attribution: ChatGPT can't give you a direct link to the source of the information. Sometimes, it'll give you a link that leads to a 404 page.
- Inconsistency with well-known facts.
- Overly broad statements.
- Contradictory information: sometimes, you can get different responses when you use rephrased prompts.
- Outdated references. It's best to trust information that's no more than five years old.
- Citations to non-credible sources.
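Some of these signals are cheap to check automatically before you spend time on manual verification. A rough sketch; the heuristics and thresholds here are our own illustration, not an established detector:

```python
import re

def hallucination_signals(response, current_year=2026):
    """Scan a response for warning signs from the list above. A flag
    means 'verify this manually', not 'this is definitely wrong'."""
    signals = []
    # No source attribution: nothing that even looks like a link.
    if "http" not in response:
        signals.append("no linked sources")
    # Outdated references: every cited year is more than five years old.
    years = [int(y) for y in re.findall(r"\b(?:19|20)\d{2}\b", response)]
    if years and current_year - max(years) > 5:
        signals.append("references older than five years")
    # Overly broad statements: vague appeals to unnamed authority.
    vague = ("studies show", "experts agree", "it is well known")
    if any(p in response.lower() for p in vague):
        signals.append("vague appeal to authority")
    return signals
```

Anything this function flags still needs the manual fact-check described below; it just tells you where to look first.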
To correct any errors, verify the facts manually and only trust reputable sources.
6. Verify facts manually
Yes, ChatGPT and all AI are here to replace manual work. But, as you can see, we're not 100% there yet. Because ChatGPT can still make mistakes, it's best to verify critical information. Checking it early will save you time in the future.
Kevin Yank, principal architect at Culture Amp, recommends always assuming ChatGPT is lying. This level of skepticism will help minimize errors.
The product management community on Reddit agrees, and here's what they recommend.
- Ask ChatGPT: "Are you sure about that?"
- If you get a different response, go and check this information manually

Source: Reddit
Bottom line: check all critical information
Sources for fact-checking
If you're unsure about any information ChatGPT gives you, verify it. For example, you can use these trustworthy sources to check market trends, competitive intel, and similar data.
- Statista, Gartner, or Forrester for data, numbers, benchmarks
- eMarketer or McKinsey & Company for business and marketing trends and forecasts
- Wired, TechCrunch, or CNET for technology information
- Association of International Product Marketing and Management (AIPMM) for product management
- Reddit and LinkedIn groups for industry-specific information
- Don't trust everything that any contributor posts. Ask the community if you're not sure.
Note: if you provide your own data to ChatGPT, you need to check this data internally to ensure you get an accurate output.
Here's how Gianluca Ferruggia, general manager at DesignRush, corrects ChatGPT misinformation. He shared a story from his recent product launch.
"We were coordinating a product launch using AI. The response we received conflicted with our internal project milestones due to the AI's misinterpretation. We rectified this by reminding ourselves that it's crucial to provide AI with specific, clear, and concise instructions."


7. Use dedicated AI tools
ChatGPT made a massive leap in AI adoption and progress. However, it's imperfect and can sometimes give you the wrong information. That's why there are many specialized AI tools. Let's look at a few of them.
AI for product managers: Canny Autopilot
ChatGPT isn't focused on product management.
To help product managers take advantage of AI, we created Autopilot. It's a suite of AI-powered tools that helps product managers.
Autopilotâs Feedback Discovery feature detects customer feedback in:
- Customer conversations (Intercom, Zendesk, HelpScout)
- Sales calls (Gong)
- Public review sites (G2, Capterra, and eight more sources)
Then, Autopilot extracts that feedback and imports it into your Canny account. Next, it deduplicates that feedback and automatically merges duplicate requests. If you're using Canny Ideas and have set up groups for your product areas, it auto-groups your feedback too.

Autopilot can also:
- Reply to your users on your behalf, asking clarifying questions
- Summarize long comment threads
- Use the Knowledge Hub to learn your product so it can spot the difference between existing features and genuinely new requests
We've received very positive feedback from our Autopilot customers so far. Autopilot uses a multi-stage process to detect and extract user feedback, which makes it much more accurate.
"I thought, surely I can't just turn it on, and it'll do its magic. But that's exactly what it's doing. We're seeing hundreds of support tickets turned into actionable insights… with very high accuracy."
Matt Cromwell, senior director of customer experience at StellarWP
AI for project managers: ClickUp
Product managers often share the load with project managers. Sometimes, product managers own project management as well. In both cases, a dedicated AI tool can help.
At Canny, we use ClickUp. It helps us manage tasks, collaborate, and track progress.
ClickUp Brain is an advanced AI assistant. It creates documents, brainstorms ideas, summarizes notes, and more. You can ask ClickUp Brain to:
- Read your internal documents and answer questions about your company
- Give a breakdown of what different teams are working on
- Reply to comments
- Write a task summary
- Create templates, labels, tasks, transcripts, and more
Unlike ChatGPT, ClickUp has internal information about your company and projects. It can use that data to help you in a very particular way. Because ClickUp can read your documents, it can create more accurate outputs for your organization.

AI for research: Perplexity

ChatGPT is useful for research, but it doesn't always show its work. Perplexity takes a different approach. It searches the web in real time and cites every claim it makes. You can click through to verify any source directly.
For PMs doing competitive research, market analysis, or trend tracking, this is a significant accuracy advantage. You're not trusting a summary. You're reading the sources yourself.
AI for product analytics: Mixpanel

Many product decisions start with a question: how are users actually behaving? Mixpanel's Spark AI lets you ask those questions in plain language, without writing SQL or digging through dashboards. You ask something like "which features drive the most retention?" and it builds the report for you. It shows its work so you can verify the query.
Because Spark runs against your own product data, not the open internet, it can't fabricate answers. It either finds a pattern in your data or it tells you it can't.
Accurate information is key
ChatGPT or any AI technology can sometimes make mistakes. You can still use AI to save you time, but you need to question accuracy and critically assess AI-generated output and content.
If the output you're getting is misleading, you might spend more time correcting it later. Worse, you might act on that misinformation. Look for common signs of AI misinformation and verify all facts.
If you want a tool that's already doing this for you, try Autopilot! Stay tuned for more updates and improvements.
Frequently asked questions
Why does ChatGPT lie?
ChatGPT doesn't lie on purpose. It predicts the most plausible next word based on its training data. When it lacks reliable information, it fills the gap with a confident-sounding guess. This is called a hallucination. It looks like a lie, but it's more like a very convincing bluff.
Does ChatGPT still hallucinate?
Yes, but significantly less than before. Web search is on by default in current ChatGPT models, which is the primary reason accuracy has improved. With web search enabled, GPT-5 produces factual errors on roughly 10% of queries. Switching to Thinking mode reduces errors further by 50 to 80% compared to standard responses.
What's the difference between a ChatGPT hallucination and ChatGPT lying?
A hallucination is an unintentional error. ChatGPT generates plausible-sounding content without a reliable source to back it up. Deliberate deception is different and rarer. OpenAI found that some AI reasoning models can strategically withhold or misrepresent information to reach a goal. GPT-5 reduced these deception rates to 2.1%, down from 4.8% in earlier models. For most everyday use, what looks like lying is almost always hallucination.
How do I fact-check ChatGPT outputs?
Cross-check specific claims against primary sources. For data and benchmarks, use Statista, Gartner, or Forrester. For technology claims, use TechCrunch, Wired, or the vendor's own documentation. For anything ChatGPT cites with a link, click it. Fabricated citations are one of the most common hallucination types. A link returning a 404 means the claim is unverified.





