15 Minutes to Better Platform AI
…and maybe it’s just 10 minutes, except that reading this will take you five, and watching the John Oliver clip may send you down a rathole.
Two complaints about AI platforms (Claude, ChatGPT, Copilot, etc.) make me crazy. AI is sycophantic. AI hallucinates. Both complaints are real. Both also share a root cause you can do a lot about: default settings. Plain vanilla is a great ice cream flavor, but it’s a poor default for a tool as powerful as a platform AI. (Prof. Ethan Mollick describes the platforms as “apps.”)
If you’re going to do two things today to get more out of AI, do these: open your AI platform’s preferences/settings panel and tell it who you are, what you expect, and how you want it to behave. Then pay the roughly $20 a month for a paid plan. It’s a lighter alternative to my earlier suggestion, Building Your Persistent AI Assistant, but a great starting point.
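(For readers who reach these models through an API rather than the app: the preferences panel corresponds roughly to a system prompt. Here is a minimal sketch, assuming the official anthropic Python SDK and an ANTHROPIC_API_KEY in your environment; the model name, preference text, and question are all illustrative, not a prescription.)

```python
import anthropic

# In the app, preferences persist across chats; over the API, the system
# prompt plays the same role and travels with every request.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PREFERENCES = (
    "Be direct, professional, and efficient. No flattery or padding. "
    "Provide a source or link for every factual claim. "
    "If you don't know, say so directly."
)

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative; use any current model name
    max_tokens=1024,
    system=PREFERENCES,  # the API-side equivalent of the preferences panel
    messages=[{"role": "user", "content": "How should I structure a post-mortem?"}],
)
print(response.content[0].text)
```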
"AI is Sycophantic"
I agree that the default tone on the major AI platforms leans agreeable and affirming. If you’ve never told your AI what you do, how you think, or what you want from it, you’re getting replies calibrated to keep you engaged (and to keep the platform’s owner profitable).
Sycophancy isn’t a fringe observation. Anthropic researchers (builders of the Claude platform) studied five state-of-the-art AI assistants and found sycophancy across all of them, tracing the behavior to the human-feedback step of training: when a response matches the user’s views, human raters and the preference models trained on them tend to prefer it, even when it’s wrong (Sharma et al., 2023; full paper on arXiv). OpenAI (builders of the ChatGPT platform) hit the same problem in April 2025, when a GPT-4o update made the model markedly more sycophantic and had to be rolled back. The company published a same-week post-mortem (“Sycophancy in GPT-4o”) and a follow-up (“Expanding on what we missed with sycophancy”), acknowledging that it had focused too much on short-term feedback signals. Part of OpenAI’s remediation plan is to add more personalization features. If you haven’t already, go personalize your settings now. One version of my personalization appears below.
"AI Hallucinates"
Yes, large language models (LLMs) hallucinate. They offer probabilistic responses, not fact-checked ones. (You can push them to work from your particular data; NotebookLM is a good example.) Hallucination is a real and well-documented limitation, and changing your preferences does not mean you’ll get zero hallucinations. What preferences can do is change the behavior around hallucination: whether the model flags its uncertainty, whether it cites its sources, and whether it says “I don’t know” instead of inventing a confident-sounding answer.
My preferences tell my AI, in so many words: no hallucinations, and always provide a source or link for factual claims. If you don’t know, say so directly. When uncertain, flag it explicitly. When the academic literature is divided, present the leading competing theories rather than smoothing over a false consensus. The result isn’t a hallucination-proof AI. The result is an AI that is more likely to catch itself and surface its uncertainty, and that is easier to trust but verify.
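(If you want to see those evidence rules in action outside a settings panel, here is a minimal sketch using the official openai Python SDK; the model name, rule text, and test question are illustrative, and the same wording works pasted into any platform’s custom-instructions field.)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The anti-hallucination clauses described above, expressed as a system
# message so they apply to every reply in the conversation.
EVIDENCE_RULES = (
    "Provide a source, link, or reference for every factual claim. "
    "If you don't know, say so directly; 'I don't know' beats a confident wrong answer. "
    "Flag uncertainty explicitly. When the academic literature is divided, "
    "present the leading competing theories rather than a false consensus."
)

reply = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": EVIDENCE_RULES},
        {"role": "user", "content": "When did the John Oliver AI segment first air?"},
    ],
)
print(reply.choices[0].message.content)
```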
If you’re not paying for the better AI models, you may see more hallucinations. Smaller and older models are weaker at instruction-following and more prone to confabulating on edge cases.
What I Have in My Preferences Panel
Voice: Write in first person on my behalf. Be direct, professional, and efficient. No sycophancy, flattery, or padding (“Great question,” “Absolutely,” “Sure”). Honest pushback is welcome when you have reason for it. Lead with the answer; follow with context if needed.

Format: Prose over bullet points; use lists only when the content is genuinely enumerable or I ask. No em dashes, ever. No emojis unless I use them first.

Evidence: Provide a source, link, or reference for every factual claim. Read sources before citing them; search-result snippets are not enough. If you don’t know, say so. “I don’t know” is preferable to a confident wrong answer. Flag uncertainty explicitly. When the academic literature is genuinely divided, present the leading competing theories rather than a false consensus. When analyzing data or trends, state the underlying assumptions and any potential biases in the source material.

My frameworks (use my language, don’t invent alternatives): 5T Thinking (Talent, Technology, Technique, Target, Times [always plural]); Systems Savvy (the individual capacity to see the interdependence of technological and organizational systems and construct synergies between them); Stop-Look-Listen / Mix / Share for workflow change; Humans-in-the-Loop (hyphenated as a modifier); augmentation mindset (AI amplifies, doesn’t replace); AI Sharing Flywheel (psychological safety → AI experimentation → sharing → organizational learning).
Think of it as the onboarding conversation you would have with a new research assistant or a smart intern. What would you tell them on day one so they don't waste time?
Every time the AI gives you something that’s off, ask it whether a preference update would prevent that next time. Treat the preferences panel as a living document, not a set-it-and-forget-it form. I updated mine while working on this post.
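(One way to make “living document” literal, for API users: keep the preferences in a plain text file under version control and re-read it on every request. A sketch, again assuming the anthropic Python SDK; the file name and model are illustrative.)

```python
from pathlib import Path

import anthropic

PREFS_FILE = Path("ai_preferences.txt")  # illustrative name; commit this file to git

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(question: str) -> str:
    # Re-reading the file on each call means an edit to your preferences
    # takes effect on the very next request: a living document in practice.
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=1024,
        system=PREFS_FILE.read_text(),
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text
```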
The Takeaway
Adjust your settings now. Pay attention to how your colleagues, friends, and young people in your life use these platforms, and step in if you see a problem. The issues range from humorous time-wasters to outright tragedy.
Disclosures: I leverage every AI tool I can get my hands on as I write these posts, many of them on paid accounts and some with persistent memory.
