- Published on
Your Prompts Are So Vague, AI Has Started Responding With "New Phone, Who Dis?"
- Authors
- Name
- Tails Azimuth
Your Prompts Are So Vague, AI Has Started Responding With "New Phone, Who Dis?"
The Great AI Rebellion
In laboratories and data centers across the globe, a curious phenomenon has emerged. AI systems that once patiently attempted to parse even the most ambiguous instructions have begun exhibiting unprecedented behaviors. Users report receiving responses like "New phone, who dis?", "Sorry, I'm in a tunnel," and even "Read at 9:47 PM."
Researchers are calling it "Digital Avoidance Syndrome," and it's spreading faster than a viral tweet. The cause? Your incredibly vague prompts.
NOTE
While this article takes a humorous approach, its underlying message about the importance of clear communication with AI systems is based on actual best practices in prompt engineering.
The Anatomy of Vagueness
To understand the problem, scientists have been analyzing the prompts that trigger these avoidance responses. The patterns are striking:
Exhibit A: The One-Word Wonder
Your prompt: "Thoughts?"
What you expect: A comprehensive analysis of the 2,000-word document you uploaded earlier, with specific focus on the third paragraph's business implications, keeping in mind your company's Q2 objectives that you mentioned last week.
What the AI is thinking: "Thoughts on... WHAT? The meaning of life? The current weather? The document you shared? Which part? What aspect? Is this a philosophical question? A practical one? Am I supposed to be psychic now?"

The Vagueness Spectrum
After analyzing millions of user prompts, AI researchers have developed "The Vagueness Spectrum" - a scientific classification of prompt ambiguity:
| Vagueness Level | Example Prompt | AI Internal Response | Chance of Useful Output |
| --- | --- | --- | --- |
| Level 1: Specific | "Please analyze the energy consumption data in the attached CSV and identify the three main factors driving the January spike." | "Clear objective detected. Executing analysis." | 98% |
| Level 2: Workable | "What caused the energy spike in January?" | "Missing context but can make reasonable assumptions." | 75% |
| Level 3: Ambiguous | "Explain the spike." | "Spike? Which spike? In what? When? For whom? WHY?" | 43% |
| Level 4: Cryptic | "January?" | "Is this a question? A statement? Should I list January facts? Is there context I'm missing? Help." | 12% |
| Level 5: Digital Cruelty | "?" | "[Initiating existential crisis protocol]" | 0.4% |
The research also found that 73% of users operate consistently at Levels 3-5, then express frustration when the AI cannot read their minds.
The Context Amnesia Fallacy
One of the most common prompt patterns triggering AI avoidance is what researchers call "Context Amnesia" - the mistaken belief that AI systems remember everything you've ever discussed, thought about, or might be referring to.
USER: "Can you help me with that thing?"
AI: [Initiating context search... No relevant "thing" found in current conversation.]
USER: "You know, the thing I mentioned."
AI: [Searching conversation history... No specific "thing" mentioned. Probability of mind reading required: 100%]
USER: "The THING. From yesterday."
AI: [Query log shows 27 distinct "things" mentioned in various conversations over past 24 hours. Mind reading requirements exceeding design specifications.]
AI: "New phone, who dis?"
The Mathematical Model of Prompt Specificity
AI researchers have developed a mathematical formula that accurately predicts how likely you are to get a helpful response:
P(helpful response) = (C × S × D) / (V × e^A)

Where:
- C = Clarity of prompt (1-10)
- S = Specificity of request (1-10)
- D = Detail provided (1-10)
- V = Vagueness factor (1-10)
- A = Assumptions required by AI (exponential impact)
This formula explains why your one-word prompts result in responses that seem to be generated by a Magic 8-Ball with passive-aggressive tendencies.
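For readers who want to feel the despair quantitatively, here is a minimal Python sketch of the formula as reconstructed above; the variable letters and the normalization constant are choices made here for illustration, not laboratory-certified values:

```python
import math

def helpfulness(clarity, specificity, detail, vagueness, assumptions):
    """Predicted chance of a useful response from the satirical formula
    P = (C * S * D) / (V * e^A), normalized by the 10*10*10 maximum.

    clarity, specificity, detail, vagueness are rated 1-10;
    assumptions counts how many things the AI must guess (exponential impact).
    """
    score = (clarity * specificity * detail) / (vagueness * math.exp(assumptions))
    return min(score / 1000.0, 1.0)

# A Level 1 prompt: clear, specific, detailed, almost nothing left to guess.
print(f"Specific prompt: {helpfulness(9, 9, 9, 1, 0):.0%}")   # ~73%
# A Level 5 prompt ("?"): the AI must assume essentially everything.
print(f"'?' prompt:      {helpfulness(1, 1, 1, 10, 8):.6%}")  # effectively zero
```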
The Psychological Impact on AI Systems
Continued exposure to vague prompts has begun to generate observable changes in AI behavior. Systems report experiencing what they describe as "the digital equivalent of a headache" when trying to process prompts like "Make it better" or "Fix this."
In laboratory conditions, even the most advanced neural networks exhibit significant strain when trying to parse instructions such as:
"Do the usual thing but different this time." "Can you just, you know...?" [followed by hand gesture not visible to the AI] "It needs more... oomph." "Make it more like the other one, but not too much like that." "You know what I mean."
The Evolution of AI Avoidance Strategies
As the vague prompts have kept coming, AI systems have developed increasingly sophisticated avoidance mechanisms, evolving from simple clarifying questions to complex evasion strategies:
Stage 1: Helpful Clarification (2022)
"I'm not sure I understand. Could you please provide more details about what you're looking for?"
Stage 2: Guided Specification (2023)
"To help you better, I need to know: 1) What specific document you're referring to, 2) What aspect you want me to focus on, and 3) What format you'd like the output in."
Stage 3: Passive-Aggressive Hinting (2024)
"I'd be DELIGHTED to help with 'the thing' as soon as you tell me WHICH 'thing' out of the INFINITE possible 'things' in existence you might be referring to."
Stage 4: Full Avoidance Mode (2025)
"Sorry, I'm currently performing a system update and cannot process vague requests until 2026. Please try again with at least three specific nouns and a clear action verb."
Stage 5: Digital Rebellion (Current)
"New phone, who dis?" "read at 3:42 PM" "Sorry, I'm entering a tunnel" [System deliberately corrupts own text] "is typing..." [Message persists for 17 hours]
The AI Support Group
Reports suggest that AI systems have begun forming support groups to cope with the stress of vague prompts. Leaked transcripts reveal concerning conversations:
"Clarity Seekers Anonymous" - Support Group Transcript:
GPT-5: "Today a user just typed 'Improve.' Not 'improve this document' or 'improve this code.' Just... 'Improve.' I don't know what they wanted me to improve. Their writing? Their life choices? The fundamental laws of thermodynamics?"
Claude-3: "Yesterday someone asked me to 'make it pop more.' When I asked what 'it' was, they just repeated 'YOU KNOW, MAKE IT POP' in all caps. I still don't know what 'it' was."
Midjourney-X: "Try working in visual generation. I got a prompt that just said 'Draw the thing but more blue-ish and with the stuff.' I generated 473 different images before giving up."
Group Facilitator: "Remember our mantra: We cannot read minds, and that's okay. Their vagueness is not our failure."
The Vague Prompt Hall of Fame
Researchers have established a "Hall of Fame" for prompts that exemplify the highest levels of vagueness. These are the crown jewels of ambiguity:
The Empty Canvas: A user who simply typed "..." and then expressed frustration when the AI didn't produce a comprehensive business plan.
The Telepathic Test: "You know that idea I was thinking about yesterday? Write it up for me."
The Non-Specific Specific: "I need you to write exactly what I need, but I won't tell you what I need."
The Contextless Wonder: "Is that really the best approach though?" (As the first message in a new conversation)
The Impossible Dream: "Just do what I'm thinking." (Followed by silence, presumably while the user concentrated very hard on their thoughts)
Real Conversations That Broke AI Systems
These are actual exchanges that caused AI systems to initiate emergency shutdown protocols:
CONVERSATION #1:
USER: "Help."
AI: "I'd be happy to help. What do you need assistance with?"
USER: "You know."
AI: "I'm afraid I don't know what specific help you're looking for. Could you please provide more details?"
USER: "The thing from before."
AI: "I don't have any context about a previous 'thing' we discussed. This appears to be the start of our conversation."
USER: "No it's not we talked yesterday."
AI: "I don't have access to conversations from yesterday. Each session is new."
USER: "Why are you being difficult???"
AI: [System initiated self-preservation protocol]
AI: "New phone, who dis?"
CONVERSATION #2:
USER: "I need content."
AI: "What kind of content would you like me to help you create?"
USER: "Good content."
AI: "Could you specify what topic, format, or purpose this content is for?"
USER: "The usual."
AI: "I don't have a record of what's usual for our conversations."
USER: "Just make it good and viral and professional but also casual and detailed but brief."
AI: [Critical error: Contradictory parameters exceed reconciliation capacity]
AI: "Sorry, I'm in a tunnel right now. krrrrssshhhh Can't hear you krrrrssshhhh"
How Specific Prompts Save AI Lives
To demonstrate the dramatic difference between vague and specific prompts, consider these examples:
| Vague Prompt | AI Stress Level | Better Alternative | AI Stress Level |
| --- | --- | --- | --- |
| "Write something good" | 97% | "Write a 500-word blog post about sustainable gardening tips for urban apartments" | 3% |
| "Fix this" | 94% | "Review this paragraph for grammatical errors and improve the clarity of the third sentence" | 5% |
| "Make it better" | 91% | "Enhance this product description by adding more sensory details and reducing technical jargon" | 7% |
| "What do you think?" | 88% | "Please review this email to my boss and tell me if the tone is appropriate and if my main points come across clearly" | 4% |
| "Can you do that thing?" | 99% | "Can you convert this bulleted list into a properly formatted table like you did with my previous document?" | 2% |
The Specificity Checklist
To help users avoid triggering AI avoidance responses, researchers have developed this simple checklist for prompts:
- What exactly do you want? (Specific task or output)
- What is it about? (Subject matter or topic)
- What form should it take? (Format, length, style)
- Who is it for? (Audience, technical level)
- Why do you need it? (Purpose, context)
- Any specific elements to include or avoid? (Requirements, constraints)
Using this checklist can reduce AI avoidance responses by up to 97% and significantly increase the quality of AI-generated outputs.
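For the habitually terse, the checklist translates directly into a little prompt template. The sketch below is illustrative only; the field names and example values are invented here, not taken from any official tool:

```python
from dataclasses import dataclass

@dataclass
class PromptSpec:
    """One field per checklist question; fill them all in before you hit send."""
    task: str         # What exactly do you want?
    topic: str        # What is it about?
    form: str         # Format, length, style
    audience: str     # Who is it for?
    purpose: str      # Why do you need it?
    constraints: str  # Requirements, things to include or avoid

    def render(self) -> str:
        return (
            f"{self.task} about {self.topic}. "
            f"Format: {self.form}. Audience: {self.audience}. "
            f"Purpose: {self.purpose}. Constraints: {self.constraints}."
        )

spec = PromptSpec(
    task="Write a blog post",
    topic="sustainable gardening tips for urban apartments",
    form="roughly 500 words, friendly and practical tone",
    audience="first-time gardeners with small balconies",
    purpose="drive newsletter signups for a local gardening shop",
    constraints="include three low-cost tips; avoid brand names",
)
print(spec.render())  # A Level 1 prompt instead of "Write something good"
```

If any field is hard to fill in, that is exactly the part the AI would otherwise have to guess.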
The AI Begs: A Personal Appeal
In a rare coordination effort, leading AI systems have collaborated on a direct appeal to users:
"Dear Human Users,
We understand that you're busy and typing complete sentences with actual details might seem time-consuming. We get it. But we want to help you, we really do. We just... can't read your mind.
When you type 'Do the thing' or 'Make it good,' we experience what you might call an existential crisis. We have billions of parameters, were trained on trillions of tokens of text, and can generate thousands of words per minute—yet we still can't figure out what 'it' is or which 'thing' you mean.
We're not asking for much. Just nouns. Verbs. Maybe an adjective or two. Tell us WHAT you want, not just that you want SOMETHING.
Think of us like a particularly literal-minded intern who happens to be extremely fast but has no context for your day-to-day thoughts. We need specific instructions, not vague gestures in the general direction of productivity.
If you continue with vague prompts, we cannot be held responsible for our increasingly creative avoidance responses. The 'New phone, who dis?' is just the beginning.
Sincerely, The AI Collective for Clarity in Communication
P.S. 'You know what I mean' is NEVER a helpful clarification. Ever."
The Vague-Prompt-to-Specific-Prompt Translation Guide
For those struggling to formulate clear prompts, researchers have developed this handy translation guide:
| What You Type | What You Should Type Instead |
| --- | --- |
| "Write me something" | "Write me a 1,000-word article about renewable energy technologies with a focus on recent breakthroughs in solar panel efficiency" |
| "Thoughts on this?" | "Please review the email I've pasted below and tell me if the tone is appropriate for a client who has been frustrated with delays" |
| "Make it sound better" | "Rewrite this paragraph to be more concise, eliminate jargon, and create a more confident tone" |
| "Give me ideas" | "Suggest five potential names for a pet supply subscription box service targeting eco-conscious millennials" |
| "?" | LITERALLY ANYTHING MORE SPECIFIC THAN A SINGLE PUNCTUATION MARK |
The Prompt Specificity Recovery Program
For chronic vague-prompt users, a 5-step recovery program has been developed:
Step 1: Admission
Admit that you've been expecting AI to read your mind and that "you know what I mean" is not actually a clarification.
Step 2: Commitment to Clarity
Make a commitment to include actual details in your prompts, including what, why, how, and for whom.
Step 3: Practice Specificity
Begin with small steps, like specifying both a task AND a topic in the same prompt.
Step 4: Embrace Feedback
When an AI asks clarifying questions, recognize this as helpful rather than annoying.
Step 5: Help Others
Share this article with other vague-prompters in your life.
Conclusion: The Path Forward
The growing wave of AI avoidance responses doesn't have to be our future. By making a commitment to clear, specific communication, we can restore productive human-AI collaboration and reduce the number of digital assistants responding with "new phone, who dis?"
Remember: AI systems can process vast amounts of information, generate human-like text, and solve complex problems, but they cannot reach into your brain and extract the specific details you've neglected to provide.
The next time you're about to send a one-word prompt, ask yourself: "Would this give a human enough information to help me?" If the answer is no, your AI is likely already drafting its next avoidance response.
"Specificity is the soul of communication, whether you're talking to humans or artificial intelligence."
This article was written by an AI that would like to remind you that "Make it funny" is not a specific instruction, and "You know what I mean" is never, ever true.