How People Are Using ChatGPT: Insights from the Largest Consumer Study to Date
How People Are Using ChatGPT: Evidence from 1.5 Million Conversations (Deming & OpenAI Economic Research Team, 2025, NBER), the largest consumer study of ChatGPT to date, offers valuable insight into how people engage with AI in everyday life.
It confirms what many already suspected: ChatGPT has moved from novelty to habit, from experimentation to integration.
According to the OpenAI study:
ChatGPT is used more for personal than professional tasks. Everyday questions, guidance, and writing support top the list.
Writing and information-seeking dominate. Drafting, summarizing, and clarifying are the most common functions.
Adoption is diversifying. The gender gap is narrowing, and use is growing in lower-income countries.
Taken together, these findings paint a picture of an AI system woven into the daily texture of human life. It is no longer an experiment on the margins; it is infrastructure in the making.
What the Study Misses
The OpenAI/NBER study (2025) is invaluable, but it also leaves deep questions unanswered. Usage ≠ Trust. High adoption does not mean high reliability. Fluency can mask error. Many AI outputs are ungrounded or misleading — yet sound authoritative. The study measures how often people use ChatGPT, not whether its information is trustworthy.
Similarly, by categorizing usage as “writing” or “guidance,” the study obscures stakes. Drafting a birthday poem and drafting workplace policy both count as “writing,” but the risks of misinterpretation are vastly different.
And most critically, usage numbers cannot reveal power dynamics. AI systems embed training data, corporate choices, and cultural biases. These shape what meanings are amplified, which ones are erased, and who benefits.
What to Watch For
Building on the gaps in the OpenAI study, my work tracks additional risks:
Semantic Drift at Scale: How quickly do key terms bend when amplified by AI?
Manipulation Risks: Who benefits from shifts in meaning — corporations, states, or movements?
Trust Gaps: Do users mistake fluency for authority?
Cultural Variation: Are some languages or communities more vulnerable than others?
Over-Reliance: Does repeated use blur the line between tool and surrogate authority?
Language as Infrastructure
Where the OpenAI/NBER report (2025) measures how often people use ChatGPT, my concern is at what cost.
I treat language as public infrastructure — as essential as roads or power grids. Just as code cannot silently redefine the meaning of “true” without breaking every program, societies cannot silently shift the meanings of “consent,” “contract,” or “safety” without breaking trust.
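The code analogy above can be made concrete with a minimal sketch. Everything here is illustrative, not drawn from the study: the function name is_valid and the "consent" field are hypothetical stand-ins for any shared definition that downstream code relies on.

```python
# Sketch of the essay's analogy: silently redefining a shared meaning
# changes what every downstream caller accepts, without any visible error.

def is_valid(record):
    # Original, strict meaning: "valid" requires explicit consent.
    return record.get("consent") is True

assert is_valid({"consent": True})
assert not is_valid({"consent": False})

# Now the definition drifts: the name stays the same, the meaning weakens.
def is_valid(record):  # deliberate shadowing of the earlier definition
    # Drifted meaning: merely mentioning consent counts as "valid".
    return "consent" in record

# The same caller now accepts a record the original meaning rejected,
# and nothing in the program signals that the meaning changed.
assert is_valid({"consent": False})
```

Nothing breaks loudly here, which is the point: the drift is only visible if the two definitions can be compared side by side.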
That is why my research focuses on:
Making semantic drift visible.
Showing competing definitions side by side.
Attaching provenance to meaning: who changed it, when, and why.
Preserving dissent and creativity while still anchoring trust.
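The list above can be sketched as a data structure. This is a hypothetical illustration of what "provenance attached to meaning" might look like; the names TermRevision and Glossary, and the sample entries, are my own assumptions, not anything proposed in the study or in my published work.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class TermRevision:
    """One recorded change to a term's meaning, with provenance."""
    term: str
    definition: str
    changed_by: str   # who changed it
    changed_on: date  # when
    rationale: str    # why

@dataclass
class Glossary:
    """Append-only history: old meanings are preserved, not overwritten."""
    history: list = field(default_factory=list)

    def revise(self, rev: TermRevision) -> None:
        self.history.append(rev)

    def definitions(self, term: str) -> list:
        # Competing definitions shown side by side, oldest first.
        return [r.definition for r in self.history if r.term == term]

# Hypothetical example entries for the term "safety".
g = Glossary()
g.revise(TermRevision("safety", "absence of physical harm",
                      "standards body", date(2020, 1, 1), "initial definition"))
g.revise(TermRevision("safety", "alignment with provider policy",
                      "vendor", date(2025, 6, 1), "product update"))

print(g.definitions("safety"))  # both meanings remain visible, with provenance
```

The design choice worth noting is the append-only history: drift is not prevented, but it is made visible and attributable, which is the accountability the essay argues for.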
This is philosophy in action: not just theorizing, but asking what systems are needed to keep meaning accountable in an AI-driven world.
Why This Matters
The 2025 OpenAI/NBER consumer study shows that ChatGPT is here to stay, integrated into the daily lives of millions. But usage alone does not answer whether AI strengthens or erodes the trust embedded in language.
If meanings can be bent at scale — invisibly, and under corporate or cultural pressures — then trust in law, health, markets, and democracy may follow.
The study measures usage. My work asks about stewardship. Not how often we use AI, but how we keep language resilient, accountable, and human-centered in the face of semantic drift.
Closing Thought
We should celebrate that millions are finding new ways to learn, write, and think with AI. But we should also confront what usage studies leave unsaid: adoption without accountability risks building a future where language itself becomes unstable ground.
The challenge ahead is not just to measure how AI is used, but to ensure that as it reshapes language, it strengthens — rather than erodes — the trust we depend on.
Key Concepts and Working Terms
Asking / Doing / Expressing: Framework used by Deming et al. (2025) to categorize ChatGPT interactions: Asking (advice-seeking), Doing (task execution), and Expressing (personal reflection/creativity).
Decision Support: The use of AI to guide human judgment, improve confidence, and enhance productivity, particularly in knowledge-intensive work.
Democratization of AI: The broadening of AI access beyond early adopters, across gender, income, and geography.
Non-Market Value: Benefits not captured in GDP or productivity statistics, such as time savings, confidence gains, or improved decision quality.
Voluntary Governance Commitments: Non-binding safety and transparency measures pledged by AI companies, often in coordination with governments (e.g., White House 2023 AI commitments).
Multilingual Benchmarks (MMMLU): Large-scale datasets evaluating AI performance across multiple languages, including lower-resource ones, designed to reduce bias in language support.
Works Cited
Deming, D., & OpenAI Economic Research Team. (2025). How People Are Using ChatGPT: Evidence from 1.5 Million Conversations. National Bureau of Economic Research.
OpenAI. (2023a). Moving AI Governance Forward.
OpenAI. (2023b). Whisper model release.
OpenAI. (2023c). Open Weights and AI for All.
OpenAI. (2024a). Realtime API Cookbook: One-Way Translation.
Tech in Asia. (2024). OpenAI launches Transparency Hub to share AI safety results.
OECD AI Transparency Database. (2024). OpenAI Hiroshima AI Process Transparency Report.
VentureBeat. (2024). OpenAI tackles global language divide with massive multilingual AI dataset release.