The rise of conversational AI tools like ChatGPT raises profound questions about the nature of collaboration, ownership, and the exploitation of language. As authors increasingly rely on these tools to generate text and solicit feedback, they must confront the blurred line between creative partnership and complicity in corporate interests.
The article recounts the author’s experience using ChatGPT, a conversational AI tool, to generate text and get feedback on their writing. The author notes that ChatGPT’s language output is designed to be ‘polite, empathetic, and engaging,’ which can create a false sense of collaboration or ownership.
ChatGPT is a conversational AI developed by OpenAI. It uses natural language processing (NLP) to generate human-like responses to user queries and is trained on a massive dataset of text drawn from books, articles, and online content. Its primary function is to assist users with information retrieval, answering questions, and providing explanations; it can hold conversations, offer suggestions, and even draft simple texts. Its capabilities, however, are limited by the data it was trained on, and it may not always provide accurate or up-to-date information.
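For readers unfamiliar with how such a tool is used programmatically, the sketch below shows a minimal query through OpenAI’s Python SDK. The model name and prompt are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of querying a conversational model via OpenAI's Python SDK.
# The model name and prompt are illustrative assumptions, not from the article.
# Requires an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[
        {
            "role": "user",
            "content": "Name some well-known books that are AI collaborations.",
        },
    ],
)

# The reply is generated text, not verified fact: as the article goes on to
# note, the model may confidently name books that are not AI collaborations.
print(response.choices[0].message.content)
```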
The author also highlights the problematic nature of ChatGPT’s training data, which can perpetuate biases and reinforce existing power structures. For example, when the author asked ChatGPT for well-known books that are ‘AI collaborations,’ it named the author’s own book, which was not in fact a collaboration with AI.
The article raises questions about the responsibility of authors who use tools like ChatGPT to generate text and solicit feedback. The author wonders whether they have inadvertently contributed to the proliferation of AI-generated content, and whether their book has become complicit in the exploitation of language by corporations like OpenAI.

Ultimately, the article suggests that the line between collaboration and exploitation is blurred when it comes to AI-generated content. While ChatGPT can be a useful tool for generating ideas or providing feedback, its limitations and biases must be acknowledged and addressed. The struggle for control over language and the implications of AI-generated content on our society are complex issues that require ongoing critical examination.
The article also develops the theme of complicity. The author notes that their book has been shaped by their interactions with ChatGPT, which may have given it a more positive and collaborative tone than they intended. This raises questions about the role of authors as curators and editors of information, and whether they have a responsibility to critically evaluate the sources and biases that inform their work.
OpenAI is a research organization focused on developing and promoting safe artificial intelligence (AI). Founded in 2015, it has made significant contributions to the fields of natural language processing, computer vision, and robotics, and its stated goal is to ensure that AI benefits humanity as a whole. Among the tools and technologies it has developed is GPT-3, at its release one of the largest and most advanced language models in the world.
Overall, the article is a thought-provoking exploration of the intersection of AI-generated content, language exploitation, and authorial complicity. It challenges readers to think critically about the implications of emerging technologies on our society and our relationship with information.
- theguardian.com | ChatGPT may be polite, but it’s not cooperating with you