A popular gaming YouTuber’s voice was used in a video without his consent, sparking accusations of plagiarism and raising concerns about AI-powered fraud.
The Plight of Voice Cloning: A Content Creator’s Nightmare
Mark Brown, a popular gaming YouTuber known as Game Maker’s Toolkit, has been left stunned and frustrated by the use of an AI-generated clone of his voice in a video about the game Doom. The creator of the channel, Game Offline Lore, used this cloned voice without Brown’s knowledge or consent, leading to accusations of plagiarism.
Brown is a British video producer and former games journalist. He is best known for Game Maker’s Toolkit, a YouTube series of video essays analysing game design, which has earned him a large following and made him one of the most recognisable voices in games criticism on the platform.
A Rise in Voice Cloning Fraud
Deepfakes, once confined to doctored videos of celebrities and private citizens, can now be generated in real time. AI-powered fraud is on the rise, and YouTubers like Brown face a growing problem: the theft not just of their work but of their very voices.
Brown filed a privacy complaint with YouTube, which typically gives the uploader 48 hours to remove the video before the platform steps in. More than 48 hours after he reached out, however, the videos remain live, and the creator of Game Offline Lore has been deleting comments accusing the channel of stealing Brown’s voice.
A Labor of Love
Brown’s videos often represent more than 100 hours of effort: researching material, writing scripts, recording gameplay, and editing. He takes pride in his projects, which can take two or three weeks to produce, with no shortcuts such as AI. Each video is a significant undertaking, which made discovering that someone had lifted his voice all the harder to believe.
A Second Video
Brown also discovered a second video on Game Offline Lore’s channel featuring what appears to be an AI-generated version of his voice, this one covering the Doom series’ lore. He says he was shocked to find it online: he knew that AI tools capable of replicating voices existed, but finding that someone had cloned his without consent was another matter entirely.
Artificially generated voices are becoming increasingly sophisticated, producing speech that sounds realistic and natural. The technology is already used in virtual assistants, voice-over work, and even synthetic characters for film and television, and its use is expected to keep growing as the tools become more capable and more accessible.
A Community Affected
Game Offline Lore’s channel has 744,000 subscribers, many of whom are drawn in by the narrator’s voice. While some videos on the channel are clearly marked as AI-generated, the full-length video featuring the apparent clone of Brown’s voice is more popular than many others, with over 60,000 views. Brown says it is likely collecting a fair amount of ad money as well.
A Familiar Voice
Brown is used to his work being lifted in various ways, but this incident has left him feeling frustrated and disrespected. He believes the person behind Game Offline Lore lacks empathy, and that using someone else’s voice without consent is a form of plagiarism. He hopes YouTube will take action to remove the videos and protect creators’ rights.
A Call for Action
As AI-powered fraud continues to rise, content creators like Brown are facing increasing challenges. It’s essential for platforms like YouTube to develop robust systems and tools to detect and prevent voice cloning and other forms of AI-generated content theft. Until then, creators will continue to struggle with the consequences of this growing problem.
Artificial intelligence has transformed many industries, but it also poses a significant threat to security. AI-powered fraud refers to the use of machine learning and data analytics to commit crimes such as identity theft, voice-impersonation scams, and credit card fraud. The sophistication and speed of these attacks make them difficult for traditional security measures to detect and prevent.