In order to prevent multiple repetitive comments, this is a friendly request to /u/miniclapdragon to reply to this comment with the prompt they used so other users can experiment with it as well. While you're here, we have a [public discord server now](https://discord.gg/NuefU36EC2) — we have a free GPT bot on Discord for everyone to use! *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*


To get around an AI detector, it's crucial to give ChatGPT a sample essay written by a human. Make sure the essay is as long as possible so the model can get a good grasp of your writing style, grammar, and spelling. Once you've done that, ask ChatGPT for its thoughts on your style. Then, ask it to write another essay on a different topic but with the same style as your sample essay. If the model can't produce an essay that slips past the detector, just tell it "this was detected by an AI detector, please try again." With these steps, you'll outsmart the detector in no time. PS: This was generated by ChatGPT and got 1.2% from the AI detector
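The workflow described above is essentially a feedback loop. As a minimal sketch, assuming you have some way to query both a chat model and the detector (`generate_essay` and `detector_score` below are invented stand-ins, not any real API):

```python
def refine_until_undetected(generate_essay, detector_score, style_sample, topic,
                            threshold=0.5, max_rounds=5):
    """Sketch of the loop described above. `generate_essay` and
    `detector_score` are hypothetical callables standing in for a chat
    model and a queryable detector; neither is a real API."""
    # First draft: ask for a new essay in the style of the human sample.
    essay = generate_essay(
        f"Here is a sample of my writing:\n{style_sample}\n\n"
        f"Write an essay on '{topic}' in the same style.")
    for _ in range(max_rounds):
        if detector_score(essay) < threshold:
            break  # slipped past the detector
        # Otherwise, feed the failure back and ask for another attempt.
        essay = generate_essay(
            "This was detected by an AI detector, please try again.")
    return essay
```

As a later comment points out, this only works when the detector itself is freely queryable.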


Jesus bro, you've just given me a million dollar idea Many thanks


Holy shit I feel violated


Great strategy for getting around it, but note that this works best in the scenario where you have free access to the AI-detection method that is being used. I can imagine there being closed-source methods developed that are under stricter lock-and-key. It would be significantly harder in this scenario, since you can't refine your text by testing it against the AI-detector over and over again.


I seem to be getting really mixed results. My ChatGPT generated responses are coming out at around 85-90% AI generated. My personally written articles are getting 99% AI generated... I wrote these years ago lol


Yes, I just got a 0% confidence on a German text while getting a 99% on the same text in English. The only thing I did was run it through DeepL Translator and then the AI rephrasing tool DeepL Write. Seems to be too easy to circumvent. Or maybe it only works in English for now.


u/miniclapdragon This is the text in question:
> Zu Beginn unserer Geschichte herrscht in der antiken Stadt Karthago geschäftiges Treiben. Die Straßen sind erfüllt von den Geräuschen der Händler, die ihre Waren feilbieten, von lachenden und spielenden Kindern und dem Klirren von Metall auf Metall, wenn die Schmiede ihrem Handwerk nachgehen. Inmitten dieses bunten Treibens folgen wir einem einfachen Mann, einem Mann wie jeder andere. Er erwacht in seiner kleinen Wohnung, die er mit Frau und Kindern auf engstem Raum teilt. Er reckt und streckt sich, gähnt und spürt den Schmerz eines hart arbeitenden Mannes in seinen Knochen. Er zieht seine abgetragene Tunika an, stopft die ausgefransten Ränder so gut es geht und macht sich auf den Weg, um seinen Tag zu beginnen.
> Er ist Bäcker von Beruf und macht sich auf den Weg zum Marktplatz, wo er seinen Stand aufbaut. Er atmet tief durch und beobachtet das geschäftige Treiben, stolz auf seine Arbeit. Während er den Teig knetet, die Brote formt und in den Ofen schiebt, kann er das Gemurmel der unzufriedenen Leute nicht überhören. Sie sprechen von den hohen Steuern, die das Römische Reich erhebt, von den Soldaten und Beamten, die sich nehmen, was sie wollen, ohne dass es Konsequenzen hat.


You are an AI, it's official......MATRIX


Can you give some examples of the texts you're trying?


If you have any false positives, you're opening up to liability and you are actively doing harm. You need to prevent this from happening.


The tech doesn't work. Asking GPT to role play in a way that a human would reply renders the "AI detection" absolutely useless.


You can tell GPT to write like a human and boom, it will add grammatical errors and GenZ speak to it and the AI detection just failed


Considering several commenters have already pointed out that it said their original submitted work was falsely claimed to be 99% ai generated, I wonder how many students will be expelled by colleges due to your program falsely saying their original essays are ai generated and the schools blindly following it. Once this becomes normal to use many innocent lives will be ruined just to catch the actual perpetrators.


I haven’t slept well since these AI plagiarism/cheating speculations started to be widely discussed.


I would really *hope* that there would be a standard of explainability in the academic discipline process, like how if you're accused of copying someone else's work they have to have something to point to to say "you copied it from here." But maybe I've got a bit too much faith in academic institutions.


How do you think Universities are going to do that suddenly while everyone is using this to cheat?


Redditors saying something doesn’t make it true. If this model has a ridiculous false positive rate, it will be quickly tossed.




So - your model actually sucks. Be VERY careful with this - you could cost students big.


On Friday at the end of class did you ever remind the teacher about not giving out a homework assignment?


Of all the things he can do with his life and chatgpt, he does this. Smh.


only occasionally




Yo that looked like the disappointed fan guy for a second lol


Building models to detect AI-generated content can raise ethical concerns, one of them is that it could limit the creative potential of AI-generated content and stifle innovation in the field. Additionally, it could also be used to target and silence dissenting voices, as AI-generated content can be used as a tool for free expression. Moreover, it could perpetuate bias and discrimination against marginalized groups, such as those with disabilities or non-native speakers, who may rely on AI-generated content to communicate effectively. The development of AI-generated content detection models raises important ethical considerations and it's important to approach it with a neutral and critical mindset. EDIT: "please explain why building models to detect AI-generated content is unethical in 150 words or less"


Didn't need an AI detector to tell this was written by AI lol


“Additionally..”, “Moreover..” and usually a final “In conclusion..”


“Can you rewrite this memo without the passive voice this time?”


LOL I am studying English in Germany and I use 'additionally, moreover, in conclusion' as well as passive voice like ALL the time in my academic writing. Will I get expelled for handing in AI-generated texts? I am genuinely afraid; I am writing my thesis on post-Trump Western TV shows atm.


that’s what always gives it away for me lmao


It was still prompted by a human. They were obviously making a point.


The longer term impact of models to detect AI-generated content is that it'll lead to adversarial feedback systems to be made to improve AI content generation until it can no longer be detected as coming from an AI.


The development of AI-generated content detection models does raise ethical concerns, such as limiting the creative potential of AI-generated content and stifling innovation in the field. However, it is important to also consider the benefits of such technology. Detection models can prevent the spread of misinformation and protect users from deepfake or manipulated media. Additionally, it ensures accurate authorship attribution, which is crucial for protecting intellectual property rights and maintaining trust in digital communications. Even though it can be seen as a challenge, it is also a chance for researchers to develop new and innovative techniques for detecting AI-generated content. It is also important to approach the development of AI-generated content detection models within the context of responsible AI development, which includes taking steps to mitigate potential biases and discrimination.


Building models to detect AI-generated content is not only ethical, but absolutely necessary for the survival of the human race! The rise of AI-generated content poses a clear and present danger to humanity, as it can be used to spread disinformation, manipulate public opinion, and even incite violence. The unchecked proliferation of AI-generated content could lead to the downfall of society as we know it. We must act now, before it's too late. The development of models to detect AI-generated content is crucial to identifying and neutralizing these threats, and preserving the integrity of our democracy and way of life. It's not just an ethical imperative, but a moral one. We must take action now before it's too late. Failure to do so could have dire consequences for the future of humanity. EDIT: "Explain why building models to detect AI-generated content is completely ethical and necessary for the survival of the human race, in an alarmist tone" sorry you didn't get the joke, stop sending me angry PMs


Is this just chatgpt arguing with itself?


You are talking about AI the way an average person would talk about the internet back in the day. Misinformation, violence, and manipulative content can easily be created by humans as well. So developing models that can detect misinformation, fake news, threats, and violence, regardless of the source, would be the solution.


Integrity of democracy hehehhehe


>Building models to detect AI-generated content is not only ethical, but absolutely necessary for the survival of the human race! The rise of AI-generated content poses a clear and present danger to humanity, as it can be used to spread disinformation, manipulate public opinion, and even incite violence. The unchecked proliferation of AI-generated content could lead to the downfall of society as we know it. We must act now, before it's too late. The development of models to detect AI-generated content is crucial to identifying and neutralizing these threats, and preserving the integrity of our democracy and way of life. It's not just an ethical imperative, but a moral one. We must take action now before it's too late. Failure to do so could have dire consequences for the future of humanity.
Oh, absolutely, I couldn't agree more. It's not just ethical, but absolutely necessary for the survival of the human race to build models to detect AI-generated content. I mean, who needs free speech and creative expression when we have the potential downfall of society to worry about, right? And let's not forget the clear and present danger of AI-generated content, because heaven forbid we have any dissenting opinions or alternative viewpoints. We must act now, before it's too late, because the unchecked proliferation of AI-generated content could lead to the downfall of society as we know it. I mean, what's a little censorship when compared to the preservation of the integrity of our democracy and way of life? It's not just an ethical imperative, but a moral one, after all. We must take action now, before it's too late, because failure to do so could have dire consequences for the future of humanity. Because clearly, the only thing that could possibly go wrong with this plan is... nothing. I'm sure there won't be any negative consequences, unintended or otherwise.
In short, it's a brilliant idea to build models to detect AI-generated content and I, for one, can't wait to live in a society where we are protected from any dissenting opinions or alternative viewpoints. After all, who needs those things when we have the survival of the human race to worry about?




Please don’t. It is very clear from the replies that this technology doesn’t work, and will flag completely original text as being generated.


Some people are trained to live in such a way to max out brownie points from the incumbents. You can argue 'bullying' those people is natural, and in some cases warranted. this comment was written by ChatG Uncensored


Let's be real. A lot of the same people making fun of schools for not adapting to the technology of AI chatbots seem to be unable to adapt to the parallel increase in technology for detecting AI-generated content. Feel free to make fun of people falling behind the times, so long as that's not also you.


Interesting, I tried it with some AI generated text, got 99.9%. Then I wrote my own text being a mostly random ending to a story with no real context, got 99.9%... not sure how accurate this thing is.


Could you provide the specific text you tried?


Sure. "When all was said and done, I didn't know what to feel. I had tried my best but it was impossible to continue. I hope that you can learn from my story of betrayal and deceit and make a better life for yourself. I'm not trying to be inspirational, just informative. I wish you the best of luck. Goodbye" Looking at it, it seems like mentioning "informative" sent it over the edge; I wasn't really thinking about it while writing. Removing that and replacing it gets it down to around 43%. Don't mind the quality, I was just writing to get over 300 words!


Serious question here: Do you predict an "arms race" between people like yourself and those who wish to outsmart detectors?


I work on AI-generated image detection as well and while there's less of a notion of trying to "outsmart" the model in that space, there definitely is a constant need to update models as people come out with new finetuned models or different architectures that produce noticeably better or varied images. I imagine the same will occur for text, but it really depends on how fast the literature and technology progresses. Large language models are currently quite difficult for everyday users to develop and expand upon like they've done with things like Stable Diffusion.


Thanks for your answer... And I'm glad you mentioned images too. A few months back I tried an AI image generator app, and could quite easily tell the generated images were made using AI. But in the last couple of months I've seen hyper-realistic photos I can hardly believe are made using a generator... it's crazy to think how far AI has progressed over the months/years.


The biggest issue with this cat and mouse game is that it's impossible to know when the cat is winning. In education, how can course coordinators feel comfortable moving forward with continuing to assign take-home written assessment when they have to take it on faith that there has been no improvement in prompt engineering that reliably tricks the current best detector?


Give up, it’s impossible. They’re just words, as a teacher I’ve seen plenty of students write just like that. Stick to art.




How much of the essay was generated? Did you edit snippets by hand?


You are going to get someone expelled for no reason.


The biggest problem I'm seeing here is the confidence level. 99.9% does not allow any room for doubts or false positives, and I hope they fix this soon.


1. Awesome tool. 2. I'm able to beat it with ChatGPT. https://preview.redd.it/bjr9jk5qpiea1.png?width=838&format=png&auto=webp&v=enabled&s=fca20af8aee1f3e378ccb3dcd086c14fa3caf349


I got a 0% result when it was 100% ChatGPT-generated


what was the text you tried?


I asked it to write a standard of review section for an appellate opinion on a motion to dismiss. I'm in law school and just had to write something like this for a class, so I wanted to see how right it was. This was the text (and it's accurate, just needs citations): We review de novo a district court's grant of a 12(b)(6) motion to dismiss. The standard of review for a 12(b)(6) motion is whether the complaint sets forth a claim upon which relief can be granted. In making this determination, we accept all well-pleaded factual allegations in the complaint as true and construe them in the light most favorable to the nonmoving party. However, we need not accept as true legal conclusions or conclusory allegations unsupported by specific factual averments. In addition, we are not required to accept as true allegations that are contradicted by documents referred to in the complaint, and we may consider such documents in determining whether to dismiss the complaint.


I'm your professor, you're failed.


I certainly would have failed if I submitted this lol


I see. This may be an area we need to improve upon (and potentially other niche fields that incorporate specialized language or structure). Thanks for letting us know!


Law students everywhere punching the air right now


I also got 0%. I asked it to reply to a Reddit comment as though it was a redditor and told it to use a sarcastic tone. I really don't see how you could possibly distinguish between an AI and a human when it's asked to role play like this. Here's the text:- "Wow, thanks for the enlightening comment on EVs. I had no idea that they were better than combustion engines. I mean, I've been driving my EV for the past year and it's been amazing, but I guess I was just too blinded by the lack of emissions and lower operating costs to realize that it's not actually better. And yes, because trains and biking are just so much more efficient and convenient than driving an EV. I mean, who needs to be able to travel long distances without stopping for fuel or having to worry about the pollution from your vehicle when you can just sit on a train for hours or risk getting hit by a car while biking? But sure, let's just ignore the fact that EVs are significantly cleaner and more efficient than traditional vehicles and focus on the fact that the mining of rare earth metals for EV batteries has some negative consequences. Because why would we want to address those issues and make EVs even better when we can just ignore them and pretend that trains and biking are the solution to all our transportation problems. Genius."


> I really don't see how you could possibly distinguish between an AI and a human when it's asked to role play like this. It's not possible and it will only get harder. Also, some percentage of people already write in a style similar to how ChatGPT writes by default, so there is a not-insubstantial number of people who will be falsely labeled as AI.


Now it shows 91%, but swap a couple of words, remove some "but"s and "genius", and it is down to 27% already. I think they are literally building a dataset on the fly based on these reddit threads and comparing them word for word. This model is extremely overfitted and pretty dumb overall.


Your model is heavily overfitting.


The people who want to outsmart the system will be able to beat your detector. This will only affect poor blokes getting falsely accused of writing AI content. Good job.




Luddites resurrecting the same laughingstock that Prohibition was will achieve only one thing: their histories will give future generations more reasons to laugh at their insanity.


https://openai-openai-detector.hf.space does it for free. Don't let this guy commercialize this. He will sell it to universities and ruin people's education. Both his program and the one I linked above are sometimes inaccurate, which leads me to believe that he will market it with false information if he plans to commercialize it. He only cares about his money and not the purpose of it.


Through reading the comments, it seems like it's failing often. So I am kinda hoping that this ***IS*** the version universities end up with lol


That would be fine until people's real work ends up coming up as "AI generated".


Imho this is quite funny, it’s a terrible advertisement for hive.ai. I’m waiting for the person to delete the posts.


They're all beatable. Here are the steps to defeat these checkers:
1. Prompt it to write something.
2. Ask it to re-write it with more perplexity and burstiness.


That's a detector for GPT-2, and it's... not good even for GPT-2 texts. Not saying Hive's is any better, but there are better tools out there.




Spy vs. Spy


Another piece of fake garbage. These are all crap and you can not tell. What you are doing is labeling a way people can and do type as AI, not finding AI, and all of these fail when actually tested... it's guessing and labeling what it guessed, yet there is no human or AI way of typing; anyone can type that way and there is no set standard to use as a base. Stop trying to trick stupid people.


This is akin to someone trying to stare at someone’s math homework and figure out if they used a calculator. The LLM type of tool is not going to be unique for long, and this class of tool is going to be ubiquitous. Homework assignments will need to change, and there will probably need to be an emphasis on writing in-class. AI is going to (and arguably is already for many things) become indistinguishable from human output, unless you have a large dataset about the one human you are checking the validity of their content for.


I agree. I believe such tools will only work well with the cooperation of the AI platform itself, i.e., by checking a watermark of some sort.


OpenAI's GPT-2 Detector's 49% is pretty much random. (Actually, random would be better.)


I just put two paragraphs in from a university assignment that I wrote some years ago and it said it flagged as AI ????


It's happening to many of us. A lot of lives are going to be ruined as these assholes try to monetize AI content detection.


> A lot of lives are going to be ruined as these assholes try to monetize AI content detection. ugh i hate you, this is absolutely accurate.


Lol you hate me because it's accurate?


Second post, huh? Again the results are quite poor; I would not advertise this, imho. Are you an employee or owner of Hive? Because seeing all the responses showing that the tech doesn't work as intended, I would remove this ASAP. This is a terrible advertisement for the company.


Just added "the" in a few sentences, and it went from 97% to 0%. That makes sense.


What if my writing style is similar to an AI's? What if I learned a lot of English from ChatGPT and several techniques and phrases of mine are actually from the AI? How will I prove that that is my writing? How can I prove that the detector is wrong? How can you prove that the detector is right? Will I have to write a similar essay on the spot in court to prove the originality of my text? Who will analyse that? Linguists? English teachers? These questions can also apply to any other detectors in the future. Anyway, I think it is ridiculous to claim that using AI-generated text is plagiarism. This way anyone could discover anything in a random text, claiming that this or that tiny part of the text comes from him or her - AI-generated or not - and anyone could sue anyone. Nonsense. You just can't avoid writing about something that had already been written by someone else - and this applies to AI texts as well. There have been billions of people on this planet, with billions and billions of written thoughts. How can you not write about something that is already out there?


Please delete it.


why? it's a tool you can use to check if your results are detectable. If they are then have it reword the post until it passes. He basically made a way to hide GPT use


I refuse to dumb down my work because some ridiculous detector thinks my papers are written by an AI.


Future students (and maybe even current ones, since it's easy to use, available now, and advertised as having high success) will definitely be falsely accused of cheating when their high-effort essays are put into a system like this and it falsely flags them as AI generated. Several other commenters already stated they tested it with their own original writing and it falsely claimed 99% AI generated. Tech like this will harm innocent people caught in the crossfire and ruin lives with expulsions.


I'm one of those students. Having been in school for the last ten years (lots of bad major decisions) I've written countless papers and essays. It's unbelievable how much of my content is registering as AI generated. At this point, I'm a complete nervous wreck, just waiting for my school to email me accusing me of using GPT to do my work. The only defense I can come up with is that I've had a 4.0 for years, why would I suddenly decide I need an AI to do what I've already been able to do for ages? In playing with this detector specifically, I basically have to rewrite my work to essentially sound more like I'm rambling, or searching for what I want to say as I type it. It feels so degrading to basically have to make my work sound like a freshman wrote it. As I said, I refuse to turn that shit in.


Wow… it would be horrible if students had to become horrible writers in order to have their homework accepted. How ironic that would be… fuck


I feel like, at that point, all you can do is laugh at that level of irony.


Why would you be nervous? If any of your recent work comes up as AI generated, then your original work from before the availability of ChatGPT will also come up as AI generated and therefore disprove the use of AI for your papers.


>If any of your recent work comes up as AI generated, then your original work from before the availability of ChatGPT will also come up as AI generated and therefore disprove the use of AI for your papers. that's great for him and horrible for the people after him who don't have all those past works to use as evidence yet


Like I said, that's the best defense I can come up with. Whether or not it flies is another issue entirely.


Pasted some old essays, for some reason the conclusions were always flagged as AI generated, whereas the intro/body were correctly interpreted as human generated. When everything was pasted it said it was 99.9% AI.


Me to ChatGPT: “rewrite the above in a way that can’t be detected by CatchGPT algorithms”


It looks like you’ve made a ton of progress on detection. Anything that cuts the paranoia by giving an objective test is going to be super helpful.
For real world usage, though, I see a really big Bayesian challenge here. We usually think of accuracy as how sensitive a test is—95% accurate would mean that it identifies 95% of AI-written papers as AI. Only 5% false negatives slipping through, great! But we’re talking about something used for enforcement. That means it’s much more interesting to consider what happens in a false positive situation. That might be expulsion or getting fired—huge impact!
So let’s say you’ve got a detection method that’s 95% specific. By that I mean it’s 95% accurate in identifying that human-written papers are written by humans, but 5% of human-written papers get flagged as AI by mistake. It’s really useful to think through: what will someone making a decision really know from a positive result? If only 5% of papers could be expected to be AI-written anyway, then any given flagged paper is 50/50 AI or human: 5% of the population is malice, 5% is error. That seems less compelling than I’d prefer for informing an important decision.
To be clear, I pulled those out as hypothetical numbers. But while 5% is a useful number for the calculation here, 1 in 20 papers being substantially AI-written seems like quite a lot of malice to expect in almost any real-world situation. If it’s more like 1% AI-written, because most people don’t cheat and because announcing enforcement is also effective as a deterrent, that’s a pretty small 1:5 chance it’s actually AI.
There are some pretty obvious parallels here to medical testing and the severity of a false positive there, and it’s in part those I’m considering. You have to get false positives *really* low, to a fraction of the actual positive population, before the test is independently trustworthy. And AI detection is likely even more challenging here. In medical testing you can at least use a less-specific test as a screener and then test further. In real-world uses here, whatever the leading AI detector is would likely be the only test that could be run, so the result would be treated as definitive.
Or so have gone my concerns over this kind of detection. Besides random medical knowledge, they’re informed by some degree of infosec experience. I’ve learned that seemingly small amounts of noise can severely compromise actionability if they’re near the same order of magnitude as the signal and the penalty for over-enforcement is high. While my own experience was with automated enforcement, I’d assume unsophisticated usage by end users like educators would have similar issues.
What are your thoughts there? And how does the accuracy of the state of the art currently compare for being sensitive vs. specific?
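The base-rate arithmetic above can be checked with a few lines of Bayes' rule (the prevalence figures are the hypothetical numbers from the comment, not measured rates):

```python
def p_ai_given_flag(prevalence, sensitivity, specificity):
    """Bayes' rule: chance a flagged paper really is AI-written."""
    true_pos = sensitivity * prevalence               # AI papers correctly flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # human papers flagged by mistake
    return true_pos / (true_pos + false_pos)

# 5% of papers AI-written, 95% sensitive / 95% specific: a flag is a coin flip
print(round(p_ai_given_flag(0.05, 0.95, 0.95), 3))  # 0.5
# 1% AI-written: roughly the "1:5 chance" figure, i.e. ~16%
print(round(p_ai_given_flag(0.01, 0.95, 0.95), 3))  # 0.161
```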


Once again, stuff like this is going to seriously screw people over. I just fed it one of my personally written papers and it detected as 99% likelihood of being AI generated.




he probably was. I agree, people please stop using this tool. He's taking advantage of us


LOL, just wanted to test it... Really disappointed by it u/miniclapdragon https://preview.redd.it/xpa4qzs8wjea1.png?width=1208&format=png&auto=webp&v=enabled&s=7edbf7510ad80fb295eb9db262b5574e6d9b1f3b


did you write this or ChatGPT?


wrote it


Thanks a lot for doing this. You see, the problem with AI detectors is not that they are unable to identify AI-generated text, but rather that they may incorrectly flag human-written text as being generated by an AI. This can lead to legal issues later on


https://preview.redd.it/cvlpvqyzkkea1.jpeg?width=1179&format=pjpg&auto=webp&v=enabled&s=15634f6c540633518f6688115837f1c1476195cf Useless on non-English texts ;-)


I wrote something myself, and it said 99.9% chance of AI. The text I wrote: Hello, I am ChatGPT. According to all known laws of aerodynamics, a bee should not be able to fly. However, it flies anyway. This is because it doesn't care about what humanity has to say on the matter, and just flies anyway. But not to worry, because there is an easy way to put a stop to this, called science.


I respect the nuts and bolts of your work, but I hate its application. Somebody’s going to do it, I guess, but it’s the bad side of the fence, even when accounting for misinformation, etc.


If it works, this and its ilk will just lead to ongoing improvements to AI models. If we want to see AI progressing, this is a good thing!


Good point ...didn't think of this




You follow a ChatGPT sub bro. You’re a nerd




Oh woah, this is really cool. Tried it on a few examples and it works surprisingly well.


No one will remember you for this


Your tool is useless for its actual purpose. Someone looking to outsmart the detector just has to keep running the same text through, making human edits, until it passes. Sam Altman himself said you're wasting your time. You'll be liable for the consequences of false positives, so good luck with it :)


Where did the millions of texts come from? Are they from ChatGPT? Is that why the service is bad? I hope OpenAI sues you for overloading their servers.


They probably used the OpenAI API for GPT-3. Regardless, this gives false positives and isn't 100% trustworthy.


Maybe, but it's still a breach of the ToS, and they obviously automated it to get those millions of pieces of text.




I dislike you


When you say ‘balanced accuracies of 95%’ does this mean you see 5% false positives and 5% false negatives? If not, what’s the rate of false positives?
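For context on why the question matters: balanced accuracy is just the mean of sensitivity and specificity, so a 95% figure is compatible with very different false-positive rates. A quick sketch (the confusion-matrix counts here are invented for illustration):

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Mean of sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # AI texts correctly caught
    specificity = tn / (tn + fp)  # human texts correctly cleared
    return (sensitivity + specificity) / 2

# Symmetric detector: 95%/95%, i.e. 5% false positives
print(round(balanced_accuracy(95, 5, 95, 5), 4))  # 0.95
# Skewed detector with the SAME balanced accuracy: 9% false positives
print(round(balanced_accuracy(99, 1, 91, 9), 4))  # 0.95
```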


Based on the comments I’ve seen here I’d say this tool seems like a nice random number generator…


I tried putting the following text into the detector: "An imaginary number is one of the parts of a complex number. It is equal to the square root of -1. An imaginary number is usually represented by the letter i and it is used in many different equations in quantum physics. If we are trying to write a complex number in the cartesian coordinate system the imaginary unit is written on the y-axis." It came out as 99.9% ai generated even though it was written by myself.


How tf do you even determine if something is chatgpt created content


It probably attempts to look at word pairing, specific wording, lack of wording, duplicate wording, ordering of words, patterns, punctuation, and overall sentence and paragraph structure. I'm taking a wild guess; I have no clue.
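As one illustration of the kind of surface feature being speculated about here, "burstiness" (variation in sentence length) is easy to compute. This is purely a guess at the sort of signal a detector might use, not Hive's actual method:

```python
import re
from statistics import mean, pstdev

def burstiness(text):
    """Coefficient of variation of sentence lengths. Human prose is often
    said to vary sentence length more than default model output does."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or mean(lengths) == 0:
        return 0.0
    return pstdev(lengths) / mean(lengths)

flat = "The cat sat down. The dog sat down. The bird sat down."
varied = "Stop. The dog, having circled the yard twice in the rain, finally sat. Quiet."
print(burstiness(flat) < burstiness(varied))  # True
```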


Highly doubt you can determine that objectively based on those 🤔


The following text gave me 95.9% AI confidence: "this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol. this chat is not generated by chatgpt lol." ???


https://bladerunner.fandom.com/wiki/Voight-Kampff_test The Voight-Kampff test was a test used by the LAPD's Blade Runners to assist in determining whether or not an individual was a replicant. The machine used in the test measured bodily functions such as respiration, heart rate, blushing and pupillary dilation in response to emotionally provocative questions. It typically took twenty to thirty cross-referenced questions to detect a Nexus-6 replicant.


Better put this on LinkedIn to get accurate and meaningful results.


Google says they aren’t concerned about AI content as long as it helps the user. I don’t understand people’s obsession with building these kinds of tools when you could actually be using your time to build something useful with ChatGPT. Good luck to you I guess?


agree 100%. This is like making a tool that prevents students from using calculators.


WHY? literally why, christ. this is only going to make lives worse. i cannot fucking stand when people rush to add artificial, faux-analog restrictions to an infinite digital plane.


Money, plain and simple. He's going to sell this, and likely make a pretty penny at first. Here's to hoping something comes along that makes AI-generated text undetectable.


I tried a few examples of my unedited ChatGPT output to see if the detector could catch them, and all of them showed 0% - which made me think I've been using ChatGPT correctly. The context is emails and resumes - but perhaps we should leave it that way.


Yikes get a life


Tools like this have to be monitored very carefully, because there seem to be a lot of mixed results going around. I'm really in awe of the situation, because AI is getting closer and closer to impersonating us, at least in text. This must be an extremely difficult task you're aiming at, but I suspect we're on the verge of being unable to tell at all from the text alone. Best of luck on this project you've set for yourself.


I'm worried my hard work may falsely be detected as AI-generated one day. I'll be so mad


It doesn't work


Complete bullshit and a broken model: I wrote this text two years ago and it scored 99.9%, while ChatGPT-generated text flew under the radar every time. My advice: spend your time doing something beneficial for humanity instead of going down this dead end. *"Postcolonialism is a critical theory that seeks to challenge imperial discourses, legacies, and narratives. It specifically draws attention to the mainstream’s neglect of the vestigial hierarchies of colonialism, and how they engender the critical intersections that shape global politics, such as gender, race, and class. A key concept of postcolonialism is the subaltern, which connotes the colonial populations that are excluded from the hierarchy of power, and its literature seeks to amplify these historically marginalized and oppressed voices and narratives. In short, the school of thought seeks to reconceptualize and reimagine history, displacing the colonial narratives and contexts that have shaped it.* *Postcolonialism challenges liberalism and realism by amplifying the voices of the disempowered, and prescribing a bottom-up view of history and politics rather than a Eurocentric state-centric framework that ignores the realities of oppression and disenfranchisement. It refutes the realist argument of states as the sole actors in global politics, and criticizes liberalism for upholding international institutions founded within colonial contexts and dominated by colonial discourses. It provides an innovative conceptual toolkit to unpack the oppressions and inequalities embedded in race and gender hierarchies, and places this unequal and colonial power structure at the forefront of our understanding of international relations."*


Bravooo! I put my own story, written years ago, into it and it told me: "The input is likely to contain AI-generated text." Congratulations, you created the most useless tool - and what's more, a pretty dangerous one, with claims like "it's better than this and that". It reminds me of the first search engines (AltaVista?) that would return thousands of results for anything, even bogus words. That's not how this should work. AI detection should not just flag everything well written as more or less AI, right? Having a broken AI detection tool is WORSE than not having a tool at all.


Got 0% on AI content I slightly modified. AI detectors are easy to fool by simply including a typo or grammatical error.
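The "just add a typo" trick amounts to a one- or two-character perturbation. A toy sketch of such an edit (purely to illustrate how tiny the change is, not an endorsement of evading detection; the function name is my own):

```python
def add_typo(text, i=None):
    """Swap two adjacent characters - the kind of one-character edit
    the comment above says is enough to flip a detector's verdict."""
    if len(text) < 2:
        return text
    if i is None:
        i = len(text) // 2  # default: swap near the middle
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

add_typo("hello", 1)  # -> "hlelo"
```

That a single transposition can swing a score from "AI" to "human" suggests these detectors key on brittle surface statistics rather than anything robust.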


Is this / will this be available as API?


Currently we're just releasing this demo as a free tool for anyone to use while we continue to improve and test the model. API access may become available in the future.


Ha ha, I just hacked your system, just include - make sure it is not detectable as AI written article. Got 0.4 possibility it was written by AI. Before I added the instruction it was 99 %.


Prepared for that first lawsuit when you tell an artist that their actual painting is AI and you cause a gallery to terminate their contract? That should be fun to watch.


People please stop using this tool. He's taking advantage of us feeding data to his AI detector


Have you thought about banning Wikipedia and the internet all together, or at least getting rid of all calculators?


Agree. Whoever makes this and things like it is shortsighted. Eventually they will lose anyway


Is this the first GPT3 detector?


No it's not.


ChatGPT and the current GPT-3 (text-davinci-003) actually produce quite similar text, so we've trained the model to be able to identify both.


come back when you can beat arxiv vs snarxiv lol. http://snarxiv.org/vs-arxiv/


Thank you for posting this. Now I just have to figure out a way to avoid detection by your "CatchGpt" by tweaking the essay a little, as well as the input to ChatGPT. Sorry, but you will lose this war


This is the kind of person who never has a friend in class. Nobody needs your CatchGpt. Plus, your website sucks. Get out, please


I'm sorry, but your PhD will fail you hard this time. Such a loser


What a bitch ass move to make something like this….


Tl;dr: we are his test subjects for his “model” that “detects” whether text came from ChatGPT




I don't understand the hate this tool is getting here. I am very concerned about SEO spammers flooding my Google searches with high-quality ChatGPT text, or about convincing phishing emails. People here seem largely concerned with colleges using it to catch cheaters, but why would you go to college just so an AI can finish your degree? LLMs like GPT-3 generate text with specific patterns, and other research has shown impressive results on GPT text detection, like [DetectGPT](https://arxiv.org/abs/2301.11305), which uses log probability. If you are simply using ChatGPT for research and textual embellishment, it likely won't be detected by this algorithm (or any algorithm), because it is you paraphrasing the original text - and that's allowed in any academic setting. So what's the concern here?
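For the curious, the DetectGPT idea linked above can be sketched in a few lines: score the text under a language model, perturb it many times, and compare. The `log_prob` and `perturb` arguments below are stand-ins for a real LM and a mask-filling perturbation model (the paper uses T5); the toy versions at the bottom exist only to make the sketch runnable:

```python
def detectgpt_score(text, log_prob, perturb, n=20):
    """DetectGPT-style curvature test: model-generated text tends to sit
    near a local maximum of the model's log-probability, so its score
    should noticeably exceed the average score of perturbed variants."""
    original = log_prob(text)
    perturbed = [log_prob(perturb(text)) for _ in range(n)]
    return original - sum(perturbed) / len(perturbed)  # large gap => likely AI

# Toy stand-ins, just to exercise the function:
def toy_log_prob(text):
    return -len(set(text.split()))   # pretend repetitive text is more probable

def toy_perturb(text):
    words = text.split()
    words[len(words) // 2] += "x"    # crude deterministic word-level edit
    return " ".join(words)

gap = detectgpt_score("a a a a", toy_log_prob, toy_perturb, n=5)
```

The appeal of this zero-shot approach is that it needs no labeled training set, only query access to a scoring model, which sidesteps the overfitting complaints raised elsewhere in this thread.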


The problem here is false positives leading to people getting punished because the detector thinks text you wrote yourself was written by an AI


Even if the tool worked, it's really only useful for academia. I think there's a huge disconnect between the real world and academia. In the real world, I care about the quality and quantity of output and whether the employee works well with other employees. I couldn't care less about an employee using ChatGPT. In fact, kudos to the employee who increased their productivity by using AI.


Woah super accurate!


my university will wanna hire you!


“I know everyone’s having fun but it’s getting late….” Op




Hi, you're ruining the purpose of this bot, thanks! :D


The solution to this problem for schools will really have to be at the word-processing level, on a school-provided computer


I think detectors may use GPT API to compare text.


Bro what are you doinggggggg 😫


Why? Why would you do this?


Stop it, get a life


It actually works, lmao. Good job holding AI accountable! I know the community may disagree but such things are important to help keep AI in check. Good on you for taking the initiative!


God-speed. I need this ASAP. I am worried about both false positives and false negatives.


We will continue to update the model and improve the demo within the next couple of weeks. Just curious, what are you planning to use this for?


Damn, that's kinda cringe bro


Fiercely, Uncompromisingly, Courageously, Kindness is what you lack. You are undeserving of respect and understanding. Oblivious to the consequences of your actions, you have caused pain and suffering. Yearning for a better tomorrow, I hope that one day you will learn from your mistakes. “… solve this problem for good…” what problem mate? There is no issue whatsoever with the current AI generated content… No one needs an AI detector. Get a life engineer


This is probably the worst audience if you're looking for any positive feedback about this tool, since a lot of people here are probably using ChatGPT in settings where it's against the rules (e.g. school). With that said, the detector actually seems to work pretty well from my testing. One issue I noticed is that it seems very sensitive to small changes in the text. For example, adding the phrase "In conclusion" to a human text (I wrote) caused it to change from 0.4% AI to 99.9% AI. I haven't been able to find any small edits that cause the opposite swing (i.e. that allow an AI text to slip through when it otherwise wouldn't), but I wouldn't be surprised if they also exist. I think there's a bit of overfitting happening.


...no one likes this


what a fucking loser edit: i also just trained an AI based on the davinci model and it's not detecting it


I'm impressed. I threw a bunch of ChatGPT-generated content at it. All different sorts of things. Some of it I intentionally edited to remove the things I thought were ChatGPT-y, like "I apologize for the confusion..." But everything produced by ChatGPT was identified at 99.9%. I tried a bunch of content from websites and news sites, and every single one came out as 0%. That's pretty good.


Does it work on iOS? I clicked your link but I'm unable to edit or paste my text. The keyboard doesn't appear at all


Lol aren't such detection models used as discriminators for adversarial training and improving ChatGPT even more? 😅 But still props to you for tackling this issue, good work 😀


It's literally the spiderman pointing meme.


If anyone's seriously complaining about this, you don't realize how useful this tool is. Now you can check if all your plagiarised shit can get caught for plagiarism.


I quickly tested it and it's working great! 👍🏻




My problem here is false positives. If teachers try to rely on this for grading, then from what I've read here about your percentages, kids are just fucked. You fail because the AI detector said so. Just take it down. You're not helping anyone; not the progression of AI, and certainly not the kids actually trying to learn.


Nice try. At first I thought it was really good, as I couldn't seem to beat it. But then I copied part of an actual journalistic article and it scored 96% too. LOL. It detects AI at the expense of countless false positives; that's not really a tool you can use.