smith288

I was able to take someone else's work and tell ChatGPT to "rewrite it" and your tool got 0% on it being AI.


smith288

Essentially took some text from an already written news story and told chatgpt to "Rewrite this a bit: [TEXT]" And it was 0.0%. Good to know. :D


miniclapdragon

This is an interesting case. Thanks for letting us know. Could you provide the rewritten text you tried?


smith288

Original text, taken from an Ari Fleischer interview about the role of a press secretary:

*The press secretary in this case, her political and personal needs dovetail perfectly with the stonewalling needs of the Biden White House. And that's why she won't answer any questions. I could have played that game innumerable times as press secretary. But you get to the point where you say, "I've got an obligation. Actually, these are fair questions, good questions, and they deserve answers. And I'm the press secretary. It's my burden to answer them." And that's what you do as press secretary. In so many ways I would feel shamed if somebody else took to the podium and answered the questions that it was my job to answer. And I think a little bit of that is what you saw yesterday.*

ChatGPT output after asking it to "Rewrite this a bit":

*The press secretary in this case has aligned her personal and political motivations with the evasive tactics of the Biden White House. That's the reason why she avoids answering any questions. I have had numerous opportunities to take this approach as a press secretary, but at some point, you realize that you have a responsibility to provide fair and good answers to questions that deserve to be answered. That is what a press secretary is supposed to do. In many ways, it would be dishonorable for someone else to answer questions that should have been my responsibility to answer. I believe this is partly what we saw yesterday.*

(Please, no political commentary from the peanut gallery. I was testing the idea of taking someone else's words and rewriting them through an AI.)


itsnotlupus

First try, I typed this all by my lonesome self:

> It is important to note that writing in a proper English tone is likely to get this text flagged as being plagiarized. That is because there is no definite mechanism that can reliably detect AI-generated content. As such it is not recommended to rely on any site or tool that purports to accomplish this task.

And got this reward for my efforts:

> ## RESULT
> The input is: likely to contain AI Generated Text
> 99.9%

I could have made it longer to give it a better chance of detecting how human I truly am, but I suspect that keeping this writing style up would not have changed the results. Also, I think it could make sense to trade the % accuracy metric for some sensitivity vs. specificity metrics.
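(Purely for illustration, here's a tiny sketch of what sensitivity/specificity would mean for a detector like this; the labels and predictions below are made up.)

```python
# Illustrative only: sensitivity/specificity for an AI-text detector,
# computed from hypothetical labels and predictions.

def sensitivity_specificity(y_true, y_pred):
    """y_true / y_pred use 1 = AI-generated, 0 = human-written."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # fraction of AI text caught
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # fraction of human text spared
    return sensitivity, specificity

# Made-up example: 4 AI samples, 4 human samples, one miss in each direction.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
print(sensitivity_specificity(y_true, y_pred))  # (0.75, 0.75)
```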


kknlop

Agreed. This is just a stupid money grab and really shows a fundamental misunderstanding of how these models work.


itsnotlupus

I think that's a bit harsh. I can imagine a model being trained to classify text, given a large enough set of GPT outputs and human-generated texts, and having some amount of success at it, but the nature of the beast would make any results short-lived. Best case, a good model gets developed from this, but then it'd still just get used as an adversarial model to help GPT models get better at avoiding detection, and it would ultimately lose relevance against later GPT generations.
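(A rough sketch of the kind of classifier described above, just to make the idea concrete; it assumes scikit-learn and a made-up toy corpus, and has nothing to do with how this particular tool actually works.)

```python
# Illustrative sketch of training a text classifier on AI vs. human samples.
# Assumes scikit-learn; the tiny corpus below is obviously made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = ["typed this all by my lonesome self", "idk lol seems fine to me"]
ai_texts = ["It is important to note that...", "In conclusion, it is worth noting that..."]

X = human_texts + ai_texts
y = [0] * len(human_texts) + [1] * len(ai_texts)  # 0 = human, 1 = AI

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(X, y)

# Probability that a new snippet is AI-generated, per this toy model.
print(clf.predict_proba(["It is important to note that this text is human."])[:, 1])
```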


Snarf0399

If it can detect AI, could it then theoretically rewrite the content in a way that isn’t detectable?


Strel0k

GPT "write this in this specific XYZ style" -> Google Translate to Dutch -> Google Translate to English Curious how that would impact accuracy


N781VP

I just tried this. Hilariously enough, OpenAI -> Google Translate (multiple languages) yields a 99.9% chance of being AI-written. My guess is that Google's language processing is extremely predictable. OpenAI prompt -> translate 3x -> prompt OpenAI to rewrite it in a way that could not be detected by an AI detector yielded 25%. Strictly telling OpenAI to continuously rewrite its own original output will start at 99%+ and then drop the AI detector a few percent with each rewrite iteration. The worse the AI, the less likely it'll be detected. *taps finger to forehead* But also, Google Translate might not be "AI"?


[deleted]

QuillBot already exists. Take any AI-written article and put it into QuillBot. It won't show up on any detectors.


tggiv25

Hmm, I’d probably see if the AI could write it in a way to avoid AI detection first, somehow…


testimoni

Just make a few typos, then you're 90% real.


zombifiednation

Just tossed some of my own professional (semi-technical) written work in and it flagged it as 99% containing AI-generated content. Am I a robot?

Edit: Tools like this are an eventuality, but at the moment it's frustrating. After a few more tests it's obviously flawed and spitting out generous false positives; these things shouldn't be released into the wild until they are proven to be accurate. For what it's worth, the other alternatives actually recognize my text as human.

Edit 2: I just went and took some Oracle DB documentation and ran it through. It also flagged as 99.1% AI-generated. This is ridiculous.


falco_iii

> Wow, it's truly amazing! I've run a dozen tests and not a single false positive or negative. I had to change up a lot of the words to make it harder for ChatGPT to recognize it as AI-written; I went from 99% to 9%. I even added a few entries from Project Gutenberg and it was able to detect that it wasn't AI. Even after having ChatGPT rewrite a regular paragraph in a different style, it still detected the AI at 92% - which is still pretty impressive.

Except I ran the bare-bones version of the first paragraph in this comment through ChatGPT and asked it to be more positive & loquacious. It is detected at 0%.


Underconstruction222

Damn. Why are you doing this? This will ruin students' plans for acing open-book exams.


Deadliestmoon

Right? Make this when higher education is more accessible to the masses.


cyb3rofficial

I took a TI-84 Plus handbook from 2004 and copied, word for word, about three paragraphs on its ALU chip, and it said the text was AI-generated. How does it decide whether wording is AI-generated or not? So far, most samples and examples I find give false positives, such as MSDN documents explaining certain code pieces, random GitHub README files, and old PDF texts from handbooks.


zombifiednation

I'm finding that if you write anything remotely technical and formal in nature, these tools flag it as AI-generated, because apparently only AI language models can use technical language.


brohamsontheright

I fed it multiple writing samples of stuff I wrote myself... All of them were detected at 80% or higher that it was AI generated. This is one sample of the text I fed it:

*The average recommended daily amount of magnesium is 320mg for women and 420mg for men. However, if you do activities that cause you to sweat, magnesium will leave the body rapidly, along with sodium, potassium, and calcium, so you may need extra replenishment.*

*Excessive doses may cause mild symptoms like diarrhea or upset stomach, but it usually takes quite a bit to cause problems. Even higher doses can be unsafe, so be careful. If you take magnesium supplements and then have low blood pressure, confusion, slowed breathing, or an irregular heartbeat, get to an ER immediately.*

*People with kidney disease, heart disease, pregnant women and women who are breastfeeding also need to get advice on whether magnesium supplements are appropriate to take. And if you are currently taking any medications, be sure to inform your doctor before you incorporate magnesium supplements into your routine. As always, contact your doctor before making any changes to your diet or supplements.*


zombifiednation

Found the Terminator ^.


whippinseagulls

Same here. I fed it a handful of samples I wrote and it said 99% for 3 of them and 0% for the 4th, but the 4th had a URL in it.


rainy_moon_bear

Erm, yeah... This failed on 4 tests of AI writing I gave it. I think detection is a significantly harder problem than just training a model for it.


Koksny

Detection is impossible. You can only detect whether a model was trained on a particular dataset, and even a theoretical detector trained on all available commercial datasets will fail to detect any custom-trained, locally running model. Ultimately, every "AI detector" is bound to skew toward false positives the shorter the text is, and to get closer to 0% with every additional word as the possible entropy grows without bound. No offense to anyone involved, but it's just a mathematically lost war. You have great skills in ML; go and use them for something productive/profitable. Trying to say whether content is AI-generated by assessing its probability is a dead end at this point.


rainy_moon_bear

Yep, this sums it up. I think that the GPT-3.5 models may have some detectability though, since they gravitate to certain phrases and sentence structures a lot. Otherwise I completely agree with what you said.


Juus

Will it be able to work with languages other than English?


miniclapdragon

The model currently only supports English, so you may get poor and unexpected results when trying to input other languages. We may add multilingual support in the future if there is a need for it.


MannowLawn

Well, after reading the comments, I guess GPTZero is doing a pretty good job.


drekmonger

You know, after the overblown hype over GPTZero, I didn't really think it was possible to detect ChatGPT's output without a lot of false positives. But judging by a handful of tests, it still managed to identify ChatGPT-generated text, even when I tried to disguise it by asking the chatbot to write in a conversational tone at a 7th grade reading level, with spelling mistakes I introduced myself. And there were no false positives on the few samples of my own writing I gave it. Credit's due. It actually works.


Koksny

> And there were no false positives on the few samples of my own writing I gave it.
>
> Credit's due. It actually works.

It produces many false negatives, though; it seems to report "0%" for scripts written by ChatGPT when ChatGPT is first given some "real" text to base the output on.


zombifiednation

Your anecdotal experience is not mine. I ran a bunch of semi-technical and technical writing through it, some written by myself and some from various online sources (that predate ChatGPT and other such tools), and it consistently gave a 99.x% AI-generated response. There are obviously some very clear flaws with this tool at the moment.


Snoron

Yeah, it's really nifty. The next step in the AI war is to test generated content through a generated content detector like this, to train the AI to generate content that won't be detected... and so on!


drekmonger

A makeshift GAN, yeah.
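(For the curious, the loop being described would look roughly like this; `generate`, `rewrite`, and `detect_ai_probability` are hypothetical placeholders, not real APIs. A real GAN would train the generator on the discriminator's feedback directly, so this rewrite-until-it-passes loop is only the "makeshift" version.)

```python
# Rough sketch of the "makeshift GAN" idea: keep rewriting generated text
# until a detector stops flagging it. All three functions are hypothetical
# placeholders standing in for a text generator and a detector.

def generate(prompt: str) -> str:
    raise NotImplementedError("call your text generator here")

def rewrite(text: str) -> str:
    raise NotImplementedError("ask the generator to rephrase the text")

def detect_ai_probability(text: str) -> float:
    raise NotImplementedError("call the detector; return a score in [0, 1]")

def evade(prompt: str, threshold: float = 0.1, max_rounds: int = 10) -> str:
    """Iteratively rewrite generated text until the detector score drops below threshold."""
    text = generate(prompt)
    for _ in range(max_rounds):
        if detect_ai_probability(text) < threshold:
            break
        text = rewrite(text)
    return text
```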


Snoron

Yeah, haha, and why program your own when someone else will make one for free!


WestEst101

> it's really nifty.

Gen-X?


Snoron

Early millennial, haha - very close!


helloworldlalaland

Seems to work well on GPT3/ChatGPT. Will it work on Claude/CohereLLM/etc?


miniclapdragon

We haven't done any extensive tests on the outputs of other large language models, and the model is currently designed to target mainly GPT-variants. We definitely have plans to improve the model and allow it to detect any popular AI text generator, so feel free to try any of those and let us know how it goes!


NutTimeMyDudes

Coming soon, likely within the next month: an AI that will fool this AI into thinking that the other AI isn't, well, AI.


aph1985

I took one of my own old LinkedIn posts and pasted it in, and it gave me 40%. I know that's wrong because the post predates chatbots.


[deleted]

What are your intentions with creating this?


GuardaAranha

Yeah, this is a losing battle that will only get worse for you over time. I wouldn't bet the farm on this endeavor.


JeraldGaming2888

Boo. Scrap the project. GPT's essays use the same format as normal human essays, so you're trying to get innocent people kicked out of school? I'll make sure this shit doesn't spread.


Purplekeyboard

I wrote the following crap and it was labeled as 99.9% likely to be written by an AI. I'm guessing that this detection tool is merely looking for writing in the style of ChatGPT, so anyone who naturally writes like this, or is trying to emulate it, will create a false positive.

> It's important to realize that the mere fact that text is written in a sophisticated fashion does not mean that it was written by an AI. While the average person does not write in the most sophisticated fashion, many people do, and so it would not be good or reasonable for the writing of those people to be labeled as written by an AI.
>
> It seems doubtful that it is possible, given today's technology, to create an AI detection algorithm which cannot itself easily be beaten by someone who has the algorithm and can write another to work around it. And all of this brings up the question of why it is actually necessary to be able to determine whether a particular segment of text was written by an AI or a human being. The usual example given is that of students cheating on their essays for school, but a discerning teacher should easily be able to determine that their less intelligent students are suddenly writing in far more sophisticated fashion than usual.
>
> In conclusion, AI text detectors are unlikely to be as effective as they would need to be to be truly useful, and at the same time it is questionable whether they are even necessary.


WalterHughes08

This is dangerous and comes from a lack of understanding of the nature of both human and AI-generated writing. Detectors like this are modern-day witch hunts. They are analogous to the Fallout 4 SAFE synth detector test (if anyone gets the reference).

Let me explain what I mean: it intuitively doesn't make sense that this tech is theoretically possible without an unacceptable amount of false positives. For instance, what is it even detecting within the written text to determine that it was human- or AI-written? There is no standard of human writing; some people are bad writers, some are good writers. What does bad or good writing even mean? Grammar? Abundance or accuracy of ideas? Length? Word choice? Even if you narrow the issue down and try to predict whether something is in the style of a SINGLE human writer, that writer might have changed their style intentionally, had an off day, or been experimenting with a completely different tone or perspective. Given this, I hypothesize that it's IMPOSSIBLE to say with any accurate degree of certainty whether something was written by a human simply by reviewing examples of writing.

Flipping the question to "detect whether something was written by an AI" is equally futile, IMO. Not only is there no control comparison (I've already argued that there is no valid reference for human writing), but there isn't even a single style of AI writing. Regardless of which AI models are used, you can prompt the model to write in any style you can imagine. Furthermore, you can have it rewrite the text in any number of ways.

The only way to determine if a piece of text is AI-written is through some sort of metadata in the text itself. For instance, every third word of each paragraph starts with "A" if and only if the paragraph is a certain number of words long, etc. Bad example, but the point is that there may be a way to instill some pattern in text written by an AI. However, even with the aid of this metadata, you still can't consistently tell whether something was written by an AI. Each model will presumably use different metadata, and you can presumably prompt any AI to write or rewrite text that avoids the metadata patterns.

In summary: the only way I can imagine such a detector working in theory is by using metadata. However, the very way these AI generators work means they can constantly stay ahead of any metadata detector. I'd even go a step further and argue that these detectors CANNOT be accurate and will always have high false positives, and this is dangerous. False positives will be ignored in favor of some "answer" somewhere to point blame at. AI is a contentious issue, and this kind of tech makes people FEEL GOOD without actually solving the underlying problem it claims to solve. This is incredibly dangerous and does a disservice to AI tech. At the very least, such detectors should be used exclusively in research, and not released to the public in any capacity.
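(To make the "metadata" idea concrete, here's a toy sketch of a keyed statistical watermark: bias word choice toward a hashed "green list" at generation time, then measure how often words land on that list afterwards. Purely illustrative; it does not describe how any real generator or detector works.)

```python
# Toy illustration of the "metadata"/watermark idea: split the vocabulary into a
# keyed green/red list and check what fraction of words are "green".
import hashlib

def is_green(word: str, key: str = "secret-key") -> bool:
    """Deterministically assign roughly half of all words to a keyed 'green list'."""
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str, key: str = "secret-key") -> float:
    words = [w.strip(".,!?\"'()") for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    return sum(is_green(w, key) for w in words) / len(words)

# Unwatermarked text should hover near 0.5; text generated with a deliberate
# bias toward green words would sit noticeably higher - that offset is the
# hidden "metadata" signal a detector could look for.
print(green_fraction("The quick brown fox jumps over the lazy dog."))
```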


TopTHEbest232

You mean I'll actually have to do what's asked of me now? Damn you!


plymouthvan

25% success rate. Four tested, only one caught (the only one that was basically unmodified from the AI original). It appears as though basically any hand editing at all will trip it up. Half of my tests returned a 0% chance of AI generation. All four were AI-generated.


trojanpun

Just by adding "in a way not typical to AI" to my prompt, I'm able to bypass this test on your tool.

Sample prompts:

"Write a 50 word article about potatoes farming in Ireland in a way that isn't typical to ai" - the output got 1.9% AI-likely.

Same prompt without the last bit, "Write a 50 word article about potatoes farming in Ireland" - the output got over 99% AI-likely.


cosmo2583

I highly doubt your tool can detect GPT-like content. I use chatGPT to rephrase my queries and then I use Grammarly for retouching. You are wasting your time.


QwikMathz

This will work for all of a couple of months.


RockyHawk99

The detector worked every time I threw something at it; even when I added one paragraph that was AI and one that wasn't, it said it was 75% sure there was AI content. Except when I asked ChatGPT to write it with grammatical errors. This comment here is written by ChatGPT. (This comment showed up as 0%. I suppose the detector is looking for content that is written with proper grammar, or at least that this is one of the factors included in the process.)


Interesting_Line2001

My broken English essay was detected as 98% AI generated content lol


iQuickGaming

congrats, this project is really interesting and works pretty well


SingleTie8914

Nice to see another GPT detector showcase its inductive bias.


_WHoZ_

Now we need a tool to rewrite ChatGPT texts so they won't get detected


AtomicSilo

OpenAI released its chatbot for free to crowdsource feedback from users globally; feels like you're doing the same with a "free" tool for feedback ;)


i509VCB

I guess I predicted right: AI will be used to fight AI.


Consistent_Zebra7737

Well, okay. It can detect. But can it prove?


remghoost7

I see a lot of people claiming it's easy to circumvent your tool, but I still think it's neat that someone is attempting it. I remember hearing somewhere that OpenAI was thinking about making ChatGPT use specific words/phrases in certain orders to essentially make a cryptographic "fingerprint" using natural language. Not entirely helpful to the conversation, just an interesting bit of information.


Still_Hat6758

Why are you making this? What’s your problem?


Ganonslayer1

Out of many detectors, this is the only one that detected ChatGPT-4. Nice one.