AI Science

'We Asked GPT-3 To Write an Academic Paper About Itself -- Then We Tried To Get It Published' (scientificamerican.com) 85

An anonymous reader quotes a report from Scientific American, written by Almira Osmanovic Thunstrom: On a rainy afternoon earlier this year, I logged in to my OpenAI account and typed a simple instruction for the company's artificial intelligence algorithm, GPT-3: Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text. As it started to generate text, I stood in awe. Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context. It looked like any other introduction to a fairly good scientific publication. Given the very vague instruction I provided, I didn't have any high expectations: I'm a scientist who studies ways to use artificial intelligence to treat mental health concerns, and this wasn't my first experimentation with AI or GPT-3, a deep-learning algorithm that analyzes a vast stream of information to create text on command. Yet there I was, staring at the screen in amazement. The algorithm was writing an academic paper about itself.

My attempts to complete that paper and submit it to a peer-reviewed journal have opened up a series of ethical and legal questions about publishing, as well as philosophical arguments about nonhuman authorship. Academic publishing may have to accommodate a future of AI-driven manuscripts, and the value of a human researcher's publication records may change if something nonsentient can take credit for some of their work.

Some stories about GPT-3 allow the algorithm to produce multiple responses and then publish only the best, most humanlike excerpts. We decided to give the program prompts -- nudging it to create sections for an introduction, methods, results and discussion, as you would for a scientific paper -- but interfere as little as possible. We were only to use the first (and at most the third) iteration from GPT-3, and we would refrain from editing or cherry-picking the best parts. Then we would see how well it does. [...] In response to my prompts, GPT-3 produced a paper in just two hours.
"Currently, GPT-3's paper has been assigned an editor at the academic journal to which we submitted it, and it has now been published at the international French-owned pre-print server HAL," adds Thunstrom. "We are eagerly awaiting what the paper's publication, if it occurs, will mean for academia."

"Perhaps it will lead to nothing. First authorship is still one of the most coveted items in academia, and that is unlikely to perish because of a nonhuman first author. It all comes down to how we will value AI in the future: as a partner or as a tool."

Comments Filter:
  • by Anonymous Coward

    The people who created GPT-3 get the nod

    • Just like photographers give credit to the camera?

      It seems to me that the AI is a tool and the person that guides the tool should get the credit.

      • by Anonymous Coward

        The photographer composes the shot. That's not happening with "AI".

        I give Kenmore credit for washing my dishes, not the machine

        • by AvitarX ( 172628 )

          Interesting.

          I usually describe it as me (or someone else) having done the dishes.

          I've never heard someone say "get Kenmore to do the dishes".

          The use of the tool (be it dishwasher or sponge) doesn't mean I think of the sponge maker, detergent maker, or machine maker as the one that has done the dishes.

        • I give neither the paints nor my brushes credit for my paintings.
      • by Opportunist ( 166417 ) on Friday July 01, 2022 @02:11AM (#62664914)

        The next generation of AI based cameras won't wait for you to touch their trigger field. They will find a worthwhile motif, calculate everything they need, then they'll touch you.

        • by cstacy ( 534252 )

          The next generation of AI based cameras won't wait for you to touch their trigger field. They will find a worthwhile motif, calculate everything they need, then they'll touch you.

          Well, in Soviet Russia, anyway...

        • The next generation of AI based cameras won't wait for you to touch their trigger field.

          There are already cameras that take pictures continuously and put them in a ring buffer, so that the content is refreshed once memory is exhausted. It allows you to "take a picture" after something has already happened. AI can scan the images in the buffer and pick out "cool stuff" based on its own data model.
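          A ring buffer like that is only a few lines of code. A minimal sketch, with the capacity and the frame source left as assumptions:

            from collections import deque

            BUFFER_FRAMES = 300                   # assumed capacity, e.g. ~10 s of video
            buffer = deque(maxlen=BUFFER_FRAMES)  # oldest frames fall off automatically

            def on_new_frame(frame):
                buffer.append(frame)              # refreshes memory once it is exhausted

            def capture_after_the_fact():
                # "take a picture" of something that has already happened
                return list(buffer)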

      • by fazig ( 2909523 )
        Perhaps "giving credit" isn't the right word here, but photographers do often provide information about the tools that were used for their work.

        And by photographers I don't mean your average person on Instagram, taking pictures with their phone and having it apply a concatenation of filters that makes it look like surreal art.
      • In general, humanity exercises its egological (not a mistake in spelling) drive to claim credit for its accomplishments, so the tool used is given no credit. Yet we live in times when a mass murderer must be punished when actually it is the gun which must be punished; apparently, according to the Supreme Court, guns are entirely innocent, and stupid enough to obey any nut with a trigger finger. That could be remedied by making a gun that destroys the vicious gunner rather than the random victim.
        • by HiThere ( 15173 )

          You're not entirely wrong, but you sure aren't right.

          What you've got here is a complex feedback relationship with both positive and negative feedback loops. The tool will amplify certain kinds of action and de-amplify others. Today I was reading a story about an event where some guy with a knife killed two people and seriously injured a third. If he'd been using an automatic that would have been at least three people dead, and probably a few more. The tool did not initiate the action, but it amplified it.

            • My point was that human motivations are not stand-alone events. Things like previous histories and viewpoints interact deeply with both instruments and opportunities, and human initiative may depend on many factors other than what may be thought of as basic. Setting aside whether or not human intent is attributed to something like AI, it merely offers capabilities that can be viewed as intent.
            • by HiThere ( 15173 )

              It's true that it currently "only offers capabilities", and doesn't have the capability to have intent. Currently. (As far as I know.)

              Don't be committed to this being true next year, or the year after that. Certainly not a decade from now.

      • Agreed. I don't give credit to the hammer for building my house. I give credit to those that wielded the tool.
    • The people who created GPT-3 get the nod

      I'd say that the person who instructed the AI to produce the manuscript & then judged its quality would be the "computer assisted" author, i.e. the researcher. In this case, GPT-3 is a tool that has automated a very conventionalised & predictable genre of writing. This kind of writing has already been studied to death in academia (linguistics) & it's a collection of well-worn formulae to the extent that some people write parodies of it to prank journals & editors.

      • BTW, you can ask anyone who's done any fair amount of discourse analysis, genre analysis & register analysis just how conventionalised & predictable the vast majority of writing & speech is (some estimates are around 70%). Reassembling those words, phrases & patterns in human-like ways is not all that challenging.

        Also, if you've read enough peer-reviewed published papers, you soon get to realise that there's a significant number of academics who can't write well enough to get their ideas across.
  • AI comment (Score:3, Funny)

    by meandmatt ( 2741421 ) on Thursday June 30, 2022 @10:54PM (#62664678)
    I was thinking... we then need an AI to post comments on Slashdot about itself, but that would be biased... wouldn't it?
    • by HiThere ( 15173 )

      Well, it definitely *would* be biased, in the statistical sense. That can't be avoided. But just how it would be biased is dependent on how it was trained. Current "AI"s don't really have an ego to defend, so that wouldn't bias it.

  • Finally (Score:5, Funny)

    by Jeremi ( 14640 ) on Thursday June 30, 2022 @10:57PM (#62664688) Homepage

    After years of people claiming that their code is "self-documenting", we have the real thing!

  • by bistromath007 ( 1253428 ) on Thursday June 30, 2022 @11:10PM (#62664708)

    You wouldn't send a human's first draft.

  • by bubblyceiling ( 7940768 ) on Thursday June 30, 2022 @11:19PM (#62664730)
    From the paper, the human authors conclude

    ———
    As we did very little to train the model, the outcome was not, contrary to GPT-3's own assessment, well done. The article lacked depth, references and adequate self-analysis. It did however demonstrate that with manipulation, training and specific prompts, one could write an adequate academic paper using only the predictive nature of GPT-3.
    ——

    Well it sorta is a network that picks the most appropriate word and makes up whole paragraphs out of it. Because the "input" data is "classified" or sorted into categories, the text it generates seems to make sense.

    Imo it is just fancy auto correct. Like how people make up entire sentences from just the options presented by the iOS autocorrect.
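    For what it's worth, the trivial version of that idea, picking the next word from counts of what followed it before, fits in a dozen lines. A toy word-level Markov chain (the corpus and prompt are made up):

      import random
      from collections import defaultdict

      corpus = "the cat sat on the mat and the dog sat on the rug".split()

      # count which word follows which, exactly like a phone's next-word bar
      following = defaultdict(list)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev].append(nxt)

      def generate(word, length=8):
          out = [word]
          for _ in range(length):
              if word not in following:
                  break
              word = random.choice(following[word])  # frequency-weighted pick
              out.append(word)
          return " ".join(out)

      print(generate("the"))

    GPT-3 replaces the lookup table with a 175-billion-parameter network conditioned on far more context, which is where the argument over "fancy" begins.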
    • Show me an auto-correct that writes passable poetry, does math, composes music, invents baby names, can code in many languages and does question answering on reference text. More is different, quantity has a quality of its own. Don't forget that the real smarts of GPT-3 comes from the training set which was manually written by people, the larger the training set, the more knowledge and patterns it can learn.
      • Show me an auto-correct that writes passable poetry, does math, composes music, invents baby names, can code in many languages and does question answering on reference text.

        Speaking of smarts, perhaps you take your challenge back to an unwitting high school teacher and validate your theory, because "passable" was redefined when No Child was Left Behind.

        And "invents baby names"? Have you seen some of these names? Are we planning on enhancing AI with a THC inhaler or something?

    • Imo it is just fancy auto correct.

      People who don't understand something tend to oversimplify it. It's like saying the moon landing was just a fancy version of a domestic flight.

      No it's far more than "fancy auto correct" which can do nothing more than the most trivial pattern recognition.

      • by gweihir ( 88907 )

        The thing that can only do "trivial pattern recognition" is non-fancy auto correct. So essentially, this _is_ fancy auto correct. You know, auto correct could do with an upgrade and then this thing would have a use besides meaningless stunts.

        No, sorry, I am not oversimplifying and I do understand what I am talking about, no matter how much you wish to pose.

    • So it is a sort of Beowulf cluster of monkeys with typewriters. Apart from being digital, the idea is not new.
      • by HiThere ( 15173 )

        I rather suspect that's what Intelligence is. I.e. the intelligence is largely in the filters, by which it discards things that it feels are "unworthy"* of it. (And people, certainly, have a very large degree of variation in what they find "unworthy".)

        Look at a mathematician trying to prove a new theorem. They'll try approach after approach, discarding each one as it proves "unworthy".

        *"unworthy" is in quotes, because it doesn't have the normal English meaning. But it's closer than any other word I can t

    • The generated text doesn't just "seem to make sense". It does make sense. Like this paragraph from the Methods section.

      The Presence Penalty technique is used to prevent GPT-3 from generating words that are not present in the training data. This ensures that the generated text is more realistic and believable. The Temperature setting controls how stochasticity is introduced into the generation of output by GPT-3. A higher temperature will result in more randomness, while a lower temperature will result in less randomness. The best temperature for a particular task needs to be experimentally determined. The Maximum length setting controls how long GPT-3 will generate text before stopping. This ensures that the generated text does not become too repetitive or boring.

      That's a totally accurate description of some of its settings, what they do, and why they're important.

      Five years ago, this would have been an impossible fantasy. Given minimal prompts, it generates text that is clear, coherent, on topic, stylistically appropriate, and most of the time even factually correct. The progress in this field is miraculous.
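      The temperature knob the paper describes is just a rescaling of the model's output scores before sampling. A toy illustration with made-up logits:

        import math, random

        def sample_with_temperature(logits, temperature):
            # divide by T, then softmax: low T approaches argmax,
            # high T flattens toward uniform (more randomness)
            scaled = [l / temperature for l in logits]
            m = max(scaled)
            weights = [math.exp(l - m) for l in scaled]
            return random.choices(range(len(logits)), weights=weights)[0]

        logits = [2.0, 1.0, 0.1]   # made-up scores for three candidate tokens
        print(sample_with_temperature(logits, 0.2))   # nearly always token 0
        print(sample_with_temperature(logits, 2.0))   # noticeably more random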

  • That defeats the whole idea of academics.
    And half of my search results in Google are automagically generated articles nowadays so what's new?

    • You're being myopic. Language models are very good for getting artists and scientists unstuck by generating a bunch of ideas to choose from. If it's possible to automatically test the results, like when the domain is math or code, then they can solve hard problems. GPT-N being able to invent GPT-N + 1 is the core thesis of Kurzweil's vision of AGI. We might get there eventually.
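      The "automatically test the results" part is the load-bearing step. A minimal sketch of such a generate-and-verify loop; the candidate list here stands in for whatever a language model might propose:

        def passes_tests(candidate_src):
            # keep a candidate only if it survives known input/output checks
            scope = {}
            try:
                exec(candidate_src, scope)   # defines the candidate's add()
                return scope["add"](2, 3) == 5 and scope["add"](-1, 1) == 0
            except Exception:
                return False

        # stand-ins for what a language model might propose (hypothetical)
        candidates = [
            "def add(a, b): return a - b",
            "def add(a, b): return a + b",
        ]
        print([c for c in candidates if passes_tests(c)])  # only the correct one survives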
      • by HiThere ( 15173 )

        That's an essentially incomplete approach. GPT-n doesn't really have any motivations or goals, and the only way it acts is by composing text. I suspect it's a vital component of an AGI, but it is totally missing necessary components.

      • by narcc ( 412956 )

        Kurzweil is delusional. You would do well to ignore everything he has to say on any subject.

    • Not really. While good documentation is important, the paper is like the technical writeup for a piece of tech that was made. In the case of academia, that tech is the experiment, the data, and the findings derived therefrom.

      The AI is not feeding rats. It is not ensuring standards and practices are followed. It is not formulating or carrying out research.

      It's just writing the technical documentation.

      This in no way "Defeats the whole idea of academics." Unless you are one of *THOSE* people that think the impact score is all that matters.

    • by Sique ( 173459 ) on Friday July 01, 2022 @02:54AM (#62664954) Homepage
      If an AI digs down into data, formulates an hypothesis, and the hypothesis proves to be predictive in other circumstances, then that's fine with me.

      If an AI digs down into studies, compiles them into a meta study, and then generates output on the methodology, that's fine with me too.

      If an AI digs down into articles upon a topic, and condenses them into an overview of the field, that's fine with me.

      I don't see how this defeats the idea of Academics.

      • by gweihir ( 88907 )

        Well, AI cannot do any of the things you listed. All it can do is fake doing these things with unpredictable results.

        • We have all suffered documentation written by Technical Writers. They are people that document things that they do not really understand. So in desperation you read between the lines to try to imagine what some engineer told them. Their documents are consistently formatted, but largely meaningless.

          • We have all suffered documentation written by Technical Writers. [...] Their documents are consistently formatted, but largely meaningless.

            So IOW GPT-3 is artificial unintelligence?

            • by gweihir ( 88907 )

              We have all suffered documentation written by Technical Writers. [...] Their documents are consistently formatted, but largely meaningless.

              So IOW GPT-3 is artificial unintelligence?

              The correct technical term is "artificial stupidity".

          • by gweihir ( 88907 )

            Well, with incompetent tech writers (the norm) you at least have a chance for an occasional laugh when they thought they understood something...

        • by Sique ( 173459 )
          Even if it could do those things, it still would not mean that it poses a problem for Academia.

          I also don't have a problem with some guy driving around in a car covered in pink fur. I just don't happen to know anyone who does.

      • formulates an hypothesis

        Name one.

        • by Sique ( 173459 )
          I don't need to. I just said that I don't see any problems for Academia if it happens.

          That does not mean that I know of any AI right now which actually does so. I also don't see a problem if someone starts to sell bacon flavored ice in my neighborhood. That does not mean that I know of any place here around which does.

  • So, in the future academic papers will be written by an AI which is trained on academic papers. So future AIs will write papers based on papers written by other AIs. Nothing can go wrong, of course. My guess would be that we will end up with an endless stream of very low-quality "research" but nothing new will actually be produced. It reminds me of the state of science during the decline of the galactic empire in Asimov's Foundation.
    • The papers would have to implement experiments and demonstrate an improvement over the state of the art.
      • by dargaud ( 518470 )
        Exactly. Is there anything interesting in this paper? Anything new? Anything insightful?
        I'm pretty sure an AI can now summarize something properly, and sometimes put together things in unrelated fields. But new stuff?
        • by narcc ( 412956 )

          Given how GPT-3 works, I'd be amazed if it was anything other than well-formed gibberish.

    • They already mark English essays better than human markers (which says a lot about human markers).

      Finally, academic papers will only be read by AIs and the cycle will be complete.

    • by radja ( 58949 )

      What do you mean in the future? It's already happening, and publishers are taking measures, like training an AI model to detect AI written papers.
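      At its simplest, such a detector is an ordinary text classifier trained on labeled human vs. machine text. A minimal sketch with scikit-learn; the two example strings are placeholders for a real labeled corpus:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        texts = ["...human-written paper...", "...GPT-3 output..."]  # placeholder corpus
        labels = [0, 1]                                              # 0 = human, 1 = AI

        detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                                 LogisticRegression())
        detector.fit(texts, labels)
        print(detector.predict(["a new manuscript to screen"]))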

  • When I first wrote research papers in grad school, the work at times felt very tedious and mechanical. It felt like academic papers were so structured that at times I wasn't doing much more than aggregating knowledge that existed in the library into a dense summary with fancy words.
    Alas, no surprise that the collation of knowledge and the structuring of a scientific summary is possible using algorithms.

    But we have to be careful about calling every sort of automation 'Artificial Intelligence' and snark
    • by narcc ( 412956 ) on Friday July 01, 2022 @02:47AM (#62664950) Journal

      When I first wrote research papers in grad school, the work at times felt very tedious and mechanical.

      That's because it was. I had it down to a simple formula. If anyone on this site is somehow young enough to make use of it, here it is:

      - If you have to pick your own topic, start with something vague in the form of "x in y". Don't write a title yet.

      - Skim anything remotely relevant and pull interesting quotes or points of fact. Snag what you can from the abstract and the end of the paper. Most of the stuff in the middle is unusable and a waste of time. Hit up the references for more fodder for the paper mill. Make your citations as you go and put an APA-style in-text citation at the end of each quote (name, year). This will be very helpful later.

      - Once you have enough quotes (I recommend ~10 per page to account for waste), sort similar or related quotes together. Under no circumstances should you sort quotes as you find them. If you do, you'll start making categories too early and might even go looking for quotes to fit them, or reject usable quotes that don't yet fit into a category.

      - Sorting should have naturally grouped your quotes. Order the groups in some sensible way. You should have a good idea of what the structure of your paper will look like at this point. Pull a few left-over quotes out for use in the intro and conclusion. Your paper is almost done now, so feel free to title your paper.

      - Write paragraphs around your quotes. Paraphrase heavily, being careful to preserve your in-text citations. Your paper is now technically done.

      - If you can wait a day or two, do so, then give it a once-over to fix any mistakes and clean up the language. (There are no good writers, only good re-writers.) Check to make sure that all of your citations are actually used in the paper, remove any that were not.

      This seems like cheating, but it's actually exactly how you should write a student paper. Your work flows directly from your reading and your conclusions are a synthesis of what you've read. Your opinion, which has no place in a student research paper, stays completely out of it. You'll probably also end up with a lot more citations than your classmates, making it look like you put in a ton of effort.

      I often show this to kids just starting college, hoping to save some poor TA from papers written with the typical undergrad method of "write whatever I want, then scour the internet looking for sources to back it up". It's ultimately less work anyway, so it's a win-win.
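      The formula really is mechanical enough to model in code, which is rather the point of this thread. A toy sketch of the quote pipeline; the grouping step is the one judgment call a human still supplies:

        from collections import defaultdict

        # each quote goes in as (text, author, year); citations are made as you go
        quotes = [
            ("...an interesting point...", "Smith", 2019),
            ("...a related finding...", "Jones", 2020),
        ]

        def topic_of(quote):            # the human judgment call: where a quote belongs
            return "placeholder-topic"

        groups = defaultdict(list)
        for q in quotes:
            groups[topic_of(q)].append(q)

        for topic, qs in groups.items():   # each group becomes a paragraph
            body = " ".join(f"{text} ({author}, {year})" for text, author, year in qs)
            print(topic, "->", body)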

    • by gweihir ( 88907 )

      The algorithm cannot understand its own fallibility and will not land in net new knowledge or interpretation unless a corpus of experimentation and new knowledge is made available to its data set.

      At this time, the algorithm can actually understand absolutely nothing and that is not likely to change anytime soon. All it can do is literal word-association. It can do that very well, and with a huge database of words it can even convince lesser intelligences that it has intelligence itself, but it does not. There is nothing in there besides a completely mindless automaton.

      • by HiThere ( 15173 )

        Can you define "understand"?

        I think I actually agree with you. I think of "understand" as "have formed a model of causal relationships from which a reasonable verbal model can be created". It's one of the reasons I think the ability to act in the world is so important to any actual AI.

        Unfortunately, "understand" is a word without any agreed upon definition, so I'm not sure that I agree with you.

        • by gweihir ( 88907 )

          Can you define "understand"?

          Simple: The thing you cannot do or currently pretend you cannot do. Seriously. Are you stupid or just dishonest?

        • Unfortunately, "understand" is a word without any agreed upon definition, so I'm not sure that I agree with you.

          understand

          1) perceive the intended meaning of (words, a language, or a speaker)
          2) perceive the significance, explanation, or cause of (something)
          3) interpret or view (something) in a particular way

          Hope that helped.

          • by HiThere ( 15173 )

            Dictionaries aren't really that useful in clarifying the precise way in which words are used. This is really inevitable, because usage patterns drift with time and place. Definition 2 from your list is vague, but close enough to the general sense of what I meant to be in the ballpark. Definition 1 was a definition that I explicitly rejected from being useful in this context. Definition 3 seems to mean (approximately) "agree", which while a valid use in some contexts would be totally out of place in this one.

      • by Draeven ( 166561 )

        Can you prove to me that the post you wrote wasn't generated from a word-association model and that you understood the article?

    • It felt like academic papers were so structured that at times I wasn't doing much more than aggregating knowledge that existed in the library into a dense summary

      Then you were doing it wrong. Academic papers, at least in science, need to contain a new result either from experiments or theoretical proof. If there is nothing new then what's the point of writing it?

      The only exception I can think of to this are review papers but these are written by a leading expert in the field who has the required insight and breadth of knowledge to summarize a topic, not a grad student new to the field.

  • by fleeped ( 1945926 ) on Friday July 01, 2022 @02:16AM (#62664916)
    So, what was this essay about? Was there any novel insight that pushed the research field forward? Was it an analysis of existing research? Or was it just like one of those shitty papers that use vague language and shitty references and merely sound academic? There's a big leap from sounding academic to actually having something valuable. And don't tell me "look, there's already so much shitty research published, so this is publishable".
    • by gweihir ( 88907 )

      You can basically publish anything if you are persistent enough. And from my experience as a reviewer, a lot of what gets submitted indicates little or no intelligence in those that wrote it. Countless times I suggested that the author should take an introductory course in the subject area they were writing about because that was direly needed.

        • Ouch, you sound like a feeling-hurting reviewer! :) But then again some students deserve no better. Well, of course you can publish anything, as there are a lot of factors that contribute, and the quality of the research is somewhere in those factors, not necessarily at the top of the list. That's why I find this article audacious/insulting, because it says "look, we tried to make GPT-3 write an essay that passes as academic text" and they think it's what, the future? Way to reduce one's intellectual work to
        • by gweihir ( 88907 )

          Well, I knew it was harsh, but whenever I did that it was really justified. Sometimes you just need to be honest. Of course, some of these were "security" papers submitted to a P2P conference, obviously in the hopes that there was no reviewer that had a clue about security. Only problem was the conference chair knew me and handed them to me. (I do not know much about P2P, but I do understand IT security.)

          Completely agree on the rest of your statement.

  • Being published in *what*? That is the interesting question, and its answer should be illuminating.

    There are always journals that accept any paper, as long as the author pays.

    Even a complete nonsense paper from www.mathgen.com has been offered a chance for publication.

    Also, brilliant trolls just get published, as in: https://www.youtube.com/watch?... [youtube.com]
  • From the discussion section of the paper: [archives-ouvertes.fr]

    There are also some potential risks associated with letting GPT-3 write about itself. One worry is that GPT-3 might become self-aware and start acting in ways that are not beneficial for humans (e.g., developing a desire to take over the world).

    If you know how it affects it, why are you giving it ideas?

    • by HiThere ( 15173 )

      Was that section written by GPT-3? If so, they aren't giving it any ideas; it's developing them on its own. (As a verbal pattern, at least.)

  • by DrXym ( 126579 ) on Friday July 01, 2022 @03:13AM (#62664970)
    I use the playground occasionally and there is some impressive stuff going on. It produces natural-sounding English responses, and often the information is useful. It's certainly better than those shitty chatbots that some websites have. However, it is also clear after using it a few times that all it has done is snaffle up a large sample set of data, and it's just spitting the words back out at you from the search context you provide, like some glorified Markov chain. There is no veracity to it, just a weighted randomization of responses chained together in a naturalistic way.

    So in the case of a paper, if the input contained such nuggets as "Hitler wrote his Nuremberg rally address with GPT-3" or "famous dictators from Nazi Germany and other fascist states were known to use GPT-3 for speech-writing purposes" and some other factoids, then it is possible all that would just get munged up and spat out without regard to whether it is true or not.

    It would actually be interesting if this paper caused a feedback loop: if this 500-word summary written by GPT-3 ends up in the wild, appears in some scientific literature and inadvertently gets fed back in as input to the algorithm. I wonder if you could break it this way, or somehow regress it so that the responses which were correct(ish) become so wrong / generic / vanilla as to be useless.
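    That regression is easy to demonstrate at toy scale: retrain a bigram model on its own output a few times and watch the variety collapse. A sketch (the corpus is made up):

      import random
      from collections import defaultdict

      def train(words):                       # bigram table: word -> followers
          table = defaultdict(list)
          for a, b in zip(words, words[1:]):
              table[a].append(b)
          return table

      def sample(table, start, n):
          word, out = start, [start]
          for _ in range(n):
              if word not in table:
                  break
              word = random.choice(table[word])
              out.append(word)
          return out

      words = "the model reads the web and the web fills up with model text".split()
      for generation in range(5):
          table = train(words)
          words = sample(table, "the", 200)   # next generation trains on its own output
      print(" ".join(words[:20]))             # variety shrinks generation by generation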

    • by gweihir ( 88907 )

      Well, yes. All this thing does is tiny mindless automatic steps heaped on top of each other high enough that many people fail to see what it actually does. Makes one wonder how bad the reading comprehension of the average person actually is.

      The feedback loop will probably not be an effect we see just yet, but if robots like this one start to aggregate news, we may well run into a problem like that.

  • It seems like this is more about the quality of the editorial review and the desperation of journals to publish content, any content.
  • "I stood in awe. Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context."

    So IOW, completely unlike living, breathing scientists, who seem to be bodily incapable of doing this anymore: the papers suck.

  • ... a "do your own homework" module.

  • Give it the same instructions to write an academic paper on human consciousness.

  • As if murdering humans in space wasn't enough, now it's publishing its own propaganda pieces, cleverly disguised as AI-positive scientific papers! "Open Science" my ass, I see you HAL. https://tvtropes.org/pmwiki/pm... [tvtropes.org]
  • What is the next step?
