• @gelberhut@lemdro.id · 8 points · 7 months ago

    Can one suggest a good explanation why? The model is trained and then stays “as is”, so why would it change? Does OpenAI use user ratings (thumbs up/down) for fine-tuning, or what?

    • @foggy@lemmy.world · 10 points · 7 months ago

      AI trains on available content.

      The new content created since AI took off contains a lot of AI-generated content.

      It’s learning from its own imperfect understanding of reality, which further degrades that understanding.
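
A toy statistical sketch of that feedback loop (my own illustration, not anything OpenAI has published): fit a Gaussian “model” to data, sample from the fit, and refit on those samples. With small samples the estimated spread drifts downward over generations, which is the same mechanism by which a model trained on its own output loses diversity.

```python
import random
import statistics

def train_on_own_output(generations: int, sample_size: int, seed: int = 0):
    """Repeatedly fit a Gaussian to its own samples and track the spread.

    Each 'generation' estimates mean/stdev from the previous generation's
    samples, then draws new samples from that fit -- a toy stand-in for a
    model trained on content produced by its predecessor.
    """
    rng = random.Random(seed)
    # Generation 0: "real" data from a standard normal distribution.
    data = [rng.gauss(0.0, 1.0) for _ in range(sample_size)]
    spreads = []
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        spreads.append(sigma)
        # The next generation trains only on the current model's output.
        data = [rng.gauss(mu, sigma) for _ in range(sample_size)]
    return spreads

if __name__ == "__main__":
    spreads = train_on_own_output(generations=200, sample_size=10)
    print(f"stdev at generation 1:   {spreads[0]:.4f}")
    print(f"stdev at generation 200: {spreads[-1]:.4f}")
```

Because the sample standard deviation is a slightly biased (low) estimator, the spread performs a multiplicative random walk with a downward drift, so variety collapses; real models and data are far more complex, but the direction of the effect is the same.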

        • @foggy@lemmy.world · 4 points · 7 months ago

          There is no GPT-5, and GPT-4 gets constant updates, so the name is a bit of a misnomer at this point in its lifespan.

        • FaceDeer · 3 points · 7 months ago

          It’s possible to apply a layer of fine-tuning “on top” of the base pretrained model. I’m sure OpenAI has been doing that a lot, and including ever more “don’t run through puddles and splash pedestrians” restrictions that are making it harder and harder for the model to think.
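
Adapter-style fine-tuning like that can be sketched with a toy model (hypothetical names; the base “model” here is just a linear map, nothing like OpenAI's actual stack): the pretrained weight is frozen, and gradient descent touches only a small additive correction stacked on top.

```python
# Toy sketch of adapter-style fine-tuning: the base "model" is a frozen
# linear map, and tuning only adjusts a small additive adapter on top.

def make_model(base_weight: float, adapter_weight: float = 0.0):
    def forward(x: float) -> float:
        # Frozen pretrained behaviour plus the fine-tuned correction.
        return base_weight * x + adapter_weight * x
    return forward

def fine_tune(base_weight: float, inputs, targets, lr=0.01, steps=500) -> float:
    """Gradient-descend the adapter only; base_weight is never touched."""
    adapter = 0.0
    for _ in range(steps):
        for x, y in zip(inputs, targets):
            pred = (base_weight + adapter) * x
            grad = 2 * (pred - y) * x   # d/d(adapter) of squared error
            adapter -= lr * grad
    return adapter

if __name__ == "__main__":
    base = 2.0                       # pretrained behaviour: y = 2x
    xs = [1.0, 2.0, 3.0]
    ys = [3.0, 6.0, 9.0]             # fine-tuning target: y = 3x
    adapter = fine_tune(base, xs, ys)
    tuned = make_model(base, adapter)
    print(f"adapter ≈ {adapter:.3f}")   # converges to ≈ 1.0; base stays 2.0
```

The point of the design is that each new layer of restrictions can be trained (or removed) without retraining the expensive base weights.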

    • magic_lobster_party · 8 points · 7 months ago

      They don’t want it to say dumb things, so they train it to reply “I’m sorry, I cannot do that” to various prompts. This kind of refusal training has been known to degrade the quality of the model for quite some time, so it’s the likely reason.
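
The effect can be caricatured as a blunt refusal check bolted in front of the model (in reality the refusals are baked in via fine-tuning examples, not a literal filter; the patterns and names below are hypothetical): every pattern added to block bad prompts can also catch harmless ones, which is one intuition for the quality loss.

```python
import re

# Hypothetical blocklist: each pattern added to stop a bad prompt can
# also match perfectly reasonable ones (over-refusal).
REFUSAL_PATTERNS = [
    re.compile(r"\bhack\b", re.IGNORECASE),
    re.compile(r"\bweapon\b", re.IGNORECASE),
]

def guarded_answer(prompt: str) -> str:
    """Refuse anything matching a blocked pattern; otherwise 'answer'."""
    if any(p.search(prompt) for p in REFUSAL_PATTERNS):
        return "I'm sorry, I cannot do that."
    return f"[model answer to: {prompt}]"

if __name__ == "__main__":
    print(guarded_answer("How do I hack my neighbour's wifi?"))   # refused
    print(guarded_answer("Any life hack for peeling garlic?"))    # also refused!
    print(guarded_answer("How do I peel garlic quickly?"))        # answered
```

The second prompt is harmless but still trips the refusal, which is the over-refusal trade-off the comment describes.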

      • @gelberhut@lemdro.id · 1 point · 7 months ago

        Hm… probably. I’ve read something about ChatGPT tricks in this area. In theory, this should affect the web chat but not the API (where you pay per use of the model).