• @[email protected]
    54 points · 3 months ago

    And the system doesn’t know either.

    For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.

    • @[email protected]
      35 points · 3 months ago

      Accurate.

      No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.

      • @[email protected]
        18 points · 3 months ago

        The worst for me was a fairly simple programming question. The class it used didn’t exist.

        “You are correct, that class was removed in OLD version. Try this updated code instead.”

        It gave another made-up class name.

        Repeated with a newer version number.

        It knows what answers smell like, and the same with excuses. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.

        • @[email protected]
          5 points · 3 months ago

          So instead of Prompt Engineer, the more accurate term should be AI Taste Tester?

          From what I’ve seen you’ll need an iron stomach.