Over half of all tech industry workers view AI as overrated

    • Humanius
      link
      fedilink
      English
      30
      edit-2
      8 months ago

      As someone who works in the tech industry and has used AI tools (or more accurately machine learning models), I do think it is overrated.
      That doesn’t mean that I don’t think it can be useful, just that it’s not going to live up to the immense hype surrounding it right now.

      • @[email protected]
        link
        fedilink
        English
        10
        edit-2
        8 months ago

        I work in tech and have used the tools. I am mostly neutral on its prospects. I think it’s somewhat overrated right now for many purposes, but seeing how rapidly things are progressing gives me pause before outright dismissing its potential for immense utility.

        We have to consider that few saw ChatGPT coming so soon, and even fewer predicted ahead of time that it would work as well as it does. Now that Microsoft is fully bankrolling its development (giving their newly acquired former-OpenAI team virtually unlimited resources and bleeding-edge hardware custom-built for its models), I really have no idea how far and how quickly they’ll progress their AGI tech. For all we know right now, in 5+ years LLMs and their ilk could be heralding another tech revolution.

        • @[email protected]
          link
          fedilink
          English
          6
          edit-2
          8 months ago

          They probably won’t advance much, because current models face two opposite but equally difficult problems. On one hand, AI still hasn’t achieved sensor integration: building an ontologically sound world model that incorporates more than one sensor data stream at a time. Right now a model can be built on one sensor, or on one multidimensional array of sensors, but it can’t reconcile across models. So you can’t have, say, one single model that hears, sees light, and sees radar at the same time, the way animal intelligence can self-correct its world model when one sensor says A but another disagrees and says B. Current models just hallucinate and go off the deep end catastrophically.
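
          To give a toy picture of what that fusion problem even looks like (PyTorch sketch; every name, shape, and layer here is invented for illustration, not how any production system does it): two sensor streams get their own encoders and are projected into one shared latent space. Getting that shared space to behave like a coherent, self-correcting world model is the unsolved part.

          ```python
          import torch
          import torch.nn as nn

          class TwoSensorModel(nn.Module):
              """Toy fusion of two hypothetical sensor streams (audio + radar)."""
              def __init__(self, audio_dim=128, radar_dim=64, latent_dim=256):
                  super().__init__()
                  self.audio_enc = nn.Sequential(nn.Linear(audio_dim, latent_dim), nn.ReLU())
                  self.radar_enc = nn.Sequential(nn.Linear(radar_dim, latent_dim), nn.ReLU())
                  # The fusion step is the hard part in practice; a single
                  # linear layer gives you a shared embedding, not a world model.
                  self.fuse = nn.Linear(2 * latent_dim, latent_dim)

              def forward(self, audio, radar):
                  a = self.audio_enc(audio)
                  r = self.radar_enc(radar)
                  return self.fuse(torch.cat([a, r], dim=-1))

          model = TwoSensorModel()
          fused = model(torch.randn(1, 128), torch.randn(1, 64))
          print(fused.shape)  # torch.Size([1, 256])
          ```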

          On the opposite end, if we want them to be products, as seems to be MS and Altman’s fixation, then they cannot be black boxes, at least not for the implementers. Only in the past year have there been real efforts to see WTF is actually going on inside the models after they’ve been trained, and to interpret and manipulate that inner world toward effective, intentional results. Even then, progress is difficult, because it’s all abstract mathematics and we haven’t found a translation layer that parses the model’s internal world into something humans can easily interpret.
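
          For a concrete picture of what those interpretability efforts often look like, a common starting point is a linear probe: hook one layer, collect its activations, and test whether some concept is linearly readable from them. (Sketch only; the model, the probed layer, and the "concept" below are all made up.)

          ```python
          import torch
          import torch.nn as nn
          from sklearn.linear_model import LogisticRegression

          # Stand-in for a trained model; a real probe targets a real checkpoint.
          model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))

          captured = []
          model[1].register_forward_hook(  # capture the post-ReLU activations
              lambda mod, inp, out: captured.append(out.detach()))

          X = torch.randn(200, 32)
          y = (X[:, 0] > 0).long()  # an invented "concept" to probe for
          with torch.no_grad():
              model(X)

          acts = torch.cat(captured).numpy()
          probe = LogisticRegression(max_iter=1000).fit(acts, y.numpy())
          print("probe accuracy:", probe.score(acts, y.numpy()))
          ```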

          • @[email protected]
            link
            fedilink
            English
            3
            8 months ago

            I don’t disagree with you; there are certainly some major hurdles to overcome in many areas. That’s why I caveated my comment by saying it’s overrated for many purposes; however, there are certain use cases where current AI is truly an amazing tool.

            Regardless, OpenAI has made it clear that they never intended to relegate themselves purely to specific use cases for AI; they desire AGI. I would assume this is Microsoft’s desire too, though I’m sure they’d be okay making numerous specialized models for each of their products. But yes, unless they can overcome the issues you point out, its generalized usefulness will be severely stunted. Whether they can accomplish this in the short term (<10 years), I guess time will tell.

    • @Eccitaze
      link
      fedilink
      English
      5
      edit-2
      8 months ago

      I’ve had an AI bot trained on our company’s knowledge base literally make up links to nonexistent articles out of whole cloth. It’s so useless I just stopped bothering to ask it anything; I save time by just looking things up myself.
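
      A cheap guardrail for exactly that failure mode, assuming the knowledge base exposes an index of real article URLs (everything below, including the URLs, is hypothetical): refuse to surface any link the bot cites that isn’t actually in the index.

      ```python
      import re

      # Hypothetical index of real knowledge-base article URLs.
      KNOWN_ARTICLES = {
          "https://kb.example.com/reset-password",
          "https://kb.example.com/vpn-setup",
      }

      def fabricated_links(answer: str) -> list[str]:
          """Return URLs the bot cited that don't exist in the KB index."""
          cited = [u.rstrip(".,)") for u in re.findall(r"https?://\S+", answer)]
          return [url for url in cited if url not in KNOWN_ARTICLES]

      answer = ("See https://kb.example.com/reset-password and "
                "https://kb.example.com/definitely-real-article.")
      print(fabricated_links(answer))
      # ['https://kb.example.com/definitely-real-article']
      ```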