The new global study, conducted in partnership with The Upwork Research Institute, surveyed 2,500 C-suite executives, full-time employees, and freelancers worldwide. The results show that optimistic expectations about AI’s impact are not aligning with the reality many employees face: the study identifies a disconnect between managers’ high expectations and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and made the expected productivity gains harder to achieve. Not only is AI increasing the workloads of full-time employees, it is also hampering productivity and contributing to employee burnout.

  • silasmariner@programming.dev · 5 months ago

    Yeah, but the idea of AI in that kind of workflow is that the product guy can actually do it themselves, without asking you, and in less than 30 mins.

    • jjjalljs@ttrpg.network · 5 months ago

      Yeah, but that’s like using an entire gasoline-powered car to play a CD.

      A competent product guy should be able to learn some simpler tools, like Google Sheets.

      • silasmariner@programming.dev · 5 months ago

        No arguments from me that it’s better if people are just better at their job, and I like to think I’m good at mine too, but let’s be real: a lot of people are out of their depth, and I can imagine AI helping there. OTOH, is it worth the investment in time (from people who could presumably be doing astonishing things themselves) and in carbon energy? Probably not. I appreciate that the tech exists and needs to, but shoehorning it in everywhere is clearly bollocks. I just don’t know yet how people will find it useful, and I guess not everyone gets that spending an hour learning to do something that then takes 10s once you know how is often better than spending 5 mins making someone or something else do it for you… And TBF to them, they might be right if they only ever do the thing twice.

        • Balder@lemmy.world · 5 months ago

          I think the actual problem here is that if the product people can’t learn such a simple thing by themselves, they also won’t be able to correctly prompt the LLM for their use case.

          That said, I do think LLMs can boost productivity a lot. I’m learning a new framework, and since there are so many details to pick up, it’s fast to ask ChatGPT what the proper way to do X in this framework is, etc. Although that only works because I studied the framework’s foundational concepts first.

          • silasmariner@programming.dev · 5 months ago

            I think the actual problem is that they won’t know when they’ve got something that compiles but is wrong… I dunno though. I’ve never seen someone doing this and I can only speculate tbh. I only ever asked ChatGPT a couple of times, as a joke to myself when I got stuck, and it spouted completely useless nonsense both times… Although on one occasion the wrong code it produced looked like it had the pattern of a good idiom behind it and I stole that.