• Keener@lemm.ee · ↑192 · 3 months ago

    Former Shopify employee here. Tobi is scum, and surrounds himself with scum. He looks up to Elon and genuinely admires him.

  • besselj@lemmy.ca · ↑69 · edited · 3 months ago

    What these CEOs don’t understand is that even an error rate as low as 1% is unacceptable for LLMs at scale. Fully automating without humans somewhere in the loop will lead to major legal liabilities down the line, especially if mistakes can’t be fixed fast.
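To make the scale point concrete, here's a quick back-of-envelope sketch (the daily request volume is invented purely for illustration):

```python
# Back-of-envelope math: what a "low" 1% error rate looks like at scale.
# The daily request volume here is an invented illustrative figure.
requests_per_day = 1_000_000
error_rate = 0.01  # the "as low as 1%" figure from above

failures_per_day = requests_per_day * error_rate
failures_per_year = failures_per_day * 365

print(f"{failures_per_day:,.0f} failures per day")    # 10,000 failures per day
print(f"{failures_per_year:,.0f} failures per year")  # 3,650,000 failures per year
```

Every one of those failures is a support ticket, a refund, or a legal claim waiting to happen.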

    • CosmoNova@lemmy.world · ↑21 · 3 months ago

      Yup. If 1% of all requests result in failures, or even cause damage, you’ll quickly lose 99% of your customers.

      • VanillaFrosty@lemmy.world · ↑11 · 3 months ago

        It’s starting to look like the oligarchs are going to replace every position they can with AI everywhere so we have no choice but to deal with its shit.

    • wagesj45@fedia.io · ↑11 ↓1 · 3 months ago

      I suspect everyone is just going to be a manager from now on, managing AIs instead of people.

      • vinnymac@lemmy.world · ↑3 · 3 months ago

        Building AI tools will also require very few of the skills of a manager from our generation. It’s better to be a prompt engineer building evals and agentic AI than it is to actually manage. Management will be replaced by AI too; it’s turtles all the way down. Going forward they’re going to expect you to be both a project manager and an engineer at the same time, especially at less enterprising organizations with lower compliance and security bars to clear. If you think of an organization as a tree structure, imagine that tree pruned, with fewer branches toward the top: that’s what I imagine their end goal is.

  • darkpanda@lemmy.ca · ↑66 · 3 months ago

    Dev: “Boss, we need additional storage on the database cluster to handle the latest clients we signed up.”

    Boss: “First see if AI can do it.”

    • ramielrowe@lemmy.world · ↑19 · 3 months ago

      A coworker of mine built an LLM-powered FUSE filesystem as a very tongue-in-cheek response to the concept of letting AI do everything. It let the LLM generate the responses for listing directories and reading file contents.
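For the curious, here's a toy sketch of the gag (not the coworker's actual code): the LLM call is stubbed out, and a real version would subclass something like fusepy's `Operations` class and actually mount it.

```python
# Toy sketch of an "LLM filesystem": directory listings and file contents
# are hallucinated on demand. llm_complete() is a stub standing in for a
# real model call; a real version would subclass fusepy's Operations.

def llm_complete(prompt: str) -> str:
    """Stub LLM: a real implementation would call a model API here."""
    if "list the files" in prompt:
        return "report.txt\nnotes.md"
    return f"(hallucinated contents for: {prompt})"

class HallucinatedFS:
    def readdir(self, path: str) -> list[str]:
        # Ask the "model" which files live at this path.
        listing = llm_complete(f"list the files in {path}")
        return [".", ".."] + listing.splitlines()

    def read(self, path: str) -> bytes:
        # Every read invents the file's contents from scratch.
        return llm_complete(f"contents of file {path}").encode()

fs = HallucinatedFS()
print(fs.readdir("/"))  # ['.', '..', 'report.txt', 'notes.md']
print(fs.read("/report.txt"))
```

The punchline writes itself: the "files" don't exist, and reading the same path twice gives no guarantee of the same contents.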

  • Dr. Moose@lemmy.world · ↑31 ↓1 · edited · 3 months ago

    I develop AI agents part-time for my work right now and have yet to see one that can perform a real task unsupervised on its own. It’s not what agents are made for at all - they’re only capable of being an assistant, or annotating and summarizing data, etc. Which is very useful, but in an entirely different context.

    No agent can create features or even reliably fix bugs on its own yet, and probably won’t for the next few years at least. This is because having a dev at $50/hour is much more reliable than any AI agent long term. If you need to roll back a regression bug introduced by an AI agent, it’ll cost you 10-20 developer-hours at minimum, which negates any value you’ve gained. Now you’ve spent a $1,000 fix on your $50 agent run, where a person could have done the job for $200. Not to mention regression bugs are incredibly expensive to fix and maintain, so it all scales exponentially. And then there’s the liability of not having human oversight - what if the agent stops working? You’ll have to onboard someone onto an entire codebase, which would take days at a very minimum.
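Running the numbers from that paragraph (the 4 hours for the human fix is just the implied $200 ÷ $50/hour; everything here is the comment's own rough figures):

```python
# Rough cost comparison using the figures above (all rounded estimates).
human_rate = 50        # $/hour for a developer
agent_run_cost = 50    # $ for the agent run itself
rollback_hours = 20    # upper end of the 10-20 hour rollback estimate
human_fix_hours = 4    # implied by the $200 figure ($200 / $50 per hour)

rollback_cost = rollback_hours * human_rate    # $1,000 to clean up the regression
human_fix_cost = human_fix_hours * human_rate  # $200 to just do it right

print(f"agent path: ${agent_run_cost + rollback_cost}")  # agent path: $1050
print(f"human path: ${human_fix_cost}")                  # human path: $200
```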

    So his take on AI agents doing work is pretty dumb for the time being.

    That being said, an AI tool-use proficiency test is very much unavoidable. I don’t see any software company not using AI assistants, so anyone who doesn’t will simply not get hired. It’s like coding in Notepad - yeah, you can do it, but it’s not a signal you want to send to your team, cause you’d look stupid.

    • taladar@sh.itjust.works · ↑4 ↓1 · 3 months ago

      Honestly, AI coding assistants (as in the ones working like auto-complete in the code editor) are very close to useless unless maybe you work in one of those languages like Java that are extremely verbose and lack expressiveness. I tried using a few of them for a while but it got to the point where I forgot to turn them on a few times (they do take up too much VRAM to keep running when not in use) and I didn’t even notice any productivity problems from not having them available.

      That said, conversational AI can sometimes be quite useful to figure out which library to look at for a given task or how to approach a problem.

      • Ledivin@lemmy.world · ↑7 ↓2 · 3 months ago

        Honestly, AI coding assistants (as in the ones working like auto-complete in the code editor) are very close to useless unless maybe you work in one of those languages like Java that are extremely verbose and lack expressiveness.

        Hard disagree. They’re not writing anything on their own, no, but my stack saves at least 75% of my time, and I work full-stack across pieces in 5 different languages.

        Cursor + Claude was the latest big shift for me, maybe two months ago? If you haven’t tried them, it was a huge bump in utility.

        • taladar@sh.itjust.works · ↑8 ↓1 · 3 months ago

          If you spend 75% of your time writing code you are in a highly unusual coding position. Most programmers spend a very high percentage of their time understanding the problem domain and on other parts of figuring out requirements and translating them into something resembling some sort of semi-formal understanding of what the program actually needs to do. The low level detailed code writing is very rarely a bottleneck.

    • ShittyBeatlesFCPres@lemmy.world · ↑11 ↓1 · 3 months ago

      Dear CEOs: I will never accept 0.5% hallucinations as “A.I.” and if you don’t even know that, I want an A.I. machine cooking all your meals. If you aren’t ok with 1/200 of your meals containing poison, you’re expendable.

      Humans or even regular-ass algorithms are fine. A.I. can predict protein folding. It shouldn’t do a lot else unless there’s a generational leap from “making shitty images” to “as close to perfect as it gets.”

      • taladar@sh.itjust.works · ↑1 · 3 months ago

        Cooking meals seems like a good first step toward teaching AI programming. After all, the recipe analogy is ubiquitous in programming intro courses. /s

        • ShittyBeatlesFCPres@lemmy.world · ↑3 ↓2 · 3 months ago

          Did you see the wack-ass Quake II version Microsoft bragged about? It wasn’t even playable. A fucking 12-year-old could do better.

        • doodledup@lemmy.world · ↑5 ↓28 · 3 months ago

          Na man. It’s being used extensively in many jobs, software development especially. You’re misinformed, or have a biased view based on your personal experience with it.

          • ShittyBeatlesFCPres@lemmy.world · ↑22 · edited · 3 months ago

            I use it in software development and it hasn’t changed my life. It’s slightly more convenient than last-gen code completion, but I’ve never worked on a project where code per hour was the holdup. One less stand-up per week would probably increase developer productivity more than GitHub Copilot does.

            • jubilationtcornpone@sh.itjust.works · ↑5 · 3 months ago

              Tried using Copilot on a few C# projects. I didn’t find it to be any better than ReSharper. If anything it was worse, because it would give me autocomplete samples that were not even close to what I wanted. Not all the time, but not infrequently either.

          • ShittyBeatlesFCPres@lemmy.world · ↑7 ↓1 · 3 months ago

            Even if it does the basic shit at the expense of me working one less hour a week, it’s not worth paying for. And that ignores the downsides like spam, bots, data centers needing power/water, and politicians thinking GPU cards are national security secrets.

            I don’t think we need a Skynet scenario to imagine the downsides.

  • RandoMcRanderton@lemmy.world · ↑25 · 3 months ago

    “Stagnation is almost certain, and stagnation is slow-motion failure.”

    This has some strong Ricky Bobby vibes, “If you ain’t first, you’re last.” I never have understood how companies are supposed to have unlimited growth. At some point when every human on earth that can use their service/product is already doing so, where else is there to go? Isn’t stagnation being almost certain just a reality of a finite world?

    • Trailblazing Braille Taser@lemmy.dbzer0.com · ↑25 · 3 months ago

      At some point when every human on earth that can use their service/product is already doing so, where else is there to go?

      Ooh, I know:

      • Charge more (for less)
      • Autocannibalize (layoffs)

      I don’t even have an MBA, can you believe that?

    • halowpeano@lemmy.world · ↑6 · edited · 3 months ago

      This concept is very often misinterpreted by these tech CEOs because they’re terrified of becoming the next Yahoo or Kodak or cab company or AskJeeves or any other company that was replaced by something with more “innovation” (aka venture capital). It’s all great that they’ll lose wealth.

      The underlying concepts are sound though. Think of a small business like a barber shop or restaurant. Even a very good owner/operator will eventually get old and retire and if they haven’t expanded to train their successor before they do, the business will close. Which is fine, the business served the purpose of making a living for that person. Compare with McDonalds, they expanded and grew so the business could continue past the natural lifetime of a single restaurant.

      A different example of stagnation is Kodak. They famously had the chance to grow their business into digital cameras early on; their researchers and engineers were on the cutting edge of that technology. But the executives rejected expansion in favor of sticking with the (at the time) higher profit margins of film cameras. And now they’re basically irrelevant. Expanding on this example, even digital cameras became irrelevant within 20 years of Kodak’s fall: the market for low- to mid-end stand-alone cameras has disappeared in favor of phones.

      So the real lesson is not so much infinite growth, as these tech CEOs believe, but adaptability to a changing world and changing technology - which costs money in the form of research, development, and the risk of setting up production on products you’re not sure will sell but that might replace your current offerings.

  • psvrh@lemmy.ca · ↑24 · 3 months ago

    Just reminding everyone that Lutke is a right-wing shitheel, and that he and Shopify explicitly platform, support and make money from Nazism.

    Carry on.

  • affiliate@lemmy.world · ↑18 · 3 months ago

    should just be a matter of saying “AI can’t do this job because it can’t properly do any job”. could even make that your email signature.

  • 11111one11111@lemmy.world · ↑21 ↓3 · 3 months ago

    I’d tell them I can get AI to do anything they want. They’re the ones who will be paying for me to spend not hours but days tweaking prompts to get whatever shit they want done - stuff that could’ve been done faster, cheaper, and better with appropriate resources. So fuck it, I’m in.