• AnAverageSnoot@lemmy.ca · 3 days ago

    AI is funded solely by sunk cost fallacy at this point. I wonder how long it will be before investments start getting pulled back because of a lack of ROI. I can already feel consumer sentiment turning negative lately, both toward AI itself and toward it being pushed into everything.

    • Taldan@lemmy.world · 2 days ago

      I wouldn’t have a problem if they were actually investing the money in something useful like R&D.

      Nearly all the investment is in data centers. Their strategy for the past two years seems to be just throwing more hardware at existing approaches, which is a really great way to burn an absurd amount of money for little to nothing in return.

      • brucethemoose@lemmy.world · 2 days ago

        It’s very corporate, isn’t it? “Just keep scaling what we have.”

        That being said, a lot of innovation is happening, but it goes unused. It’s incredible how many promising papers come out and get completely passed over by Big Tech AI, as if nothing matters unless it’s developed in house.

        The Chinese firms are at least picking up some of that research in their bigger models, but they’re kinda falling into local maxima too.

    • SSUPII@sopuli.xyz · 2 days ago

      Investment is really just going into training new models for ever more minuscule gains. I feel like the current options are enough to satisfy anyone interested in these services; what's really lacking now is more hardware dedicated to single-user sessions to improve the quality of output from the current models.

      But I really want to see more development of offline services. Right now that's done almost entirely by hobbyists, with only the occasional dripfeed from large companies (Facebook's Llama, the original DeepSeek model, the latter being pretty much useless since almost no one has the hardware to run it).
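
      To give a sense of what “offline” already looks like on the hobbyist side, here's a minimal sketch using llama-cpp-python with a quantized open-weight model (the file name and settings below are just placeholder assumptions, not anything a vendor ships):

      ```python
      # Minimal sketch: running a small open-weight model fully offline.
      # Assumes llama-cpp-python is installed and a quantized GGUF file
      # (e.g. a small Llama or Qwen variant) has already been downloaded.
      from llama_cpp import Llama

      llm = Llama(
          model_path="./qwen2.5-3b-instruct-q4_k_m.gguf",  # hypothetical local file
          n_ctx=4096,    # context window
          n_threads=8,   # local CPU threads; no network calls involved
      )

      out = llm.create_chat_completion(
          messages=[{"role": "user", "content": "Summarize this note in one sentence: local models avoid cloud round-trips."}],
          max_tokens=128,
      )
      print(out["choices"][0]["message"]["content"])
      ```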

      I remember watching the Samsung Galaxy Fold 7 presentation (“the first AI phone,” their words, unironically) and listening to them talk about all the AI features instead of the phone's actual capabilities. I kept thinking, “All of this is offline, right? It's a powerful smartphone, so it makes sense to run local models for these tasks.” But it became abundantly clear that the entire presentation was just repackaged, always-online Gemini running on $2,000 of hardware.

      • Taldan@lemmy.world · 2 days ago

        what's really lacking now is more hardware dedicated to single-user sessions to improve the quality of output from the current models

        That is the exact opposite of my opinion. They’re already throwing tons of compute at the current models, and it has produced little improvement. The vast majority of investment is going into compute hardware rather than R&D. They need more R&D to improve the underlying models; more hardware isn’t going to deliver the significant gains we need.

      • ferrule@sh.itjust.works · 2 days ago

        The problem is there's little continuous cash flow in on-prem personal services. Look at Samsung's home automation: it's nearly all online features, and when the internet is out you're SOL.

        Having your own GitHub Copilot in a device with the size and power usage of a Raspberry Pi would be amazing. But then they wouldn't get subscriptions.
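
        Just to illustrate how little is needed on the software side, here's a rough sketch against a locally running Ollama server (the model choice is just an assumption, and whether a Pi-class device runs it at usable speed is another question):

        ```python
        # Minimal sketch of a "Copilot-style" completion served entirely on-prem,
        # assuming an Ollama instance is listening on its default local port
        # and a small code model has already been pulled.
        import requests

        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "qwen2.5-coder:1.5b",           # assumed small local code model
                "prompt": "# python\ndef fibonacci(n):",
                "stream": False,                         # one JSON response instead of a stream
            },
            timeout=120,
        )
        print(resp.json()["response"])  # the completion text, no subscription involved
        ```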

      • humanspiral@lemmy.ca · 2 days ago

        more development of offline services

        There is absolutely massive development of open-weight models that can be used offline/privately. MiniMax M2, the most recent one, has benchmark scores comparable to the proprietary US megatech models at 1/12th the cost and with higher token throughput. Qwen, GLM, and DeepSeek have models comparable to M2, plus smaller models that run more easily on very modest hardware.

        The closed megatech datacenter AI strategy is a partnership with the US government/military for oppressive control of humanity. Spending 12x more per token while empowering Big Tech and the US empire to steal from and oppress you is not worth a small fractional improvement in benchmarks/quality.

    • jordanlund@lemmy.world · 2 days ago

      One of our biggest bookstores contracted with a local artist for some merch. That artist used AI with predictable results. Now everyone involved is getting raked over the coals for it.

      No surprise, they just announced a 4th round of layoffs too. 😟

      https://lithub.com/everything-you-need-to-know-about-the-powells-ai-slop-snafu-and-what-we-can-all-learn-from-it/

      https://www.koin.com/news/portland/powells-layoffs-employees-10292025/

    • Strider@lemmy.world · 2 days ago

      Why do you think AI is pushed so hard?

      Everyone involved needs this to turn out to be useful. There's too much money riding on it.

      Still, the powers that be will do everything to avoid a hard crash, even though one would be well deserved.