• Sludgehammer@lemmy.world · ↑76 · 6 days ago

    Just a few hundred billion more and I’m sure that somebody will figure out a profitable use for AI that isn’t scamming old people.

    • rottingleaf@lemmy.world · ↑15 · 6 days ago

      I can imagine one: maintaining adversarial interoperability with proprietary systems. Think a self-adjusting connector to Facebook for some multi-protocol chat client. Or, if there’s ever a Usenet-like system with global identities for users and posts, a mapping of Facebook onto it. Siloed services don’t expose identifiers and aren’t indexed, but that’s only with today’s tools. People do use them and do know whom they’re interacting with, so it should be possible to build an AI-assisted scraper that exposes Facebook to such a system like a newsgroup (rough sketch below).

      Ah. Profitable. I dunno.
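
      A minimal sketch of what such an adapter could look like, assuming the hard part - an AI-assisted scraper that turns a rendered Facebook thread into structured posts - already exists. Every name here (to_articles, the interop.example domain, the shape of the post dictionaries) is hypothetical, purely for illustration:

      ```python
      # Hypothetical sketch: present a siloed thread as Usenet-style articles with
      # stable message IDs, so any newsgroup-like client can thread and index it.
      from dataclasses import dataclass


      @dataclass
      class Article:
          message_id: str   # e.g. "<fb.12345@interop.example>"
          references: str   # parent message-id, "" for top-level posts
          author: str
          body: str


      def to_articles(posts: list[dict]) -> list[Article]:
          """posts: structured output of an (assumed) AI-assisted scraper, e.g.
          [{"id": "12345", "parent_id": None, "author": "...", "text": "..."}]."""
          articles = []
          for p in posts:
              parent = p.get("parent_id")
              articles.append(Article(
                  message_id=f"<fb.{p['id']}@interop.example>",
                  references=f"<fb.{parent}@interop.example>" if parent else "",
                  author=p["author"],
                  body=p["text"],
              ))
          return articles


      # Toy usage with hand-written scraper output:
      print(to_articles([{"id": "1", "parent_id": None, "author": "alice", "text": "hi"}]))
      ```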

        • rottingleaf@lemmy.world · ↑3 · 5 days ago

          I’m more interested in separating the addressing of data, and the data model, from the addressing of the services (and the service model) that store and process it, so that those become uniform. In uniformity lies efficiency, redundancy, and the ability to switch service models; inside proprietary services that uniformity is already achieved, so in this case uniformity would work for the people (toy sketch below).

          I mean, that’s probably what you meant; I’m being this specific to fight my own distractions and fuzziness of thought.
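
          One toy way to make that concrete (my own illustration, nothing more): give data a uniform, content-derived address, and keep the mapping to whichever services happen to store or serve it as a separate, swappable layer:

          ```python
          # Toy illustration: data gets a uniform, service-independent address
          # (a content hash); which services currently store or serve it lives
          # in a separate, replaceable mapping, so switching service models
          # does not change the data's identity.
          import hashlib


          def data_address(payload: bytes) -> str:
              """Identity of the data itself, independent of any service model."""
              return "sha256:" + hashlib.sha256(payload).hexdigest()


          # Service layer: interchangeable backends announce that they hold an address.
          locations: dict[str, list[str]] = {}


          def announce(addr: str, service_url: str) -> None:
              locations.setdefault(addr, []).append(service_url)


          post = b"example post body"
          addr = data_address(post)
          announce(addr, "https://silo-a.example/blob")    # hypothetical backends
          announce(addr, "https://mirror-b.example/blob")
          print(addr, "->", locations[addr])
          ```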

    • Trapped In America@lemmy.dbzer0.com · ↑26 · 5 days ago

      I’m honestly hoping for a repeat. Hopefully Microsoft goes down this time too, since they’re heavily into AI. Twitter, Meta, Google and Amazon too. It’s really just the worst of the worst.

    • MysteriousSophon21@lemmy.world · ↑3 · 5 days ago

      This makes the dot-com bubble look like a kiddie pool - at least those companies were trying to build actual products, while today’s AI spending is burning through more money than the GDP of most countries just to have the biggest model with no clear path to profitability beyond “trust us bro”.

    • wewbull@feddit.uk · ↑3 · 5 days ago

      They’re different, and I think this one has the capability of being more devastating.

      The dot-com bubble was really broad: hundreds or thousands of companies, all without vowels in their names, trying to break new ground. A Wild West-style gold rush. When it popped, a lot of small companies went bankrupt.

      This is a handful of companies with billions in capital buying GPUs from Nvidia to make the largest, hungriest machine they can, all in pursuit of being first to create “AGI”. If one of them succeeds, the others are toast and multiple $500B+ companies will collapse in on themselves. If none of it works, the same thing happens, and it takes a large chunk out of the $4T Nvidia too.

    • JealousJail@feddit.org · ↑6 ↓3 · 5 days ago

      At least they’ve spent their money on researching what doesn’t work, instead of just building silly products as in the dot-com bubble.

      With all that money, humanity will find out which AI approaches don’t work far sooner than it otherwise would have. It’s just an allocation of human effort.

      • Feyd@programming.dev · ↑8 · 5 days ago

        Not really. None of what has been going on with transformer models has been anything but hyperscaling. It’s not really making fundamental advances in technology; it’s that they decided what they had, at the scale they had it, made convincing enough demos that the scam could start.

        • JealousJail@feddit.org · ↑6 ↓4 · 5 days ago

          It has been more than just hyperscaling. First of all, the invention of transformers would likely have been significantly delayed without the hype around CNNs in the first AI wave in 2014. OpenAI wouldn’t have been founded, and their early contributions (like Soft Actor-Critic RL) could have taken longer to be explored.

          While I agree that the transformer architecture itself hasn’t advanced far since 2018 apart from scaling, its success has significantly contributed to self-learning policies.

          RLHF, Direct Preference Optimization, and in particular DeepSeek’s GRPO are huge milestones for reinforcement learning, which is arguably the most promising trajectory toward actual intelligence. They are a direct consequence of the money pumped into AI and of its appeal to many smart and talented people around the world.
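
          For a sense of what GRPO actually changes: as I understand the DeepSeekMath paper, its core trick is dropping the learned value/critic network and scoring each sampled response relative to the other samples for the same prompt. A toy sketch of just that step (made-up rewards, no model or policy update):

          ```python
          # Toy sketch of GRPO's group-relative advantage - the piece that replaces
          # a learned critic. Several responses are sampled per prompt and each is
          # scored against the mean/std of its own group. Rewards here are made up.
          from statistics import mean, stdev


          def group_relative_advantages(rewards: list[float]) -> list[float]:
              mu = mean(rewards)
              sigma = stdev(rewards) if len(rewards) > 1 else 1.0
              sigma = sigma or 1.0   # guard against a zero-variance group
              return [(r - mu) / sigma for r in rewards]


          # Four sampled answers to one prompt, scored by some reward model (toy numbers):
          print(group_relative_advantages([0.1, 0.7, 0.4, 0.9]))
          # Above-average answers get positive advantages; a clipped PPO-style
          # objective then pushes the policy toward them, with no value network.
          ```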

    • shalafi@lemmy.world · ↑4 ↓3 · 5 days ago

      This is no revelation. THEY KNOW. The play is obvious.

      Not one of these investors wants to risk missing out on being the next Google, Facebook, Twitter, or Amazon. They know damned well the vast majority will fail. They’re gambling on not being the one left holding the bag.

      AI is here to stay, will continue to improve, and there will be a killer app, probably a dozen. My money is on life sciences, particularly medicine.

  • Jrockwar@feddit.uk · ↑24 ↓1 · 5 days ago

    Imagine what we could have achieved globally if we had spent all that money on a different cause.

    We could have managed to establish a colony on Mars, or perhaps we could have even finished developing Star Citizen.

  • cley_faye@lemmy.world · ↑19 · 5 days ago

    Love that the pic associated with that link is Mark “Metaverse” Zuckerberg. A hallmark of successful dubious ventures, if ever there was one.

  • xiwi@lemmy.dbzer0.com · ↑32 ↓1 · 5 days ago

    ChatGPT, what is the sunk cost fallacy and why are rich people so profoundly stupid?

    • shalafi@lemmy.world · ↑17 ↓1 · 5 days ago

      This is not that. They’re all hoping to be the next Google or Facebook. They know damned well most are going to lose. The gamble is that they won’t be the one holding the bag when the bubble pops.

      This is as high stakes as tech gets today.

      • boonhet@sopuli.xyz · ↑6 · 5 days ago

        Some of them are already Google or Facebook tbh. They could run many safer gambles for the same money. But I suspect investors demand AI right now.

  • CosmoNova@lemmy.world · ↑31 · 5 days ago

    It’s the most inefficient technology, yet it’s praised as the most efficient, because it simply runs on investor money. But that well will run dry eventually, and who will bear the cost then? Consumers without jobs?

    • JealousJail@feddit.org · ↑9 ↓1 · 5 days ago

      I disagree a bit. Any money the ultra-rich invest in research is better spent than money put toward their next mega-yacht, even if AI can’t meet the expectations around AGI etc.

      • CosmoNova@lemmy.world · ↑11 · 5 days ago

        This research is cooking us alive right now, and for what? So machines can do all the creative things while we fight for scraps? I’d rather the overly rich spend it on something harmless but silly. At least the average Joe can make a living producing luxury items. As grim as it sounds, that’s preferable to what’s coming.

        • JealousJail@feddit.org · ↑2 ↓2 · 5 days ago

          I believe that we are not yet in the end stage of AI. LLMs are certainly useful, but they cannot solve the most important problems of mankind.

          More research is required to solve, for example: a) sustainable energy supply, b) the imbalanced demographics of industrialized countries, c) the treatment of several diseases.

          Like it or not, AI that can do research for us, or even increase efficiency of human researchers, is the most promising trajectory for accelerating progress on these important problems.

          Right now, AI has not gone beyond that scope. Yeah, AI can generate quite realistic fake videos, but propaganda was possible before (look at China, Russia or Nazi Germany - even TikTok without any AI is dangerous enough to severely threaten democracies).

          As a researcher in the domain, let me tell you that no one who seriously knows about video generation etc. is afraid of the current state of AI.

      • wewbull@feddit.uk · ↑5 · 5 days ago

        It’s spent on Nvidia GPUs. Jensen Huang just buys leather jackets, from what I can tell.

    • SkunkWorkz@lemmy.world · ↑4 · 5 days ago

      I think Meta’s AI initiative doesn’t run on investor money, since they do share buybacks instead of selling more shares to stay afloat. Meta makes more than a hundred billion in revenue from selling ads on Facebook and Instagram. So Meta’s AI program runs on boomers clicking on ads that have been generated with AI.

    • rozodru@lemmy.world · ↑10 · 5 days ago

      Already starting to, at least for smaller companies and startups that were trying to use it to build things end to end.

      If you use it to provide you with content, sure, easy, no worries. Building a website? Sure, no problem, as long as it doesn’t require any sort of logins or security stuff. An application? Well, now you’re going to have some problems.

      Most AI can’t scale anything, most are absolutely horrible at any sort of security, and none of them can UX their way out of a wet paper bag.

      Now if you utilize them as a tool, a sort of rubber duck, sure, they’re great. The issue, and I’m seeing this first hand because of my job, is that many smaller companies and startups aren’t doing that. They’re assigning someone, a “vibe coder”, to feed the thing prompts to build stuff end to end. Naturally the end product is an insanely resource-heavy, convoluted, exploitable mess of code that can’t scale. It creates a massive amount of tech debt. All to save a couple grand instead of hiring actual devs. So now, when I get a call or email from one of my contacts saying “so-and-so’s company/startup needs someone to clean up their app because it’s very broken due to a vibe coder”, I charge them an arm and a leg.

      So you’re right, it is going to fail and implode under its own weight, but I’m going to damn well be sure to take advantage of these people before it completely does, and I encourage other freelance/consultant developers to do the same.

    • TankovayaDiviziya@lemmy.world · ↑2 · 5 days ago

      I definitely hope so; hopefully it goes like the dot-com bubble did. If the bubble bursting delays the rise of killer robots, then I am all for another economic recession!

  • MrSulu@lemmy.ml · ↑11 ↓3 · 5 days ago

    Unimaginable amounts of money spent just to provide a free service to help improve the human race by sharing knowledge. Such marvellous gentlemen.

  • Rose56@lemmy.ca · ↑7 · 5 days ago

    Please do, dump some more money in! No need to create jobs, just destroy them with AI.

  • multifariace@lemmy.world · ↑3 · 5 days ago

    Those numbers seem odd to me. I feel like the truth is that 1 billion was spent on productive programmers and hardware, and the small remainder of 154 billion was used to improvise profit growth through totally valid payments to some CEO’s ego account.

    • aesthelete@lemmy.world · ↑1 · 5 days ago

      Nobody seems to have noticed that the business model here is to funnel as much traffic and spend as possible to the big AI corporations, with no foreseeable return (except vague nonsense about “productivity gains”).

      Just wait until someone requires one of these things to turn a profit. That’s when, if you’re a corporation that has integrated this shit deeply into your business, you’ll be covered top to bottom in rug burn from the inevitable rug pull of price increases.

  • Grandwolf319@sh.itjust.works · ↑7 · 6 days ago

    I wanna see a breakdown of cost vs revenue for each big tech company and their AI stuff.

    I know it’s all negative; I wanna know who is the most negative. My money is on Google.

      • snooggums@lemmy.world · ↑11 · 5 days ago

        This snippet summarizes my AI-forced-into-everything experience, especially when I was prompted to have my text message summarized. I said no, and the message was “Ok”.

        What text messages are being sent that need to be summarized?

        which mostly means that Apple aggressively introduced people to the features of generative AI by force, and it turns out that people don’t really want to summarize documents, write emails, or make “custom emoji,” and anyone who thinks they would is a fucking alien.

        Great analysis. I still have no idea how they think they will ever make their money back.

        • dylanmorgan@slrpnk.net · ↑6 · 5 days ago

          Ironically, Apple has some of the best odds of coming out of this reasonably healthy. “Apple Intelligence” follows the trend of most Apple services products, in that it is really intended to lock people into the ecosystem and keep them buying iPhones. I’m just waiting for someone in the big 7, or whatever they’re called, to publicly bow out of AI. I suspect the first one to do it might benefit a lot.

        • Feyd@programming.dev · ↑5 · 5 days ago

          There are people who both get LLM summaries of the emails they receive and have an LLM rewrite what they send. It’s amazing that anyone could fail to see how incredibly stupid that is.

    • rottingleaf@lemmy.world · ↑2 ↓1 · 5 days ago

      You mean when the bubble bursts and there are lots of people who worked on this available on the job market?

      I’d expect them to be big-data specialists, mostly knowledgeable in Python, matrix operations, and the narrow optimizations needed there, and not very competitive for other typical tech specialties.

      They’ll just have to become data analysts, assistants in labs working on things like genome analysis, and so on. Perhaps medical R&D will get a boost due to all the willing slaves, LOL.