  • This is true. That said, presumably at least some of those have either a pre-existing install base they can keep selling digital games and services to or built-up stock.

    Nintendo has zero Switch 2 units in US households and will be expected to honor preorder prices. Who knows how much stock they have in the US at this point. Probably next to zero.

    US gamers won’t have cheaper choices for new hardware, but they sure will have the obvious choice of not spending money on unnecessary new toys at all. Especially because, as messed up as gaming hardware is going to get, entire other market segments are going to get much worse, and those you don’t get to just opt out of.

    This is atrocious timing for Nintendo. But hey, Europe has 450 million people and you weren’t going to sell 100 million Switches day one. Shave fifty euros off that sticker and I betcha some of them will take that unused US stock out of your hands and even buy some games on top.





    I’m not in the US. I haven’t done an Easter egg hunt in my life. “Easter eggs” have always been a chocolate treat. The thing I remember most from Easter as a child was the big fair that set up camp in town, and by extension the foods I remember most are caramel apples and candy floss. My grandma would make meringue pies and yes, there were some chocolate eggs and bunnies changing hands when other relatives came over.

    And lots of pork.





    A quick look at US Amazon spits out that the only 24 GB card in stock is a 3090 for 1500 USD. A look at the European storefront shows 2400 EUR for a 4090. Looking at other assorted stores shows a bunch of out-of-stock notices.

    It’s quite competitive, I’m afraid. Things are very stupid at this point and for obvious reasons seem poised to get even dumber.


  • Yeah, for sure. That I was aware of.

    We were focusing on the Mini instead because… well, if the OP is fretting about going for a big GPU I’m assuming we’re talking user-level costs here. The Mini’s reputation comes from starting at 600 bucks for 16 gigs of fast shared RAM, which is competitive with consumer GPUs as a standalone system. I wanted to correct the record about the 24 gig starter speccing up to 64 because the 64 gig one is still in the 2K range, which is lower than the realistic market prices of 4090s and 5090s, so if my priority was running LLMs there would be some thinking to do about which option makes the most sense in the 500-2K price range.

    I am much less aware of larger options and their relative cost to performance because… well, I may not hate LLMs as much as is popular around the Internet, but I’m no roaming cryptobro, either, and I assume neither is anybody else in this conversation.


  • Go outside. With a sign, maybe, but you may also find you have a sound-making enabled face-hole.

    Voting also helps, if the chance is ever provided. That ship may have sailed, though, so you may find you need to go purchase a time machine type device instead.

    Maybe I’m just getting grumpier in my old age, but I’m increasingly annoyed at all these posts going “here’s how to lock down your comms from all the people coming after you for all the protesting you’re not doing. Now hold tight while sitting at home, I’m sure the official summons to go do the revolution is incoming from the official revolution organizers any day now”.



    You didn’t, I did. The starting models cap at 24, but you can spec the biggest one up to 64 GB. I should have clicked through to the customization page before reporting what was available.

    That is still cheaper than a 5090, so it’s not that clear cut. I think it depends on what you’re trying to set up and how much money you’re willing to burn. Sometimes literally: the Mac will also be more power efficient than a honker of an Nvidia 90-class card.

    Honestly, all I have for recommendations is that I’d rather scale up than down. I mean, unless you also want to play kickass games at insane framerates with path tracing or something. Then go nuts with your big boy GPUs, who cares.

    But for LLM stuff strictly I’d start by repurposing what I have around, hitting a speed limit, and then scaling up to maybe something with a lot of shared RAM (including a Mac Mini if you’re into those), rinsing and repeating. I don’t know that I personally am in the market for AI-specific multi-thousand-dollar APUs with a hundred-plus gigs of RAM yet.


    Thing is, you can trade off speed for quality. For coding support you can settle for Llama 3.2 or a smaller deepseek-r1 and still get most of what you need on a smaller GPU, then scale up to a bigger model that will run slower if you need something cleaner. I’ve had a small laptop with 16 GB of total memory and a 4060 mobile serving as a makeshift home server with an LLM and a few other things and… well, it’s not instant, but I can get the sort of thing you need out of it.
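
    Something like this is really all it takes, by the way. A minimal sketch of asking a small local model for coding help over Ollama’s REST API; it assumes Ollama is serving on its default port and you’ve already pulled llama3.2 (swap in deepseek-r1:14b for the bigger model):

    ```python
    # Minimal sketch: coding help from a small local model via Ollama's
    # REST API. Assumes Ollama is serving on its default port (11434)
    # and the model has been pulled, e.g. `ollama pull llama3.2`.
    import requests

    def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
        resp = requests.post(
            "http://localhost:11434/api/chat",
            json={
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
                "stream": False,  # wait for the full reply instead of streaming
            },
            timeout=300,  # small models on a 4060 mobile take their time
        )
        resp.raise_for_status()
        return resp.json()["message"]["content"]

    print(ask_local_model("Write a Python function that reverses a linked list."))
    ```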

    Sure, if I’m digging in and want something faster I can run something else in my bigger PC GPU, but a lot of the time I don’t have to.

    Like I said below, though, I’m in the process of trying to move that to an Arc A770 with 16 GB of VRAM that I had just lying around because I saw it on sale for a couple hundred bucks and needed a temporary GPU replacement for a smaller PC. I’ve tried running LLMs on it before and it’s not… super fast, but it’ll do what you want for 14B models just fine. That’s going to be your sweet spot on home GPUs anyway; anything larger than 16 GB and you’re talking 3090, 4090 or 5090, pretty much exclusively.
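
    The napkin math backs that up, too. At the 4-bit quantization most local builds default to, a 14B model’s weights land around 7–8 GB, which leaves room for KV cache and buffers in 16 GB. A rough sketch, assuming ~10% quantization overhead and ignoring long contexts:

    ```python
    # Back-of-envelope VRAM estimate for quantized model weights.
    # Assumptions: 4-bit weights plus ~10% overhead for quantization
    # scales; KV cache and runtime buffers come on top of this.
    def weights_gb(params_billions: float, bits_per_weight: float = 4.0) -> float:
        bytes_per_weight = bits_per_weight / 8
        overhead = 1.10  # scales, zero-points, misc.
        return params_billions * 1e9 * bytes_per_weight * overhead / 1e9

    for size in (7, 14, 32):
        print(f"{size}B @ 4-bit ~ {weights_gb(size):.1f} GB of weights")
    # 7B  ~  3.9 GB -> fits almost anything
    # 14B ~  7.7 GB -> comfortable on a 16 GB card, cache to spare
    # 32B ~ 17.6 GB -> past 16 GB before you even add KV cache
    ```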


    This is… mostly right, but I have to say, Macs with 16 gigs of shared memory aren’t all that; you can get many other alternatives with similar memory configurations, although not as fast.

    A bunch of vendors are starting to lean on this by providing small, weaker PCs with a BIG pool of shared RAM. That new Framework desktop with an AMD APU specs up to 128 GB of shared memory, while the Mac Minis everybody is hyping up for this cap at 24 GB instead.

    I’d strongly recommend starting with a mid-sized GPU on a desktop PC. Intel ships the A770 with 16 GB of RAM and the B580 with 12, and they’re both dirt cheap. You can still get a 3060 with 12 GB for similar prices, too. I’m not sure how they benchmark relative to each other on LLM tasks, but I’m sure one can look it up. Cheap as the entry-level Mac Mini is, all of those are cheaper if you already have a PC up and running, and the total amount of dedicated RAM you get is very comparable.
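
    And if nobody has published the exact comparison you need, a crude way to measure it yourself is to time a generation and divide tokens by seconds. A sketch assuming an Ollama install on the default port, leaning on the timing fields its generate endpoint reports; the model names are just examples:

    ```python
    # Crude tokens-per-second benchmark through Ollama's generate endpoint.
    # Assumes Ollama on its default port and the models already pulled;
    # the model names below are just examples.
    import requests

    def tokens_per_second(model: str, prompt: str = "Explain mutexes briefly.") -> float:
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=600,
        )
        resp.raise_for_status()
        data = resp.json()
        # eval_count = tokens generated; eval_duration is in nanoseconds
        return data["eval_count"] / (data["eval_duration"] / 1e9)

    for m in ("llama3.2", "deepseek-r1:14b"):
        print(m, f"{tokens_per_second(m):.1f} tok/s")
    ```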