Every day a new Einstein is born, and their life and choices are dictated by the level of wealth and opportunity they are born into.
We would see stories like this every week if wealth and opportunities were equally distributed.
I didn’t see how the article was about capitalism. Did you comment on the right post? It seems off-topic.
This doesn’t seem off-topic to me. A smart person had access to the tools and support system to enable them to do something incredible, but thousands of people equally capable didn’t have the opportunity. Seems pretty easy to follow the logic.
You might not be that new Einstein…
The model was run (and I think trained?) on very modest hardware:
The computer used for this paper contains an NVIDIA Quadro RTX 6000 with 22 GB of VRAM, 200 GB of RAM, and a 32-core Xeon CPU, courtesy of Caltech.
That’s a double-VRAM Nvidia RTX 2080 Ti + a Skylake Intel CPU, an aging circa-2018 setup. With room for a batch size of 4096, no less! Though they did run into some preprocessing bottleneck in CPU/RAM.
The primary concern is the clustering step. Given the sheer magnitude of data present in the catalog, without question the task will need to be spatially divided in some way, and parallelized over potentially several machines.
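In case it helps picture the "spatially divided and parallelized" part, here's a rough sketch of what that could look like, a toy example of my own, not the paper's pipeline; the HEALPix tiling, DBSCAN, and every parameter here are assumptions:

```python
# Sketch only: partition a sky catalog into HEALPix tiles, then cluster
# each tile independently so the work can be spread across processes
# (or, with a job queue, across machines). Parameters are placeholders.
import numpy as np
import healpy as hp
from sklearn.cluster import DBSCAN
from multiprocessing import Pool

NSIDE = 64  # ~0.84 deg tiles; would be tuned to the catalog density


def cluster_tile(tile_points):
    """Cluster one spatial tile of (ra, dec) points; returns labels."""
    if len(tile_points) < 2:
        return np.full(len(tile_points), -1)
    return DBSCAN(eps=0.01, min_samples=3).fit_predict(tile_points)


def cluster_catalog(ra_deg, dec_deg, n_workers=8):
    # Assign each source to a HEALPix pixel: the "spatial division".
    pix = hp.ang2pix(NSIDE, ra_deg, dec_deg, lonlat=True)
    points = np.column_stack([ra_deg, dec_deg])
    tiles = [points[pix == p] for p in np.unique(pix)]
    # Tiles are independent, so a pool (or several machines) can chew
    # through them in parallel.
    with Pool(n_workers) as pool:
        return pool.map(cluster_tile, tiles)
```

Tile borders and RA wraparound are ignored here; a real run would need overlap handling, but the parallel structure is the point.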
That’s not modest. AI hardware requirements are just crazy.
For an individual, yes. But for an institution? No.
I mean, “modest” may be too strong a word, but a 2080 TI-ish workstation is not particularly exorbitant in the research space. Especially considering the insane dataset size (years of noisy, raw space telescope data) they’re processing here.
Also, that’s not always true. Some “AI” models, especially old-school ones, run fine on old CPUs. There are also efforts (like BitNet) to get larger ones running fast and cheaply.
So a 5090, 5950X3D & 192 GB of RAM would run it on “consumer” hardware?
That’s even overkill. A 3090 is pretty standard in the sanely priced ML research space. It’s the same architecture as the A100, so very widely supported.
The 5090 is actually a mixed bag because it’s too new, and support for it is hit and miss. And also because it’s ridiculously priced for a 32 GB card.
And most CPUs with tons of RAM are fine, depending on the workload, but the constraint is usually “does my dataset fit in RAM” more than core speed (since just waiting 2X or 4X longer is not that big a deal).
I’ve managed to run AI on hardware even older than that. The issue is it’s just painfully slow. I have no idea if it has any impact on the actual results though. I have a very high-spec AI machine on order, so it’ll be interesting to run the same tests again and see if they’re any better, or if they’re simply quicker.
I have no idea if it has any impact on the actual results though.
Is it a PyTorch experiment? Other than maybe different default data types on CPU, the results should be the same.
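If it is PyTorch, something like this is a cheap sanity check; a toy model and my own assumptions, not the paper's setup:

```python
# Sketch: run the same forward pass on CPU and GPU with fixed seeds and an
# explicit dtype, to see how much device choice alone moves the numbers.
# The tiny model is a stand-in, not the paper's.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
).to(torch.float32)
x = torch.randn(128, 64, dtype=torch.float32)

with torch.no_grad():
    out_cpu = model(x)
    if torch.cuda.is_available():
        out_gpu = model.to("cuda")(x.to("cuda")).cpu()
        # Expect small float32 discrepancies from different kernels and
        # summation order, not qualitatively different results.
        print(torch.max(torch.abs(out_cpu - out_gpu)))
```

With float32 pinned on both sides, the differences should stay at the rounding level rather than change the conclusions.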

Begging your pardon, Sir, but it’s a bigass sky to search.
Been wanting that gif but been too lazy to record it!
AI accomplishing something useful for once?!
I haven’t read the paper, and surely he did a great job. Regardless of that, and in principle, anyone can do this in less than an hour. The trick is to get an external confirmation for all the discoveries you’ve made.
Think of all the astronomers he put out of work. :(
/s, right?
Yeah. It’s disheartening when obvious jokes like that are missed by so many.
It’s a little too plausible, heh.







