

Well-earned by Mamdani, but also I hope it’s the last anyone has to see of Cuomo.


You didn’t address your misuse of “girl”, which I pointed out. Now you’ve used “moron”, “imbecile”, and “mentally deficient”, all of which are (outdated) psychology diagnoses. And “mouth breather”, which… the meaning is right there. Oh, and “dumber” (“dumb” means “unable to speak”).
It seems to me that you actually don’t have any problem with misusing words. Except for one specific word. Why’s that, I wonder?


If by any chance, one’s identity is reviled, their entire behaviour history would be out in the wild.
So close to a sweet meter. What do you think of
“If, by some chance, one’s handle’s reviled / their foul history would be out in the wild.”
? It’s not perfect; it probably just needs a little more workshopping.


The theory is that the new hire gets better over time
It always amazes me how few people get this. Have they only ever made terrible hires?
The way that a company makes big profits is by hiring fresh graduates and giving them a cushy life while they grow into good SWEs. By the time you’re paying $200k for a senior software engineer, they’re generating far more than that in value. And you only had to invest a couple years and some chump change.
But businesses now only think in the short term, so paying $10k for a month of giving Anthropic access to our code base sounds like a bargain.


Executives are mostly irrelevant as long as they’re not forcing the whole company into the bullshit.
I’m seeing a lot of this, though. Like, I’m not technically required to use AI, but the VP will send me a message noting that I’ve only used 2k tokens this month and maybe I could get more done if I was using more…?


It does seem silly, but it’s perfectly aligned with the marketing hype that the AI companies are producing.


Not really that weird? Basically every organization does this or is immediately unsuccessful. The only exception is companies under a government that’s stable enough to carry water for them.
This doesn’t mean that the organizations are a net benefit.


The nature of language is to change over time. If you can internalize that, a lot of these things may become less stressful.
Your attempt to use a non-inflammatory example doesn’t really work: “literally” has meant “exaggeratedly” for at least the past 200 years (likely longer; it’s just that it’s been used extensively with that meaning for the past 200 years).
If you try to demand that language have strict meaning, it’s just not going to work. Watch:
girls
What’s the definition of “girl”? Do they need to be prepubescent? Pre-postpubescent? Or are you using it in the (gasp, shock) sloppy sense of “a girl or a woman”? You’ve fallen into the exact same pitfall that you were complaining about.
The resolution isn’t to claim that all words have an absolute meaning, but to understand that human language is fluid and extraordinarily context-sensitive.


“Why would anyone in Europe care?”
I think the point of it would be to signal to Trump that Europe is his vassal. Trump says it’s sad that this guy is dead, therefore Europe is sad. Doesn’t really matter who it is or what’s up. You’re just following the pledge of fealty.
So, I think it’s good that the EU decided they’re sovereign for now. This sort of thing is always an ongoing project.


Their “real” job was some standard cog-in-the-machine engineering work, which is why they got laid off. Just another number.
Most open-source work happens outside of corporate planning, so it’s invisible to the company. In reality, it would absolutely be worth it to Intel to pay a 40 hr/week salary just to maintain this little bit of code. The value is there, but the humans running the company would never be able to get over the hurdle of “he’s not working very hard so he doesn’t deserve the money.”


I think the US will be fine as long as we don’t repeatedly elect some kind of cabal of pedophile authoritarians.


I once did some programming on the Cybiko, a device from 2000 that could form a wireless mesh network with peers. The idea was that you could have a shopping mall full of teens and they’d be able to chat with each other from one end to the other by routing through the mesh. It was a neat device!


This is good advice for all tertiary sources such as encyclopedias, which are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, provides overviews of a topic and indicates sources of more extensive information.
The whole paragraph is kinda FUD except for this. Normal research practice is to (get ready for a shock) do research and not just copy a high-level summary of what other people have done. If your professors were saying, “don’t cite encyclopedias, which includes Wikipedia” then that’s fine. But my experience was that Wikipedia was specifically called out as being especially unreliable and that’s just nonsense.
I personally use ChatGPT like I would Wikipedia
Eesh. The value of a tertiary source is that it cites the secondary sources (which cite the primary). If you strip that out, how’s it different from “some guy told me…”? I think your professors did a bad job of teaching you how to read sources. Maybe because they didn’t know themselves. :-(


I think it was. When I think of Wikipedia, I’m thinking about how it was in ~2005 (20 years ago) and it was a pretty solid encyclopedia then.
There were (and still are) some articles that are very thin. And some that have errors. Both of these things are true of non-wiki encyclopedias. When I’ve seen a poorly-written article, it’s usually on a subject that a standard encyclopedia wouldn’t even cover. So I feel like that was still a giant win for Wikipedia.


I think the academic advice about Wikipedia was sadly mistaken. It’s true that Wikipedia contains errors, but so do other sources. The problem was that it was a new thing and the idea that someone could vandalize a page startled people. It turns out, though, that Wikipedia has pretty good controls for this over a reasonable time-window. And there’s a history of edits. And most pages are accurate and free from vandalism.
Just as you should not uncritically read any of your other sources, you shouldn’t uncritically read Wikipedia as a source. But if you are going to uncritically read, Wikipedia’s far from the worst thing to blindly trust.


I don’t think the article summarizes the research paper well. The researchers gave the AI models simple-but-large (which they confusingly called “complex”) puzzles. Like Towers of Hanoi but with 25 discs.
The solution to these puzzles is nothing but patterns. You can write code that will solve the Tower puzzle for any size n and the whole program is less than a screen.
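To make that concrete, here’s a minimal sketch in Python (the disc count and peg names are just illustrative):

    # Minimal recursive Towers of Hanoi solver: prints every move for n discs.
    def hanoi(n, src, dst, aux):
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst)    # clear the top n-1 discs onto the spare peg
        print(f"move disc {n}: {src} -> {dst}")
        hanoi(n - 1, aux, dst, src)    # stack them back on top of the moved disc

    hanoi(25, "A", "C", "B")           # 2**25 - 1 moves, but the program stays tiny

The output is enormous (2^25 − 1 moves for 25 discs), but the pattern generating it never changes.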
The problem the researchers see is that on these long, pattern-based solutions, the models follow a bad path and then just give up long before they hit their limit on tokens. The researchers don’t have an answer for why this is, but they suspect that the reasoning doesn’t scale.


Thanks for linking that. Reading the paper, it looks like the majority of the “self-host” population they’re capturing is people who have a WordPress site. By my reading, the wording of the paper would disqualify a wordpress.com-hosted site as “self-hosted”. But I’d be very suspicious of their methodology and would expect that quite a few people on wordpress.com hosting reported themselves as self-hosted, because the wording is pretty confusing.


I was at Google when they announced that only AI-related projects would be able to request increased budget. I don’t know if they’re still doing that specifically, but I’m sure they are still massively incentivizing teams to slap an “AI Inside” sticker on everything.


lol is it even worth tracking what’s tariffed today?
I’d be curious to hear how you end up liking it.
As someone who spends a lot of time on the command-line, I’ve generally preferred MacOS over Windows as my not-Linux OS. But my impression is that for people who like the Windows or Linux GUI, MacOS is a bigger (and less pleasant) change.
And even on the command-line, MacOS is a different *nix distro and makes some pretty weird choices (launchd, plists, /etc is actually /private/etc, …) whereas you could have vanilla Ubuntu inside WSL2.