This is totally expected and also absolutely peanuts compared to Intel, who once released a processor that managed to perform floating point long division incorrectly in fascinating (if you’re the right type of nerd) and subtle ways. Hands up everyone who remembers that debacle!
Nobody? Just me?
Anyway, I totally had — and probably still have, somewhere — one of the affected chips. You could literally check whether yours was one of the flawed ones using the Windows calculator.
If I remember correctly, the Intel floating-point thing didn't come up as a negative for most users the way AI does.
Does AI come up as a negative for most users? Here on Lemmy, surely yes. But out there I see and hear people using it (for dumb shit, mind you) all the time and being happy about it.
A lot of people are fine with getting wrong answers about shit they don't already know. That's what gets spread on social media, which made up a large portion of the training data and is what's available when the AI does a web search.
It presents something that looks right, and that's what most people care about.
If only that recall had actually bankrupted the company. I wonder where we would be today…
I remember too, buddy. It’s important to never forget.
Edit: oh, I guess it’s important to forget.
Hah! That was my first thought, too, when I saw the headline.
Microsoft announces new Chief Accuracy Officer, Jack Handey
Mr. Handey has released a statement:
Instead of having “answers” on a math test, they should just call them “impressions,” and if you got a different “impression,” so what, can’t we all be brothers?
"If you ever fall off the Sears Tower, just go real limp, because maybe you’ll look like a dummy and people will try to catch you because, hey, free dummy.”
Ah yes, Mr. Engineer, my impression of this structural assembly is that it's okay, but it could really be better over there. No need for a second impression.
Obviously, the problem is that you’re asking the wrong questions. The AI is infallible. We just need to get the end user to accept that sometimes 2+2 = 5. Just depends on what Big Brother tells you.
That’s a great question! I’ll be happy to help you count the lights. I see five lights.
Here are a few ways you can improve indoor lighting:
That’s a great question! I’ll be happy to help you count the lights. I see five lights.
This symbolizes the fact that for the last five hundred years white people have been victims of genocide in South Africa.
Would you like to learn more?
So we lost our freedom?
It was a reference to Orwell's 1984: “Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.”
Somewhat off-topic, but that's the first time in a long time I've read a random article on the internet and instantly liked the writer's style, irrespective of the topic.
That was a depressing article, but a very enjoyable read.
I really need to start actually reading articles and following authors instead of just scrolling through headlines.
ITT: people who didn’t read the article.
Excel is still doing the calculations, not the AI. The AI is helping to write functions. You can easily spot check a couple examples then apply that same formula down the column. I don’t really see the issue.
Of all the things to shove AI into, the first thing that came to my mind years back was Excel. It’s handy when I’m presented a spreadsheet of data at work and I just want to do something like “write a function to extract just the number from a column containing data formatted like LPF_PHASE_OF_CARE [PAF 304001]” because I just want to copy paste all the numbers somewhere. It’s trivial to verify it works correctly, I can examine the formula, and I don’t have to wade through numerous shitty Excel tutorial websites to try and teach myself something I’ll use once or twice a year.
Quick shitpost images I share with friends and Excel functions are where I get the most utility out of AI, which in general I think sucks and is massively overhyped.
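For anyone curious what that looks like in practice, here's a minimal sketch of the kind of formula that task produces, assuming the data sits in column A, every cell ends in a `[PAF <digits>]` tag like the example above, and you're on a version with TEXTAFTER/TEXTBEFORE (Excel 365):

```
=VALUE(TEXTBEFORE(TEXTAFTER(A2, "[PAF "), "]"))
```

On older versions, a MID/FIND equivalent does the same thing, just with more character counting:

```
=VALUE(MID(A2, FIND("[PAF ", A2) + 5, FIND("]", A2) - FIND("[PAF ", A2) - 5))
```

Either way it's a ten-second spot check against a couple of rows before filling it down the column.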
Honestly, if they just made it easier to craft a formula (like, I dunno multiple lines, some kind of better color coding of matched parentheses, etc), that’d go a lot farther.
Excel is still doing the calculations, not the AI. The AI is helping to write functions.
This distinction is immaterial. It's like a big child grabbing a smaller child's hand and slapping them with their own hand while saying "quit hitting yourself." It's like trying to get out of a speeding ticket by arguing that all you did was push the accelerator… truly, it was the fuel injectors forcing the vehicle to an illegal speed.
Just because you've adjusted the abstraction layer at which you've ceded deterministic outcomes doesn't mean AI isn't doing it.
You can easily spot check a couple examples then apply that same formula down the column.
This may be appropriate in some scenarios, specifically:
- When accuracy isn't important
- When you will never need to justify what is being done to anyone (including yourself)

This, however, covers a decidedly small portion of professional work done using Excel.
My math teachers always told me that “math is not an opinion”.
I’d like to see them defend that now!
A worthy successor to the 65535 Excel bug.
One of the many random numbers that live rent free in my head lol
There is nothing random at all about that number! It's the largest unsigned integer that can be represented in sixteen bits, i.e., 2^16 - 1.
Every number is random.
No
Even then, the number was actually stored correctly; Excel just lied to you and showed a different number.
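For anyone who never saw it: the widely circulated repro, if memory serves, was a multiplication whose true result is 65,535:

```
=850*77.1
```

In binary floating point the product lands a hair off of exactly 65,535, and that particular value tripped a bug in Excel 2007's new rendering code, so the cell displayed 100,000 while the stored value (and anything calculated from it) stayed correct.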
This AI will stack wrong calculations on top of wrong calculations and cascade everything.
Lemme guess. It’s “AI Integrated”
IF THEN MAYBE...