Well that’s not terrifying at all.
Our names, numbers, and home addresses used to be in a book delivered to everyone’s door or found stacked in a phone booth on the street. That was normal for generations.
It’s funny how much fuckwits can change the course of society and how we can’t have nice things.
Still are. I got a phone book delivered a week ago, I shit thee not. Granted I’m on a small island and the book is small too. But like, you can pay to have your number removed from the book. Can you have it removed from this? Not to mention all the 2FA stuff that can be connected to the phone number. Someone clones your number or takes it and suddenly they’ve got access to a whole lot of your login stuff.
Pay to have it removed! That sounds like blackmail doxing.
My phone book is smaller than a novel and only has yellow pages these days.
Damn that’s interesting. I like how they walked through step by step how they got the exploit to work. This is what actual real hacking is like, but much less glamorous than what you see in the movies.
Casually rotating 18,446,744,073,709,551,616 IP addresses to bypass rate limits.
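That number is 2^64, the size of a single IPv6 /64 block, which is the smallest prefix normally handed to an end host. A quick sketch of why per-IP rate limiting breaks down there (using Python's stdlib `ipaddress` module and the reserved 2001:db8::/64 documentation prefix purely for illustration):

```python
# 18,446,744,073,709,551,616 is 2**64: the number of addresses in a
# single IPv6 /64 prefix. A host controlling one /64 can present a
# fresh source address on every request, defeating naive per-IP limits.
import ipaddress
import random

net = ipaddress.IPv6Network("2001:db8::/64")  # documentation prefix, illustrative only

print(net.num_addresses)   # 18446744073709551616, i.e. 2**64

# Pick an arbitrary address inside the /64 as the next "identity".
offset = random.getrandbits(64)
addr = net[offset]
print(addr, addr in net)   # always inside the same /64
```

This is why sensible rate limiters bucket IPv6 clients by /64 (or coarser) rather than by individual address.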
I am not in IT security, but find it fascinating what clever tricks people use to break (into) stuff.
In a better world, we might use this energy for advancing humanity instead of looking at how we can hurt each other. (Not saying the author is doing that, just lamenting that IT security is necessary due to hostile actors in this world.)
$5,000
This is like 1/10th of what a good blackhat hacker would have gotten out of it.
I always wonder what’s stopping security researchers from selling these exploits to Blackhat marketplaces, getting the money, waiting a bit, then telling the original company, so they end up patching it.
Probably break some contractual agreements, but if you’re doing this as a career surely you’d know how to hide your identity properly.
The chances that such an old exploit gets found at the same time by a whitehat and a blackhat are very small. It would be hard not to be suspicious.
Yes, but I was saying the Blackhat marketplaces wouldn’t really have much recourse if the person selling the exploit knew how to cover their tracks. i.e. they wouldn’t have anyone to sue or go after.
I’m saying blackhat hackers can make far more money off the exploit by itself. I’ve seen far worse techniques being used to sell services for hundreds of dollars, and the people behind those are making thousands. An example is the slow bruteforcing of blocked words on a YouTube channel, since the owner might have blocked their name, phone number, or address.
What you’re talking about is playing both sides, and that is just not worth doing for multiple reasons. It’s very obvious when somebody is doing that. People don’t just find the same exploit at the same time in years old software.
Google, Apple, and the rest of big tech are pregnable despite their access to vast amounts of capital and labor.
I used to be a big supporter of using their “social sign on” (or more generally speaking, single sign on) as a federated authentication mechanism. They have access to brilliant engineers, so I naively thought: “well, these companies are well funded and security focused. What could go wrong with having them handle a critical entry point for services?”
Well as this position continues to age poorly, many fucking aspects can go wrong!
- These authentication services owned by big tech are much more attractive to attack. Finding that one vulnerability in their massive attack surface is difficult but not impossible.
- If you use big tech to authenticate to services, you are now subject to the vague terms of service of big tech. Oh, you forgot to pay your Google store bill because the card on file expired? Now your Google account is locked out, and you lose access to hundreds of services that have no direct relation to Google/Apple.
- Using third party auth mechanisms like Google often complicates the relationship between service provider and consumer. Support costs increase: when an 80-year-old forgets the password or 2FA method for their Google account, they go to the service provider instead of Google to fix it. Then you spend inordinate amounts of time and resources trying to fix the issue. These costs are eventually passed on to customers in some form or another.
Which is why my new position is for federated authentication protocols. Similar to how Lemmy and the fediverse work but for authentication and authorization.
Having your own IdP won’t fix the third issue, but at least it will alleviate the first and second concerns.
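The plumbing for that self-hosted-IdP setup is already standardized in OpenID Connect: a relying party only needs an issuer URL to discover the IdP’s endpoints and build an authorization request. A rough sketch of the relying-party side, where the issuer, client ID, and callback URL are all hypothetical placeholders:

```python
# Sketch of an OpenID Connect authorization-code request against a
# self-hosted IdP. Issuer/client_id/redirect_uri are hypothetical;
# the discovery path follows the standard OIDC convention.
from urllib.parse import urlencode

ISSUER = "https://idp.example.com"  # your own IdP (placeholder)
DISCOVERY_URL = ISSUER + "/.well-known/openid-configuration"

def build_auth_url(authorization_endpoint: str, client_id: str,
                   redirect_uri: str, state: str) -> str:
    """Construct a standard OIDC authorization-code request URL."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",
        "state": state,  # CSRF protection: verify this on the callback
    }
    return authorization_endpoint + "?" + urlencode(params)

url = build_auth_url(ISSUER + "/authorize", "my-service",
                     "https://service.example.org/callback", "xyz123")
print(url)
```

The point is that nothing in the protocol ties you to Google or Apple; any issuer that serves the discovery document can play the IdP role.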
I set up my GrandCentral, now Google Voice, account using a VoIP number from a company that went defunct many years ago. My Google accounts use said Google Voice phone number to validate, because GrandCentral wasn’t owned by Google back then. I assume this use case is so small there is no point fixing it. So essentially, my accounts fall into a loop where Google leads to Google, etc.
heh
I did something of the opposite. I had a Verizon number. I moved it to Google Voice. I had a second Google Voice number that then became a Google Fi number. So now I have a Verizon-coded Google Voice number (that my bank accepts, etc.), and a Google Fi number that was originally a Google Voice number. I’m curious how this honestly affects me. My work numbers have never been associated with my personal accounts, so there’s that.
God, I hate security “researchers”. If I posted an article about how to poison everyone in my neighborhood, I’d be getting a knock on the door. This kind of shit doesn’t help anyone. “Oh but the state-funded attackers, remember stuxnet”. Fuck off.
Without researchers like that, someone else would figure it out and use it maliciously without telling anyone. This researcher got Google to close the loophole that the exploit requires before publicly disclosing it.
That’s the fallacy I’m alluding to when I mention stuxnet. We have really well funded, well intentioned, intelligent people creating tools, techniques, and overall knowledge in a field. Generally speaking, some of these findings are more makings than findings.
This disclosure was from last year and the exploit was patched before the researcher published the findings to the public.
I think the method of researching and then informing the affected companies confidentially is a good way to do it but companies often ignore these findings. It has to be publicized somehow to pressure them into fixing the problem.
Indeed, then it becomes a market and it incentivises more research on that area. Which I don’t think is helpful for anyone. It’s like your job description being “professional pessimist”. We could be putting that amount of effort into building more secure software to begin with.
I think it’s important for users to know how vulnerable they really are and for providers to have a fire lit under their ass to patch holes. I think it’s standard practice to alert providers to these finds early, but I’m guessing a lot of them already knew about the vulnerabilities and often don’t give a shit.
Compared to airing this dirty laundry I think the alternatives are potentially worse.
Hmm I don’t know… Users usually don’t pay much attention to security. And the disclosure method actively hides it from the user until it no longer matters.
For providers, I understand, but can’t fully agree. I think it’s a misguided culture that creates busy-work at all levels.