  • I’m convinced this was written by GPT.

    I’m a human being. I know my writing style can often come off weird to some people, but I can assure you I don’t outsource my thinking to a word prediction program to make my points for me.

    We disagree on how good or bad porn is for society and the youth, so the rest doesn’t even matter.

    I haven’t seen any evidence that light or moderate consumption of porn by legal adults produces significant negative consequences for them or society at large, so long as the porn doesn’t involve non-consenting parties, underage individuals, etc. Thus, I don’t think it’s reasonable to heavily monitor and restrict access to every single individual in our society.

    As for kids, research is obviously lacking since it’s a touchy subject for researchers to study, but we know that sex ed, conversations between kids and parents, and even the most basic parental controls and monitoring can prevent the vast majority of the negative effects, and often the initial underage consumption entirely, so that’s what I advocate for.

    Until I see evidence to the contrary, demonstrating that general consumption trends cause greater harm than surveilling the online media consumption of every single citizen (on top of the risks of online censorship), and that the methods we already know work well can’t reduce that risk below the possible harms of a monitoring/access-control system, I’m not going to support such a system.


  • You show your ID and a notary enters their credentials to allow you to create an account

    The problem then lies in how whoever runs the system (likely the government) can ensure that verified accounts are indeed verified by real people.

    If any notary can create these accounts by just claiming they saw a proper ID/biometrics, then even one malicious notary could make as many “verified” accounts as they want. And if notaries were then investigated, there’d have to be monitoring in place to see who they met with, which would defeat the privacy-preserving point of having only the notary see the ID in the first place.

    This also doesn’t solve the problem of people reselling stolen accounts, going to multiple notaries and getting each one to individually attest and make multiple accounts to give out or sell, etc.

    with your fingerprint or FaceID. Your ID doesn’t get saved. Your biometrics are only saved in the way that your iPhone saves them for a password.

    If your biometrics are stored, then there’s one of two places they could be stored and processed:

    1. On your own device (i.e. you just use your existing fingerprint lock on your phone to secure your account, say, one that’s made via a passkey so as to make fingerprint verification possible)

    This can be bypassed by the user once they log in with their biometrics, since the credentials are then decrypted and can be exported raw, or stolen by anyone who accesses the device or installs malware, etc.

    This doesn’t solve the sale, transfer, or multiple creations of accounts.
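
    To make the bypass concrete, here’s a minimal sketch. The “vault” and every name in it are invented for illustration; this is not any real platform’s API:

    ```python
    # Hypothetical sketch of the on-device case above. The biometric prompt
    # is only a local yes/no gate in front of a stored secret; it never binds
    # the account to the person holding the finger.

    device_vault = {"account_token": "raw-credential-abc123"}  # decrypted at login

    def biometric_unlock() -> bool:
        # Stand-in for the platform prompt (fingerprint/FaceID). The OS only
        # answers yes/no locally; nothing ties the credential to the server.
        return True  # assume the legitimate user scanned their finger

    def get_credential() -> str:
        if not biometric_unlock():
            raise PermissionError("biometric check failed")
        # Once the gate passes, the secret is just bytes on the device...
        return device_vault["account_token"]

    token = get_credential()
    # ...so nothing stops the user (or malware running as them) from
    # exporting, selling, or transferring it, biometrics bypassed entirely:
    print("raw credential, free to copy:", token)
    ```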

    2. A hash of your biometrics is stored on a government server, and your device provides the resulting hash of your fingerprint scan to that server to unlock your account when logging in.

    The scanner that originally creates the hash for your fingerprint must be trusted to not transmit any other data about your fingerprint itself, and could be bypassed by modifying network requests to send fake hashes to the government server during account creation, thus allowing for infinite “verified” accounts to be created and sold.

    This also doesn’t prevent the stealing or transfer of accounts, since you would essentially just be using your hash as a password instead of a different string of text, and then they’d just steal your hash, not a typical password. This also would mean the government would get a log of every time someone used their account, and you could be instantly re-identified the moment you go to the airport and scan your fingerprint at a TSA checkpoint, for example, permanently tying your real identity back to any account you verify with your biometrics in the future.
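
    A minimal sketch of why the hash behaves like any other password, with the “government server” simulated in-process and all names invented:

    ```python
    import hashlib
    import os

    # Hypothetical model of the "biometric hash as password" scheme above.
    enrolled_hashes: set[str] = set()  # what the government server stores

    def server_enroll(biometric_hash: str) -> None:
        # The server never sees a finger, only whatever digest the client sends.
        enrolled_hashes.add(biometric_hash)

    def server_login(biometric_hash: str) -> bool:
        return biometric_hash in enrolled_hashes

    # Honest flow: the device hashes the scan and sends the digest. It is,
    # functionally, a password the user can never change.
    real_digest = hashlib.sha256(b"alice's fingerprint scan").hexdigest()
    server_enroll(real_digest)
    print(server_login(real_digest))  # True

    # Problem 1: anyone who steals the digest can replay it, no finger needed:
    stolen = real_digest  # exfiltrated from the device or a breach
    print(server_login(stolen))  # True: the account transfers with the hash

    # Problem 2: a modified client can enroll random "hashes", minting
    # unlimited "verified" accounts to sell:
    for _ in range(3):
        server_enroll(hashlib.sha256(os.urandom(32)).hexdigest())
    print(len(enrolled_hashes))  # 4 accounts, one real finger
    ```

    Nothing in the protocol distinguishes a digest that came from a real scanner from one that came from /dev/urandom, which is exactly the problem.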

    The fundamental problem with these systems is that if you have to verify your identity, you must identify yourself somehow. If that requires sending your personal data to someone, it risks your privacy and security going forward. If that doesn’t require sending your personal data, then the system is easily bypassed, and its existence can’t be justified.

    What’s a solution that would be acceptable for you?

    I’ve said it before, and I’ll continue advocating for it going forward:

    • Parental controls and simple parent-controlled monitoring software on young children’s devices
    • Actual straightforward conversations between parents and kids about adult content
    • Sex ed classes

    We already know these things do the most we can reasonably do to prevent underage viewing of adult content. We don’t need age verification laws, because they either harm privacy or don’t even work, when much simpler, common sense solutions already solve the problem just fine.


  • they then authorize you to create an account

    Authorize you how?

    That would involve someone having the ability to see which accounts were made, when, and how they were authorized, not to mention likely being able to track when they’re used in the future.

    with biometric credentials

    What does this mean? Do you mean you verify your biometric data with the notary to prove it’s you? Your ID should be enough. Do you mean where your biometric data is your password? This doesn’t prove it’s you. If processing is on-device like how phone lock screens work, then a simple piece of software could just extract the raw credentials and allow people to use/sell/transfer those, bypassing the biometrics. If it requires sending your biometric data to the company to log in like a traditional password flow, then all my previous issues with biometric verification online become present.

    There’s still a key difference between this hybrid approach and, like I mentioned previously, buying alcohol by showing your ID to a clerk at a counter, and it’s that the interaction ends there. If you show ID, buy alcohol, then leave, the store doesn’t do anything after that. There’s no system monitoring when or how much you’re drinking, or if you’ve offered some of that drink to someone underage, for example.

    But with something like what you’re proposing, the unfortunate reality is that it has to have some kind of monitoring for it to functionally work, otherwise it becomes trivially bypassed, and thus the interaction can’t end when the person leaves.

    Not to mention the fact that not all platforms people find porn on are actually dedicated porn sites. Many people are first exposed via social media, just like how they’re exposed to much of their other information and general knowledge nowadays. If we want to age gate porn consumption on social media as well, we then need to age verify everyone, regardless of whether they intend to view porn, because we can’t ensure it won’t end up on their feed.

    There’s a reason why I’m so strongly against these verification methods, and it’s because they always cause a whole host of privacy and security issues, and don’t even create a strong enough system to prevent unauthorized porn viewing by minors in the first place.


  • The conflict that this often boils down to is that the digital world does not emulate the real world. If you want to buy porn in the real world, you need ID, but online anything goes. I love my online anonymity just as much as everybody else, but we’ll eventually need to find some hybrid approach.

    The problem is that because the internet is fundamentally different from the real world, it has its own challenges that make some of the things we do in the real world unfeasible in the digital world. Showing an ID to a clerk at a store doesn’t transmit your sensitive information to or through an unknown list of companies, who may or may not store it for an undetermined amount of time, but doing the equivalent on the internet essentially has to.

    While I do think we should try and prevent kids from viewing porn at young ages, a lot of the mechanisms proposed to do so are either not possible, cause many other harms by their existence that could outweigh their benefits, or are trivially bypassed.

    We already scan our faces on our phones all the time, or scan our finger on our computer. How about when you want to access a porn site you have to type in a password or do some biometric credential?

    Those systems are fundamentally different, even though the interaction is the same, so implementing them in places like porn sites carries entirely different implications.

    For example, (and I’m oversimplifying a bit here for time’s sake) a biometric scan on your phone is just comparing the scan it takes each time with the hash (a processed version) of your original biometric scan during setup. If they match, the phone unlocks.

    This verification process does nothing to verify if you’re a given age, just that your face/fingerprint is the same as during setup. It also never has to transmit or store your biometrics to another company. It’s always on-device.
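
    To make that oversimplification concrete, a toy sketch. Real systems do fuzzy matching on biometric templates rather than exact hash comparison, and every name here is illustrative:

    ```python
    import hashlib

    # Toy model of the on-device unlock described above: compare each new
    # scan against a processed version of the setup-time scan. Everything
    # stays local; nothing is transmitted or stored off-device.

    def make_template(scan: bytes) -> str:
        return hashlib.sha256(scan).hexdigest()

    enrolled = make_template(b"original fingerprint scan")  # stored at setup

    def unlock(new_scan: bytes) -> bool:
        # A match means "same finger as at setup", nothing more. It proves
        # no age and no identity.
        return make_template(new_scan) == enrolled

    print(unlock(b"original fingerprint scan"))  # True: phone unlocks
    print(unlock(b"someone else's finger"))      # False: stays locked
    ```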

    Age verification online for something like porn is much more complex. When you’re verifying a user, you have to verify:

    • The general location the user lives in (to determine which laws you must comply with: if not for the type of verification, then for data retention, security, and access)
    • The age of the user
    • The reality of the user (e.g. a camera held up to a YouTube video shouldn’t verify as if the person is the one in the video)
    • The uniqueness of the user (e.g. that this isn’t someone replaying the same clip of a face directly into the camera feed, allowing any number of people to verify using the same face)
    • And depending on the local regulations, the identity of the user (e.g. name, and sometimes other identifiers like address, email, phone number, SSN, etc)

    This all carries immense challenges. It’s fundamentally incompatible with user privacy. Any step in this process could involve processing data about someone that could allow for:

    • Blackmail/extortion
    • Data breaches that allow access to other services the person has an account on
    • Being added to spam marketing lists
    • Heavily targeted advertising based on sexual preference
    • Government registries that could be used to target opponents

    This also doesn’t include the fact that most of these can simply be bypassed by anyone willing to put in even a little effort. If you can buy an ID or SSN online for less than a dollar, you’ll definitely be able to buy an age verification scan video, or a photo of an ID.

    Plus, even for those unwilling to directly bypass measures on the major sites: if only the sites that actually fear government enforcement implement these measures, people will simply go to the less regulated sites.

    In fact, this is a well-documented trend: whenever censorship of any media happens, porn or otherwise, viewership simply moves to noncompliant services. And of course, those services can host much worse content than the larger, relatively regulatory-compliant businesses, such as CSAM, gore, nonconsensual recordings, etc.


    There’s absolutely something to be said for trying to ensure that people don’t have access to porn as kids, but that doesn’t come from what these legal battles inevitably want to impose: ID check requirements that create a massive treasure trove of data for attackers to target, letting them steal IDs, blackmail individuals, and violate people’s privacy, all while adding costs for porn sites that will inevitably lead to more predatory monetization, such as more invasive ads.

    The problem is that parents are offloading the responsibility for supervision and education from themselves and schools, and instead placing an unworkable burden onto the sites that host and distribute pornographic content.

    We know that when you provide proper sex education and talk to kids about how to safely consume adult content without risking their health and safety, while setting realistic expectations, you tend to get much better outcomes.

    If there’s one thing I think most people are very aware of, it’s that the more you try and hide something from kids, the more they tend to resist it and find it anyway, except without any proper education or safeguards.

    It’s why abstinence-only education tends to lead to worse outcomes than sex education, even though on the surface, you’re “exposing” kids to sexually related materials.

    This doesn’t mean we should deliberately expose kids to porn out of nowhere, remove all restrictions or age checks, etc, but it does mean that we can, for example:

    • Implement reasonable sex education in schools. Kids who have sex ed generally engage in healthier masturbation and sex than kids who don’t.
    • Have parents talk with their kids about safe and healthy sex & relationships. It’s an awkward conversation, but we know it keeps kids healthier and safer in the long run.
    • Implement a captcha-like system to make it a little more difficult (and primarily, slower and less stimulating) for kids to quickly access porn sites, e.g. requiring somewhat higher-level math problems to be solved, as in the rough sketch below. This doesn’t rely on giving up sensitive personal info.
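
    A rough sketch of what that gate could look like. This is purely illustrative; nothing here is from a real deployment:

    ```python
    import random

    # Friction-based gate: a math challenge that slows access down without
    # collecting any personal data.

    def make_challenge() -> tuple[str, int]:
        a, b, c = (random.randint(12, 99) for _ in range(3))
        return f"What is {a} x {b} + {c}?", a * b + c

    def gate() -> bool:
        question, answer = make_challenge()
        reply = input(question + " ")
        try:
            return int(reply) == answer
        except ValueError:
            return False

    if gate():
        print("access granted")   # no ID, no biometrics, nothing stored
    else:
        print("try again later")  # wrong answer: add delays, retry limits, etc.
    ```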

    Kids won’t simply stop viewing porn if you implement age gates. Kids are smart; they find their way around restrictions all the time. If we can’t reasonably stop them without producing a whole host of other extremely negative consequences, then the best thing we can do is educate them on how to avoid severely risking their own health.

    It’s not perfect, but it’s better than creating massive pools of private data, perverse financial incentives, and pushing people to more fringe sites that do even less to comply with the law.


    Chrome is relatively limited in scope compared to, say, a user on an instance of degoogled Chromium using the same Google services along with all the other browsing they do. The extra data that’s gathered is generally going to be things like a little more DNS query information (assuming your device isn’t already set to default to Google’s DNS server), links you visit that don’t already have Google’s trackers on them (very few), and some general information like when you’re turning on your computer and Chrome is opening up.

    The real difference is that Chrome doesn’t protect you the way other browsers do, and thus makes more of the indirect collection by Google’s services possible.

    Perplexity is still being pretty vague here, but if I had to guess, it would essentially just be taking all the stuff that Google would usually get from tracking pixels and ad cookies, and baking that directly into the browser instead of relying on individual sites to include it.


  • Not to mention the fact that the stronger IP law is, the more it’s often used to exploit people.

    Oh, did you as an artist just get granted stronger rights for your work? That platform you’re posting on demands that you give them a license for any possible use, in exchange for posting your art there to get eyeballs on your work.

    Did your patents just get stronger enforcement? Too bad it’s conveniently very difficult to fund and develop any product at scale under that patent without needing outside investor funding into a new corporate entity that will own the patent, instead of you!

    To loosely paraphrase from Cory Doctorow: If someone wants a stronger lock, but won’t give you the key, then it’s not for your benefit.

    If corporations get to put locks on everything with keys they own, but also make it hard for you to get or enforce access to the keys to the locks on your stuff, then the simplest way to level the playing field is to simply eliminate the locks.



  • These folks include presenting a false person as being of age, then switching to underage at the time of meetup when the target shows up.

    Across my own viewership of numerous channels that engage in pedophile hunting, I’ve never seen a single instance where the person is presented as being above the legal age of consent, only to switch to underage at the time of the meeting. They’re presented as underage from the get-go.

    Then the group tries to kill the person

    Again, this doesn’t seem to be a widespread practice compared to the number of hunters who simply lure the person to a location and then ask them questions (directly stating that they are free to leave at any time, since the hunters aren’t law enforcement and can’t arrest them). The people you’re talking about are a small minority, both of the actual number of pedo hunters and of the overall views received.

    And the perpetrators think this is justice.

    I doubt the people that are explicitly lying to farm content think it’s justice. I do believe the people actually catching people who voluntarily contacted someone presented as underage from the start do.


  • It depends on how these channels are going about finding their victims for it to be considered similar.

    Remember, entrapment is based around luring someone to do something they otherwise would not have done had the operation to entrap them not occurred. If they created an account posing as a minor, then directly DM’d a person asking if they wanted to do x/y/z with a minor, that would be entrapment.

    But if they made an account claiming to be a minor on social media, and the person contacted them voluntarily, asked their age, was told it was under 18 and still continued messaging, then sent explicit photos, that’s not entrapment.

    However, if they were then the people who initiated the conversation about wanting the person to come to their house / visit them somewhere, that could be considered entrapment, and the only evidence against the person that could be eligible for use in court would be the explicit material they sent without being prompted.

    It varies case-by-case, but from what I’ve seen, most of the larger operations tend to try and avoid entrapment-like tactics in most cases, where they only allow the other person to initiate unlawful behaviors, rather than prompting anything themselves.


  • For those who don’t care to read the full article:

    This basically just confines any cookies generated on a page to just that page.

    So, instead of a cookie from, say, Facebook being stored on site A, then requested for tracking purposes on site B, each individual site would be sent its own separate Facebook cookie that only gets used on that site, preventing Facebook from tracking you anywhere outside of the specific site you got it from in the first place.
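
    In other words, the cookie jar gets keyed by the top-level site you’re visiting as well as the cookie’s origin. A toy model, with invented names:

    ```python
    # Toy model of cookie partitioning as described above: the jar is keyed
    # by (top-level site, cookie origin) instead of just origin, so the same
    # facebook.com cookie set while on site A is invisible while on site B.

    jar: dict[tuple[str, str], dict[str, str]] = {}

    def set_cookie(top_level_site: str, origin: str, name: str, value: str) -> None:
        jar.setdefault((top_level_site, origin), {})[name] = value

    def get_cookies(top_level_site: str, origin: str) -> dict[str, str]:
        return jar.get((top_level_site, origin), {})

    # A Facebook embed sets a cookie while you browse site-a.example:
    set_cookie("site-a.example", "facebook.com", "tracker", "id-123")

    # On site-b.example, the same facebook.com embed gets an empty jar; the
    # partition key differs, so cross-site tracking via the cookie fails:
    print(get_cookies("site-a.example", "facebook.com"))  # {'tracker': 'id-123'}
    print(get_cookies("site-b.example", "facebook.com"))  # {}
    ```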