

“I should base my surveys about human behavior solely on responses from non-human machines” said… someone, apparently? Damn. 💀


Fun, entirely tangential fact: if you want a disposable email address for random online services you’ll never use again, try https://www.sharklasers.com/ !


We can’t just wait for that to happen; Xitter needs to shut the technology down before the first image gets them in trouble.
That is why this bill imposes heavy fines. No company has a good reason to operate a tool like this if every single nude generated can cost them half a million dollars.
This bill isn’t a “you can do this but we’ll give you a slap on the wrist each time” bill, it’s a “if you build a tool like this and let it loose, your company is going bankrupt, and we’re talking your life savings, so you’d better not” bill.


(I’m citing the law, not the article)
There are a few things that I think help prevent something like that from happening.
“Nudify” or “nudified” means the process by which: an image or video is altered or generated to depict an intimate part not depicted in an original unaltered image or video of an identifiable individual
“Intimate parts” includes the primary genital area, groin, inner thigh, buttocks, or breast of a human being.
So a reasonably sized bikini probably wouldn’t qualify, because it still covers intimate areas to some degree, but anything too skimpy would.
The prohibitions in subdivision 2 do not apply when the website, application, software, program, or other service requires the technical skill of a user to nudify an image or video.
So something like Photoshop wouldn’t qualify because you’d need the skills to actually edit images yourself.
I think this:
“No, see… My app is designed to show you what you look like in user-created outfits. Like a virtual closet mirror! What do you mean users are trying on tiny bikinis and clear cellophane dresses? How could I ever have planned for that?”
Would be prevented by this law, and with very good reason. Anyone developing a feature like that could very well build a filter that detects when a sensitive area is exposed that wasn’t exposed in the original image. If they put technical safeguards in place, and it takes a reasonably large amount of effort for a user to bypass them, then the site wouldn’t be liable, because bypassing would require the “technical skill of a user”.
A site like that can exist, and being able to digitally try on outfits is nice, but it shouldn’t be allowed to ignore the obvious consequences of not putting restrictions on how much skin can be shown.


It’s one thing for a working person to spend whatever income they don’t need to live at a baseline on others, it’s another for someone to hoard so much money they couldn’t necessarily physically spend it all if they tried, let alone spend it on things that would actually measurably increase their happiness.
You can argue regular people should donate more, and depending on which source you trust, many actually do donate more than billionaires as a percentage of assets or income (a lot of donations are hard to track, from large billionaire foundations and DAFs down to small donors with hard-to-classify spending). But there is a massive gap in how much a regular person can donate relative to a rich person, even just as a percentage of their income.
If you live paycheck to paycheck, but have, say, $20 left over at the end of the month in actual money to your name that hasn’t already gone to groceries, rent, etc, (and we assume you have no other assets), your net worth is $20.
If a billionaire donates $999,000,000 to charity, that would be the equivalent of that person donating $19.98.
Unlike that person though, the billionaire would have a million dollars in net worth, enough money to buy a house, while the regular person would have $0.02.
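The comparison above can be sketched in a few lines, using the same hypothetical figures (a $20 net worth vs. a $1 billion one):

```python
# Same hypothetical figures as above: a worker with $20 to their name
# vs. a billionaire worth $1,000,000,000.
def donated_share(net_worth: float, donation: float) -> float:
    """Fraction of net worth given away."""
    return donation / net_worth

billionaire_share = donated_share(1_000_000_000, 999_000_000)   # 99.9%
equivalent_worker_donation = 20 * billionaire_share             # $19.98

print(f"Billionaire gives {billionaire_share:.1%} and keeps $1,000,000")
print(f"The same share of a $20 net worth is ${equivalent_worker_donation:.2f}, leaving $0.02")
```

Same percentage given away, wildly different amounts left to live on.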
Even if these conditions aren’t perfect, and you assume the person has somewhat more net worth than $20, the point still stands. A billionaire can give up almost all of their net worth and still live comfortably, or at least meet the average person’s basic living standards. For most Americans, losing a job, getting one surprise bill, or earning less than expected can mean homelessness the next time rent is due, even if they give up none of their existing assets and just stop adding more on top.
This is why “billionairism” (not a real term ofc) is such a damaging condition. It not only causes you to become obsessed with hoarding wealth that you don’t necessarily need, but it causes you to do so at the expense of others you could readily help without experiencing any material downside in your everyday life. There is no reason to hoard so much wealth.
Money is just a means to get or do things. If you are not spending that money, and you have more money than you’ll ever need to spend, that excess dollar value past your realistic spending for the rest of your life is just a valueless number to you. It’s a number that will never impact your life, but it can impact others. Hoarding it is stupid and immoral.


Hence why this article is about them leaving the openly Nazi billionaire’s platform while remaining on other platforms that are mainstream and still provide a lot of reach :)


I totally recommend just clicking through random pages on there for a good 10 minutes or so. You’ll be surprised at how many random, cool, interesting things you find that you’d probably never see otherwise in a million years. (also an AMAZING place to find small blogs to add to your RSS reader of choice)
They feed a lot of this into their main search index if you pay for Kagi search too, so a lot of these sites will appear higher than they would on Google, Bing, DuckDuckGo, etc, if their content is relevant. Especially fediverse sources.


The email dump has an email in it that’s ALL in Papyrus 😭
(File 2010-03-16 03-04-26 Sugar.eml)


Could work. A lot of the time these current systems have… dubious liveness checks.
Over time they’re definitely going to get better, though, and I have a feeling that with AI watermarking being baked into a lot of the actually good models, bypassing them won’t be super reliable or repeatable.


Hence why I said most.
Regardless though, you know they’re gonna up the ante as they go. The more normalized it becomes to share more data, the easier it is for them to ask everyone for it too.


Most age verification providers also require video with the person’s face doing specific movements, which is then matched with the ID, so stealing an ID probably wouldn’t be enough.
Not that it’ll stop kids from trying, and sending their parent’s ID to some random sketchy company without their knowledge anyways.


You could make that argument about any tool Wikipedia editors use. Why should they need spellcheck? They were typing words just fine before.
…except it just makes it easier to spot errors or get little suggestions on how you could reword something, and thus makes the whole process a little smoother.
It’s not strictly necessary, but this could definitely be helpful to people for translation and proofreading. Doesn’t have to be something people are wholly reliant on to still be beneficial to their ability to edit Wikipedia.


“More secure” is a minefield of marketing and intentionally misleading the populace.
Here is the popular phone cracking company Cellebrite’s leaked slides showing them telling the people they’re selling their tools to that they can’t as easily (if at all, depending on device state) crack GrapheneOS as they can stock Android:
https://grapheneos.social/@GrapheneOS/112462758257739953 (This is just a well-summarized and explained post from GrapheneOS themselves, but the original leak was independent of them, and the slides and final interpretation are no different from what GrapheneOS is showing, thus I wouldn’t consider this just “marketing”)
Objectively, if you have a GrapheneOS phone and you plug it into a Cellebrite machine, it will not have its data extracted, whether it’s before first unlock or after first unlock but sitting on the lock screen (as long as you’ve installed security patches from roughly 2022 onward, which most GrapheneOS phones will have). Stock Android phones, and even many iPhones, were not as resistant to brute-force attacks or full file system extractions as a Pixel with GrapheneOS.
GrapheneOS also has additional features that can make the cracking process even more difficult, such as disabling USB on the lock screen even after first unlock, automatically rebooting after a set period to return the phone to the BFU (before first unlock) state, or setting a duress PIN that wipes the phone, which could be triggered by a brute force before the real PIN is guessed.
Also, in case you want to look at the diagrams in the post more since they don’t really explain all the acronyms, here’s a key:
I forget which country it was, but Graphene was specifically listed as being used by criminals/drug dealers.
You might be referring to Catalonia, Spain?
In their case, it was more about Pixel phones in general being used by criminals, and GrapheneOS being their OS of choice which made cracking them harder, rather than GrapheneOS itself being considered criminal or suspicious, but I get where you’re coming from.
You could also be referring to the UK, but that case involved a journalist who used GrapheneOS, and the charge was refusing to unlock his phones. And yes, I said phones, because he was also carrying an iPhone, and they wanted that password too. So in that case the charge wasn’t GrapheneOS-specific.
There’s also France, which went after GrapheneOS because they wanted an encryption backdoor. GrapheneOS just said no, so France told police to consider any Pixel with GrapheneOS “suspicious”, but not to treat it as a crime in itself (nor did they have the legal authority to do so). GrapheneOS actually migrated all their server infrastructure out of France as a result.
The point is that now, using Graphene counts against you for the purposes of pressing charges or taking you to a black site.
Generally speaking, even in those areas, this (fortunately) just isn’t true. You are more likely to be considered suspicious in Catalonia if you have… a Pixel, GrapheneOS or not. You’re likely to be criminally charged in the UK… if you don’t give up your password, GrapheneOS or not. And you’re likely to be considered “suspicious” in France… but can’t be charged with anything for it, and the only way they’ll know if you have GrapheneOS installed is if you were already arrested for something else and had your phone seized.
Practically speaking, it’s better to support an OS that protects your data, even if it could increase the risk of you getting in trouble for protecting it, than an OS that doesn’t protect your data and hands it all to the authorities, which makes the question of whether you’re considered suspicious moot. After all, you could voluntarily unlock your GrapheneOS phone in any of these jurisdictions and avoid all of these possible consequences, exactly as you would with any other phone; the difference is that a non-GrapheneOS phone gives up your data whether you provide your PIN/password or not.
So this:
That is an extra charge.
Just isn’t (at least currently) the case, since no region currently doing anything against GrapheneOS has made the act of having it installed a crime in itself.
Not to say this couldn’t change, and you’re totally valid in assuming that governments will try to push this, but at least currently, using GrapheneOS will not in itself increase the chance of you going to a black site.


Don’t forget Kagi! (though it isn’t directly comparable to the others, since it’s paid but ad-free)


Why are they spending money on infrastructure and support but getting no revenue in return?
I already addressed this in my comment. If you want me to expand on how they most definitely can make money from something like this, Mozilla:
If this feature brings in new users, they can get revenue from any of these 3 sources, especially the sponsored listings. If this feature is just a benefit for existing users that might have already changed all their defaults and disabled sponsored content, it increases the chance of VPN conversions and donations, and increases the likelihood someone will recommend Firefox to a friend.
Either they are okay with losing even more money, OR they plan to enshittify.
Or they’re trying to get and retain users, which helps them make money from existing revenue options without having to make anything worse, while also providing a beneficial feature. I’m not saying there’s no chance they’ll enshittify, but I don’t think unconditional pessimism is the right move here.
For this and many many other reasons, it’s time to switch to a privacy fork like LibreWolf or WaterFox
I can’t speak to Waterfox myself, but I would agree with saying LibreWolf is a good idea if you care.
I just personally haven’t bothered switching since Firefox currently works fine for me, and anything they’ve done I dislike is fairly easy to just disable in settings and never see again.


For everyone who thinks this is just gonna be a way for them to somehow sell your data, I don’t think so.
Think about it like this: you can buy a VPN plan for $2 a month or less, depending on the provider, if you commit long-term (e.g. 1-2 years). That pricing includes margin.
Firefox can essentially operate at lower prices than that, because they:
I would bet this would probably cost Mozilla less than a dollar per user per month, and that’s also assuming all those users are continuing to use the VPN service over time, maxing out their data limit, but refusing to pay for anything else after.
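As a back-of-envelope sketch of that argument (every number below is an assumption for illustration; none of them come from Mozilla’s actual costs):

```python
# All figures here are made-up assumptions for illustration, not Mozilla's real numbers.
wholesale_cost_per_user = 1.00   # assumed monthly cost Mozilla pays per free-VPN user
months_before_deciding = 6       # assumed time a trial user keeps using the free tier
cost_via_bundled_vpn = wholesale_cost_per_user * months_before_deciding  # $6.00

typical_ad_cpa = 10.00           # assumed cost to acquire one new user via paid ads

if cost_via_bundled_vpn < typical_ad_cpa:
    print("Under these assumptions, the bundled free VPN is the cheaper acquisition channel")
```

The exact numbers don’t matter much; the point is that if the per-user cost of the free tier stays below what an ad-driven conversion costs, the feature pays for itself as marketing.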
Meanwhile, Mozilla conveniently sells their own VPN service provided through Mullvad, which they make a profit on.
If a user cares enough to continue using the VPN because they want a VPN, they’ll blow through the data limit and be more inclined than the average user to pay for Mozilla’s option. (rather than going “I guess I’ll only care about my privacy for 5 days out of the month”)
If a user doesn’t care enough to keep using the VPN because they were just trying it out, but the free bundled VPN is what sold them on Firefox over another browser, Mozilla just paid less for that conversion than an ad would have cost.
And at the end of the day, it also just helps keep up their reputation as a browser that respects your privacy, which makes it easier to promote the browser elsewhere, in ads or otherwise.
This feels more like a marketing ploy that’s likely to just save money on ad conversions for new Firefox users, and increase Mozilla VPN conversions, rather than something they’re gonna use to super secretly siphon off your data and sell it to advertisers.


It’s also not as SEO-gameable (since fediverse domains are inherently more fragmented than a large, high-reputation domain for SEO algorithms to rank highly), and doesn’t have an inherent monetization system (unlike platforms like Twitter with their ad payouts), so that’s a couple more things going for us.


Make sure to sign up via a creator’s link! (the ones they’ll put in the sponsored section of a video where they are “sponsored” by Nebula as one of Nebula’s creators)
Gets you a pretty good discount and drops it to about 30 bucks a year.


True, but that also depends on the circumstance.
Again, a lot of people just use LLMs now as their primary search engine. Google is an afterthought, ChatGPT is their source of choice. If they ask a simple question with legal or medical implications, with tons of sources, that the LLM answers with identical accuracy to those other publications, should they be sued?
I think it would be a lot better to allow people to sue if it provides false advice that ends up causing some material harm, because at the end of the day, a lot of stuff can be considered “medical.”
Maybe a trans person asks what gender affirming care is. Is that medical? I’d say it is. Should that not get discussed through an LLM if a person wants to ask it?
I’m not saying I wholeheartedly oppose the idea of banning them from giving this type of advice, but I do think there are real questions about how many people this would actually benefit versus just cutting people off from information they might not bother to look up elsewhere, or worse, pushing them to less reputable, more fringe sites with fewer safeguards and less accountability.
Most AI models at this point won’t see significant gains from training on such a small sample of code.
You don’t need a whole corporation’s code to make a functional model, you need the whole world’s.
Adding a tiny bit of your own company’s code to the mix doesn’t really change the model much, so they generally won’t do it for that reason: tons of training cost, and the only benefit is a model that’s very, very slightly fine-tuned to kinda sorta produce code that’s maybe possibly a little more stylistically similar to yours.