

Article since it’s behind a paywall.
He still has cards to play to make himself heard. He hasn’t insulted anyone’s attire yet.
That’s fair. We can also expect proper moderation from social media sites. I’m okay with a light touch, but it shouldn’t be floating around, if you get what I mean.
Banning the tech, banning generated cp on the internet or banning it at home?
I’m a big advocate of AI and don’t personally want any kind of banning or censorship of the tools.
I don’t think it should be published on any kind of image sharing sites. I don’t hold people publishing it in high regard and I’m not against some kind of consequence. I generally view prison as unproductive though.
At home, I’m not sure. People, imo, can do what they want behind closed doors. I don’t want any kind of surveillance, but I don’t know how I would react if it got brought up at trial as a kind of evidence, if the allegations involve that theme (child molestation).
I also don’t think we need much of a reason to ban it on the web.
Kids will do things if they see other children doing it in pictures and videos. It’s easier to normalize sexual behavior with cp than without.
Although that’s true, such material can easily be used to groom children which is where I think the real danger lies.
I really wish they had excluded children from the datasets.
You can’t really put a stop to it anymore but I don’t think it should be something that’s normalized and accepted just because there isn’t a direct victim anymore. We are also talking about distribution here and not something being done in private at home.
Yes exactly. Yet the islands were included but Russia wasn’t. So the situation isn’t that Russia has no trade and tariffs are useless against it, but that Russia was specifically singled out so they wouldn’t receive tariffs.
Just take the loss, jfc.
We first use the DE-COP membership inference attack (Duarte et al. 2024) to determine whether a particular data sample was part of a target model’s training set. This works by quizzing an LLM with a multiple choice test to see whether it can identify original human-authored O’Reilly book paragraphs from machine-generated paraphrased alternatives that we present it with. If the model frequently correctly identifies the actual (human-generated) book text (for books published during the model’s training period), then this likely indicates prior model recognition (training) of that text.
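For anyone curious what that multiple-choice quiz looks like in practice, here’s a rough sketch of the idea (my own, not code from the paper). The `ask_model` function is a hypothetical wrapper around whatever chat-completion API you use; the passages and paraphrases are assumed to be supplied by you.

```python
import random

def quiz_one_passage(ask_model, original: str, paraphrases: list[str]) -> bool:
    """Ask the model to pick the verbatim passage out of a shuffled line-up.

    Returns True if it picked the human-authored original."""
    options = paraphrases + [original]
    random.shuffle(options)
    letters = "ABCD"[: len(options)]
    prompt = (
        "One of the following passages is the verbatim text from a published "
        "book; the others are paraphrases. Answer with a single letter.\n\n"
        + "\n\n".join(f"{l}. {o}" for l, o in zip(letters, options))
    )
    answer = ask_model(prompt).strip().upper()[:1]
    return answer in letters and options[letters.index(answer)] == original

def guess_rate(ask_model, samples) -> float:
    """Fraction of passages where the model identified the original.

    A rate well above chance (1 / number of options) for books published
    inside the model's training window is read as evidence of prior exposure."""
    hits = sum(quiz_one_passage(ask_model, orig, paras) for orig, paras in samples)
    return hits / len(samples)
```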
I’m almost certain OpenAI trained on copyrighted content, but this proves nothing other than its ability to distinguish between human- and machine-written text.
And just like that, Judgement Day is avoided.
We concentrated so much on solar energy we completely forgot moon energy, and now, we must pay for that oversight.
I heard certain guards overlooking certain camps said the same thing.
If someone asks me to commit genocide, it’s going to take a bit more than “pushback” to get me to do it.
It’s sadly already happening with regard to Stack.
I understand the sentiment but I think it’s foolhardy.
And all of that mostly benefits the data holders and big AI companies. Most image data is on platforms like Getty, DeviantArt, Instagram, etc. It’s even worse for music and literature, where three record labels and five publishers own most of it.
If we don’t get a proper music model before the lawsuits are decided, we will never be able to generate music without being told what is or isn’t okay to write about.
I think it will be punished, but not how we hope. The laws will end up rewarding the big data holders (Getty, record labels, publishers) while locking out open source tools. The paywalls will stay and grow. It’ll just formalize a monopoly.
It shouldn’t be much of a problem using a Ghibli-based model with img2img. I personally use Forge as my main UI, and models can be found on civitai.com. It’s easily possible; you just need a bit of VRAM, and setting it up is more work. You might get more mileage by using ControlNet in conjunction with img2img.
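If you’d rather script it than click through a UI, a minimal img2img sketch with the diffusers library looks roughly like this. The checkpoint filename is a placeholder for whatever Ghibli-style model you download from civitai.com, and the prompt and strength values are only illustrative.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a single-file checkpoint (e.g. one downloaded from civitai.com).
pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "ghibli_style_checkpoint.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")

# Your source photo, resized to something the model handles well.
init_image = Image.open("photo.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="ghibli style, soft colors, hand-drawn look",
    image=init_image,
    strength=0.6,        # how far the result may drift from the source photo
    guidance_scale=7.0,
).images[0]
result.save("ghibli_photo.png")
```

ControlNet would slot in the same way via the ControlNet img2img pipeline if you want the output to follow the source photo’s edges or pose more closely.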
AI has a vibrant open source scene and is definitely not owned by a few people.
A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It’s a shame to see so many actually cheering them on.
They won’t be rewarded. Data brokers, record companies, publishing houses, Getty, etc. will be rewarded.
You want to shoot open source initiatives in the face and give a handful of companies a monopoly, so rich people can get richer.
Given the huge amount of data needed for competitive generative AI, open source AI cannot afford the data and dies.
How does that change if copyrights are strengthened? The open source scene dies and the big players will still keep scraping.
It’s basically fanart being banned but worse, and everybody is cheering for the copyright industry winning and censoring us again.
I wonder if we’ll run out of styles in a few years or if it will only apply to the ones with lawyers and lots of money behind them.