Hello everyone,
We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do because they will just post from another instance, since we changed our registration policy.
We keep working on a solution, we have a few things in the works but that won’t help us now.
Thank you for your understanding and apologies to our users, moderators and admins of other instances who had to deal with this.
Edit: @Striker@lemmy.world the moderator of the affected community made a post apologizing for what happened. But this could not be stopped even with 10 moderators. And if it wasn’t his community it would have been another one. And it is clear this could happen on any instance.
But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what’s next very soon.
Edit 2: removed that bit about the moderator tools. That came out a bit harsher than how we meant it. It’s been a long day and having to deal with this kind of stuff got some of us a bit salty to say the least. Remember we also had to deal with people posting scat not too long ago so this isn’t the first time we felt helpless. Anyway, I hope we can announce something more positive soon.
Fuck these trolls
“troll” is too mild a word for these people
How about “pedophile”? I mean, they had to have the images to post them.
“Terrorist”. Having the images doesn’t mean they liked them, they used them to terrorize a whole community though.
“Pedophile-enabled terrorist” or “pedophilic terrorist”, depending on the person
It still means they can tolerate CSAM, or are normalized to it enough that they can feel anything other than disgust during “shipping and handling”.
All of your comments have “banned” next to them for me (logged in via lemmy.world using Liftoff) - any idea why?
I assume you’re not actually banned…?
They were banned because they were defending pedophilia (advocating for them to be able to get off to what turns them on) and also trolling very aggressively. You can look at them in the Modlog on the Website, not sure if Apps implement the modlog yet though.
Ah thanks, I’ve seen it a few times but thought it was a bug. Why are people like this!
I have no clue; people can be quite toxic and horrible. Also noticed that they reduced his ban, not sure why; defending pedophilia is pretty bad and definitely carries legal risk, but it's not my call to make.
Yeah, got banned for forgetting that some axioms give people free pass to say whatever they want, no matter how they say it… and replying in kind is forbidden. My bad.
You were banned because you were arguing for why people shouldn’t be arrested for possession of CSAM material, trolling and straw-manning in the replies, and on top of that attempting to seriously and honestly advocate for pedophiles on another community, which is at best borderline illegal (anyone can check the modlog on that one if they don’t believe me, I wouldn’t make such claims if they weren’t true).
So to summarize, you were banned for:
- Trolling
- Promoting Illegal Activities (Pedophilia and CSAM)
Yeah, this isn’t just joking or shitposting. This is the kind of shit that gets people locked up in federal pound-you-in-the-ass prison for decades. The feds don’t care if you sought out the CSAM, because it still exists on your device regardless of intent.
The laws about possessing CSAM are written in a way that any plausible deniability is removed, specifically to prevent pedophiles from being able to go “oh lol a buddy sent that to me as a joke” and getting acquitted. The courts don’t care why you have CSAM on your server. All they care about is the fact that you do. And since you own the server, you own the CSAM and they’ll prosecute you for it.
Sounds like a digital form of SWATing.
And it's not just the instance admins who would be at risk. Any time you view an image, your device makes a local copy of it, meaning every person who viewed the image, even accidentally, is at risk as well.
Yeah honestly report all of those accounts to law enforcement. It’s unlikely they’d be able to do much, I assume, but these people are literally distributing CSAM.
That’s not a troll, CSAM goes well beyond trolling, pedophile would be a more accurate term for them.
Yeah. A troll might post something like a ton of oversized images of pig buttholes. Who the fuck even has access to CSAM to post? That’s something you only have on hand if you’re a predator already. Nor is it something you can shrug off like “lol I was only trolling”. It’s a crime that will send you to jail for years. It’s a major crime that gets entire police units dedicated to it. It’s a huuuuge deal and I cannot even fathom what kind of person would risk years in prison to sabotage an internet forum.
My thoughts exactly, like if they were just spamming goatse or something, that would be one thing…
But this raises several questions, and they can only have grimdark answers.
Don't forget they are doing this to harm others; they deserve the name “e-terrorist” or similar. They are still absolutely pedophiles. They're bombing out a space, not trying to set up shop.
I would definitely agree that this would very likely count as cyberterrorism, and if it doesn't, it should.
deleted by creator
Simply having it to post makes you culpable. It’s way beyond trolling.
deleted by creator
deleted by creator
deleted by creator
Oh by the way, MAP stuff really doesn't look good on you (that's in your comment history). Yeah, maybe you think I'm a terrible person because I think drugs should be treated less harshly, but you have literally said in other comments that pedophiles should be allowed to get off on what turns them on (which, I remind you, is exploitation of minors). That is a very different stance than “people shouldn't be beaten and arrested for snorting coke”; you're literally advocating for people to be allowed to produce and consume abuse material, and claiming that it's acceptable for people to be pedophiles and pursue their attractions instead of getting help. I don't know how you don't see what is wrong with that. Seriously, this is either really bad trolling (way too far) or you're one of them.
Too bad, I’m “kid agnostic”; they might as well be cars or dragons —drawn or otherwise—, I don’t care whether they’re “kid” or “grownup” cars or dragons.
I think I now know which one it is if that statement from the horse’s mouth is to be believed…
A lot of stuff the government bans doesn't align with morality. In this instance it fucken does.
Yes, getting someone to drop this on your hard drive, even if it's explicitly labeled “cache”, is equivalent to evidence planting. It puts you in danger of our laws falsely finding you guilty (misunderstandings are a thing; I don't know the level of risk). The advice from our governments is “delete it immediately”. Follow it as completely as you can. Most devices don't broadcast your hard drive contents without warning, giving you time to delete it. Since this isn't true for iPhones, it's a risk to one's personal freedom to go on Lemmy on an iPhone until we can get this CSAM issue resolved.
Do you know why we have possession laws against CSAM in the first place? It's because people buy and sell abuse material in underground markets; it's another way they profit off the abuse of children. This is nothing like drug possession laws (which are stupid) because the product is literally a direct product of the abuse of children, and many of the people in possession likely helped the criminals in order to obtain it (either directly or by paying them for it).
So yes in this case it does make sense to criminally charge people for possession of something like this considering the direct connection CSAM has to child trafficking and child sexual abuse and when you defend it by going against possession laws it makes it seem like you support these criminals.
Yes, it's a virus; the FBI will target anyone who is a host, anyone who has it on their drive (edit: intent may be relevant, I'm no expert). The only way to stay safe is to rid yourself of it. Delete it.
Lemmy mods, keep yourself safe
Please don't use an iPhone to moderate. If you're on Linux (I think Windows too), use BleachBit on your browser cache and do the “vacuum” operation.
On Android, to clear cache (or better-written instructions here):
- Go to where you see all your apps
- Find your client
- Tap and hold on its icon
- Tap “app info”
- Go to “storage”
- Tap “clear cache”
- (If you're paranoid, “clear data”, and lose your sign-in, settings and other local data)
To manually vacuum (can't find better instructions):
- Download an app called “termux”; it doesn't need any permissions for this task.
- When you see a black screen with text, type
clear
and hit enter.
- Then type or paste
{ echo writing big file of random; cat /dev/urandom >file-gets-big; rm file-gets-big -v; }
- And hit enter. (This fills your free storage with random data and then deletes the file, overwriting any deleted cache blocks that were still recoverable.)
Your phone and the program cat will complain about being out of storage; if rm gets run, it will be fixed again. If it still complains or termux crashes, uninstall and reinstall termux; the vacuum process is finished.
Some people know at a glance whether these steps are safe or not, others do not. Never follow instructions you don't understand; verify that I haven't led you to do something dumb.
A person who is attracted to children is an evil and disgusting person, someone being a pedophile isn’t just “liking something”, they are a monster.
People spreading CSAM are beyond terrible, but I don’t agree with this generalization. Pedophiles don’t choose to be pedophiles, so as long as they control themselves and avoid harming anyone, I don’t like labelling them as evil monsters.
deleted by creator
This is a serious problem we are discussing, please don’t use this as an opportunity to inject bad-faith arguments.
Edit: Wow your post history is a lot of the same garbage, there is no point in attempting to reason with you, you seem to be defending the act of CSAM or just trolling (really awful and severe trolling I might add, CSAM isn’t something to joke or troll about).
deleted by creator
Ah, I see what's going on: you're salty that they closed the shitposting community, so you're trolling here, going so far as to compare gays and Jews to pedophilia (which is extremely bigoted and incorrect) or to downplay the horrific acts that led to the closing of that community and of registrations, to protect the rest of the instance's well-being.
Also I’d appreciate it if you didn’t edit what I said when quoting me, thanks.
I’d say the proper word is ‘criminal.’
Criminals.
Trolls? In most regions of the planet, I am fairly certain their actions would be considered criminal.
Removed by mod
The Internet is essentially a small microbiome of beautiful flora and fauna that grew on top of a lake of sewage.
The Internet is a reflection of humanity, minus some of the fear of getting punched in the face.
Yeah, back in the Limewire/Napster/etc days, it wasn’t unheard of for people to troll by relabeling CSAM as a popular movie or TV show. Oh, you wanted to download the latest Friends episode? Congrats, now you have CSAM because a troll uploaded it with the title “Friends S10E7.mov”
I would like to extend my sincerest apologies to all of the users here who liked Lemmyshitpost. I feel like I let the situation grow too out of control before getting help. Don't worry, I am not quitting. I fully intend on staying around. The other two deserted the community, but I won't. DM me if you wish to apply for mod.
Sincerest thanks to the admin team for dealing with this situation. I wish I had linked in with you all earlier.
@Striker@lemmy.world this is not your fault. You stepped up when we asked you to and actively reached out for help getting the community moderated. But even with extra moderators this cannot be stopped. Lemmy needs better moderation tools.
Hopefully the devs will take the lesson from this incident and put some better tools together.
There's a Matrix room for building mod tools here; maybe we should bring up this issue there, just in case they aren't already aware.
Or we’ll finally accept that the core Lemmy devs aren’t capable of producing a functioning piece of software and fork it.
It's not easy to build a social media app, and forking it won't make it any easier to solve this particular problem. Joining forces to tackle an inevitable problem is the only solution. The Lemmy devs are more than willing to accept pull requests for software improvements.
And who's gonna maintain the fork? Even fewer developers from a split community? You have absolutely no idea what you're talking about.
Please, please, please do not blame yourself for this. This is not your fault. You did what you were supposed to do as a mod and stepped up and asked for help when you needed to, lemmy just needs better tools. Please take care of yourself.
It’s not your fault, thank you for your job!
It’s not your fault, these people attacked and we don’t have the proper moderation tools to defend ourselves yet. Hopefully in the future this will change though. As it stands you did the best that you could.
Definitely not your fault mate, you did what anyone would do, it’s a new community and shit happens
I love your community and I know it is hard for you to handle this but it isn’t your fault! I hope no one here blames you because it’s 100% the fault of these sick freaks posting CSAM.
Thanks for your work. The community was appreciated.
You don’t have to apologize for having done your job. You did everything right and we appreciate it a lot. I’ve spent the whole day trying to remove this shit from my own instance and understanding how purges, removals and pictrs work. I feel you, my man. The only ones at fault here are the sickos who shared that stuff, you keep holding on.
You didn’t do anything wrong, this isn’t your fault and we’re grateful for the effort. These monsters will be slain, and we will get our community back.
Thank you for your help. It is appreciated.
Really feel for you having to deal with this.
You do a great job. I’ve reported quite a few shit heads there and it gets handled well and quickly. You have no way of knowing if some roach is gonna die after getting squashed or if they are going to keep coming back
You’ve already had to take all that on, don’t add self-blame on top of it. This wasn’t your fault and no reasonable person would blame you. I really feel for what you and the admins have had to endure.
Don't hesitate to reach out to support services or speak to a mental health professional if you've picked up trauma from the shit you've had to see. There's no shame in getting help.
As so many others have said, there’s no need for an apology. Thank you for all of the work that you have been doing!
The fact that you are staying on as mod speaks to your character and commitment to the community.
Contact the FBI
This isn’t as crazy as it may sound either. I saw a similar situation, contacted them with the information I had, and the field agent was super nice/helpful and followed up multiple times with calls/updates.
This doesn’t sound crazy in the least. It sounds like exactly what should be done.
Yeah, what do people think the FBI is for… this isn't crazy. They can get access to ISP logs, VPN provider logs, etc.
I think what they’re saying is that contacting the FBI may seem daunting to someone who has never dealt with something like this before, but that they don’t need to worry about it. Just contact them.
Under US jurisdiction, yeah. Could be slightly more difficult depending on the country, LEGAT can’t conduct unilateral operations so they’ll have to cooperate with foreign authorities. These assholes can get away with exploiting jurisdictional boundaries. Hopefully they will be caught, but oh well.
deleted by creator
I'm not saying anybody takes CSAM less seriously. But I wish the American government went after minor CSAM events as much as they go after copyright/IP violations. It's not like Mike Pompeo flew out to other countries to strong-arm them into new laws to prevent CSAM, like they have done with pirates who threatened Hollywood profits.
I wish the American government went after minor CSAM events as much as they go after copyright/IP violations.
Easy: claim copyright/IP on the CSAM… uh, no, wait…
deleted by creator
There is no CP and no porn in Japan… add some tiny censor bars, and it’s just some wholesome family tentacle fun!
That one backfired spectacularly.
deleted by creator
TIL. Oh well, it probably will keep backfiring as long as Japan insists on having “morality laws” instead of something more objective.
Yeah there was even that case where a citizen and resident of Mexico was arrested and detained in the US for breaking US law, even tho it technically didn’t apply to them since they were under Mexican sovereignty… Borders mean little to the US
They might be, but I’d imagine most countries have laws on the books about this sort of stuff too.
And it’s something that the nations usually have no issues cooperating with.
The FBI has assisted in a lot of global raids related to CSAM.
There are few situations where pretty much everyone universally agrees to work together. This is one of those situations. Across cultures and nations, pedos are seen as some of the most vile people alive.
The FBI has offices in a lot of other countries and works with local law enforcement.
https://www.fbi.gov/contact-us/international-offices
Can’t really hide from them unless you live in North Korea or Russia
Wait, is this like China having police offices in other countries?
I knew the US collects taxes on their citizens no matter where they live, but isn’t this kind of excessive? Wasn’t INTERPOL supposed to take care of international crime?
For more than eight decades, the FBI has stationed special agents and other personnel overseas. We help protect Americans back home by building relationships with principal law enforcement, intelligence, and security services around the globe.
It is similar to China's international police, but keep in mind quite a few other countries have a similar setup.
I’m just surprised that it’s FBI personnel, I thought the CIA was in charge of international affairs, with INTERPOL acting as liaison for the FBI with other countries.
IIRC in the EU we have EUROPOL acting as liaison between the national law enforcement branches, and while there is nothing stopping personnel from one country to enter another, I don’t think they do. But maybe that’s more like the state vs. federal jurisdictions in the US. On the other hand, it’s been some time since I’ve looked deeper into it, and things keep changing.
I have to wonder if Interpol could help with issues like this. I know there are agencies that work together globally to help protect missing and exploited children.
What the hell do they actually do then?
“Interpol provides investigative support, expertise and training to law enforcement worldwide, focusing on three major areas of transnational crime: terrorism, cybercrime and organized crime. Its broad mandate covers virtually every kind of crime, including crimes against humanity, child pornography, drug trafficking and production, political corruption, intellectual property infringement, as well as white-collar crime. The agency also facilitates cooperation among national law enforcement institutions through criminal databases and communications networks. Contrary to popular belief, Interpol is itself not a law enforcement agency.”
https://en.wikipedia.org/wiki/Interpol
Huh. Thanks!
The FBI reports it to Interpol, I believe; Interpol is more like an international warrant system built from treaties.
Perhaps most importantly, it establishes that the mods/admins/etc of the community are not complicit in dissemination of the material. If anyone (isp, cloud provider, law enforcement, etc) tries to shut them down for it, they can point to their active and prudent engagement of proper authorities.
More importantly, and germane to our conversation, the FBI has the contacts and motivation to work with their international partners wherever the data leads.
It's seriously sad and awful that people would go this far to derail a community. It makes me concerned for other communities as well. Since they have succeeded in having Lemmyshitpost closed, does this mean they will just move on to the next community? That being said, here is some very useful information on the subject and what can be done to help curb CSAM.
- The National Center for Missing & Exploited Children (NCMEC) CyberTipline: You can report CSAM to the CyberTipline online or by calling 1-800-843-5678. Your report will be forwarded to a law enforcement agency for investigation.
- The National Sexual Assault Hotline: If you or someone you know has been sexually assaulted, you can call the National Sexual Assault Hotline at 800-656-HOPE (4673) or chat online. The hotline is available 24/7 and provides free, confidential support.
- The National Child Abuse Hotline: If you suspect child abuse, you can call the National Child Abuse Hotline at 800-4-A-CHILD (422-4453). The hotline is available 24/7 and provides free, confidential support.
- Thorn: Thorn is a non-profit organization that works to fight child sexual abuse. They provide resources on how to prevent CSAM and how to report it.
- Stop It Now!: Stop It Now! is an organization that works to prevent child sexual abuse. They provide resources on how to talk to children about sexual abuse and how to report it.
- Childhelp USA: Childhelp USA is a non-profit organization that provides crisis intervention and prevention services to children and families. They have a 24/7 hotline at 1-800-422-4453.
Here are some tips to prevent CSAM:
- Talk to your children about online safety and the dangers of CSAM.
- Teach your children about the importance of keeping their personal information private.
- Monitor your children's online activity.
- Be aware of the signs of CSAM, such as children being secretive or withdrawn, or having changes in their behavior.
- Report any suspected CSAM to the authorities immediately.
So far I have not seen such disgusting material, but I’m saving this comment in case I ever need the information.
Are there any other numbers or sites people can contact in countries other than the USA?
It’s probably going to be country dependent
Of course, yes. But I've discovered that cell phones are even programmed to translate emergency numbers.
In the USA, our main emergency number is 911, but I found out (quite by accident) that dialing 08 brings you to emergency services.
https://en.wikipedia.org/wiki/List_of_emergency_telephone_numbers
Not that I’m familiar with Rust at all, but… perhaps we need to talk about this.
The only thing that could have prevented this is better moderation tools. And while a lot of the instance admins have been asking for this, it doesn't seem to be on the developers' roadmap for the time being. There are just two full-time developers on this project and they seem to have other priorities. No offense to them, but it doesn't inspire much faith for the future of Lemmy.
Let's be productive. What exactly are the moderation features needed, and what would be easiest to implement in the Lemmy source code? Are you talking about a mass ban of users from specific instances? A ban of new accounts from instances? Like, what moderation tool exactly is needed here?
Speculating:
- Restricting posting from accounts that don't meet some adjustable criteria, like account age, comment count, prior moderation action, or average comment length (an upvote quota maybe not, because not all instances use it).
- Automatic hash comparison of uploaded images against a database of registered illegal content (a rough sketch follows below).
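A minimal sketch of what that second item could look like, assuming Python with the Pillow and imagehash libraries; the known-bad hash list here is a made-up placeholder (real databases such as PhotoDNA's are deliberately not public):

```python
# Hedged sketch: flag an upload whose perceptual hash lands within a small
# Hamming distance of a known-bad hash. Hash value and threshold are
# placeholders, not real data.
from PIL import Image
import imagehash

KNOWN_BAD = [imagehash.hex_to_hash("f0e1d2c3b4a59687")]  # hypothetical entry
MAX_DISTANCE = 5  # tune against your false-positive tolerance

def is_flagged(path: str) -> bool:
    h = imagehash.phash(Image.open(path))
    # imagehash overloads `-` as the Hamming distance between two hashes
    return any(h - bad <= MAX_DISTANCE for bad in KNOWN_BAD)
```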
On various old-school forums, there’s a simple (and automated) system of trust that progresses from new users (who might be spam)… where every new user might need a manual “approve post” before it shows up. (And this existed in Reddit in some communities too).
And then full powers are granted to the user eventually (or, in the case of Stack Overflow, automated access to the moderator queue).
What are the chances of a hash collision in this instance? I know accidental hash collisions are usually super rare, but with enough people it’d probably still happen every now and then, especially if the system is designed to detect images similar to the original illegal image (to catch any minor edits).
Is there a way to use multiple hashes from different sources to help reduce collisions? For example, checking both the MD5 and SHA256 hashes instead of just one or the other, and then it only gets flagged if both match within a certain degree.
Traditional hashes like MD5 and SHA256 are not locality-sensitive; they can't be used to detect matches within a certain degree. Otherwise, yes, you are correct. Perceptual hashes can create false positives. Very unlikely, but yes, it is possible. This is not a problem with a perfect solution; extraordinary edge cases must be resolved on a case-by-case basis.
And yes, the simplest solutions must always be implemented first: tracking post reputation, a captcha before posting, waiting for an account to mature before it can post, etc. The problem is that right now the only defense we have access to is mods. Mods are people, usually with eyeballs. Eyeballs which will be poisoned by CSAM so that we can post memes and funnies without issues. This is not fair to them. We must do all we can, and if all we can do includes perceptual hashing, we have a moral obligation to do so.
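As a rough illustration of those simpler gates, here is a hedged sketch of holding posts from immature accounts for review; the thresholds and field names are invented for the example, not Lemmy's actual schema:

```python
# Sketch: posts from accounts younger than MIN_AGE, or with fewer than
# MIN_COMMENTS prior comments, are held for manual review instead of going live.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MIN_AGE = timedelta(days=7)  # illustrative threshold
MIN_COMMENTS = 10            # illustrative threshold

@dataclass
class Account:
    created_at: datetime     # timezone-aware creation time
    comment_count: int

def needs_review(account: Account) -> bool:
    age = datetime.now(timezone.utc) - account.created_at
    return age < MIN_AGE or account.comment_count < MIN_COMMENTS
```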
Something I thought about that might be helpful is if mods had the ability to add a post delay on a community basis. Basically, the delay would be moderator adjustable, but only moderators and admins would be able to see the post for X number of minutes after being posted. It’d help for situations like ongoing attacks where you don’t necessarily want to have to manually approve posts, but you want a chance to catch any garbage before the post goes public.
Edit: and yeah, one of the reasons I’m aware that perceptual hashes can have collisions is because a number of image viewers/cataloging tools like xnview mp or hydrus network use hash collisions to help identify duplicate images. However, I’ve seen collisions between unrelated images when lowering the sensitivity which is why I was wondering if there was a way to use multiple hashing algorithms to help reduce false positives without sacrificing the usefulness of it.
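Returning to the post-delay idea above, a minimal sketch with invented field names (again, not Lemmy's actual schema):

```python
# Sketch: a post becomes publicly visible only once a moderator approves it,
# or once the community's delay window elapses without it being removed.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Post:
    created_at: datetime     # timezone-aware creation time
    approved: bool = False
    removed: bool = False

def visible_to_public(post: Post, delay_minutes: int) -> bool:
    if post.removed:
        return False
    if post.approved:
        return True
    age = datetime.now(timezone.utc) - post.created_at
    return age >= timedelta(minutes=delay_minutes)
```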
I'm surprised this isn't linked; there are services that do this for you.
And they are free.
I believe there are several readily available databases of hashes of CSAM material for exactly this kind of scanning. Looks like there are some open-source ones.
Some top results: https://github.com/topics/csam
This looks to be the top project: https://prostasia.org/project/csam-scanning-plugins/
Could they not just change one pixel to get another hash?
I guess it'd be a matter of incorporating something that hashes whatever is being uploaded. One takes that hash and checks it against a database of known CSAM. If it matches, stop the upload, ban the user, and complain to the closest officer of the law. Reddit uses PhotoDNA and CSAI-Match. This is not a simple task.
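That flow might look something like the sketch below. Exact SHA-256 matching is shown only for simplicity (as discussed above, it is trivially defeated by changing one pixel; real systems use perceptual hashing like PhotoDNA), and the ban/report hooks are hypothetical stand-ins:

```python
# Hedged sketch of "hash, check, act" on upload.
import hashlib

KNOWN_BAD_DIGESTS: set[str] = set()  # would be loaded from a vetted database

def handle_upload(data: bytes, user_id: int) -> bool:
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_BAD_DIGESTS:
        ban_user(user_id)             # immediate, automatic ban
        file_report(digest, user_id)  # e.g. to the NCMEC CyberTipline
        return False                  # reject the upload
    return True                       # hand off to normal storage

def ban_user(user_id: int) -> None:
    pass  # stand-in for the instance's ban machinery

def file_report(digest: str, user_id: int) -> None:
    pass  # stand-in for automated reporting
```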
None of that really works anymore in the age of AI inpainting. Hashes and perceptual hashes worked well before, but the people doing this are specifically interested in causing destruction and chaos with this content; they don't need it to be authentic to do that.
It's a problem that requires AI on the defensive side, but even that is just going to be an eternal arms race. This problem cannot be solved with technology, only mitigated.
The ability to exchange hashes on moderation actions against content may offer a way out, but it will change the decentralized nature of everything, basically bringing us back to the early days of Usenet, the Usenet Death Penalty, etc.
Not true.
A simple CAPTCHA got rid of a huge set of idiotic script-kiddies. CSAM being what it is, could (and should) result in an immediate IP ban. So if you’re “dumb” enough to try to upload a well-known CSAM hash, then you absolutely deserve the harshest immediate ban automatically.
You’re pretty much like the story of the economist who refuses to believe that $20 exists on a sidewalk. “Oh, but if that $20 really existed on the sidewalk there, then it would have been arbitraged away already”. Well guess what? Human nature ain’t economic theory. Human nature ain’t cybersecurity.
Idiots will do dumb, easy attacks because they’re dumb and easy. We need to defend against the dumb-and-easy attacks, before spending more time working on the harder, rarer attacks.
You don't get their IP when they post from other instances. I'm surprised this hasn't resulted in defederation.
Well, my home instance has defederated from lemmy.world due to this, that’s why I had to create a local account here.
I mean defedding the instances the CSAM is coming from but also yes.
I'm sorry, but you don't want to use permanent IP bans. Most residential circuits are DHCP, meaning banning via IP has only a short-term positive effect.
That said automatic scanning of known hashes, and automatically reporting to relevant authorities with relevant details should be doable (provided there is a database somewhere - I honestly have never looked).
Couldn’t one small change in the picture change the whole hash?
Good question. Yes. Compression artifacts can also fuck it up. However, hash comparison returns a percentage match; if the match is good enough, it is CSAM. Go ahead, ban. There is a bigger issue for the developers of Lemmy, though, I assume. It is a philosophical clusterfuck: if we elect to use PhotoDNA and CSAI-Match, Lemmy is now at the whims of Microsoft and Google respectively.
Honestly I’d rather that than see shit like this any day.
The bigger thing is that hash detection tools don’t want to give access to just anyone, and just anyone can run a Lemmy instance. The concern is that you’re effectively giving the CSAM people a way to know if they’ll be detected.
Perhaps they can allow some of the biggest Lemmy instances to use the tech, but I wouldn’t expect it to be available to everyone.
deleted by creator
Mod tools are not Lemmy. Give admins and mods an option. Even a paid one. Hell, admins of Lemmy.world could have us donate extra to cover the costs of API services.
I agree. Perhaps what the Lemmy developers can do is put a slot for generic middleware in front of whatever POST request the Lemmy API uses for uploading content. That way, the owner of an instance can choose whatever CSAM-scanning middleware they want, and we are not dependent on the developers of Lemmy for a solution to the pedo problem.
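That "slot for generic middleware" could be as simple as a chain of content checks consulted before an upload is accepted. A sketch of the shape, in Python for brevity (Lemmy itself is written in Rust, and every name here is hypothetical):

```python
# Sketch: the instance owner registers any number of scanners;
# an upload is rejected if any scanner objects.
from typing import Callable

Scanner = Callable[[bytes], bool]  # returns True to reject the upload
scanners: list[Scanner] = []

def register_scanner(scanner: Scanner) -> None:
    scanners.append(scanner)

def accept_upload(data: bytes) -> bool:
    return not any(scanner(data) for scanner in scanners)

# Example: an owner plugs in whatever backend they trust.
register_scanner(lambda data: len(data) == 0)  # toy check: reject empty files
```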
If they hash the file binary data, like CRC32 or SHA, yes. But there are other hash types out there, which are more like “fingerprints” of an image. Think of how Shazam or Sound Hound can recognize a song playing, despite the extra wind, static, etc that’s present. There are similar algorithms for images/videos.
No idea how difficult those are to implement, though.
There are FOSS applications that can do that (Czkawka, for example). What I'm not sure about is whether the specific algorithm used is available and, more importantly, whether the CSAM hashes are available to general audiences. I would assume that if they are, any attacker could check first and make just the right amount of changes.
One bit, in fact. Luckily there are other ways of comparing images without actually showing them to human eyes that allow you to calculate a percentage of similarity.
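For a 64-bit perceptual hash, that similarity percentage is just the share of matching bits; a minimal sketch:

```python
# Similarity of two 64-bit perceptual hashes, as a percentage,
# given their Hamming distance (the number of differing bits).
def similarity(hamming_distance: int, bits: int = 64) -> float:
    return 100.0 * (1 - hamming_distance / bits)
```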
Reddit had automod which was highly configurable.
Reddit automod is also a source for all the porn communities. Have you ever checked automod comment history?
Yeah, I have. Like 2/3 of automod comments are in porn communities.
What? Reddit automod is not a source for porn. What you're seeing is the large quantity of content it reacts to there.
It literally reads your config in your wiki and performs actions based on that. The porn communities using it are using it to moderate their subs. You can look at the post history: https://www.reddit.com/user/AutoModerator It is commenting on posts IN those communities as a reaction to triggers, but isn't posting porn (unless they put that in their config).
Not worth it if you don't moderate on Reddit, but read the how-to docs for Reddit automod; it is an excellent tool for spam management, and the source from prior to Reddit acquiring it and making it shit is open. https://www.reddit.com/wiki/automoderator/full-documentation
No shit, ya don’t say?
Where the hell do you think I got that list from? I literally filtered every single subreddit that AutoModerator replied in for like three months.
Bruh you’re preaching to the person that accumulated the data. That’s the data it puked up. I can’t help it that most of them happen to be filth communities.
So you should understand that what you said is invalid. Automod doesn't post porn without a subreddit owner configuring it to, and just because 2/3 of its posts are in NSFW subs doesn't mean it is posting that content, just working more there.
We could 100% take advantage of a similar tool, maybe with some better controls on what mods can make it do. I'm working to bring BotDefence to Lemmy because it is needed.
You completely missed the point.
By the statistics of the data I found, most of the subreddits using AutoModerator are filth communities.
So you can reverse that, check AutoModerator comment history, and find a treasure trove of filth.
I can’t help that these are the facts I dug up, but yeah AutoModerator is most active in porn communities.
Too stupid to argue with. You don’t even understand your own “data”.
The best feature the current Lemmy devs could work on is making the process to onboard new devs smoother. We shouldn’t expect anything more than that for the near future.
I haven’t actually tried cloning and compiling, so if anyone has comments here they’re more than welcome.
I think having a means of viewing uploaded images as an admin would be helpful, as well as disabling external image caching. Like an “uploaded” gallery for admins to view that can potentially hook into PhotoDNA/CSAI-Match or whatever.
I think it would be an AI autoscan that flags some posts for mod approval before they show up to the public, and perhaps more fine-grained controls for how media is posted, like only allowing certain image-hosting sites and no directly uploaded images.
I was just discussing this under another post, and it turns out that the Germans have already developed a rule-based auto moderator that they use on their instance:
https://github.com/Dakkaron/SquareModBot
This could be adopted by lemmy.world by simply modifying the config file.
Cloudflare CSAM protection is not available outside of the US, unfortunately.
Probably hashing and scanning any uploaded media against some of the known DBs of CSAM hashes.
Iirc that’s how Reddit/FB/Insta/Etc. handle it
deleted by creator
The sad thing is that all we can usually do is make it harder for attackers. Which is absolutely still worth doing, to be clear. But if an attacker wants to cause trouble badly enough, there are always ways around everything. E.g., image detection can be foiled with enough transformation, and account age limits can be gotten past by a patient attacker. Minimum karma can be botted (even easier than ever with AI), and karma is especially easy to bot on Lemmy because you can just spin up an instance with all the bots your heart desires. If posts have to be approved, attackers can even just hotlink to innocent images and then change the image after it's approved.
Law enforcement can do a lot more than we can, by subpoenaing ISPs or VPNs. But law enforcement is slow and unreliable, so that’s also imperfect.
This is flat out disgusting. It's extremely questionable that someone has an arsenal of this crap to spread to begin with. I hope they catch charges.
I hope the devs take this seriously as an existential threat to the fediverse. Lemmyshitpost was one of the largest communities on the network, both in AUPH and subscribers. If taking the community down is the only option here, that's extremely insufficient and spells death for the platform at the hands of uncontrolled spam.
There are just two full-time developers on this project and they seem to have other priorities. No offense to them but it doesn’t inspire much faith for the future of Lemmy.
This doesn't seem like a respectful comment to make. People have responsibilities; they aren't paid for this. It doesn't seem fair to criticize something when we aren't doing anything to provide a solution. A better comment would be: “There are just two full-time developers on this project and they have other priorities. We are working on increasing the number of full-time developers.”
Please get some legal advice, this is so fucked up.
We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do because they will just post from another instance, since we changed our registration policy.
It's likely that we'll be seeing a large number of instances switch to whitelist-based federation instead of the current blacklist-based one, especially for niche instances that do not want to deal with this at all (and I don't blame them).
Sounds like the 4chan raids of old.
Batten down, report the offenders to the authorities, and then clean up the mess!
Good job so far.
Genuine question: won’t they just move to spamming CSAM in other communities?
With how slow Lemmy moves anyways, it wouldn’t be hard to make everything “mod approved” if it’s a picture/video.
This, or blocking self-hosted pictures.
Honestly, this sounds like the best start until they develop better moderation tools.
This seems like the better approach. Let other sites who theoretically have image detection in place sort this out. We can just link to images hosted elsewhere
I generally use imgur anyway because I don’t like loading my home instance with storage + bandwidth. Imgur is simply made for it.
Yes, and only whitelist trusted image-hosting services (that is, ones that have the resources to deal with any illegal material).
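Enforcing that could be a simple hostname check on submitted image URLs; a sketch with a placeholder whitelist (the hosts listed are examples, not a recommendation):

```python
# Sketch: allow image links only from whitelisted hosting services.
from urllib.parse import urlparse

ALLOWED_IMAGE_HOSTS = {"i.imgur.com", "imgur.com"}  # placeholder whitelist

def image_url_allowed(url: str) -> bool:
    host = urlparse(url).hostname
    return host is not None and host in ALLOWED_IMAGE_HOSTS
```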
The problem is those sites can also misuse the same tools in a way that harms the privacy of their users. We shouldn't resort to “hacks” to fix real problems, like using client-side scanning to break E2EE. One solution might be an open-sourced and community-maintained automod bot…
This seems like a really good solution for the time being.
[This comment has been deleted by an automated system]
Not-so-fun fact: the FBI has a hard limit on how long an individual agent can spend on CSAM-related work. Any agent that does so is mandated to go to therapy afterwards.
It’s not an easy task at all and does emotionally destroy you. There’s a reason why you can find dozens of different tools to automate the detection and reporting.
[This comment has been deleted by an automated system]
Yep. I know someone that does related work for a living, and there are definite time limits and so on for exactly the reasons you say. This kind of stuff leaves a mark on normal people.
Or it could even just ask 50 random instance users to approve it. To escape this, >50% of accounts would have to be bots, which is unlikely.
But then people would have to see the horrible content first
That definitely is a downside
Good thing you did it the way you did; nobody should have to look at awful stuff like this. Keep your mind healthy; nobody should have to deal with that.
I'm afraid the fediverse will need a crowdsec-like decentralized banning platform. Get banned on one platform for this shit, get banned everywhere.
I’m willing to participate in fleshing that out.
Edit: it’s just an idea, I do not have all the answers, otherwise I’d be building it.