Currently, hide post sensitivity (the hide_post_sensitivity site setting) is set to Medium. This controls the “score” needed to hide a post based on user flags. In some circumstances a single user has ended up hiding a post, and in others two users with a low “trust level” and “type bonus” have hidden a post. Based on the activity we’ve seen, setting it to “Low” sensitivity would be an improvement.
This would only affect posts until the flag is reviewed by a moderator: once they accept or reject the flag, the post is shown or hidden accordingly.
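To make the mechanism concrete, here is a minimal sketch of score-threshold hiding. The formula, threshold values, and function names are all made up for illustration (Discourse’s actual scoring internals differ), but the idea is the same: each flag contributes a score weighted by the flagger’s trust level and the flag type, and the post is auto-hidden once the total crosses the threshold for the configured sensitivity.

```python
# Hypothetical thresholds: a LOWER sensitivity means a HIGHER total
# score is required before a post is hidden automatically.
SENSITIVITY_THRESHOLDS = {"low": 9.0, "medium": 6.0, "high": 3.0}

def flag_score(trust_level: int, type_bonus: float, base: float = 1.0) -> float:
    """Score contributed by a single flag (illustrative formula only)."""
    return base + trust_level + type_bonus

def is_hidden(flags, sensitivity: str) -> bool:
    """Hide the post once the summed flag scores reach the threshold."""
    total = sum(flag_score(tl, tb) for tl, tb in flags)
    return total >= SENSITIVITY_THRESHOLDS[sensitivity]

# Two low-trust flaggers: enough to hide at Medium, but not at Low.
flags = [(1, 1.0), (1, 1.0)]  # (trust_level, type_bonus) pairs
print(is_hidden(flags, "medium"))  # True  (total 6.0 >= 6.0)
print(is_hidden(flags, "low"))     # False (total 6.0 < 9.0)
```

With made-up numbers like these, moving from Medium to Low simply raises the bar, so a couple of low-weight flags no longer hide a post on their own.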
about sensitivity to hiding: in my opinion, there should not be any automatic hiding of anything; only manual interventions should hide posts. Just as it is a person who writes something and takes responsibility for what is written here, it should be a person who takes action against improper statements.
I prefer dealing with a human being who takes responsibility for executing an action over hearing “it’s the way this software works”.
also: if a flag is deemed improper, the people who wrongly flagged the post should receive a word of caution, explaining that we should not flag a post only because we disagree with its content.
but again, please do let us show we disagree, politely, with a simple reaction, thanks.
Basically, the idea of this function is OK, because moderators aren’t on duty 24/7.
That’s why I don’t want to abolish it.
However, I absolutely agree with @pnorman: the sensitivity must be reduced! I’ve read far too often now in various situations that posts seem to have been flagged by only 1 or 2 users and thus hidden.
But I also agree that users who want to report/flag a post should get a warning about the consequences beforehand. And it should be described more clearly which kind of flag (spam, inappropriate, off-topic …) should be used, and when not. And yes: not every opinion you don’t share is inappropriate, so it shouldn’t be flagged right away.
And moderators should consistently react to the misuse of the report button with “reject” - this lowers the score of the user who misuses the function and thus limits the possibility of misuse.
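A hedged sketch of the idea behind that last point (the formula and names here are hypothetical, not Discourse’s actual implementation): a flagger whose flags are frequently rejected by moderators ends up with a lower weight, so their future flags contribute less toward hiding posts.

```python
def accuracy_weight(agreed: int, disagreed: int) -> float:
    """Weight multiplier for a user's future flags, based on their
    track record (hypothetical formula for illustration)."""
    total = agreed + disagreed
    if total == 0:
        return 1.0  # no flagging history: neutral weight
    return agreed / total

# A flagger whose reports are mostly accepted keeps nearly full weight...
print(accuracy_weight(9, 1))   # 0.9
# ...while one whose flags are mostly rejected counts for much less.
print(accuracy_weight(2, 8))   # 0.2
```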
Surely, that’s an example of a mod hitting the wrong button? (I.e. the “approve but do nothing about it” button instead of the “delete post” button)?
That’s assuming we conclude it’s spam. It’s certainly someone trying to advertise their service, but it seems superficially related to the topic at hand. Presumably, unambiguous spam would have attracted more than one flag by now.
It’s related to the topic, but it’s also written in a way that’s purely an advertisement rather than a mere suggestion to check another service for viewing the OT datasets. My personal opinion would be to move it to a separate topic, still linked to the current one.
Yes, @Tordanik is exactly right. I hit the wrong button by accident and could not undo it. I was going to flag it later today and have someone else hide it. Investigating the link, I concluded it is an underhanded way of trying to get people to patronize their website. You have to sign up for their services, I see. @nukeador
that assumes that someone will take action, not that some threshold is reached and the action follows automatically.
possibly, if the problem is that moderators don’t manage to take manual action quickly enough, we need more moderators in place: more responsive, more uniformly distributed across time zones.
IMO: automatic action is a very weak solution, and should be disabled completely.
But they will; where did you get the impression that they won’t?
Automatic action is not “a solution” but only a temporary damage-control measure. It will usually last only a dozen or so hours, until a moderator comes back online, checks their review queue, and chooses the final action. Or do you have different information?
and should be disabled completely.
Disabling it might work if we had 50+ moderators per category from different time zones, so that at least several of them were always available and checking the review queue 24/7. But not when we have ~3 of them per category, often in the same time zone - in which case the current system seems preferable to me.
Be grateful (while it lasts) that there are still multiple humans whose judgement triggers such action, and moderators who later review their choices. I predict that soon it will all be AI which decides what goes and what doesn’t; and a little while later there will be no humans to complain to if the AI has chosen wrongly.
Also, instant feedback is good, yes? If you start getting flags and your posts are being hidden, it is a strong indication that at least some of the community does not approve of the way you write; so one should take it into strong consideration and both work on making their writing more in accordance with the participation guidelines and delay further postings until moderators come back online and pass a verdict.
I don’t speak of a hypothetical future, but from past experience. Now, trying not to look at my own examples, have a look at the reply to this post. Unfortunately, since it has been deleted, we cannot review what the problem was.
Maybe I’m looking at that 2%, or maybe there’s a sheer mass of spam that I’m not aware of. In that case, applause to our good moderators.
you can exaggerate, no less than Italians! In the global Telegram group we have 13 admins, and the only challenge I’ve noticed is managing to delete spam before another admin intervenes.
I was the most active poster for the Panama forum, which was imported into LATAM. After that happened, I asked to join administration, but I don’t know what happened with my application. In Telegram I asked the owner and he said “sure, no problem” and that was it.
feedback would be good. instead, whenever a post is flagged by enough users with sufficient weight (which can be a single user), the writer is notified with a standard message. I had this example in mind and am pleased to notice that in the end someone did intervene to reactivate the post. I hadn’t noticed it; I wasn’t notified of the action after it was taken, and I don’t know whom to thank.
However it was fixed, still: my experience was not feedback and not instant. It was anonymous automatic action, with nobody to speak to.
Discourse sends a notification to moderators after flags have gone more than 12 hours without resolution. That’s happened 3 times this month, and there have been 66 flags. Notifications may contain multiple items, and the 66 flags contain duplicates, so I don’t want to calculate percentages, but I don’t see a general problem with flags taking too long to resolve.