flamingos-cant (hopepunk arc)

Webp’s strongest soldier.

  • 192 Posts
  • 609 Comments
Joined 3 years ago
Cake day: June 12th, 2023

  • I’m not anti-asylum-seeker. I’m anti people taking advantage of the system meant for legitimate asylum seekers.

    Unless you’re from Ukraine or Hong Kong, there’s no way to claim asylum in the UK that doesn’t first involve entering the country illegally.

    I can’t remember the statistic, but the majority of asylum seekers don’t even come over in small boats.

    About half of asylum applicants come across in boats.

    I’m talking about the widely condemned human trafficking industry that takes advantage of our system to make a business of bringing people across the channel.

    I also want to stop the boats and the exploitative gangs running them, but any approach that doesn’t involve opening up safe and legal routes for applications to be made is just advocating for everyone else to bear the burden of global instability that the UK played a disproportionate role in creating.

    Also, there have been attempted pogroms here.

    I assume you mean there hasn’t. I think it’s a pretty apt way to describe a mob descending on a hotel to try and burn it down because asylum seekers are inside.


  • I was curious to see how they handle this on the fedi side, since they obviously can’t stop you from uploading images to other instances, so I decided to do some digging myself.

    The fedi code for this is here and looks like this:

    # Alert regarding fascist meme content
    if site.enable_chan_image_filter and toxic_community and img_width < 2000:  # images > 2000px tend to be real photos instead of 4chan screenshots.
        if os.environ.get('ALLOW_4CHAN', None) is None:
            try:
                image_text = pytesseract.image_to_string(
                    Image.open(BytesIO(source_image)).convert('L'), timeout=30)
            except Exception:
                image_text = ''
            if 'Anonymous' in image_text and (
                    'No.' in image_text or ' N0' in image_text):  # chan posts usually contain the text 'Anonymous' and ' No.12345'
                post = session.query(Post).filter_by(image_id=file.id).first()
                targets_data = {'gen': '0',
                                'post_id': post.id,
                                'orig_post_title': post.title,
                                'orig_post_body': post.body
                                }
                notification = Notification(title='Review this',
                                            user_id=1,
                                            author_id=post.user_id,
                                            url=post.slug,
                                            notif_type=NOTIF_REPORT,
                                            subtype='post_with_suspicious_image',
                                            targets=targets_data)
                session.add(notification)
                session.commit()

    The curious thing here, apart from there being both an environment variable and a site setting for this, is the toxic_community variable. This seems to be a renaming of the low_quality field PieFed applies to communities, which are just communities with either “memes” or “shitpost” in their name.

    You also don’t get social credits docked for this.
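    That naming heuristic is simple enough to sketch. To be clear, this is a hypothetical reconstruction based only on the description above, not PieFed’s actual code; the function name and case handling are my own assumptions:

```python
# Hypothetical sketch of the name-based "low quality" heuristic described
# above: a community gets flagged if its name contains "memes" or
# "shitpost". Not PieFed's real implementation.
def is_low_quality(community_name: str) -> bool:
    name = community_name.lower()
    return "memes" in name or "shitpost" in name

print(is_low_quality("dankmemes"))   # flagged
print(is_low_quality("technology"))  # not flagged
```

    Note that under a substring rule like this, a community named “memesearch” would be flagged too, which is the usual trade-off with crude name-based filters.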


  • Sorry, I misspoke: when I said it wouldn’t work, I was thinking of the negative knock-on effects that make this approach unworkable, not that it was entirely ineffective. What I’m trying to get at is that the government had more effective options for achieving its stated goal, only allowing legal adults to access pornography, that didn’t require you to send a picture of your ID or face to every website that happens to have nudes on it. But those other, more privacy-friendly approaches wouldn’t make adults hesitant to access porn websites, and I think that quality of the current approach was a desired effect.

    Obviously it’s not the case that everything popular with the public is popular with politicians for the same reason, but if something is popular with the public you need quite a good reason to believe that politicians are in favour of it for some other motivation, and in this case we just don’t have that good reason.

    I think politicians hating the concept of porn and liking the idea of having your real identity linked to your social media accounts are actually pretty good explanations for the government knowingly going with an inferior approach to all this.

    Again, I think we’re at an impasse: you’re extending the former government a level of charity I can’t. That’s not to say that perspective is unreasonable, I think it’s perfectly reasonable, I just don’t believe it.

  • It’s honestly maddening that we’re being subjected to this mass surveillance in the name of preventing children from seeing porn when we’ve had much more effective tools for decades now. An OS- or ISP-level DNS filter takes more work to get around than age verification, which can be dodged just by finding a shadier website. Microsoft, to its credit, has pretty good parental controls built into its OSs (Windows + Xbox).

    But it’s like I’ve said before, this isn’t about preventing kids seeing porn; it’s about preventing adults seeing porn because the political class finds it icky.
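    For the avoidance of doubt about what a DNS-level filter means in practice, here’s a toy sketch; the blocklist, domain names, and addresses are invented for illustration, not taken from any real filtering product:

```python
from typing import Optional

# Toy illustration of DNS-level filtering: a filtering resolver simply
# refuses to answer for blocked domains, so every device pointed at it
# is covered at once, with no per-site ID checks involved.
# The blocklist and addresses below are made up.
BLOCKLIST = {"adult-site.example"}

def resolve(domain: str) -> Optional[str]:
    if domain.lower() in BLOCKLIST:
        return None  # behave as if the domain doesn't exist (NXDOMAIN-style)
    return "203.0.113.7"  # stand-in for a real upstream lookup
```

    The point of doing this at the resolver rather than per-website is that the parent configures it once, at the OS or router, instead of trusting thousands of individual sites to gate themselves.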


  • Most parents apparently don’t agree with you, or are too lazy to do anything:

    back in 2011 the government worked with ISPs (internet service providers) to come up with a Code of Practice on implementing ‘parental controls’ for all new customers. In 2013 this was adopted by all the major players. So when you (an adult – because you have to be over 18 to do this) register for an internet connection, you are offered adult content filtering by default. You can tweak this, if you like, for example you can decide you’re happy for your family to access social media sites but not pornography. Or if you don’t anticipate any children using your connection, you can opt out of adult filters altogether. Research conducted in 2022, however, found that although 61% of parents were aware of these filters, only 27% actually used them

    From: https://www.girlonthenet.com/blog/age-verification-whats-the-harm/