Extreme Risk Facebook Use

I’ve warned about the dangers of due process-free “Extreme Risk Protection Orders.” The lack of said due process, the way pretty much anyone can get one, the way the target isn’t allowed to know until it’s too late, the potential for abuse.

Let’s look at one of those points again: the way pretty much anyone can get one. It varies by state, but generally anyone who sort of knows the target can get one. Truly sucks if the requestor was simply trying to get his target disarmed, the better to kill or injure him with impunity.

Anyone who sort of knows the target…

Like Facebook.

Facebook rolls out AI to detect suicidal posts before they’re reported
“This is about shaving off minutes at every single step of the process, especially in Facebook Live,” says VP of product management Guy Rosen. Over the past month of testing, Facebook has initiated more than 100 “wellness checks” with first-responders visiting affected users. “There have been cases where the first-responder has arrived and the person is still broadcasting.”(emphasis added-cb)
Facebook’s tools then bring up local language resources from its partners, including telephone hotlines for suicide prevention and nearby authorities. The moderator can then contact the responders and try to send them to the at-risk user’s location, surface the mental health resources to the at-risk user themselves or send them to friends who can talk to the user.

Gee, I can’t see any possible way that could go wrong. Except when it does. A lot.

But assuming you survive the wellness/welfare check, what’s next? We have Facebook notifying the authorities that a person is suicidal; is that going to trigger an ERPO?

Perhaps not under most current laws just now — though several have been written to allow police to file for an order when they become aware of a situation — but given the fad for minority-report-style predictive policing, and for mental health tests for gun ownership, I figure it’s merely a matter of time before the Pelosis of the statist world take note of Facebook’s snooping and require a FB report to trigger an ERPO. Just to be sure.

Unfortunately, after TechCrunch asked if there was a way for users to opt out of having their posts scanned, a Facebook spokesperson responded that users cannot opt out. They noted that the feature is designed to enhance user safety, and that support resources offered by Facebook can be quickly dismissed if a user doesn’t want to see them.

Quickly dismissing the “first-responders” kicking in your door may be a little more difficult. Dismissing a gun-seizure ERPO is effectively impossible.

But the spokesperson is wrong. There is an opt-out: drop Facebook for your own protection. True, FB permanently archives your account contents in case you ever want to come back, but at least you can avoid the near real-time telescreen psych monitoring.


8 thoughts on “Extreme Risk Facebook Use”

  1. At one point, years ago, my older grandchildren talked me into getting a FB account, supposedly so we could communicate. Didn’t work out that way, and I soon dropped it, but got sucked into some political discussion with others in the meantime. I don’t know what in the world FB does with all that old data, but it can’t be anything good. I vigorously second your caution not to have anything to do with this invasive, “big brother” thing.

    Suicide is the business of the individuals involved, and nobody else. And all this nonsense about FB interfering with people who “might” want to kill themselves is a serious symptom of so much of the evil in the world.

    The desire/compulsion to control the lives and property of other people is the ROOT of all evil.

  2. I commented similarly on another blog:

    Expect a lot of false-positives, where people venting online trigger the AI and get a visit (with or without an ERPO). I only recently got a Facebook account, but I’ve noticed that many of my friends — my sane, rational, real-life friends — could come off as depressed online, especially to a third-party observer that doesn’t know them.

    Also (and this is the part nobody is talking about), expect a lot of false-negatives, where people who really are suicidal act “normal enough” online that Facebook’s AI fails to detect them.

    Finally, expect that real-life interventions by caring friends and family will slow and/or stop, as everyone puts their trust in Facebook’s latest-greatest AI. Expect less human interaction and more reliance on the machine, which is what those of us who resisted social media have been saying for quite some time.

    Out of all that, I have one over-arching question: What will Facebook’s criminal and civil liability be in all this? If a false-positive results in the loss of due process and freedom, is there any recourse for the victim? If a false-negative results in a suicide attempt because no intervention occurred, is there any recourse for the victim’s family? And as Facebook’s corporate structure has shown itself to be generally anti-Second-Amendment, how is this not a massive conflict of interest worthy of investigation?

    What is their liability?

    1. The wonderful Book of Face will have the same liability as the police. They have no responsibility to protect anyone. Except perhaps Mark Zuckerberg. He won’t ever have any responsibility for anything. He is what is known as one of the untouchables.

      1. And responding officers and any judge who signs an ERPO without any evidence are subject to 18 U.S.C. 242 – Deprivation of Rights under Color of Law, both for the due process violations AND the 2A violations.

        But will we ever actually see any prosecutions under those statutes?

        My prediction is, “Probably not.” Among other things, you’d have to find a federal prosecutor willing to file charges, and federal prosecutors tend to lean left (read: anti-2A), and are generally unwilling to “rock the boat”.

        If no one is willing to actually prosecute the crimes, then the statutes are meaningless. Facebook’s liability is effectively zero.

        (Besides which, a corporation cannot serve a jail sentence, so they’d just be fined. The fines prescribed in the statutes would financially destroy an individual, but even the maximums are just a drop in the bucket for an entity the size of Facebook.)

        I really don’t mean to be a naysayer. I just don’t see a way out of this mess.

  3. Tried and dropped it immediately. My life is not that interesting that I must broadcast it.

    ERPOs have to go.
    Blatantly un-Constitutional.
    Ripe for abuse, just like SWATTING.

  4. Swatting was the first thing that came to my mind as well. Especially knowing Facebook is anti-Second Amendment. Gab.ai is looking pretty good, though it’s more like Twitter (also anti-Second Amendment) than Facebook.
