The founding fathers of the United States ratified the Second Amendment to the Constitution in 1791. Mark Zuckerberg and friends founded Facebook in 2004. What do these two things have in common? In both cases, the people behind the amendment and the social media platform, respectively, were considered visionaries in their fields of endeavor.
The Second Amendment, which
generally gives U.S. citizens the right to keep and bear arms, was ratified
after American colonists—fighting for their independence—had used guns to ward
off their English overlords. Its fundamental purpose was to give citizens the
opportunity to fight back against a tyrannical federal government.
The intended purpose of "The Facebook," as it was originally known, was to allow students at Harvard University to use their email addresses and photos to connect with other students at the school. Zuckerberg, then a Harvard student, saw it as a way to bring the college social experience onto the Internet.
So here’s the problem. In
1791, when militiamen used single-shot muskets to repel the enemy, nobody could
have imagined that in the distant future, a gunman with a semi-automatic weapon could fire up to 400
bullets every minute, reload in seconds and start all over again. And in 2004,
nobody could have imagined that in 2016, a room full of hackers could sit in a
warehouse in St. Petersburg, Russia, and flood Facebook with millions of phony
comments created by non-existent individuals in order to make Donald Trump the
president of the United States.
Unfortunately, both of those
things came true.
I could go on all day talking about the Second Amendment, but that’s for another time. The news today is that an independent oversight board has upheld Facebook’s January 7 decision to ban former President Donald J. Trump. According to The New York Times:
A Facebook-appointed panel of journalists, activists and
lawyers on Wednesday upheld the social network’s ban of former President Donald
J. Trump, ending any immediate return by Mr. Trump to mainstream social media
and renewing a debate about tech power over online speech.
Facebook’s Oversight Board, which acts as a quasi-court over
the company’s content decisions, said the social network was right to bar Mr.
Trump after the insurrection in Washington in January, saying he “created an
environment where a serious risk of violence was possible.” The panel said the
ongoing risk of violence “justified” the move.
But the board also said that an indefinite suspension was
“not appropriate,” and that the company should apply a “defined penalty.” The
board gave Facebook six months to make its final decision on Mr. Trump’s
account status.
So here’s the point:
I agree with the board that
if Facebook is going to ban users for any period of time, it must establish
standards for doing so, much the same way the government establishes penalties
for various crimes. Murder is obviously worse than jaywalking, for example. In
the case of Facebook, inciting white supremacists, domestic terrorists and
neo-Nazis to storm the U.S. Capitol and attempt to overturn a
lawfully conducted election is certainly worse than calling the current president “Sleepy Joe” or your primary opponent “Lyin’ Ted.”
But how much worse? What’s the penalty for each? Right now, the answer is unknown.
Facebook’s seemingly
arbitrary “community standards” clearly need to be more effectively defined.
For example, a good friend of mine once posted something about his wife’s
Volkswagen automobile and when I jokingly replied, “damn Germans,” I was
threatened with suspension for violating Facebook’s rules governing hate
speech.
Are you kidding me? That was
considered hate speech? I could understand Facebook’s concern if I had said, “Those damn Germans are mentally deficient, substandard people with no sense of
dignity or class who are only 3/5 human and should be eliminated or, at least,
segregated from the good people of the earth.” But that’s not what I said. So
why was I threatened then?
In a word, “algorithm.”
Since its inception, Facebook
has attracted billions of users, including the above-mentioned fake accounts
commonly referred to as “bots.” To monitor and control what they post on its
platform, Facebook needs a system of moderation, which I know something about.
I used to be one of three moderators on a sports fan site that had a few
hundred users, and it was our job to observe what members posted and sanction
or ban those who broke established rules.
But Facebook couldn’t hire enough people to moderate billions of users and check every post in the context in which it was used, so it created an algorithm that searches for certain words and phrases it deems objectionable and issues warnings, suspensions and bans based on those violations. Because the algorithm doesn’t read context, a phrase like “damn Germans” apparently sets off an alarm.
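To see why context-blind filtering misfires, here is a minimal sketch in Python. Everything in it is invented for illustration; it is not Facebook's actual system, and the phrase list and example posts are my own.

```python
# A made-up sketch of naive keyword moderation. Purely illustrative;
# the blocked-phrase list and example posts are hypothetical.

BLOCKED_PHRASES = ["damn germans"]  # hypothetical list of flagged phrases

def flag_post(text: str) -> bool:
    """Return True if the post contains any blocked phrase.

    The check is presence-only: it has no idea whether the surrounding
    text is a joke, a quotation, or genuine abuse.
    """
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

# Both posts trip the filter identically, even though only one is hateful.
joke = "Your wife's Volkswagen broke down again? Damn Germans."
rant = "Those damn Germans are substandard people who should be segregated."

print(flag_post(joke))  # True -- flagged, despite being a friendly joke
print(flag_post(rant))  # True -- flagged, and this time with good reason
```

A filter like this treats both posts the same way, which is exactly the problem with my “damn Germans” reply.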
In the wake of today’s
decision by the Oversight Board, it’s my opinion that Facebook needs to do two
things: (1) find a way to redesign its algorithm to be more discerning (don’t
ask me how; I’m no programmer) and (2) create a schedule of community standards
that assigns appropriate punishment for various levels of violation.
For example, typing “Joe
Biden is a crook” might get you a 3-day ban, “Joe Biden is a pedophile” might
get you banned for six months, and “Joe Biden stole the election and we need to rally at the Capitol and take the country back” might get you banned for life.
Or something like that.
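If it helps to picture what such a schedule might look like, here is a rough sketch of one as a data structure. The tiers, durations and example violations are my own hypotheticals, not anything Facebook has published.

```python
# An invented sketch of a tiered penalty schedule. The tiers, durations and
# example violations are hypothetical, not Facebook policy.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Penalty:
    description: str
    ban_days: Optional[int]  # None means a permanent ban

# Severity tier -> penalty, roughly ordered like sentencing guidelines.
PENALTY_SCHEDULE = {
    "insult": Penalty("Name-calling a public figure", ban_days=3),
    "defamation": Penalty("False accusation of a crime", ban_days=180),
    "incitement": Penalty("Calling for violence to overturn an election", ban_days=None),
}

def describe_sanction(tier: str) -> str:
    """Translate a violation tier into a human-readable sanction."""
    penalty = PENALTY_SCHEDULE[tier]
    if penalty.ban_days is None:
        return f"{penalty.description}: permanent ban"
    return f"{penalty.description}: {penalty.ban_days}-day ban"

for tier in PENALTY_SCHEDULE:
    print(describe_sanction(tier))
```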
It’s not up to me to
determine appropriate punishment, so I have to trust Facebook to do it the
correct way. If they do, it won’t solve all of social media’s problems, but it
will be a step in the right direction. And if they don’t, I hope the Oversight
Board will be back, telling Zuckerberg and his minions to try again.
Meanwhile, I hope Trump's ban is made permanent for what he has already done to this country, much of which was instigated through his social media accounts. In my opinion, nobody deserves it more.