
Cutting the rapid spread and echo of disinformation is a much better approach to the abuse of our social media, and to the threat to our democracy, than policing content itself.
It is not perfect, but it cuts through a lot of the 'free speech' complaints and allows much greater 'free speech' opportunity without control by Zucker & Co. This is what I posted on my FB page:
"It is the job of the poet to observe what otherwise goes unnoticed, and to say what otherwise cannot be said." — rs.
In an excellent interview on NPR yesterday, Renee DiResta, Research Manager at the Stanford Internet Observatory, discussed the threat posed by toxic speech and messaging on our social media. A remark that caught my attention was her observation that "speech on the internet is not like someone giving a speech in a public park." She continued, saying that it is the capability of our texts on social media to be rapidly echoed and amplified in a matter of minutes that makes the toxicity of that speech far more dangerous than it otherwise would be in the real world.
Ms. DiResta's remarks echo my own thoughts on the matter, ones I have been writing about for many years. What she prompted was for me to see whether I could propose a z-axis solution to this problem: something that would not only dampen the dangers of the echoing and amplification of toxic speech, but also avoid much of the blowback about "free speech" rights. This is what I said in an email I sent to Ms. DiResta:
RE: Controlling Rapid Dissemination of Toxic Speech via the Internet.
It is clearly necessary to distinguish disinformation via the internet from free speech in the real world. In short, it is not so much the content of that speech that misinforms and threatens one of the pillars of a free-speech democracy; it is the opportunity for its rapid spread as mass communication, without the constraints and normative filters that correct for bad or harmful speech from bad actors in the slower processes of real-world communication. My own thinking about the problem has led me to a few conclusions about possible solutions:
1. It is clear that our social media can no longer be regarded simply as privately owned forums that leave constraints to self-regulation and voluntary correction by the owners of these platforms. Social media platforms are, ipso facto, public commons similar (but not identical) to real-world public commons, which can be and are regulated by government statute and federal agency regulatory powers. I've expressed this more fully in a change.org petition (https://www.change.org/OccupySocialMedia), though it has not received much attention to date.*
2. The content of the speech or text is far less important than the virulence of its spread without the benefit of these normative filters. However, filters such as blocking certain messages, blocking accounts, and flagging misinformation do little to temper this spread, which has many ways of circumventing restrictions on speech, easily crossing boundaries that invite questions of censorship and free speech (from those who advocate for maximum free speech, as well as from the purveyors of toxic messages). There is, however, a solution which does not have this problem: limit the speed and scope of message dissemination, regardless of its content.
One way this may be achieved is by limiting the first instance of a message being shared or otherwise distributed through our platforms to a small audience, say 25-50 user accounts, without any review. These messages could then be flagged to prevent any further distribution without explicit clearance. To obtain that clearance, a user would request a broader distribution from the platform, say to 100-500 other users.** That request would then receive an in-house review to weed out the most blatant cases of misinformation or potentially harmful messaging.
After that, and for greater dissemination tiers up to an unlimited number, an independent review would be conducted by people who are expert in the subjects of the message. They would decide whether it can be disseminated further, based on explicit criteria for legitimate free-speech permissions, or whether it exceeds the constraints that would ordinarily temper and dissolve dangerous messages in the non-digital world of other public distributions. That solution, I believe, moves away from many of the concerns of free-speech advocates (of which I am one), yet holds speech accountable to the normal, real-world mechanisms that damp its echo and easy amplification.
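The tiered flow described above can be sketched in code. This is only an illustrative sketch of the idea, not any platform's implementation; the tier breakpoints, review labels, and all names here are hypothetical placeholders (the email itself notes the breakpoints would be decided by implementers).

```python
# Illustrative sketch of the tiered-dissemination protocol described above.
# All breakpoints and review labels are hypothetical placeholders.
from dataclasses import dataclass, field

# Each tier: (cap on recipients, review required to ENTER this tier).
TIERS = [
    (50, None),            # first share: small audience, no review
    (500, "in-house"),     # wider share: platform staff review
    (None, "independent"), # unlimited: independent expert review
]

@dataclass
class Message:
    text: str
    tier: int = 0
    reviews_passed: list = field(default_factory=list)

def max_recipients(msg: Message):
    """Current cap on how many accounts may receive this message (None = unlimited)."""
    cap, _ = TIERS[msg.tier]
    return cap

def request_wider_distribution(msg: Message, review_passed: bool) -> bool:
    """Advance the message one tier if the review required by the next tier was passed."""
    if msg.tier >= len(TIERS) - 1:
        return False  # already at the widest tier
    _, required_review = TIERS[msg.tier + 1]
    if required_review and not review_passed:
        return False  # flagged: no further distribution without clearance
    msg.tier += 1
    if required_review:
        msg.reviews_passed.append(required_review)
    return True

msg = Message("some post")
assert max_recipients(msg) == 50                      # initial small audience
request_wider_distribution(msg, review_passed=True)   # in-house review passed
assert max_recipients(msg) == 500
```

The key design point is that the gate keys on audience size, never on content: a message that fails review simply stays at its current tier rather than being deleted.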
3. It is necessary that the government recognize the airwaves and forums of our social media platforms as public commons and common carriers. The problem that stands in the way, aside from fierce lobbying by the owners of these platforms, is that many users of our social media have drunk the Kool-Aid that private ownership is equivalent to the assertion, "They can do whatever they want." The only antidote for that underlying piece of misinformation is education, a lot of education. My own method of communicating this is to regularly point out that 'stewardship' is a much better and more accurate term than 'ownership' in this respect.
4. In addition to tiered limits and reviews regulating the scope of message distribution, it would be useful to create a voluntary pool of users willing to review messages requesting wider distribution. The tiered reviews suggested in item #2 would remain in place; in addition, a few members of that user pool would receive posts that had requested wider sharing (say, to over 1,000 users) and be permitted to comment on them on the originating page or account. The randomly selected pool members (say, five or ten people) could comment as they wished, whether in criticism or approval of the post in question. This draws on the oft-noted fact that, in a democracy, the answer to bad speech is not censorship; it is more speech. This item permits the addition of 'more speech' to a text, whether or not it is approved for wider distribution.
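The volunteer-pool mechanism in item 4 amounts to a random draw from the pool whenever a request crosses the sharing threshold. A minimal sketch, assuming a threshold of 1,000 and a draw of five commenters (both numbers taken from the illustrative figures above; the function and variable names are my own inventions):

```python
# Illustrative sketch of the volunteer reviewer-pool idea in item 4.
# Threshold and sample size are placeholder values from the essay's examples.
import random

COMMENT_THRESHOLD = 1000  # requested audience size that triggers pool comments
SAMPLE_SIZE = 5           # how many volunteers are drawn per request

def draw_commenters(volunteer_pool, requested_audience, rng=None):
    """Randomly pick pool members permitted to comment on a wide-distribution request."""
    if requested_audience < COMMENT_THRESHOLD:
        return []  # small shares attract no pool commentary
    rng = rng or random.Random()
    k = min(SAMPLE_SIZE, len(volunteer_pool))
    return rng.sample(volunteer_pool, k)

pool = [f"user{i}" for i in range(50)]
commenters = draw_commenters(pool, requested_audience=5000, rng=random.Random(42))
assert len(commenters) == 5
assert draw_commenters(pool, requested_audience=200) == []
```

Random selection matters here: it keeps any single faction from dominating the commentary, which is the point of answering bad speech with more speech rather than with gatekeeping.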
------------------------
* 153 signatures in the 6 years it has been posted on change.org
** Reasonable breakpoint numbers would need to be decided by those implementing such a protocol. However, it can only have maximum effect if it is a cross-platform restriction, equally recognized by all social media (Facebook, Pinterest, Twitter, LinkedIn, etc.).
Of course, a lot more than 153 people signing this petition in six years might help some. But I'm not banking on that. It doesn't seem many others are much interested, or can see how dangerous the status quo really is.