Congress is targeting algorithms


“I agree in principle that there should be liability, but I don’t think we’ve found the right set of terms to describe the processes we’re concerned about,” said Jonathan Stray, a visiting scholar at the Berkeley Center for Human-Compatible AI who studies recommendation algorithms. “What is amplification, what is enhancement, what is personalization, what is a recommendation?”

The Justice Against Malicious Algorithms Act, from New Jersey Democrat Frank Pallone, for example, would strip immunity when a platform “knew or should have known” it was making a “personalized recommendation” to a user. But what counts as personalized? According to the bill, it’s using “information specific to an individual” to enhance the prominence of certain material over other material. That’s not a bad definition. But on its face, it would seem that any platform that doesn’t show everyone the exact same thing would lose Section 230 protections. Even showing posts from people you follow probably relies on information specific to that person.

Malinowski’s bill, the Protecting Americans from Dangerous Algorithms Act, would waive Section 230 immunity for certain civil rights and terrorism-related claims if a platform “used an algorithm, model, or other computational process to rank, order, promote, recommend, amplify, or similarly alter the delivery or display of information.” It contains exceptions, however, for algorithms that are “obvious, understandable, and transparent to a reasonable user,” and it lists some examples that would qualify, including reverse-chronological feeds and rankings by popularity or user reviews.
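To make the bill's "transparent" category concrete, here is a minimal sketch of a reverse-chronological feed. The post records and field names are hypothetical, invented purely for illustration; the point is that the ordering depends on a single visible attribute rather than on any model of the user.

```python
from datetime import datetime, timezone

# Hypothetical post records: (author, timestamp).
posts = [
    ("alice", datetime(2021, 11, 1, tzinfo=timezone.utc)),
    ("bob",   datetime(2021, 11, 3, tzinfo=timezone.utc)),
    ("carol", datetime(2021, 11, 2, tzinfo=timezone.utc)),
]

# Newest first: the ranking is fully determined by one visible field,
# so a "reasonable user" can see exactly why each post sits where it does.
feed = sorted(posts, key=lambda post: post[1], reverse=True)
print([author for author, _ in feed])  # ['bob', 'carol', 'alice']
```

Nothing about the viewer enters the sort key, which is what would plausibly make such a feed "obvious, understandable, and transparent" in the bill's terms.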

That makes a certain amount of sense. One problem with engagement-based algorithms is their opacity: users have little insight into how their personal data is being used to steer them toward content the platform predicts they’ll interact with. But Stray pointed out that distinguishing good algorithms from bad ones is not so easy. Ranking by user feedback or up/down votes, for example, is trickier than it sounds: you wouldn’t want a single upvote or one five-star review to rocket an item to the top of the list. A standard way to correct for that, Stray explained, is to compute the statistical margin of error for a given piece of content and rank it by the bottom of the distribution. Is that technique, which took Stray a few minutes to explain to me, obvious and transparent? What about something as basic as a spam filter?
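The correction Stray describes, ranking by the bottom of a confidence interval rather than the raw average, is commonly implemented as the lower bound of the Wilson score interval. The sketch below is one standard formulation, not necessarily the exact variant Stray had in mind:

```python
import math

def wilson_lower_bound(upvotes: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for an approval rate.

    Ranking by this value, instead of by the raw ratio upvotes/total,
    penalizes small samples: a lone upvote gets a wide interval and so
    a low lower bound, keeping it from outranking well-tested items.
    z = 1.96 corresponds to a 95% confidence level.
    """
    if total == 0:
        return 0.0
    phat = upvotes / total                     # observed approval rate
    z2 = z * z
    center = phat + z2 / (2 * total)           # interval center (shrunk toward 0.5)
    margin = z * math.sqrt((phat * (1 - phat) + z2 / (4 * total)) / total)
    return (center - margin) / (1 + z2 / total)

# A single upvote (perfect 100% score) still ranks below 90 out of 100:
print(wilson_lower_bound(1, 1))    # ~0.21
print(wilson_lower_bound(90, 100)) # ~0.83
```

The raw averages (1.0 versus 0.9) would put the single vote on top; the lower bounds reverse that ordering, which is exactly the behavior Stray describes, and exactly the kind of detail that strains any "obvious and transparent" standard.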

“I don’t know whether an intent to exclude systems that are simple enough would actually exclude any system that’s actually practical,” Stray said. “My suspicion is probably not.”

In other words, a bill that waives Section 230 immunity for algorithmic recommendations could end up looking a lot like outright repeal, at least as far as social media platforms are concerned. Jeff Kosseff, author of the definitive book on Section 230, The Twenty-Six Words That Created the Internet, pointed out that internet companies have many legal protections to fall back on, including the First Amendment, even without the statute’s shield. If the bills are riddled with enough exceptions, and exceptions to exceptions, those companies might decide there are easier ways to defend themselves in court.

