
Negative reputation #31

Open
aquabu opened this issue Nov 2, 2015 · 7 comments

Comments

@aquabu
Contributor

aquabu commented Nov 2, 2015

In Reputation and The Real World @frandallfarmer says
"Reserve negative karma for egregious cases of misconduct; even then, look for alternatives."

@ChristopherA and I have talked about how most things you can do with a negative reputation system can be done more safely with a well constructed positive reputation system.

That said, under what conditions should negative reputation systems be built? How should they be constructed? What are the biggest anti-patterns of negative reputation systems?

@fractastical
Contributor

Highly relevant is the Hacker News system, which has long had upvotes and downvotes, and which also has a private "admin tier" capability of ghost-banning users perceived to be trolls.

@frandallfarmer
Collaborator

Negative karma is great for internal systems, such as IP spam-source karma. No one in the public knows the "resulting" score of all the "this is spam" button clicks, but Yahoo/Google/etc. use this information to great effect.
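A minimal sketch of this internal-only pattern, assuming a simple per-IP counter; the names (`SpamKarma`, `report`, `is_suspect`) and the threshold are illustrative, not taken from any real Yahoo/Google system:

```python
from collections import defaultdict

class SpamKarma:
    """Accumulates 'this is spam' reports per source IP.

    The raw score is never shown to users; only internal filtering
    decisions are derived from it."""

    def __init__(self, threshold=3):
        self.reports = defaultdict(int)
        self.threshold = threshold

    def report(self, source_ip):
        # Each spam-button click increments the source's internal score.
        self.reports[source_ip] += 1

    def is_suspect(self, source_ip):
        # Only this binary decision leaks out, never the raw score.
        return self.reports[source_ip] >= self.threshold

karma = SpamKarma(threshold=3)
for _ in range(3):
    karma.report("203.0.113.7")
print(karma.is_suspect("203.0.113.7"))   # True
print(karma.is_suspect("198.51.100.1"))  # False
```

Because the score only ever increments and is invisible, there is nothing for users to game directly, which is the property the comment above is pointing at.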

@fractastical
Contributor

@frandallfarmer I think that's one of the fundamental limitations of open reputation systems. If the source data and ranking algorithm are transparent, then they are easily gameable. The same is true for review-based systems like AirBnB and CouchSurfing, which have private feedback mechanisms for catching people gaming the system.

@ChristopherA
Member

With strong persistent identity you may be able to use altruistic punishment (and meta-altruistic punishment; see Dunbar, Altruistic Punishment, and Meta-Moderation) as an effective negative reputation system, since both sides are visible. But you are also more likely to descend into high-school-style clique politics as the group size grows (e.g., a congressional whip or a vice-presidential candidate can absorb the losses of being the "attack dog" for a larger group of people).

I do believe that there are ways to create negative reputation systems, in particular, as @frandallfarmer noted, when the negative reputations are not publicly visible. But 90% of the time, if you look at which behaviors you want to increase and decrease, you'll find you can construct a positive rating system that has the same results as what you thought you needed a negative reputation for. This process is harder to do, but easier than dealing with the long-term impact of a negative reputation system done improperly.
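One hedged illustration of the substitution being described: instead of computing score = upvotes − downvotes, rank content only by an accumulated positive "helpful" signal, so bad content sinks by receiving nothing rather than by being punished. The function name and data shape here are hypothetical:

```python
def rank_posts(posts):
    """posts: list of (post_id, helpful_count) pairs.

    Returns post ids, most helpful first. Unhelpful posts simply
    never rise; no negative weight is ever applied."""
    return [pid for pid, helpful in
            sorted(posts, key=lambda p: p[1], reverse=True)]

posts = [("rant", 0), ("guide", 12), ("question", 3)]
print(rank_posts(posts))  # ['guide', 'question', 'rant']
```

The ranking outcome is often indistinguishable from a downvote system, without giving anyone a weapon to wield against other users.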

@ChristopherA
Member

An equally common problem with most reputation systems is that they confuse rating and ranking, or use systems proven to be poor (the 5-star system is a good example of a bad one). See Systems for Collective Choice and its following articles on ratings and ranking.

@macieklearns2code

Perhaps it's useful to consider a negative reputation in the context of inter-cluster relationships.

There is a scale of relationships: from hostile to friendly. This applies to individuals and to groups.

An act can carry a punishment in one group and an equivalent reward in another. E.g. the more of an anti-hero Trump becomes in one group, the more of a hero he is in another.

This adds additional incentives/punishments:

  • Participating forces me to take on a risk of being punished by the individual I'm punishing and the peers that agree with them
  • Participating rewards me for taking the risk; my influence rises among the peers that agree with my judgment
  • Not participating saves me from risk, but it also means I will not gain influence

In this context, this system can potentially work well in large groups. That is assuming that

  1. it's predicated on a robust identity
  2. the cognitive limitation of keeping track of all of these interactions is removed

@frandallfarmer
Collaborator

(Tried replying in email, seems that was a mistake...)

Oof. I'd recommend avoiding reputation math with negative weights entirely for "karma" (aka user reputation).

In my experience of social systems, the magnitudes of a -1 and a +1 are never equal, even if you could get the scale to be clear (and "hostile to friendly" is NOT a clear analog scale, so how could it be digitized?).

Twitter recently got into hot water for "punishing" (down-repping) people by presumed association with bots. They shouldn't have done that at all; they simply should not have incremented referral reputation from suspected bad actors.

Digital reputation should always be contextual (imagine "tagged"?) and never negative. If you want bad-actor karma of some sort, make it a positive value.
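A sketch combining the two suggestions in this comment, with hypothetical names: the "bad-actor" signal is stored as a positive, internal counter, and its only effect is that referrals from suspected bad actors are silently ignored rather than subtracted:

```python
def apply_referral(user_rep, referrer_bad_actor_score, cutoff=3):
    """Increment user_rep for a referral, unless the referrer's
    (positive, internal) bad-actor score is over the cutoff.
    Reputation is never decremented."""
    if referrer_bad_actor_score >= cutoff:
        return user_rep       # referral silently ignored
    return user_rep + 1       # normal referral counts as +1

rep = 10
rep = apply_referral(rep, referrer_bad_actor_score=0)  # counts: 11
rep = apply_referral(rep, referrer_bad_actor_score=5)  # ignored: still 11
print(rep)  # 11
```

Under this design, a suspected bot cluster can never push anyone's reputation down; its endorsements just stop counting.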
