See the "Cited By" section at the bottom for other trust score systems - most notably Microsoft's 2005 "Determination of a reputation of an on-line game player".
FIG. 2 illustrates examples of other behaviors, besides cheating, which can
be used as a basis for player matchmaking. For example, the trained machine
learning model(s) 216 may be configured to output a trust score 118 that
relates to the probability of a player behaving, or not behaving, in accordance
with a game-abandonment behavior (e.g., by abandoning (or exiting) the video
game in the middle of a match). Abandoning a game is a behavior that tends to
ruin the gameplay experience for non-abandoning players, much like cheating.
As another example, the trained machine learning model(s) 216 may be configured
to output a trust score 118 that relates to the probability of a player
behaving, or not behaving, in accordance with a griefing behavior.
A “griefer” is a player in a multiplayer video game who deliberately irritates
and harasses other players within the video game 110, which can ruin the
gameplay experience for non-griefing players.
As another example, the trained machine learning model(s) 216 may be
configured to output a trust score 118 that relates to the probability
of a player behaving, or not behaving, in accordance with a vulgar-language behavior.
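As a rough sketch of how such per-behavior probability outputs might feed a single trust score (my own illustration, with made-up names and a naive independence assumption; the patent does not specify a combination method):

```python
# Hypothetical sketch: combine per-behavior probabilities into one trust
# score. Behavior names mirror the patent's examples; the combination
# rule below is my own guess, not Valve's.

BEHAVIORS = ("cheating", "game_abandonment", "griefing", "vulgar_language")

def trust_score(behavior_probs: dict) -> float:
    """Return a 0-1 trust score from per-behavior probabilities.

    A player with a low probability of every unwanted behavior scores
    near 1.0; any single high-probability behavior drags the score down.
    """
    score = 1.0
    for behavior in BEHAVIORS:
        p = behavior_probs.get(behavior, 0.0)
        score *= (1.0 - p)  # naive independence assumption, illustration only
    return score

clean = trust_score({"cheating": 0.01, "griefing": 0.02})      # ~0.97
suspect = trust_score({"cheating": 0.6, "game_abandonment": 0.3})  # ~0.28
```

A multiplicative combination like this means one strongly-flagged behavior is enough to sink the score, which matches how a matchmaker would want to quarantine likely cheaters regardless of their other stats.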
This is slightly contradicted by the following passage:
[0015] The techniques and systems described herein also improve upon
existing matchmaking technology, which uses static rules to determine
the trust levels of users.
The system seems to involve reinforcement learning from human feedback (RLHF), at least in CS2. It also seems to be influenced by in-game report gaming and, consequently, the Just World Fallacy. The reality, in the case of Counter-Strike (which is alluded to in the patent: "One popular video game genre where players often play in multiplayer mode is the first-person shooter genre"), is that Valve have built a trust system for a prison-inmate type population (anti-social traits), and are using negative reinforcement rather than Blizzard's approach in Overwatch of positive reinforcement. The more cynical commentators would suggest Valve will always be more interested in the flow of game case sales, which promotes Steam market game sales, than in shaping their player base's behaviour. Much like a casino isn't too interested in enforcing anti-gambling morals on its regulars.
From the US patent, Valve appear to be basing the trust score on:
- an amount of time a player spent playing video games in general,
- an amount of time a player spent playing a particular video game
- times of the day the player was logged in and playing video games
- match history data for a player, e.g., total score (per match, per round, etc.), headshot percentage, kill count, death count, assist count, player rank, etc.
- a number and/or frequency of reports of a player cheating
- a number and/or frequency of cheating acquittals for a player
- a number and/or frequency of cheating convictions for a player
- confidence values (scores) output by a machine learning model that detected a player cheating during a video game
- a number of user accounts associated with a single player (which may be deduced from a common address, phone number, payment instrument, etc. tied to multiple user accounts)
- how long a user account has been registered with the video game service
- a number of previously-banned user accounts tied to a player
- number and/or frequency of a player’s monetary transactions on the video game platform
- a dollar amount per transaction
- a number of digital items of monetary value associated with a player’s user account
- number of times a user account has changed hands (e.g., been transferred between different owners/players)
- a frequency at which a user account is transferred between players
- geographic locations from which a player has logged-in to the video game service
- a number of different payment instruments, phone numbers, mailing addresses, etc. that have been associated with a user account and/or how often these items have been changed
- and/or any other suitable features that may be relevant in computing a trust score that is indicative of a player’s propensity to engage in a particular behavior
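The signals above could plausibly be assembled into a numeric feature vector before being fed to the model(s). A minimal sketch, assuming a flat per-account record (the field names are my own shorthand, not the patent's):

```python
from dataclasses import dataclass

@dataclass
class AccountFeatures:
    # Field names are my shorthand for a subset of the patent's listed signals.
    hours_played_total: float      # time playing video games in general
    hours_played_this_game: float  # time in the particular video game
    cheat_reports: int             # number of cheating reports
    cheat_acquittals: int          # cheating acquittals
    cheat_convictions: int         # cheating convictions
    linked_accounts: int           # accounts tied to one player
    account_age_days: int          # how long the account has been registered
    prior_banned_accounts: int     # previously-banned accounts tied to player
    transaction_count: int         # monetary transactions on the platform
    account_transfers: int         # times the account changed hands

    def to_vector(self) -> list:
        """Flatten into the numeric vector a trust model would consume."""
        return [
            self.hours_played_total,
            self.hours_played_this_game,
            float(self.cheat_reports),
            float(self.cheat_acquittals),
            float(self.cheat_convictions),
            float(self.linked_accounts),
            float(self.account_age_days),
            float(self.prior_banned_accounts),
            float(self.transaction_count),
            float(self.account_transfers),
        ]

features = AccountFeatures(1200.0, 800.0, 3, 2, 0, 1, 2400, 0, 15, 0)
vec = features.to_vector()
```

Nothing here implies the real system is this flat; the patent leaves the feature engineering open ("any other suitable features"), so treat this as a reading aid only.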
A few observations of my own:
- You get 25+ kills in your first CS2 game of the day, so there is a skill-rating mismatch. Your trust score decreases from this (and/or people report you), so your next match is against low-trust players.
- You score low in this next game, confusing the model further into thinking your next game should be against low-trust players, because there is such a big score discrepancy.
- You don't have a microphone enabled in the game for various reasons, and Valve seem to think that above a certain rating you should have a mic, so your trust lowers.
- You play against bad actors who game the trust system by playing only one match a day or week, but on multiple accounts.
- You get punished by the model for playing multiple (4+) games, as this is booster-lobby (cheating) behaviour.
- A high trust pool is non-existent during the day.
- The trust system seems to be easily gamed by high-value skin purchases.
- Since 2020 the player age appears to have dropped a lot for CS2, which is a PEGI 16 title. Many pre-high-school-age players need a different psychological model for trust than the one Valve uses, one that is more punitive.
- Most of the CS2 aimbotting seems to have gone, in the mid-ranks.
- The estimate of about 40% of the player base cheating (wallhacking) is closer to 50% in competitive mode.
- The profile makes a difference to who you are matched against/with, although RNG across the player base is still more prominent. My tip: put "faceit level 8" and "nice to meet you" in the profile, with a custom friendly-looking avatar, to enter the weird world of boosting lobbies. Use some kind of neutral built-in avatar and name to sit with the normal pool.
- The algorithm matches you with people who share your play-style, e.g. shotgun-only. One tip for this: use a lot of flashes in a game and you'll be matched with people who know how to use grenades ("util").
- Maybe in the future Microsoft's Windows Core Isolation (memory-integrity) feature will render the cheats unusable, if Valve make it a requirement.
I have a feeling that Valve use clustering and/or Apriori/FP-Growth-style matching for players, and also have a limited set of clusters to choose from, similar to the YouTube algorithm where you were matched to existing "actors". For example, you can get put into an "AFK" player cluster or a "griefing" player cluster, and often they're combined. This seems to happen when there are fewer people online; the parameters are more refined at evening peak time. At peak time, I've found myself in a team with people who appear to have very similar profiles to mine, except 1 or 2 of the accounts look bought, in that they haven't purchased any new games for a long time and their matchmaking history has been empty for years, but on the surface we appear to be a very similar cohort.
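If player assignment really does reduce to a limited set of behavioral clusters, the matchmaking step could be as simple as nearest-centroid assignment over a few behavior features. A toy sketch, where the cluster names and centroid values are entirely my own guesses:

```python
import math

# Hypothetical cluster centroids over (afk_rate, report_rate, win_rate).
# Names and numbers are my speculation, not anything from Valve.
CLUSTERS = {
    "normal":   (0.02, 0.05, 0.50),
    "afk":      (0.40, 0.10, 0.30),
    "griefing": (0.10, 0.60, 0.45),
}

def assign_cluster(features: tuple) -> str:
    """Return the cluster whose centroid is nearest (Euclidean distance)."""
    return min(
        CLUSTERS,
        key=lambda name: math.dist(features, CLUSTERS[name]),
    )

# A player who idles a lot lands in the "afk" cluster.
label = assign_cluster((0.35, 0.12, 0.28))
```

A fixed centroid table like this would explain both effects described above: off-peak, sparse pools force coarse buckets (AFK and griefing lumped together), while at peak time there are enough players per centroid to produce eerily similar-looking teams.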