Use-case

Sift Content Integrity helps minimize fraudulent and abusive user-generated content on your site or app through deep, real-time analysis of user behavior and the content users create. This guide walks you through your Content Integrity integration.

Sending Data to Sift

Send User Activity
Send Key User Events

To give Sift full visibility into user-generated content on your site or app, you should also send the following events:

User creates an account
  • If users can create accounts, send a $create_account event. Additionally, if the user creates a profile (e.g., a dating profile or a seller profile) as part of account creation, send a $create_content event with the $profile content type.
  • When a user updates their account information, send an $update_account event. When a user updates their profile, send an $update_content event, as sketched below.
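
Below is a minimal sketch of these account events in Python, using the requests library against the v205 Events API endpoint. The API key, user ID, and profile fields are illustrative placeholders; check the Events API reference for the full set of reserved fields.

    import requests

    EVENTS_URL = "https://api.sift.com/v205/events"
    API_KEY = "YOUR_SIFT_REST_API_KEY"  # placeholder: your REST API key from the Console

    def send_event(properties):
        """POST a single event to the Sift Events API."""
        response = requests.post(EVENTS_URL, json={**properties, "$api_key": API_KEY})
        response.raise_for_status()
        return response.json()

    # $create_account when the user signs up.
    send_event({
        "$type": "$create_account",
        "$user_id": "billy_jones_301",          # illustrative user ID
        "$user_email": "billy@example.com",
    })

    # If signup also creates a profile, send $create_content with a $profile.
    send_event({
        "$type": "$create_content",
        "$user_id": "billy_jones_301",
        "$content_id": "profile-billy_jones_301",  # your stable ID for this content
        "$profile": {
            "$body": "28, software engineer, loves hiking and bad puns.",
        },
    })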
User posts content
  • When a user creates content, send a $create_content event. The content can be one of several pre-defined types (e.g., a listing or a message); specify the type and use the reserved fields as much as possible.
  • Supplement the $create_content event with custom fields. For example, a dating profile might include "age_range" : "25-35".
  • When a user updates previously created content, send an $update_content event. Include all the elements of the content, not just those that changed.
  • When the status of a piece of content changes (e.g., from draft to active) without any other edits, send a $content_status event; if the user also changed the content itself, send an $update_content event instead. All three content events are sketched below.
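
Reusing the send_event helper from the sketch above, the content events might look like the following. The $listing fields, custom field, and IDs are illustrative; consult the Events API reference for the reserved fields each content type supports.

    # send_event() as defined in the account-events sketch above.

    # $create_content with a $listing, mixing reserved and custom fields.
    send_event({
        "$type": "$create_content",
        "$user_id": "billy_jones_301",
        "$content_id": "listing-9184",
        "$listing": {
            "$subject": "2-bedroom apartment for rent",
            "$body": "Sunny two-bedroom near downtown, available June 1.",
        },
        "age_range": "25-35",  # custom field: no $ prefix
    })

    # $update_content carries the FULL content, not just the changed fields.
    send_event({
        "$type": "$update_content",
        "$user_id": "billy_jones_301",
        "$content_id": "listing-9184",
        "$listing": {
            "$subject": "2-bedroom apartment for rent",
            "$body": "Sunny two-bedroom near downtown, available June 15. Pets OK.",
        },
        "age_range": "25-35",
    })

    # $content_status when only the status changes (e.g., draft -> active).
    send_event({
        "$type": "$content_status",
        "$user_id": "billy_jones_301",
        "$content_id": "listing-9184",
        "$status": "$active",  # illustrative; see the docs for reserved status values
    })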
User reports bad content
  • When a user reports inappropriate or abusive content, send a $flag_content event, as in the sketch below.
  • Ensure that the correct $reason is mapped. For example, the $toxic reason should be associated with content reported for profanity, harassment, and the like.
  • If the reported content is later reviewed by your content moderation team, apply a Decision (covered in the next section).
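
A sketch of the flag event, again reusing send_event. Note that here $user_id identifies the author of the flagged content and $flagged_by identifies the reporter; the $reason values are reserved, so verify the mapping in the Events API reference.

    # $flag_content when another user reports the listing.
    send_event({
        "$type": "$flag_content",
        "$user_id": "billy_jones_301",    # author of the flagged content
        "$content_id": "listing-9184",
        "$flagged_by": "marcy_smith_88",  # the user who filed the report
        "$reason": "$toxic",              # mapped from your own report categories
    })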

Send business decisions

You will need to inform Sift which users or content have been identified as fraudulent or abusive by applying Decisions. Decisions can be applied manually via the Sift Console or programmatically through the Decisions API. Decisions applied via the Sift Console can be connected to your backend via webhooks.
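
As one way to consume those webhooks, here is a minimal receiver sketch using Flask. The route, payload field names, and Decision ID are assumptions for illustration; confirm the exact payload schema (and verify the webhook signature) against the Decisions webhook documentation.

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/sift/decision", methods=["POST"])
    def sift_decision_webhook():
        payload = request.get_json()
        decision_id = payload.get("decision", {}).get("id")  # assumed field names
        entity = payload.get("entity", {})                   # the user or content affected
        if decision_id == "ban_user_content_abuse":          # a Decision ID you defined
            ban_user(entity.get("id"))
        return "", 200

    def ban_user(user_id):
        """Stub for your own enforcement logic."""
        print(f"banning user {user_id}")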

Content Integrity Decisions can be applied to the user, the specific content (post, listing, message, etc.), or both. The Decision applied should map to the business action being taken. Decisions can be created using your Sift Console.

Recommended actions by scenario:
User is banned permanently
  • Apply a Block User Decision
  • If you are removing content from your platform as a result of banning the user, also apply a Block Decision to the content
User is banned temporarily
  • Apply a Block Decision to the offensive content
  • Do not apply a Decision to the user
Only content is removed
  • Apply a Block Decision to the offensive content
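
For example, the permanent-ban scenario above could be automated through the Decisions API, applying one Decision to the user and another to the content. This sketch assumes the v3 REST endpoints with Basic auth (API key as the username); the account ID and Decision IDs are placeholders for values from your own Console.

    import requests

    API_KEY = "YOUR_SIFT_REST_API_KEY"    # placeholder
    ACCOUNT_ID = "YOUR_SIFT_ACCOUNT_ID"   # placeholder
    BASE_URL = f"https://api.sift.com/v3/accounts/{ACCOUNT_ID}"

    def apply_decision(path, decision_id):
        """POST a Decision to a user or content decisions endpoint."""
        body = {
            "decision_id": decision_id,   # created in your Sift Console
            "source": "AUTOMATED_RULE",   # e.g., MANUAL_REVIEW for human review
            "description": "Permanent ban after moderation review",
        }
        response = requests.post(f"{BASE_URL}{path}", json=body, auth=(API_KEY, ""))
        response.raise_for_status()

    # Block User Decision on the user...
    apply_decision("/users/billy_jones_301/decisions", "ban_user_content_abuse")
    # ...and a Block Decision on the content being removed.
    apply_decision("/users/billy_jones_301/content/listing-9184/decisions",
                   "block_content_content_abuse")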

Get Started with Sift Scores

One of the key strengths of the Sift Digital Trust Platform is that it continually learns as you send it more data. You will see score accuracy increase as you send Sift more Decisions and user events. Throughout your integration process, you should assess your Sift Scores in the Sift Console and determine which actions to take at different score thresholds. Since every business is different, finding a score threshold that achieves your business goals is key.

Build your business logic with Sift Scores

Now that you are sending both user events and business actions to Sift, you’re ready to start using Sift Scores in your business logic. Higher Sift Scores correlate with higher risk (e.g., a bad user or fraudulent content). Based on the Sift Score, you’ll set up different outcomes within your application (e.g., a user with a low score is allowed to proceed without friction).

To build this logic, you'll want to evaluate a user's Sift Score at key events where bad users can hurt your business or where good users should have access to a more frictionless experience. We recommend using the Sift Scores at the $create_content, $update_content, and $flag_content events.

For riskier content, you may want to do one or more of the following:

  • Automatically block the content from going live (or the user from posting again).
  • Warn the user against posting suspicious content.
  • Allow the content to be posted and then review it manually. You may even allow the content to be posted in ‘shadow’ mode, where it is not yet visible to the rest of the community until approved by your moderation team.
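
A sketch of how that routing might look once you have a content abuse score in hand (the API returns scores from 0 to 1). The thresholds and publishing functions are illustrative, not recommendations; tune thresholds against your own data.

    def route_new_content(content_id, content_abuse_score):
        """Choose an outcome for newly posted content based on its risk score."""
        if content_abuse_score >= 0.90:
            block_content(content_id)      # never goes live
        elif content_abuse_score >= 0.70:
            publish_shadow(content_id)     # hidden until a moderator approves
        else:
            publish(content_id)            # frictionless path for good users

    # Stubs for your own publishing pipeline.
    def block_content(content_id): print(f"blocked {content_id}")
    def publish_shadow(content_id): print(f"shadow-published {content_id}")
    def publish(content_id): print(f"published {content_id}")

    route_new_content("listing-9185", 0.74)  # -> shadow-published listing-9185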

There are two ways to build this sort of logic using the Sift Scores:

Create a Sift Workflow (recommended): Workflows let you automate your Decisions without putting business logic in your code. With Workflows, a business manager can use the Console to set up criteria that are evaluated when a specified event occurs. These criteria route users to different outcomes based on the Sift Score and other attributes of the user (e.g., if a listing contains “Work from home” and the Sift Score is greater than 80, then Block User). The Workflow response specifies which Decision to apply, and a user can also be routed to a Sift Review Queue for investigation. To learn more, see our Workflows tutorial.
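
When a Workflow runs on an event, you can read its result back synchronously by appending return_workflow_status=true to the Events API call. Below is a sketch of parsing the chosen Decision out of the response; the response fields shown (workflow_statuses, history, and so on) are assumptions to verify against the Workflows documentation.

    import requests

    event = {
        "$type": "$create_content",
        "$api_key": "YOUR_SIFT_REST_API_KEY",  # placeholder
        "$user_id": "billy_jones_301",
        "$content_id": "listing-9184",
        "$listing": {"$subject": "Work from home!", "$body": "Earn $$$ fast..."},
    }
    response = requests.post(
        "https://api.sift.com/v205/events",
        json=event,
        params={"return_workflow_status": "true"},
    ).json()

    # Walk the workflow history looking for a decision step (assumed schema).
    for workflow in response.get("score_response", {}).get("workflow_statuses", []):
        for step in workflow.get("history", []):
            if step.get("app") == "decision":
                print("Workflow chose decision:", step["config"]["decision_id"])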

Build logic in your application: You can synchronously request the Sift Score for a user. To receive a score in the response of an event, append the return_score=true query parameter to the Events API call; see our documentation on receiving risk scores synchronously with events for details. Upon receiving the Sift Score, you can decide what further action is required.
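
A sketch of the synchronous pattern, assuming the v205 endpoint and the content_abuse abuse type. Note that the API returns scores on a 0-to-1 scale, while the Console displays 0-100; the threshold below is illustrative.

    import requests

    event = {
        "$type": "$create_content",
        "$api_key": "YOUR_SIFT_REST_API_KEY",  # placeholder
        "$user_id": "billy_jones_301",
        "$content_id": "listing-9185",
        "$post": {"$subject": "Hello", "$body": "My first post!"},
    }
    response = requests.post(
        "https://api.sift.com/v205/events",
        json=event,
        params={"return_score": "true", "abuse_types": "content_abuse"},
    ).json()

    score = response["score_response"]["scores"]["content_abuse"]["score"]
    if score >= 0.80:  # illustrative threshold, equivalent to 80 in the Console
        print("hold listing-9185 for manual review")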

Any questions? We're happy to talk it through.