A brief introduction to B2C application security.

Questions to ask, tools at your disposal, and what to do next.

 

Step #1 - What questions should you ask yourself? 

At the core of every application security and fraud issue is a single question: how can an attacker extract value from this product, tool, or ecosystem? 

That question quickly creates many more. 

  • How do you distinguish between an attacker and a good user? 
  • How do you remediate instances of attacks or fraud? 
  • What are common cases of fraud? 

I’m going to walk through a few common types of fraud and abuse, along with some questions that are important for any developer to ask when thinking about fighting fraud with application security mechanisms.  

 

Account Takeover 

  • If an attacker got access to the login credentials of an account, what could they do? 
  • What value could they extract? 

It’s important to spend time thinking laterally, because fraudsters certainly will. 

 

Payments Fraud

  • In what ways could the system be used to launder money? 
  • In what ways could an attacker steal from your users? 
  • What methods are there for extracting money or value from your system? 

 

Content Abuse

  • In what ways can bad listings (UGC) be used to defraud users? 
  • In what ways can content be used to influence people harmfully? 
  • In what ways can your competitors abuse your platform?

Competitors can get creative. Think about when Uber and Lyft were first starting out. There were rumors that Uber used the Lyft API to put up a bunch of fake pickups that would cancel right as the driver pulled up. That wasted the drivers’ time and the company’s time, took away riders’ ability to get where they were going, and took away the company’s ability to make revenue. 

 

Account Creation Abuse

 

  • How do you keep bots out of your system? 
  • How do you keep them from creating accounts? 
  • How do you prevent your competitors from signing up? 
  • How do you prevent bad users from signing up?
  • How do you keep undesirable users who have been banned from your system from making new accounts?

 

Promo Abuse

  • How do you make sure that a new promotional user is a new user? 

This is especially important to consider with buy one, give one promotions. 
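
One way to answer that question is to compare several identity signals from the “new” signup against accounts you already have. The sketch below is illustrative only: the signal names and the lookup function are assumptions rather than any particular vendor’s API, and real promo-abuse checks usually layer in device intelligence and enrichment data as well.

```typescript
// Hypothetical sketch: deciding whether a "new" promo user is actually new.
// All types and the lookup function are illustrative, not a specific vendor API.

interface SignupSignals {
  normalizedEmail: string;     // e.g. strip "+tags" and dots before comparing
  deviceFingerprint: string;   // from a device intelligence tool
  paymentFingerprint?: string; // hash of the card/bank instrument, if present
}

// Assumed to query your own user store; the implementation is up to you.
type ExistingAccountLookup = (signal: Partial<SignupSignals>) => Promise<string[]>;

async function isEligibleForPromo(
  signals: SignupSignals,
  findAccountsMatching: ExistingAccountLookup
): Promise<boolean> {
  // If any strong identifier already belongs to an existing account,
  // treat this signup as a returning user, not a new one.
  const matches = await Promise.all([
    findAccountsMatching({ normalizedEmail: signals.normalizedEmail }),
    findAccountsMatching({ deviceFingerprint: signals.deviceFingerprint }),
    signals.paymentFingerprint
      ? findAccountsMatching({ paymentFingerprint: signals.paymentFingerprint })
      : Promise.resolve([]),
  ]);

  return matches.every((accountIds) => accountIds.length === 0);
}
```

The design point is that no single signal is decisive; it’s the combination of email, device, and payment instrument that makes duplicate signups expensive for an abuser.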

 

Other Abuse

Not all abuse neatly fits into one of these categories. Every application is different in the ways that it can be attacked. 

 

Some example attacks that hint at the breadth of possibilities: 

  • At a laundromat management system, employees have the ability to override a machine when a mishap takes a customer’s money. That same override can be used to give out free washes to friends. It may not sound like a big issue, but for an average laundromat, where $50k is a good year, that fraud adds up quickly. 
  • At an HSA provider, you are legally mandated to fund accounts with at least a penny. What if someone programmatically created a bunch of accounts and then drained each one of that penny? A penny may not seem like a lot, but tens of thousands of pennies start to add up.

 

In both of these cases, the company is out the money, and it has to come from somewhere. Often it comes out of your company’s ability to grow or the engineering team’s ability to build. 




Step #2 - What tools are at your disposal? 

 

At the end of the day, loss from fraud eats into other initiatives. The trouble is that fraud can take many forms: it looks one way one month and different the next. You can’t think about solving it on an instance-by-instance basis, or you’ll just be playing whack-a-mole. You have to think about solving it generally. 

Just like a carpenter has their tool belt, a developer has an array of tools for securing their application against fraud and abuse. The list below runs through the main categories, and a rough code sketch of several of them follows it. 

 

Fraud engines / Data scoring

  • Feed a bunch of data in, get a risk score out.

Data enrichment

  • Feed a couple data points in, get a bunch of data out.

Device intelligence 

  • Who is making these requests?

MFA / 2FA / Smart Friction

  • Proof that the user who owns this account is making the request.

Identity verification / Know your customer

  • Proof that a real human is making the requests.

Case management

  • Flag suspicious cases and triage them internally.

Investigations / “Risky” Customer Data Platform / User Journey View  

  • See existing user data to make a determination on the user’s legitimacy.

Other progressive friction (Shared Secret, Captcha)

  • Make it harder for bad users to commit fraud.
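
To make those categories concrete, here is a rough sketch of what several of them can look like from your application’s point of view, written as TypeScript interfaces. Every name, parameter, and return shape below is an assumption for illustration; real vendors each expose their own APIs.

```typescript
// Illustrative shapes only: every vendor has its own API, so the names, fields,
// and return types below are assumptions, not real SDKs.

// Fraud engine / data scoring: feed a bunch of data in, get a risk score out.
interface FraudEngine {
  score(event: Record<string, unknown>): Promise<{ risk: number; reasons: string[] }>;
}

// Data enrichment: feed a couple of data points in, get a bunch of data out.
interface EnrichmentProvider {
  enrich(input: { email?: string; phone?: string; ip?: string }): Promise<Record<string, unknown>>;
}

// Device intelligence: who (or what) is making these requests?
interface DeviceIntelligence {
  identify(requestHeaders: Record<string, string>): Promise<{ deviceId: string; isBot: boolean }>;
}

// MFA / 2FA / smart friction: prove the account owner is making the request.
interface FrictionProvider {
  challenge(userId: string, method: "sms" | "email" | "totp"): Promise<{ verified: boolean }>;
}

// Identity verification / KYC: prove that a real human is making the requests.
interface IdentityVerifier {
  verify(input: { email: string; phone: string }): Promise<{ passed: boolean }>;
}

// Case management / investigations: flag suspicious cases and triage them internally.
interface CaseManager {
  openCase(userId: string, summary: string, evidence: Record<string, unknown>): Promise<void>;
}
```

The examples in Step #3 build on these illustrative interfaces.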




Step #3 - What should you do about it?

 

When we’re talking about solving for application security, we’re really talking about combining these tools to solve your specific problems. 

 

Account Takeover

You’re going to want to pull in information about the device making a request (Device Intelligence), feed that to a fraud model (Fraud Engine), and trigger an MFA challenge when a risky request is found (Smart Friction). If that device gets in anyway and performs some risky behavior, like transferring money out, you’re also going to want to file a case for manual investigation (Case Management, Investigations). 
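
A minimal sketch of that flow, reusing the illustrative interfaces from Step #2. The 0.7 risk threshold, the SMS challenge, and the idea of carrying a “risky login” flag into later actions are assumptions for the example, not a prescription.

```typescript
// A sketch of the account-takeover flow, using the illustrative interfaces above.

async function handleLogin(
  userId: string,
  requestHeaders: Record<string, string>,
  deps: { devices: DeviceIntelligence; engine: FraudEngine; friction: FrictionProvider }
): Promise<{ allowed: boolean; riskyLogin: boolean }> {
  // Who is making this request? (Device Intelligence)
  const device = await deps.devices.identify(requestHeaders);

  // Feed what we know about the login to a fraud model. (Fraud Engine)
  const { risk } = await deps.engine.score({ event: "login", userId, ...device });

  // Risky request? Prove the account owner is behind it before letting it through. (Smart Friction)
  if (risk > 0.7) {
    const { verified } = await deps.friction.challenge(userId, "sms");
    return { allowed: verified, riskyLogin: true };
  }

  return { allowed: true, riskyLogin: false };
}

// Later in the session: if a login that was flagged as risky goes on to do something
// sensitive, like transferring money out, file a case for manual investigation.
async function handleOutboundTransfer(
  userId: string,
  amountCents: number,
  riskyLogin: boolean,
  cases: CaseManager
): Promise<void> {
  if (riskyLogin) {
    await cases.openCase(userId, "Outbound transfer after risky login", { amountCents });
  }
  // ...execute the transfer...
}
```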

 

Payments Fraud

You’re going to want to check: does this payment look like fraud (Fraud Engine)? Is this person on a banned list, an anti-money-laundering list (Data Enrichment), or your internal banned-user list? Are they transferring an amount of money above a threshold? What country is the request coming from, and does it match the user’s typical country? Is there buyer-seller collusion going on (Case Management, Investigations)? If the behavior is risky, can you surface product friction (Smart Friction)? 
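
Sketched with the same illustrative interfaces, those checks might be composed like this. The watchlist field, the $1,000 threshold, the 0.8 risk cutoff, and the internal banned-list lookup are all placeholders for whatever your enrichment provider and data model actually expose.

```typescript
// A sketch of the payment checks, using the illustrative interfaces above.

const TRANSFER_REVIEW_THRESHOLD_CENTS = 100_000; // e.g. flag transfers over $1,000

async function reviewPayment(
  payment: {
    userId: string;
    email: string;
    amountCents: number;
    requestCountry: string;
    usualCountry: string;
  },
  deps: {
    engine: FraudEngine;
    enrichment: EnrichmentProvider;
    cases: CaseManager;
    friction: FrictionProvider;
    isInternallyBanned: (userId: string) => Promise<boolean>;
  }
): Promise<"allow" | "deny"> {
  // Known-bad user? (internal banned list)
  if (await deps.isInternallyBanned(payment.userId)) return "deny";

  // Sanctions / AML screening via enrichment data. (Data Enrichment)
  const enriched = await deps.enrichment.enrich({ email: payment.email });
  if (enriched["onWatchlist"] === true) return "deny";

  // Does this payment look like fraud? (Fraud Engine)
  const { risk } = await deps.engine.score({ event: "payment", ...payment });

  // Cheap, human-legible rules layered on top: size and geography mismatches.
  const largeTransfer = payment.amountCents > TRANSFER_REVIEW_THRESHOLD_CENTS;
  const countryMismatch = payment.requestCountry !== payment.usualCountry;

  if (risk > 0.8 || (largeTransfer && countryMismatch)) {
    // Risky behavior: surface product friction before completing it. (Smart Friction)
    const { verified } = await deps.friction.challenge(payment.userId, "sms");
    if (!verified) return "deny";

    // Queue it for a human to look for things rules miss, e.g. buyer-seller collusion.
    await deps.cases.openCase(payment.userId, "High-risk payment", { risk, payment });
  }

  return "allow";
}
```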

 

Account Creation Abuse 

First, check that this person is who they say they are (KYC/Identity Verification) and that they control the credentials they’re signing up with (MFA). Are lots of accounts being created by the same device? Are these bots (Device Intelligence)? Are they coming from a bad IP address (Enrichment)? What does the account-creation-abuse score say (Fraud Engine)? Who does the phone number or other data point belong to, and what or who is associated with that information (Enrichment)?
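
Here’s one possible shape for that signup review, again building on the interfaces from Step #2. The per-device signup limit, the IP-reputation field, and the 0.9 cutoff are assumptions for illustration.

```typescript
// A sketch of the signup review, using the illustrative interfaces above.

async function reviewSignup(
  signup: { email: string; phone: string; ip: string },
  requestHeaders: Record<string, string>,
  deps: {
    devices: DeviceIntelligence;
    enrichment: EnrichmentProvider;
    engine: FraudEngine;
    idv: IdentityVerifier;
    countRecentSignupsFromDevice: (deviceId: string) => Promise<number>;
  }
): Promise<"allow" | "deny"> {
  // Is a bot filling out the form? (Device Intelligence)
  const device = await deps.devices.identify(requestHeaders);
  if (device.isBot) return "deny";

  // Lots of accounts created by the same device is a classic mass-registration signal.
  if ((await deps.countRecentSignupsFromDevice(device.deviceId)) > 3) return "deny";

  // Who does this email / phone / IP actually belong to? (Data Enrichment)
  const enriched = await deps.enrichment.enrich(signup);
  if (enriched["ipReputation"] === "bad") return "deny";

  // Is this person who they say they are? (KYC / Identity Verification)
  const { passed } = await deps.idv.verify({ email: signup.email, phone: signup.phone });
  if (!passed) return "deny";

  // Finally, check the account-creation-abuse score. (Fraud Engine)
  const { risk } = await deps.engine.score({ event: "signup", ...signup, ...device });
  return risk > 0.9 ? "deny" : "allow";
}
```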

 

Other Abuse

For content abuse, you’ll need to think about both textual and visual content. Depending on the content, you may use NLP filtering, keyword banning, or an image-recognition system. Whatever the approach, you’ll want some amount of manual review or approval to make sure the scoring and recognition are working correctly. Once you know that content is being filtered correctly, you’ll also want to make sure that banned users aren’t able to get back onto the platform (Device Intelligence).
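
A minimal sketch of that pipeline is below. The banned-keyword list, the text classifier, and the manual-review queue are stand-ins for whatever filtering and tooling you actually use; an image-recognition pass would slot in the same way.

```typescript
// A sketch of a content-abuse pipeline. The keyword list, classifier, and review
// queue are placeholders, not a specific product's API.

const BANNED_KEYWORDS = ["guaranteed returns", "wire me directly"]; // illustrative only

async function reviewListing(
  listing: { authorId: string; deviceId: string; text: string },
  deps: {
    classifyText: (text: string) => Promise<{ abusive: boolean; confidence: number }>; // NLP filter
    isBannedDevice: (deviceId: string) => Promise<boolean>; // Device Intelligence
    queueForManualReview: (listing: unknown, reason: string) => Promise<void>;
  }
): Promise<"publish" | "hold" | "reject"> {
  // Banned users should not get back onto the platform via a new account. (Device Intelligence)
  if (await deps.isBannedDevice(listing.deviceId)) return "reject";

  // Cheap keyword banning first.
  const text = listing.text.toLowerCase();
  if (BANNED_KEYWORDS.some((kw) => text.includes(kw))) return "reject";

  // Then an NLP (or image-recognition) pass, with manual review for the gray area
  // so you can confirm the scoring is working correctly.
  const { abusive, confidence } = await deps.classifyText(listing.text);
  if (abusive && confidence > 0.9) return "reject";
  if (abusive) {
    await deps.queueForManualReview(listing, "Low-confidence abuse classification");
    return "hold";
  }

  return "publish";
}
```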

 

For anything else, you’re going to need to use some combination of the application security tools at your disposal to address the new kind of fraud. You may even have to build custom rules or fraud models for abuse that general-purpose fraud engines aren’t instrumented to catch. 
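
As a small example, a general-purpose fraud engine knows nothing about the laundromat override abuse from Step #1, but a custom rule over your own data can flag it. The weekly threshold here is an assumption; you’d tune it to your business’s normal mishap rate.

```typescript
// A sketch of a custom rule a generic fraud engine wouldn't ship with: flagging
// employees who use the "free wash" machine override unusually often.

interface OverrideEvent {
  employeeId: string;
  machineId: string;
  occurredAt: Date;
}

function findSuspiciousOverriders(
  events: OverrideEvent[],
  now: Date = new Date(),
  maxOverridesPerWeek = 5 // assumed threshold; tune to your normal mishap rate
): string[] {
  const weekAgo = new Date(now.getTime() - 7 * 24 * 60 * 60 * 1000);
  const counts = new Map<string, number>();

  // Count each employee's overrides in the last week.
  for (const e of events) {
    if (e.occurredAt < weekAgo) continue;
    counts.set(e.employeeId, (counts.get(e.employeeId) ?? 0) + 1);
  }

  // Anyone well above the normal rate is worth a case and a conversation.
  return [...counts.entries()]
    .filter(([, count]) => count > maxOverridesPerWeek)
    .map(([employeeId]) => employeeId);
}
```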





Step #4 - Consider an infrastructure layer 

 

To develop a robust application security strategy, a lot of different tools need to be integrated. Each integration has to be individually maintained, and they all have to be woven together. As new tools come onto the market, existing tools (no matter how deeply integrated) need to be swapped out for newer ones, then reintegrated and maintained all over again.

As you can see in the abuse examples above, a Fraud Engine shows up in lots of different places and would take quite a lift to replace. Never mind that for each new tool you also have to learn the API, read the docs, go through a sales process, sign a contract, and so on. 

The alternative is to use an application security integration and orchestration tool like Dodgeball to do one integration that unlocks all of these tools: not only the ones you need today, but also the ones you’ll need in the future. 



Some advantages of adding an orchestration layer:

  • Instead of coding and maintaining all of the accept/deny request logic within your own application, you can decouple your product code from your Trust, Fraud, and Security logic by putting that logic into an orchestration tool. Complex if/then logic trees collapse into a simple yes/no decision in your code (see the sketch after this list). 
  • Not all risky users should be blocked outright. If a user appears risky, you may want to require an MFA code or display some other progressive friction (e.g. a shared secret) before retrying the request. Dodgeball automatically handles displaying progressive friction and retrying the request, so you don’t have to code and maintain that complex request flow. 
  • You can now drag & drop tools into key moments of risk, instead of having to make tedious code changes. 
  • You can tune your strategy without having to make another deployment. 
  • You can future-proof any moment of risk. 
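
To illustrate the first point above, here is a rough sketch of what product code can look like once the risk logic lives in an orchestration layer. The checkpoint function, its name, and the verdict shape are hypothetical stand-ins rather than any particular SDK’s API (including Dodgeball’s); the point is that the product code only sees a simple verdict.

```typescript
// Hypothetical sketch: product code after decoupling risk logic into an
// orchestration layer. The Checkpoint type stands in for whatever SDK you use.

type CheckpointVerdict = "approved" | "denied" | "pending_friction";

type Checkpoint = (
  name: string,
  payload: Record<string, unknown>
) => Promise<{ verdict: CheckpointVerdict }>;

async function withdrawFunds(
  userId: string,
  amountCents: number,
  riskCheckpoint: Checkpoint // provided by your orchestration layer
): Promise<"completed" | "blocked" | "awaiting_verification"> {
  // All of the if/then risk logic lives behind this one call.
  const { verdict } = await riskCheckpoint("WITHDRAW_FUNDS", { userId, amountCents });

  if (verdict === "denied") return "blocked";

  if (verdict === "pending_friction") {
    // The orchestration layer wants MFA or another progressive-friction step
    // before this request is retried.
    return "awaiting_verification";
  }

  // "approved": run the actual business logic.
  // ...execute the withdrawal here...
  return "completed";
}
```

Everything about which tools run, in what order, and with what thresholds lives behind that checkpoint, so you can change the strategy without touching the withdrawal code.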



Want to ask our engineering team about any of the finer points in this guide? Feel free to chat with us or contact us at hello+engineering@dodgeballhq.com