Fraud Management
QA
There are four stages of QA for a new fraud management integration, in order to provide you the best possible decisions at launch:

1. Review API request logs
2. Integration tests in sandbox
3. Data validation in production
4. Listen mode

Review API request logs

To ensure that Forter remains flexible for all customers with different use cases, we will return a 200 OK status even when some required fields are missing. While we'll accept the request and return a decision, it's important that you identify and resolve any important missing data in your integration to avoid impacting decision accuracy.

You can identify missing data for your specific use cases by reviewing your API request logs in Integration Center during your development process. If a ⚠️ appears in the Data Issues column, it means the payload requires attention. Clicking into the details of that request will display a list of detected data issues, outlining the missing fields that need to be added for a successful integration.

Integration tests in sandbox

As you complete your development on your sandbox site, you can begin to work through the integration tests that have been determined as relevant to your use cases. You must be logged into your test environment in Forter Portal to view this tool.

Create each test scenario

As you click through each test, you'll find a how-to guide explaining what it's for, how to create the scenario, and a sample request payload snippet that would pass. As you create each scenario by taking user actions on your sandbox site as described, your server will be sending API requests to your Forter test environment. In many cases, this will involve creating a new order and checking out as if you were a customer.

Test the corresponding API request

After you have created the scenario, you should see the API request received by Forter in your logs. You can then use the integration test tool to confirm that Forter has received the data as expected for that scenario. Note that the dropdown list will only include requests to the endpoint relevant to the specific test.

When you select an API request from the list, its payload will show and the relevant parameters will be flagged, so you can review them before running the test. The flag does not determine whether the test will pass or fail, but serves to highlight the relevant fields.

Running a test will assess the selected API request to evaluate the attributes and values described in the first step. The example above would fail the test for an order placed from Android, but would pass the test for web.

If you receive an error and are testing in real time, it may take a few minutes for our system to fully process the new request. We recommend you try the same request again in a few minutes. You can re-test as many times as you like by selecting a new API request to run, and review your previous test history in the scenario. Because the test only checks specific fields, we recommend also reviewing the API request log in case there are any missing data points not checked by the test.

Confirm your response handling

Additionally, you can use this process for end-to-end testing. If a test passes on Forter's side, this is a good time to confirm that your system has also received Forter's response and handled it accordingly. If you are unable to pass any of the tests, contact your Forter Implementation Engineer for help troubleshooting.

Data validation in production

Once you have completed development and integration testing in sandbox, you are invited to deploy to production. Ensure that your production site is using your Forter production credentials (https://docs.forter.com/reference/environments#x-fxu) in both your backend API requests and your frontend JavaScript and mobile SDKs. Please note that deploying is not the end of the integration process, as you will still need to complete the remaining QA steps on this page. Until then, all of your API responses will return a decision of "not reviewed".

Forter's data validation tool executes a set of automated tests to check the accuracy and completeness of the production data received, and produces a report. You must be logged into the production environment of Forter Portal to view these reports.

Daily and manual reports

When you first deploy, data validation reports will run daily at midnight GMT over live traffic from the previous 24 hours, with a minimum of 30 transactions. You may also run additional reports to check data after a code push or over a larger timeframe. Upon reviewing a report, you can use the sort and filter options in the table to focus on particular areas of your integration. You can learn more about the test types, severity levels, and statuses in the glossary.

Review failed test details

We recommend that you prioritize addressing failed tests according to the severity level. Upon clicking into the test results, you'll find more detail, including the required pass rate or expected values, and a suggested action. Use the data visualization and the list of examples to identify potential patterns and determine whether any changes are required to your integration to address the detected issue. If you apply and deploy a code change to fix an issue, wait for enough live orders to come through before running a new report to confirm the issue is resolved.

Exemptions

If a data validation test is not relevant to your particular integration, you may request an exemption within the tool for your Forter Implementation Engineer to review. Pending exemptions are marked with a comment flag, and still show the test status until the exemption is approved.

Data validation testing is considered complete when there are no failed results in a report. Any tests where the result is inconclusive due to a lack of sufficient data should be assessed for risk with your Forter Implementation Engineer.

Listen mode

Once you have passed all of the relevant data validation tests, your Implementation Engineer will confirm the transition into "listen mode", which can be expected to take up to two (2) weeks. During this time, Forter will begin to simulate decisions, evaluate performance, and adjust our analytical model to ensure optimal outcomes for the business objectives you've shared with us. Stay in close touch with your Implementation Engineer during this period to align on a launch date once we have completed "listen mode".
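To make the response-handling step above concrete, here is a minimal sketch of backend logic that routes an order on Forter's decision. Only the "not reviewed" decision is confirmed by this page; the response shape, the `action` field name, and the "approve"/"decline" values are illustrative assumptions, not Forter's documented schema — check the API reference for the actual contract.

```python
import json

# Hypothetical decision values; "not reviewed" is mentioned on this page,
# the others are illustrative assumptions about the response shape.
APPROVE, DECLINE, NOT_REVIEWED = "approve", "decline", "not reviewed"

def handle_forter_response(status_code: int, body: str) -> str:
    """Route an order based on a (hypothetical) Forter decision response.

    Note: a 200 OK is returned even when some required fields are missing,
    so a 200 status alone does not prove the payload was complete --
    missing data must be caught via the API request logs instead.
    """
    if status_code != 200:
        # Transport or server error: fall back to your own review flow.
        return "fallback_review"

    decision = json.loads(body).get("action", NOT_REVIEWED)
    if decision == APPROVE:
        return "fulfill_order"
    if decision == DECLINE:
        return "cancel_order"
    # "not reviewed" is returned until QA is complete; handle it explicitly
    # rather than treating it as an approval.
    return "fallback_review"
```

The key design point is that "not reviewed" gets its own branch: during the QA stages described above, every response carries this decision, and a system that maps unknown decisions to approval would silently ship unscreened orders.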
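The credential requirement in the production-deployment step can be sketched as a small configuration helper. The environment-variable names and the deployment flag below are illustrative assumptions about your own stack, not Forter's mechanism; the point is simply that sandbox and production credentials are selected in one place and never mixed.

```python
import os

def forter_credentials() -> dict:
    """Pick the Forter site key for the current deployment environment.

    APP_ENV, FORTER_PROD_SITE_KEY, and FORTER_SANDBOX_SITE_KEY are
    hypothetical names -- use whatever secret store your stack provides.
    """
    env = os.environ.get("APP_ENV", "sandbox")
    if env == "production":
        key = os.environ["FORTER_PROD_SITE_KEY"]
    else:
        key = os.environ["FORTER_SANDBOX_SITE_KEY"]
    # The same choice must be applied consistently to backend API requests
    # and to the frontend JavaScript / mobile SDK configuration.
    return {"env": env, "site_key": key}
```

Centralizing the lookup this way makes the QA steps above easier to verify: if the helper says "production", every request in your API request logs should be hitting your production credentials, and vice versa.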