Offer Allocation Ingestion
Creating large volumes of accounts in wallets is very time-consuming, and would otherwise have to be orchestrated and managed outside of the AIR platform by clients. Therefore, we have implemented the ability to ingest a large number of accounts in one go and have those accounts automatically added to the required wallets.
This process is managed by a file ingestion process. Clients send Eagle Eye a CSV formatted file containing the details of which offers to allocate to which customers (details of the file format are outlined below). This file is then picked up and processed by the Eagle Eye ingestion engine. Once processing is complete, an output file is created which outlines the success or failure of each record in the file.
File Format
The table below outlines the CSV file structure required for this process.
| Name | Description | Example |
|---|---|---|
| subject | The target wallet for the offer. This can be either a walletId or an identity value. Identity values can be any alphanumeric value, whereas walletIds are always numeric. | walletId:1234 OR identity:A123456 |
| target | The target offer to be added to a customer's wallet. This can be either a campaignId or a campaignReference. | campaignId:1234 |
| data | Any associated data to include in the account creation. This field is JSON formatted. Any fields which can be passed to the campaign account creation can be included in this field, as documented here: https://developer.eagleeye.com/reference/createwalletcampaignaccount | {"dates":{"start":"2020-04-17T08:00:00+00:00","end":"2020-04-23T23:59:59+00:00"},"status":"ACTIVE","state":"LOADED","overrides":{"key1":"value"},"meta":{"foo":"bar"}} |
File Encoding
All files must be encoded with Linux line endings (LF) rather than Windows line endings (CRLF).
Field Encapsulation and Escaping
The data field is JSON-encoded and therefore needs to be encapsulated inside double quotes. It is advised to wrap all fields in double quotes. Because of this, any double quotes inside a field need to be escaped themselves, by adding a \ character before each " character.
An example of this is shown below in the sample file format.
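As an illustration only, these encapsulation and escaping rules can be reproduced with Python's standard csv module; write_allocation_rows is a hypothetical helper name, not part of the Eagle Eye platform:

```python
import csv
import io
import json

def write_allocation_rows(rows):
    """Render (subject, target, data-dict) rows in the documented CSV format."""
    buf = io.StringIO()
    writer = csv.writer(
        buf,
        quoting=csv.QUOTE_ALL,  # wrap every field in double quotes
        doublequote=False,      # do not use the default "" quote doubling
        escapechar="\\",        # escape embedded quotes as \" instead
        lineterminator="\n",    # Linux (LF) line endings, not Windows CRLF
    )
    writer.writerow(["subject", "target", "data"])
    for subject, target, data in rows:
        # Compact JSON for the data column; its quotes get backslash-escaped
        writer.writerow([subject, target, json.dumps(data, separators=(",", ":"))])
    return buf.getvalue()

csv_text = write_allocation_rows([
    ("walletId:1234", "campaignId:1234", {"status": "ACTIVE", "state": "LOADED"}),
])
print(csv_text)
```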
File Naming
There is no restriction on the file name for this job other than a file extension of .csv. It is advised, however, to include some data points in the file name for easy identification and troubleshooting. For example, a file name structure similar to the one below is recommended:
YYYYMMDDHHMMSS_offer_allocation.csv
This format means files are ordered by date/time when listed in an SFTP client, and it's clear the file is an offer allocation file. A random element can be added to the file name if required.
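One possible way to build the recommended file name, sketched in Python (allocation_file_name is a hypothetical helper, not part of the platform):

```python
from datetime import datetime, timezone

def allocation_file_name(now=None):
    """Build a YYYYMMDDHHMMSS_offer_allocation.csv file name."""
    now = now or datetime.now(timezone.utc)
    return now.strftime("%Y%m%d%H%M%S") + "_offer_allocation.csv"

print(allocation_file_name())  # e.g. 20200417080000_offer_allocation.csv
```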
Record Ordering
The ingestion process takes advantage of batching records. As such the job is faster and more efficient if the input file is sorted by the subject column. For example, all the offers for a wallet with an ID of 1234 should be placed together in the file.
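For example, the input rows could be sorted by the subject column before the file is written (assuming, for illustration, that rows are held as (subject, target, data) tuples):

```python
# Sort input rows by the subject column so that all offers for the same
# wallet or identity sit together in the file, allowing the job to batch them.
rows = [
    ("walletId:5678", "campaignId:1", "{}"),
    ("walletId:1234", "campaignId:2", "{}"),
    ("walletId:1234", "campaignId:3", "{}"),
]
rows.sort(key=lambda row: row[0])  # group rows sharing a subject
```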
Example File
"subject","target","data"
"walletId:1234","campaignId:1234","{\"dates\":{\"start\":\"2020-04-17T08:00:00+00:00\",\"end\":\"2020-04-23T23:59:59+00:00\"},\"status\":\"ACTIVE\",\"state\":\"LOADED\",\"overrides\":{\"key1\":\"value\"},\"meta\":{\"foo\":\"bar\"}}"
"identity:A123456","campaignId:1234","{\"dates\":{\"start\":\"2020-04-17T08:00:00+00:00\",\"end\":\"2020-04-23T23:59:59+00:00\"},\"status\":\"ACTIVE\",\"state\":\"LOADED\",\"overrides\":{\"key1\":\"value\"},\"meta\":{\"foo\":\"bar\"}}"
Items to note in this file:
Items to note in this file:
- It's possible to have some rows using `walletId` and some using `identity`. It's advised, however, to be consistent and only use one.
- All fields are encapsulated in `"` characters.
- The `"` characters inside the JSON `data` field are escaped with a `\` character.
SFTP Folder Setup
Your Eagle Eye onboarding team will set up the SFTP folders for you during the onboarding process. The folders created for this job are as below:
/offer-allocation/outbound/
/offer-allocation/inbound/
/offer-allocation/processed/
/offer-allocation/error/
When you wish to ingest a file, place it in the inbound folder. The first thing the job does is move the file into the processed directory before downloading it. This prevents duplicate processing and race conditions.
When the job is completed, an output file will be placed in the outbound directory. This file contains all the same fields as the input file but with four extra columns: a success field (Y/N), an errorCode field holding the error code returned from the API request, an errorMessage field holding the error message from the API, and an accountId column holding the accountId of the created account. If there is any error in ingesting a record, its success column will have a value of N and the errorMessage column will contain the details of why the record failed.
Example output file
"subject","target","data","success","errorCode","errorMessage","accountId"
"walletId:1234","campaignId:1234","{\"dates\":{\"start\":\"2020-04-17T08:00:00+00:00\",\"end\":\"2020-04-23T23:59:59+00:00\"},\"status\":\"ACTIVE\",\"state\":\"LOADED\",\"overrides\":{\"key1\":\"value\"},\"meta\":{\"foo\":\"bar\"}}","Y",,"123456789"
"identity:A123456","campaignId:1234","{\"dates\":{\"start\":\"2020-04-17T08:00:00+00:00\",\"end\":\"2020-04-23T23:59:59+00:00\"},\"status\":\"ACTIVE\",\"state\":\"LOADED\",\"overrides\":{\"key1\":\"value\"},\"meta\":{\"foo\":\"bar\"}}","N","IR","Campaign not found",
Ingestion Speed and Timing
As the AIR platform is multi-tenanted, your onboarding team will work with you to find a time at which the file processing can run uninterrupted and complete at a time of day that meets your business needs.
This ingestion process is built for scale and can handle files with hundreds of millions of records. It's worth noting that the larger the file, the longer the job will take to complete. As such, it's advised to work backwards from the time at which the offers need to be live in the target wallets to a start time for the ingestion, leaving some buffer for handling any issues.
