The first step to sync data from Xero to Redshift is to put it in a source that Redshift can pull from. As mentioned earlier, there are three main data sources supported: Amazon S3, Amazon DynamoDB, and Amazon Kinesis Firehose, with Firehose being the most recent addition as a way to insert data into Redshift.
To upload data to Amazon S3 you will have to use the AWS REST API; as we see, APIs play an important role in both the extraction and the loading of data into our data warehouse. The first task you have to perform is to create a bucket, which you do by executing an HTTP PUT against the AWS REST API endpoints for S3. You can do this with a tool like cURL, or with the libraries Amazon provides for your favorite language. You can find more information in the API reference for Bucket operations in the Amazon AWS documentation.
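As a minimal sketch of that first task using boto3, Amazon's Python SDK; the region and bucket name below are placeholders you would replace with your own:

```python
import boto3

# Region and bucket name are placeholders; use your own values.
s3 = boto3.client("s3", region_name="us-east-1")
s3.create_bucket(Bucket="xero-redshift-staging")
```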
After you have created your bucket you can start sending data to Amazon S3, again using the same AWS REST API, but this time through the endpoints for Object operations. As in the Bucket case, you can either access the HTTP endpoints directly or use the library of your preference.
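Continuing the boto3 sketch, uploading an object is a single PUT; the bucket, key, and the Xero record below are all placeholder values:

```python
import json
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# A single Xero record serialized as JSON; all names here are placeholders.
record = {"InvoiceID": "INV-001", "Total": 125.0}
s3.put_object(
    Bucket="xero-redshift-staging",
    Key="invoices/INV-001.json",
    Body=json.dumps(record),
)
```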
DynamoDB, in turn, imports data from S3, so it adds another step between S3 and Amazon Redshift; if you don't need it for other reasons, you can skip it.
Firehose is the latest addition as a way to insert data into Redshift and offers a real-time streaming approach to data importing. The necessary steps for adding data to Redshift through Kinesis Firehose are the following:
- create a delivery stream
- add data to the stream
Whenever you add new data to the stream, Kinesis takes care of delivering it to S3 or Redshift; again, going through S3 is redundant if your goal is simply to move your data to Redshift. Both steps can be performed either through the REST API or through your favorite library, just as in the previous cases. The difference here is that, for pushing data into the stream, you can also use the Kinesis Agent, a stand-alone application that monitors files and sends new data to the stream.
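As a rough sketch of the API route (rather than the Kinesis Agent), the two steps could look like this with boto3; the stream name, role ARN, and bucket ARN are placeholders:

```python
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Step 1: create a delivery stream that lands data in S3
# (the role and bucket ARNs are placeholders).
firehose.create_delivery_stream(
    DeliveryStreamName="xero-to-redshift",
    S3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::xero-redshift-staging",
    },
)

# Step 2: push a record into the stream; Firehose buffers it
# and delivers it to the configured destination.
firehose.put_record(
    DeliveryStreamName="xero-to-redshift",
    Record={"Data": json.dumps({"InvoiceID": "INV-001", "Total": 125.0}) + "\n"},
)
```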
Amazon Redshift supports two methods for loading data into it. The first one is by invoking an INSERT command.
You invoke the INSERT command just as you would with any other SQL database; for more information you can check the INSERT examples page in the Amazon Redshift documentation.
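Since Redshift speaks the PostgreSQL protocol, one way to do this from Python is with psycopg2; everything below (host, credentials, table, and columns) is a placeholder:

```python
import psycopg2

# Connection details, table, and columns are placeholders.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="awsuser",
    password="secret",
)
with conn, conn.cursor() as cur:
    cur.execute(
        "INSERT INTO invoices (invoice_id, total) VALUES (%s, %s)",
        ("INV-001", 125.0),
    )
```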
Redshift is not designed for INSERT-like operations; on the contrary, the most efficient way of loading data into it is by doing bulk uploads with the COPY command. You can run a COPY command on data that lives as flat files on S3 or on data from a DynamoDB table.
When you perform COPY commands, Redshift is able to read multiple files simultaneously; it automatically distributes the workload across the cluster nodes and performs the load in parallel. As a command, COPY is quite flexible and allows many different ways of using it, depending on your use case. Performing a COPY from Amazon S3 can be as simple as a single command.
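As a minimal sketch, again issued through psycopg2, and with the table name, S3 path, and IAM role as placeholders:

```python
import psycopg2

# Connection details, table, S3 path, and IAM role are placeholders.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="awsuser",
    password="secret",
)
copy_sql = """
    COPY invoices
    FROM 's3://xero-redshift-staging/invoices/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS JSON 'auto';
"""
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
```

Redshift reads every object under the given S3 prefix, so a single COPY loads all the files you have staged in the bucket.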
For more examples of how to invoke the COPY command, you can check the COPY examples page in the Amazon Redshift documentation. As in the INSERT case, you perform the COPY command by connecting to your Amazon Redshift instance and then invoking the commands you want, following the SQL Reference from the Amazon Redshift documentation.