The first step in loading your Google AdWords data into Redshift is to put it in a source Redshift can pull from. As mentioned earlier, three main data sources are supported: Amazon S3, Amazon DynamoDB, and Amazon Kinesis Firehose, with Firehose being the most recent addition for inserting data into Redshift.
To upload your data to Amazon S3, you will have to use the AWS REST API. APIs play an important role in both the extraction and the loading of data into our data warehouse. The first task you have to perform is to create a bucket, which you do by executing an HTTP PUT against the Amazon AWS REST API endpoints for S3. You can do this with a tool like cURL or Postman, or you can use the libraries provided by Amazon for your favorite language. You can find more information in the API reference for Bucket operations in the Amazon AWS documentation.
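If you prefer a library over raw HTTP calls, here is a minimal sketch using boto3, Amazon's Python SDK; the bucket name and region below are placeholders, not values from this guide:

```python
import boto3

# Create an S3 client; boto3 picks up credentials from the
# environment or from your AWS configuration files.
s3 = boto3.client("s3", region_name="us-east-1")

# Issue the PUT that creates the bucket. The name is a placeholder;
# bucket names must be globally unique.
s3.create_bucket(Bucket="my-adwords-exports")
```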
After you have created your bucket, you can start sending your data to Amazon S3, using the same AWS REST API but with the endpoints for Object operations. As with the Bucket operations, you can either access the HTTP endpoints directly or use your preferred library.
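Continuing the boto3 sketch from above, uploading an object is a single call; the file name, bucket, and key are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Upload a local file of exported AdWords data as an object;
# upload_file handles multipart uploads automatically for
# large files.
s3.upload_file(
    "adwords_report.csv",
    "my-adwords-exports",
    "adwords/adwords_report.csv",
)
```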
DynamoDB, in turn, imports its data from S3, so it adds another step between S3 and Amazon Redshift. If you don't need DynamoDB for other reasons, you can skip it.
Amazon Kinesis Firehose is the latest addition for inserting data into Redshift and offers a real-time, streaming approach to data loading. The necessary steps for adding data to Redshift through Kinesis Firehose are the following:
- create a delivery stream
- add data to the stream
Whenever you add new data to the stream, Kinesis Firehose delivers it to S3 or Redshift. Again, going through S3 in this case is redundant if your goal is to move your data to Redshift. Both steps can be performed either through the REST API or through your favorite library, just as in the previous two cases. The difference here is that you can also use the Kinesis Agent to push your data into the stream.
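As an illustration, here is a sketch of the second step with boto3; it assumes a delivery stream named adwords-stream has already been created (through the API or the AWS console) with Redshift as its destination, and the stream name and file are placeholders:

```python
import boto3

firehose = boto3.client("firehose")

# Push records into the delivery stream one line at a time.
# Firehose buffers them and loads them into the configured
# destination.
with open("adwords_report.csv", "rb") as f:
    for line in f:
        firehose.put_record(
            DeliveryStreamName="adwords-stream",
            Record={"Data": line},
        )
```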
Amazon Redshift itself supports two methods for loading data. The first is to invoke an INSERT command: you connect to your Amazon Redshift instance with your client, using either a JDBC or ODBC connection, and then execute an INSERT statement with your data:
```sql
insert into category_stage values
(12, 'Concerts', 'Comedy', 'All stand-up comedy performances');
```
You invoke the INSERT command just as you would with any other SQL database. For more information, you can check the INSERT examples page in the Amazon Redshift documentation.
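For example, from Python you could use psycopg2, a PostgreSQL driver that also works with Redshift, since Redshift speaks the PostgreSQL wire protocol; all connection details below are placeholders:

```python
import psycopg2

# Connect to the cluster endpoint; Redshift listens on port 5439
# by default. All credentials here are placeholders.
conn = psycopg2.connect(
    host="my-cluster.abc123xyz789.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="mydb",
    user="myuser",
    password="mypassword",
)

# Run the INSERT as a parameterized query; the with-block
# commits the transaction on success.
with conn, conn.cursor() as cur:
    cur.execute(
        "insert into category_stage values (%s, %s, %s, %s)",
        (12, "Concerts", "Comedy", "All stand-up comedy performances"),
    )
conn.close()
```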
Redshift, however, is not designed for INSERT-like operations; the most efficient way of loading data into it is with bulk uploads using the COPY command. You can run COPY against flat files that live on S3 or against an Amazon DynamoDB table. When you execute a COPY command, Redshift can read multiple files simultaneously, automatically distributing the workload to the cluster nodes and performing the load in parallel. COPY is quite a flexible command and allows for many different ways of using it, depending on your use case. Performing a COPY from Amazon S3 can be as simple as the following command, where the table name, bucket, and IAM role are placeholders:
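```sql
copy category
from 's3://my-adwords-exports/adwords/adwords_report.csv'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
csv;
```

Here Redshift assumes the given IAM role to read the file from S3; you can also pass credentials explicitly instead.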
For more examples of invoking the COPY command, you can check the COPY examples page in the Amazon Redshift documentation. As in the INSERT case, you perform the COPY command by connecting to your Amazon Redshift instance over a JDBC or ODBC connection and then issuing the commands you want, following the SQL Reference in the Amazon Redshift documentation.