To upload your data to Amazon S3, you use the AWS REST API. The first task is to create a bucket, which you do by executing an HTTP PUT against the S3 API endpoint.
You can do this with a tool like cURL or Postman, or with the SDK libraries Amazon provides for your favorite language. For more information, see the reference for Bucket operations in the AWS documentation.
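As a minimal sketch, here is bucket creation using boto3, the AWS SDK for Python (the raw REST call works too, but requires Signature Version 4 request signing, which makes a hand-rolled cURL call tedious). The bucket name and region below are placeholders, and credentials are assumed to be configured in your environment:

```python
import boto3

# Create an S3 client; credentials are picked up from the environment
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY) or ~/.aws/credentials.
s3 = boto3.client("s3", region_name="eu-west-1")

# Issue the bucket-creation request (an HTTP PUT under the hood).
# Bucket names are globally unique, so "my-analytics-data" is a placeholder.
s3.create_bucket(
    Bucket="my-analytics-data",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
```

Note that for the us-east-1 region the `CreateBucketConfiguration` argument is omitted; everywhere else it is required.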
After you have created your bucket, you can start sending data to Amazon S3 using the same API, this time through the endpoints for Object operations. As with buckets, you can either call the HTTP endpoints directly or use your preferred library.
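A similar sketch for uploading an object, again with boto3; the local file name, bucket, and object key are placeholder assumptions:

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Upload a local file as an object; this maps to a PUT on the
# Object endpoint. Bucket and key names are placeholders.
s3.upload_file("events.csv", "my-analytics-data", "raw/events.csv")
```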
Amazon Redshift supports two methods for loading data. The first is to issue INSERT commands: you connect to your Redshift cluster with a client over a JDBC or ODBC connection and execute an INSERT statement for your data.
You invoke INSERT just as you would on any other SQL database; for more information, see the INSERT examples page in the Redshift documentation.
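As a sketch, because Redshift speaks the PostgreSQL wire protocol, a Python driver such as psycopg2 can stand in for the JDBC/ODBC connection; the cluster endpoint, credentials, and the `events` table here are placeholder assumptions:

```python
import psycopg2

# Connect over the PostgreSQL wire protocol that Redshift exposes;
# host, database, user, and password are placeholders.
conn = psycopg2.connect(
    host="my-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="admin",
    password="...",
)

with conn, conn.cursor() as cur:
    # A plain INSERT, exactly as on any other SQL database.
    cur.execute(
        "INSERT INTO events (user_id, event_type) VALUES (%s, %s)",
        (42, "page_view"),
    )
```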
Redshift, however, is not designed for row-by-row INSERT operations. The most efficient way to load data into it is to perform bulk uploads with the COPY command.
You can COPY data that lives as flat files on S3 or in an Amazon DynamoDB table. When you run a COPY command, Redshift can read multiple files simultaneously, automatically distributing the workload across the cluster nodes and performing the load in parallel.
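A minimal COPY sketch, issued through the same kind of connection as the INSERT example; the S3 path, the IAM role ARN used to authorize the read, and the CSV format are assumptions about how your data is laid out:

```python
import psycopg2

# Reuse the same placeholder connection details as in the INSERT sketch.
conn = psycopg2.connect(
    host="my-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="admin",
    password="...",
)

with conn, conn.cursor() as cur:
    # COPY pulls the flat files straight from S3; when the prefix
    # contains multiple files, Redshift loads them in parallel.
    # Bucket path and IAM role ARN are placeholders.
    cur.execute("""
        COPY events
        FROM 's3://my-analytics-data/raw/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole'
        FORMAT AS CSV;
    """)
```

Splitting your data into several files of roughly equal size lets every node slice participate in the load, which is what makes COPY so much faster than repeated INSERTs.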