How to load data from Zendesk Chat to Snowflake
Access your data on Zendesk Chat
The first step in loading your Zendesk Chat data to any kind of data warehouse solution is to access your data and start extracting it.
Zendesk Chat offers a rich and well-defined API that belongs to the Representational State Transfer (REST) category. Using it, you can perform RESTful operations such as reading, modifying, adding, and deleting your helpdesk data, allowing you to programmatically interact with your account.
The provided resources include Accounts, Agents, Visitors, Chats, Shortcuts, Triggers, Bans, Departments, Goals, Skills, and Roles.
In addition to the above, there are a few things to keep in mind when dealing with the Zendesk Chat API:
- Rate limits. The API is rate limited, i.e. it only allows a certain number of requests per minute.
- Authentication. If the Zendesk Chat account was created in Zendesk Support, you must authenticate with an OAuth access token. If you are using a standalone Chat account, you can use either basic authentication or an OAuth access token.
- Paging. Endpoints that return large result sets paginate their responses, so you will need to handle paging and potentially large volumes of data (see the sketch below).
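To make the extraction step more concrete, here is a minimal sketch in Python, assuming an OAuth access token and the /chats endpoint. The base URL, pagination field, and rate-limit headers shown are assumptions, so check the Zendesk Chat API reference for your account type. The sketch pages through results and backs off when the API returns a 429 response.

```python
import time
import requests

# Placeholder values for illustration; replace with your own credentials.
ACCESS_TOKEN = "your-oauth-access-token"
BASE_URL = "https://www.zopim.com/api/v2"  # confirm the base URL for your account type


def fetch_chats():
    """Page through the /chats endpoint, backing off when rate limited."""
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    url = f"{BASE_URL}/chats"
    while url:
        response = requests.get(url, headers=headers)
        if response.status_code == 429:
            # Respect the rate limit: wait for the period the API suggests.
            retry_after = int(response.headers.get("Retry-After", 60))
            time.sleep(retry_after)
            continue
        response.raise_for_status()
        payload = response.json()
        for chat in payload.get("chats", []):
            yield chat
        # Pagination field name is an assumption; check the API docs for the exact scheme.
        url = payload.get("next_url")


if __name__ == "__main__":
    for chat in fetch_chats():
        print(chat["id"])
```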
Transform and prepare your Zendesk Chat data
After you have accessed your data on Zendesk Chat, you will have to transform it based on two main factors:
- The limitations of the database that the data will be loaded onto
- The type of analysis that you plan to perform
Each system has specific limitations on the data types and data structures that it supports. If, for example, you want to push data into Google BigQuery, you can send nested data like JSON directly.
Of course, when you are dealing with tabular data stores, like Microsoft SQL Server, this is not an option. Instead, you will have to flatten out nested structures like JSON before loading them into the database.
Also, you have to choose the right data types. Again, depending on the system you will send the data to and the data types the API exposes, you will have to make the right choices. These choices are important because they can limit the expressivity of your queries and restrict what your analysts can do directly out of the database. Zendesk Chat has a very limited set of available data types, which makes these mappings more straightforward, but they are just as important as with any other data source.
Due to the rich and complex data model that Zendesk Chat follows, some of the provided resources might have to be flattened out and pushed into more than one table.
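As a sketch of what such flattening might look like, the following Python snippet splits a hypothetical chat object into a row for a chats table and rows for a separate chat_messages table. The field names are illustrative, not the exact Zendesk Chat response structure.

```python
# A minimal flattening sketch. The field names below are assumptions; derive
# the actual structure from the Zendesk Chat API response you receive.
def flatten_chat(chat):
    """Split one nested chat record into rows for a parent and a child table."""
    visitor = chat.get("visitor", {})
    chat_row = {
        "id": chat["id"],
        "timestamp": chat.get("timestamp"),
        "duration": chat.get("duration"),
        # Nested visitor object flattened into prefixed columns.
        "visitor_id": visitor.get("id"),
        "visitor_name": visitor.get("name"),
    }
    # A nested list becomes rows in a separate table keyed by the chat id.
    message_rows = [
        {"chat_id": chat["id"], "sender": m.get("sender"), "message": m.get("message")}
        for m in chat.get("history", [])
    ]
    return chat_row, message_rows
```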
Data in Snowflake is organized around tables with a well-defined set of columns with each one having a specific data type.
Snowflake supports a rich set of data types, including a number of semi-structured data types. With Snowflake, it is possible to load data directly in JSON, Avro, ORC, Parquet, or XML format. Hierarchical data is treated as a first-class citizen, similar to what Google BigQuery offers.
There is also one notable common data type that Snowflake does not support: LOB, or large object. Instead, you should use a BINARY or VARCHAR type, although these types are not that useful for data warehouse use cases.
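As a small illustration of how Snowflake handles semi-structured data, the sketch below uses the snowflake-connector-python package (with placeholder connection details) to create a table with a VARIANT column and query a nested field with Snowflake's path notation. The table and field names are assumptions.

```python
import snowflake.connector

# Placeholder credentials for illustration only.
conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="your_wh", database="your_db", schema="public",
)
cur = conn.cursor()

# A VARIANT column can hold a whole semi-structured document, e.g. a chat JSON.
cur.execute("CREATE TABLE IF NOT EXISTS raw_chats (payload VARIANT)")

# Nested fields can then be queried directly with the colon path notation.
cur.execute("SELECT payload:visitor:name, payload:duration FROM raw_chats")
print(cur.fetchall())

cur.close()
conn.close()
```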
A typical strategy for loading data from Zendesk Chat to Snowflake is to create a schema where you will map each API endpoint to a table.
Each key inside the Zendesk Chat API endpoint response should be mapped to a column of that table, and you should ensure the right conversion to a Snowflake data type.
Of course, as the data types returned by the Zendesk Chat API might change, you will need to adapt your database tables accordingly; there’s no such thing as automatic data type casting.
After you have a complete and well-defined data model or schema for Snowflake, you can move forward and start loading your data into a database.
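A schema following this strategy could be defined with DDL like the sketch below. The table name, columns, and types are hypothetical and should be derived from the actual /chats response you get back from the API.

```python
import snowflake.connector

# Placeholder connection details; adjust to your environment.
conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="your_wh", database="your_db", schema="public",
)

# A hypothetical mapping of fields from the /chats endpoint to typed columns.
conn.cursor().execute("""
    CREATE TABLE IF NOT EXISTS chats (
        id            STRING,
        visitor_id    STRING,
        visitor_name  STRING,
        agent_ids     VARIANT,
        rating        STRING,
        duration      NUMBER,
        started_at    TIMESTAMP_NTZ
    )
""")
conn.close()
```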
Load data from Zendesk Chat to Snowflake
Usually, data is loaded into Snowflake in bulk, using the COPY INTO command. Files containing data, usually in JSON format, are stored in a local file system or in Amazon S3 buckets. Then the COPY INTO command is invoked on the Snowflake instance and the data is copied into the data warehouse.
The files can be pushed into a Snowflake staging area using the PUT command before the COPY INTO command is invoked.
Another alternative is to upload the data to a service like Amazon S3, from where Snowflake can access it directly.
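Putting the PUT and COPY INTO steps together, a minimal sketch might look like the following. It assumes a newline-delimited JSON file on the local file system and the raw_chats table from the earlier sketch, with a single VARIANT column; both are hypothetical.

```python
import snowflake.connector

# Placeholder connection details for illustration.
conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="your_wh", database="your_db", schema="public",
)
cur = conn.cursor()

# Upload a local newline-delimited JSON file to the table's internal stage...
cur.execute("PUT file:///tmp/chats.json @%raw_chats AUTO_COMPRESS=TRUE")

# ...then bulk load it. raw_chats is assumed to have a single VARIANT column.
cur.execute("""
    COPY INTO raw_chats
    FROM @%raw_chats
    FILE_FORMAT = (TYPE = 'JSON')
""")
conn.close()
```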
Updating your Zendesk Chat data on Snowflake
As you generate more data on Zendesk Chat, you will need to update your older data on Snowflake. This includes new records together with older records that for any reason have been updated on Zendesk Chat.
You will need to periodically check Zendesk Chat for new data and repeat the process described previously, updating your currently available data if needed. Updating an already existing row in a Snowflake table is achieved by issuing UPDATE statements.
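A minimal sketch of such an update, assuming the hypothetical chats table defined earlier, could look like this:

```python
import snowflake.connector

# Placeholder connection details; adjust to your environment.
conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="your_wh", database="your_db", schema="public",
)

# Hypothetical example: refresh a chat's rating and duration when the record
# has changed on Zendesk Chat. Values are bound as parameters.
conn.cursor().execute(
    "UPDATE chats SET rating = %s, duration = %s WHERE id = %s",
    ("good", 312, "chat-123"),
)
conn.close()
```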
Another issue that you need to take care of is the identification and removal of any duplicate records in your database. Either because Zendesk Chat does not have a mechanism to identify new and updated records or because of errors in your data pipelines, duplicate records might be introduced into your database.
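One common way to deduplicate in Snowflake is to keep only the latest row per business key. The sketch below assumes the hypothetical chats table and its id and started_at columns; adapt it to your own schema.

```python
import snowflake.connector

# Placeholder connection details; adjust to your environment.
conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="your_wh", database="your_db", schema="public",
)

# Rebuild the table keeping only the most recent row per id.
conn.cursor().execute("""
    CREATE OR REPLACE TABLE chats AS
    SELECT *
    FROM chats
    QUALIFY ROW_NUMBER() OVER (PARTITION BY id ORDER BY started_at DESC) = 1
""")
conn.close()
```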
In general, ensuring the quality of data that is inserted into your database is a big and difficult issue.
The best way to load data from Zendesk Chat to Snowflake
So far, we have just scratched the surface of what you can do with Snowflake and how to load data into it. Things can get even more complicated if you want to integrate data coming from different sources.
Are you striving to achieve results right now?
Instead of writing, hosting, and maintaining a flexible data infrastructure, use RudderStack, which can handle everything automatically for you.
RudderStack, with one click, integrates with sources and services, creates analytics-ready data, and syncs your Zendesk Chat data to Snowflake right away.