Data warehouse integrations are available as a premium add-on for our Web Experimentation and Feature Experimentation modules. For more information, please contact your Customer Success Manager.
- Allows precise data collection, enhancing audience targeting for campaigns tailored to specific audience needs.
- Powers goal metrics to improve real-time performance tracking.
Considerations
Keep these things in mind when using this integration:
- Data volume: Consider the volume of data you plan to interact with, as it can affect query performance and costs.
- Query complexity: Complex queries may require more time and resources to execute. Optimize your queries for efficiency.
- Data privacy: Ensure compliance with data privacy regulations when handling user data within your warehouse.
- Access control: Implement proper access controls to limit who can configure and use the integration within your organization.
- Data schema: Maintain a clear and consistent data schema to facilitate data retrieval and analysis.
- Monitoring: Regularly monitor your data warehouse usage to manage costs and performance effectively.
- Documentation: Maintain documentation for queries, configurations, and integration processes to facilitate collaboration and troubleshooting.
Prerequisites
To configure this integration, you need the following:
- A Databricks personal access token (PAT)
- Permissions to create a Databricks schema and grant access to it
Setup
1. Create a personal access token (PAT)
Kameleoon authenticates to your Databricks SQL warehouse with a personal access token. You should create a Databricks service principal and then create a PAT for that service principal. Once the service principal is created, you can generate a PAT with the Databricks CLI, using the service principal's Application ID, which you can find on the Service Principal management page of the Databricks UI.
2. Create the kameleoon_configuration schema
When using Databricks as a source
Create a dedicated schema for the Kameleoon polling configuration within the catalog that contains the data Kameleoon will poll. This schema must be called kameleoon_configuration. You must also grant read and write access to the service principal that Kameleoon will be using. Here are some example commands:
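A sketch of what such commands might look like in Databricks SQL, assuming Unity Catalog and using my_catalog as a placeholder for your catalog name:

```sql
-- Create the dedicated configuration schema in the catalog Kameleoon will poll.
CREATE SCHEMA IF NOT EXISTS my_catalog.kameleoon_configuration;

-- Let the Kameleoon service principal reach the catalog and schema...
GRANT USE CATALOG ON CATALOG my_catalog TO `{Service Principal Application Id}`;
GRANT USE SCHEMA ON SCHEMA my_catalog.kameleoon_configuration TO `{Service Principal Application Id}`;

-- ...and create, read, and write tables inside it.
GRANT CREATE TABLE, SELECT, MODIFY ON SCHEMA my_catalog.kameleoon_configuration TO `{Service Principal Application Id}`;
```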
Replace {Service Principal Application Id} with your service principal's Application ID. The my_catalog prefix can be omitted when running queries directly in the relevant catalog.
3. Grant read access to your data
Kameleoon must have access to the tables you wish to read from or write into. When using Databricks as a source, this can be achieved with commands such as the following; the my_catalog prefix can be omitted when running queries directly in the relevant catalog.
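As an illustration only (my_catalog, my_schema, and my_table are hypothetical placeholders for your own objects), the read grants might look like:

```sql
-- Allow the Kameleoon service principal to reach the source catalog and schema...
GRANT USE CATALOG ON CATALOG my_catalog TO `{Service Principal Application Id}`;
GRANT USE SCHEMA ON SCHEMA my_catalog.my_schema TO `{Service Principal Application Id}`;

-- ...and read the table(s) it will poll.
GRANT SELECT ON TABLE my_catalog.my_schema.my_table TO `{Service Principal Application Id}`;
```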