Basic Settings

Force full sync: This will run full historical sync jobs every time, including records that have already been synced.
Force discover: This will refresh the field map before every sync job. It makes jobs run longer, but is useful when each user has different fields.
Automatically trigger a job when an integration is linked: This will run an initial full sync job as soon as your customer links an integration.
Trigger a discover when an integration is linked: This will refresh the field map after a user links. It is especially useful if you want your users to map fields before running the first job.
Automatically rollback job on failure: When a critical error is detected, this setting will automatically roll back the entire job.
Automatically enable default sync schedule: This will create a sync schedule every time a user links a new source. It only applies to sources synced after the setting is enabled. Learn more about default sync schedules here.
Save snapshots even when jobs fail: By default, snapshots only persist if a job succeeds. Turning this on allows snapshots to be updated whether or not the associated job succeeds. Learn more about snapshots here.

Datadog integration

You can configure hotglue to push job events (successes and failures) directly to your Datadog event stream, so you can maintain observability in one platform. To configure this, head to your Environment Job Settings and set your Datadog Region and API Key.

Once these settings are saved, hotglue will automatically begin pushing job events to your Datadog account!


Datadog Settings
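Once the integration is configured, hotglue handles the pushing for you. As a rough illustration of the shape such an event might take, the sketch below posts a job event to Datadog's v1 events endpoint. The field names (`flow`, `job_id`, `status`) and tag scheme are assumptions for illustration, not hotglue's actual event schema:

```python
import json
import urllib.request

DD_API_KEY = "<your Datadog API key>"  # from your Datadog org settings
DD_SITE = "datadoghq.com"              # match the region configured in hotglue

def build_job_event(flow, job_id, status):
    """Shape a job result as a Datadog event payload (hypothetical fields)."""
    return {
        "title": f"hotglue job {status}",
        "text": f"Flow {flow}, job {job_id} finished with status: {status}",
        "alert_type": "success" if status == "completed" else "error",
        "tags": [f"flow:{flow}", f"job:{job_id}", "source:hotglue"],
    }

def post_event(event):
    """POST the event to the Datadog Events API (v1)."""
    req = urllib.request.Request(
        f"https://api.{DD_SITE}/api/v1/events",
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json", "DD-API-KEY": DD_API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

event = build_job_event("salesforce-sync", "job_123", "failed")
# post_event(event)  # requires a valid API key
```

Events like these can then be filtered in Datadog by the tags you attach, e.g. `source:hotglue`.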

Slack integration

For a simpler notification system, you can configure a free Slack app to get notified about failed jobs.

  1. Create a free Slack app with an incoming webhook.
  2. Paste your new incoming webhook URL into hotglue. This URL will look something like this: https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXX
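hotglue posts to the webhook for you once the URL is saved, but if you want to test the webhook before saving it, a minimal sketch follows. The message wording and the `flow`/`job_id` names are my own, not hotglue's notification format:

```python
import json
import urllib.request

# Replace with the incoming webhook URL generated for your Slack app
WEBHOOK_URL = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXX"

def build_payload(flow, job_id, error):
    """Build the minimal message body Slack incoming webhooks accept."""
    return {"text": f"hotglue job {job_id} on flow {flow} failed: {error}"}

def notify_failure(flow, job_id, error):
    """POST a failed-job notification to the webhook; Slack replies 200 'ok'."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(build_payload(flow, job_id, error)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (requires a real webhook URL):
# notify_failure("salesforce-sync", "job_123", "rate limit exceeded")
```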

Custom storage

In certain cases you may want to snapshot or cache the data that hotglue processes during a specific sync job. By default this is done using hotglue’s infrastructure, but you can configure this to be done using your own AWS S3 buckets.

To store either the snapshot or cache data in your own S3 bucket, you need to:

  1. Create an S3 bucket that hotglue will use during jobs
  2. Create an IAM user for hotglue to access this S3 bucket with programmatic credentials (Access Key Id / Secret Access Key pair)
  3. Create an IAM permission policy and attach it to the IAM user. The policy should grant hotglue read, write, and list access to the bucket. For example (the action list below is a typical read/write set; adjust it to your needs):

	{
		"Version": "2012-10-17",
		"Statement": [
			{
				"Sid": "VisualEditor0",
				"Effect": "Allow",
				"Action": [
					"s3:ListBucket",
					"s3:GetObject",
					"s3:PutObject",
					"s3:DeleteObject"
				],
				"Resource": [
					"arn:aws:s3:::<bucket name>",
					"arn:aws:s3:::<bucket name>/*"
				]
			}
		]
	}

  4. Finally, you can save the bucket name, access key id, and secret access key to your hotglue settings, as pictured below:

hotglue job storage settings

hotglue job custom storage settings
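If you prefer to generate the policy from Step 3 rather than hand-edit JSON, a small script keeps the bucket ARN and the object ARN consistent. The action list here is an assumption (a typical S3 read/write set), so substitute whatever permissions your setup requires:

```python
import json

BUCKET = "my-hotglue-bucket"  # substitute your bucket name

# Assemble the Step 3 policy so both Resource ARNs stay in sync with BUCKET.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
            ],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",       # bucket-level actions (ListBucket)
                f"arn:aws:s3:::{BUCKET}/*",     # object-level actions (Get/Put/Delete)
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))  # paste the output into the IAM console
```

Note that `s3:ListBucket` applies to the bucket ARN itself, while the object actions apply to the `/*` ARN, which is why both forms appear in `Resource`.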