Job Details

The `job_details.json` object describes metadata about a job. Each job lifecycle webhook contains the corresponding job details for that stage.

| Key | Description |
| --- | --- |
| `job_id` | A convenience ID to track jobs. While generally unique, idempotency is NOT enforced. |
| `env_id` | The hotglue environment ID. |
| `flow_id` | The flow ID, found in the General or Settings tab of your flow. |
| `job_name` | An optional identifier passed when triggering jobs via the API. Job names are auto-assigned when jobs are triggered by webhooks or the scheduler. |
| `tenant` | The tenant ID, usually mapped 1:1 with your internal ID for your customer. |
| `started_by` | Usually the same as the tenant ID. |
| `s3_root` | A unique path identifying the job. The S3 root is ALWAYS unique for a given job. |
| `start_time` | The UTC datetime at which the task began running. |
| `state` | An optional JSON object that holds webhook payloads, API write job payloads, and other data for use in targets and ETL scripts. |
| `tap` | The data source of the job. |
| `target` | The data target of the job. |
| `connector_id` | In V2 flows, the connector linked by the user. This may function as a source or a target. |
| `job_type` | In V2 flows, whether the job is of type `READ` or `WRITE`. |
| `status` | The lifecycle status of the job. |
| `scheduled_job` | A boolean denoting whether the job was kicked off by hotglue's scheduler. |
| `sync_type` | Jobs that run with a `state.json` are denoted `incremental_sync`; all other jobs are `full_sync`. |
| `override_start_date` | An optional datetime passed to the job that overrides the `start_date` and `state.json`. This date is not persisted beyond the given job. |
| `message` | The error or success message for a given job. For failing jobs, this is hotglue's best prediction of the cause of the failure, though it may not always provide full context. |
| `task_id` | An internal hotglue ID used for debugging. |
| `task_type` | Denotes whether the job runs on hotglue's `STANDARD`, `LOW_LATENCY`, or `STATIC_IP` infrastructure. |
| `task_definition` | An object with metrics defining the memory, CPU, and server location a job runs on. |
| `last_updated` | The last registered job event, in UTC. For completed jobs, this is the time of completion. |
| `metrics` | `recordCount` shows the number of records fetched for each stream.<br>`exportSummary` is displayed when using a target built on hotglue's SDK; it shows each stream written to a hotglue target, with the number of records in each state: `success`, `fail`, `existing`, `updated`.<br>`exportDetails` is a more detailed view of the export summary, including the `id` of the created record in the external system and, if passed, the `externalId` that you send with each record. |
| `error` | An object containing the error message. |
| `data_sizes` | The size of job files, in megabytes. |
| `resources_usage` | The amount of memory and CPU used. This can offer clues if your ETL script has memory leaks or inefficiencies. |
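As a sketch of how a webhook consumer might read these fields, here is a minimal Python example over a hypothetical `job_details` payload. The field names come from the table above, but every value (including the `status` string `JOB_COMPLETED` and the helper `summarize`) is invented for illustration and may not match what hotglue actually sends:

```python
import json

# A hypothetical job_details payload; values are invented for illustration.
job_details = json.loads("""
{
    "job_id": "abc123",
    "flow_id": "flow-1",
    "tenant": "customer-42",
    "status": "JOB_COMPLETED",
    "sync_type": "incremental_sync",
    "metrics": {
        "recordCount": {"contacts": 120, "invoices": 35},
        "exportSummary": {
            "contacts": {"success": 118, "fail": 2, "existing": 0, "updated": 0}
        }
    }
}
""")

def summarize(details: dict) -> str:
    """Build a one-line summary from a job_details object."""
    # metrics.recordCount maps each stream name to the number of records fetched.
    fetched = sum(details.get("metrics", {}).get("recordCount", {}).values())
    return (f"job {details['job_id']} ({details['sync_type']}) "
            f"for tenant {details['tenant']}: {details['status']}, "
            f"{fetched} records fetched")

print(summarize(job_details))
# → job abc123 (incremental_sync) for tenant customer-42: JOB_COMPLETED, 155 records fetched
```

Because `metrics` is only populated with `exportSummary`/`exportDetails` for targets built on hotglue's SDK, a real handler should treat those keys as optional, as the `.get(..., {})` calls above do.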