Failed jobs on eu-central-1 stack (AWS EU)

We have discovered a problem on one of the servers running Queue jobs on the eu-central-1 stack (AWS EU). Jobs were terminated unexpectedly between 13:00 and 14:45 UTC. The problem has been resolved and all jobs should be running normally again. We are still looking for the root cause to prevent it from happening again in the future. We apologize for any inconvenience this may have caused.

Delayed jobs start in AWS EU

2022-12-21 14:05 UTC - We are investigating delayed job starts on the AWS EU Keboola Connection stack. Next update in 30 minutes.

2022-12-21 14:55 UTC - Services are stable now. Some jobs may have been delayed between 13:55 and 14:30 UTC. We are monitoring the situation and investigating the root cause. We apologize for any inconvenience this may have caused.

Failures of Google Drive Data source

We have seen an increase in the number of errors in the Google Drive data source since 15:00 UTC. We are currently investigating the issue and rolling back the recent release. 

Affected runs can be identified by this error message:

Unrecognized options "sheets, outputBucket" under "root.parameters". 

We will provide an update in 15 minutes.

UPDATE 15:27 UTC: The issue has been resolved, and the previous version is functioning as expected. If you have encountered this issue, please restart your jobs and flows. We apologize for any inconvenience this may have caused.

Delayed orchestrations on Azure North Europe stack

2022-11-29 17:55 UTC - We are investigating delayed orchestrations on the Azure North Europe Keboola Connection stack. Next update in 30 minutes.

Update 2022-11-29 18:40 UTC - We have deployed a fix, and the orchestration schedules will gradually catch up. Next update in 1 hour.

Update 2022-11-29 19:27 UTC - Orchestration schedules are now on time. The incident is now resolved, but we'll keep monitoring the situation. We apologize for the inconvenience.

Stuck jobs and unable to start workspaces in AWS EU

Nov 29 07:08 UTC - We are investigating multiple stuck jobs on the stack. Affected jobs became stuck around 03:00 UTC; other jobs are processing and starting without issues. Next update in 30 minutes or when new information is available.

Nov 29 08:02 UTC - We have unblocked the stuck jobs, and we no longer see jobs queueing. We are investigating the root cause and impact of the incident. Next update when new information is available.

Nov 29 08:35 UTC - We're still seeing further symptoms of the outage and are actively investigating.

Affected services are: 

  • Workspaces - partial outage (workspaces may have difficulties starting)

Nov 29 10:22 UTC - The platform is now fully operational. We're monitoring all systems closely.

We're sorry for the inconvenience. If you experienced any job failures, please run them again.

OAuth outage on

11-28 09:53 CET: We are facing an outage of the OAuth service for the stack. The problem is on our side, and we are currently working on a fix.

Update 09:30 CET: We have deployed the previous version and everything should now be working as expected.

We apologize for the inconvenience.

Output mapping not being able to find views on Snowflake

We are investigating errors in output mapping since 2022-11-25 12:48 UTC: it is unable to find views in the workspace schema on Snowflake. You may see an error similar to "Processing output mapping failed: Table "all_done" not found in schema "WORKSPACE_925215531"". We are rolling back to a previous version until we find a proper solution.

Update 2022-11-25 13:42 UTC: The rollback to the previous version has finished. Everything is running as expected and fully operational now. We are sorry for the inconvenience.

Stalled jobs in US stack

We are seeing stalled jobs since approximately 16:20 UTC (Nov 23) on the stack. These are caused by the inaccessibility of Snowflake services in the region.

Update 17:15 UTC: One incident has been marked as resolved on the Snowflake side, but another one is still ongoing. Most jobs in Keboola Connection should gradually return to their usual run times, though some customers using BYODB Snowflake might still see slower job run times. We keep monitoring the situation.

Update 17:55 UTC: The Snowflake incident is still ongoing. Some customers using their own Snowflake (BYODB) in the affected region may still see jobs executing slowly, stalling in the processing or waiting phase, or occasionally failing with a message similar to "Load error: An exception occurred while executing a query: SQL execution internal error: Processing aborted due to error 300004:867554464; incident 3178266.".

Update 19:20 UTC: The Snowflake incident is still ongoing. 

Update 20:25 UTC: The Snowflake incident is still ongoing. 

Update 21:50 UTC: The Snowflake incident is still ongoing, but we see strong improvement in query processing. Jobs in the affected projects should be gradually returning to the usual processing.

Update 23:00 UTC: The Snowflake incident has been marked as resolved. According to our monitoring, jobs and queries are already running as usual, so we consider the incident resolved. We'll keep monitoring the platform closely for any irregularities.

Transformation details not displayed

2022-11-21 19:32 UTC - Transformations using code patterns are showing blank details in the UI. We have found the root cause, and a fix should be deployed within two hours.

2022-11-21 20:26 UTC - We have deployed a fix. All transformation details are now accessible from the UI.