Snowflake backend project errors

We are investigating connection errors in projects with a Snowflake backend. The errors are related to today's Snowflake maintenance; we are in contact with their support.

We will update this post when we have more information. Sorry for any inconvenience.


UPDATE 09:30 PM PDT Storage is fixed and failed orchestrations have been restarted. We are working on a fix for Snowflake transformations.

UPDATE 10:15 PM PDT Snowflake transformations are working again. Snowflake rolled back the release that caused the problems.

Waiting jobs

We are investigating a problem with orchestration jobs stuck in the waiting state.

We will update this post when we have more information.


UPDATE 12:02 AM PDT We have found and fixed the issue; waiting orchestrations are now starting.

UPDATE 12:26 AM PDT All waiting jobs have been processed. Everything should now be working normally. If you encounter any problems, please let us know.

Sorry for any inconvenience.

Orchestration manual run issue

UPDATE: This issue has been resolved, and you should no longer encounter the problem described below.

We have encountered a problem when running an orchestration manually from the orchestration detail page:

Error message detail: "Error. [Task 0] Job task is different from orchestration task"

We are working on a fix, which should be released shortly.

Temporary workaround: if you are experiencing this problem, you can fix it by re-saving the orchestration tasks (Edit Tasks -> Save).

All other orchestration features are working normally. We are sorry for any inconvenience.

Connection outage

We experienced a brief outage of the Keboola Connection application and API between 12:42 and 12:45 UTC. Some jobs may have failed with an application exception.

We're sorry for the inconvenience. If you are unsure what caused a failed job, please contact us at support@keboola.com.

Jobs outage

Jobs were inaccessible between 4:30am and 4:31am PST, which caused a few orchestrations to fail. These have been restarted.

UPDATE 6:22am PST

The issue reappeared between 5:48am and 6:09am PST. All failed orchestrations have been restarted.

We are sorry for any inconvenience.

Files storage errors

The AWS S3 service that powers file storage is reporting an increased error rate in our main region, causing some jobs to fail.

We hope for a quick fix; please bear with us. We'll post any updates here.

UPDATE 1:52 AM PDT The AWS team has identified the root cause of the elevated error rates and is actively working on recovery.

UPDATE 3:46 AM PDT The error has been corrected and the service is operating normally.

UPDATE 12:26 PM PDT Unfortunately, the problems have returned. We are waiting for a response from AWS.

UPDATE 12:28 PM PDT The issue has been resolved and the service is operating normally.