Stalled jobs in US stack

We have been seeing stalled jobs on the US stack since approximately 16:20 UTC (Nov 23). They are caused by the inaccessibility of Snowflake services in the region.

Update 17:15 UTC: One incident is marked as resolved on Snowflake's side, but another one is ongoing. Most jobs in Keboola Connection should gradually return to their usual run times, though some customers using BYODB Snowflake might still see slower job run times. We keep monitoring the situation.

Update 17:55 UTC: The Snowflake incident is still ongoing. Some customers using their own Snowflake (BYODB) in the affected region may still see jobs executing slowly, being stalled in the processing or waiting phase, or occasionally failing with a message similar to "Load error: An exception occurred while executing a query: SQL execution internal error: Processing aborted due to error 300004:867554464; incident 3178266.".

Update 19:20 UTC: The Snowflake incident is still ongoing. 

Update 20:25 UTC: The Snowflake incident is still ongoing. 

Update 21:50 UTC: The Snowflake incident is still ongoing, but we are seeing a significant improvement in query processing. Jobs in the affected projects should be gradually returning to their usual processing times.

Update 23:00 UTC: The Snowflake incident is marked as resolved. According to our monitoring, jobs and queries are running as usual again, so we consider the incident resolved. We'll keep monitoring the platform closely for any irregularities.

Transformation details not displayed

2022-11-21 19:32 UTC - Transformations using code patterns are showing blank details in the UI. We have found the root cause, and a fix should be deployed within two hours.

2022-11-21 20:26 UTC - We have deployed a fix. All transformation details are now accessible from the UI.

MongoDB Extractor failures

Today, 2022-11-19, we are experiencing failures of the latest MongoDB extractor release (deployed 2022-11-18 13:40 UTC) when extracting larger amounts of data. We have rolled back to the previously working version and are now monitoring the status.

UPDATE: 2022-11-19 22:00 UTC - We discovered that the rollback had not deployed the correct version. We fixed the deployment and rolled back again; the previously working version is now deployed and running.

UPDATE: 2022-11-19 23:48 UTC - We have verified that the rollback was successful, and previously affected jobs are now running successfully.

Dynamic backends do not respect selected backend size

2022-11-14 07:03 UTC - Transformations that support dynamic backends, such as Python and Snowflake transformations, have not been respecting the selected backend size when triggered by an orchestration and have been running on the default small backend since Friday, Nov 09. We are working on the issue and will provide an update shortly.

2022-11-14 07:51 UTC - We have rolled back the release. All new transformation jobs now start with the correct backend size. Everything is running as expected and fully operational now.

Problem on US stack

2022-11-01 21:01 UTC - We are investigating a problem on the US stack that can cause tasks to get stuck. Next update in 30 minutes or when new information is available.

2022-11-01 21:30 UTC - We have not yet found the cause of the problem. We are continuing to investigate and will let you know when we have more information.

2022-11-02 06:06 UTC - We are still investigating issues with synchronous actions (a feature which allows, e.g., testing the credentials of a database data source) and with starting and stopping Python/R workspaces. Job processing is not affected at the moment.

2022-11-02 11:42 UTC - We are working with AWS support engineers to find the root cause of the issue. They have acknowledged it and engaged their team to resolve it.

2022-11-02 13:32 UTC - The AWS team resolved the issue on their side. Everything is running as expected and the platform is fully operational now. We are sorry for the inconvenience.

Stuck job processing

2022-10-31 11:47 UTC - We are investigating stuck job processing on multiple stacks. Next update in 30 minutes or when new information is available.

2022-10-31 12:14 UTC - The stuck jobs were caused by an earlier release. We rolled back the release, but that did not unblock the currently stuck jobs. We are working on a fix, which should be released within 90 minutes.

2022-10-31 14:23 UTC - We are fixing the stuck jobs stack by stack. All stacks should be fixed within 20 minutes.

2022-10-31 15:11 UTC - All jobs have been fixed and the platform has stabilized. Everything is running as expected and fully operational now. We are sorry for the inconvenience.

Job failures

Today at 10:26 UTC, the latest release of Job Runner introduced a bug that caused all component jobs to end with an application error.

We have now reverted to the previous version.

We are very sorry for any inconvenience this might have caused.

Stuck jobs in US stack

We are investigating jobs stuck since Oct 8, 1:13 UTC on the US stack.

Update Oct 8, 3:00 UTC - The stuck jobs have been unblocked. We continue to monitor the situation.

We are very sorry for any inconvenience.

Stuck jobs in AWS US and EU stack

14:08 UTC: We are investigating newly created jobs that are stuck before starting processing in the AWS US and EU stacks. Next update in 15 minutes.

14:23 UTC: We found the root cause and rolled back to the previous working version; newly created jobs should now process as expected. We are still working to push jobs created within the last hour into the processing state. Next update in 15 minutes.

14:50 UTC: We have released the stuck jobs in the US stack and continue to release the stuck jobs in the EU stack. Next update in 30 minutes.

15:15 UTC: All stuck jobs in the EU stack have been released. Everything is operational now and running as expected.