Azure and Snowflake in Azure regions are reporting general service disruptions. We are closely monitoring the situation; so far we have observed only a few symptoms of these issues, and platform operations have not been impacted. Please refer to the status pages of the affected services for more information.
We're sorry for this inconvenience.
2023-01-23 21:50 UTC We're investigating increased job wait times in the Azure North Europe stack (connection.north-europe.azure.keboola.com). Next update in 15 minutes or when new information is available.
2023-01-23 22:10 UTC The root cause was fixed and all operations are back to normal.
UPDATE 12:55 UTC: We have identified the problem and rolled back to the previous version of our service.
UPDATE 13:05 UTC: All services are now operating normally.
We're investigating an increased error rate in the AWS US stack (connection.keboola.com). Next update in 15 minutes or when new information is available.
UPDATE 04:40 UTC: We have identified and replaced a number of corrupted nodes with healthy ones, and operations are now back to normal. We apologize for the inconvenience caused.
UPDATE 05:40 UTC: This issue appears to be ongoing, and a new symptom has been identified: jobs are taking longer to start than usual, or are getting stuck in a waiting state. The next update will be in 30 minutes.
UPDATE 06:25 UTC: We're still investigating the issue. Next update in 30 minutes.
UPDATE 07:15 UTC: We have found the root cause and we're fixing it.
UPDATE 08:44 UTC: The root cause was fixed and all operations are back to normal.
2023-01-21 13:05 UTC - We have identified delayed job processing on the Azure North Europe Keboola Connection stack (https://connection.north-europe.azure.keboola.com) since 08:30 UTC. We have restarted the affected services and all operations are back to normal. We apologize for any inconvenience this may have caused.
2023-01-13 22:45 UTC We have identified an issue with the legacy queue system. Specifically, during Snowflake transformation, the incremental output mapping could ignore filters configured in the "Delete Rows" process, resulting in all rows in the target table being deleted.
The problem began with a release that took place today at 9:30 UTC. At 22:15 UTC we rolled back to a previous version, which has resolved the issue for the time being.
We are still investigating the root cause of the problem and apologize for any inconvenience this may have caused.
2023-01-18 8:35 UTC We found the root cause of the problem and deployed the fixed version.
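For reference, the affected configurations were incremental output mappings that use "Delete Rows" filters. Below is a minimal, hypothetical sketch of such a mapping; the table names and filter values are purely illustrative:

```python
# Hypothetical Snowflake transformation output mapping with a "Delete Rows" filter.
# When working correctly, an incremental load deletes only the rows matching the
# filter before appending new data; the bug described above caused the filter to be
# ignored, so all rows in the destination table were deleted.
output_mapping = {
    "source": "results",                     # table produced by the transformation
    "destination": "out.c-main.results",     # destination Storage table (illustrative)
    "incremental": True,
    "delete_where_column": "snapshot_date",  # "Delete Rows" filter column (illustrative)
    "delete_where_values": ["2023-01-13"],   # only matching rows should be removed
    "delete_where_operator": "eq",
}
```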
12:10 UTC On PayAsYouGo projects on the https://connection.north-europe.azure.keboola.com/ stack, an incorrect credit balance may be displayed. The situation will be fixed shortly.
12:15 UTC The credits are now reported correctly again. If you attempted to run a job within the incident timeframe, it erroneously failed with "You do not have credits to run a job"; please restart such jobs. We sincerely apologize for the trouble.
2023-01-09 07:45 UTC - We have identified an issue on one of the servers running Queue jobs on the EU Central 1 (AWS EU) stack. Numerous jobs are stuck in a terminating state and we are currently investigating the cause of the issue.
2023-01-09 08:05 UTC - We have unblocked the stuck jobs, which were unexpectedly terminated. We are investigating the root cause of the node failure.
In very rare circumstances, a small number of jobs (fewer than 10 per day) may be delayed by almost exactly two hours in the AWS EU stack (connection.eu-central-1.keboola.com). During this period, the job sits idle for two full hours, and unfortunately terminating the job will not help.
We are currently trying our best to debug and fix an underlying network connectivity issue. If you have any questions or concerns, please reach out to our support.
We are sorry for this inconvenience and will provide an update on this post once we know more or have an ETA of the fix.
Update 2023-01-23 08:40 - We have implemented a fix, and there have been no occurrences of the issue in the past 12 hours. We're continuing to monitor the situation thoroughly.
2022-12-30 08:15 UTC - We are investigating occasional job failures that started on December 29, 2022 at 11:00 PM UTC. We will provide an update with new information when it becomes available.
2022-12-30 09:12 UTC - The error rate is lower, but there are still some occurrences of errors. We are investigating the root cause and will provide an update with new information when it becomes available.
2022-12-30 10:38 UTC - We have identified and fixed the problem, which was caused by rate limiting on the container registry. The last error occurred at 10:08 AM UTC. We are monitoring all systems closely.
2022-12-30 11:23 UTC - We don't see any new occurrences of errors. The platform is fully operational and the incident is resolved.
We have discovered a problem on one of the servers running Queue jobs on the eu-central-1 stack (AWS EU). Jobs were terminated unexpectedly between 08:20 UTC and 09:20 UTC. The problem has been resolved and all jobs should now be running normally again. We are still looking for the root cause to prevent it from happening again in the future. We apologize for any inconvenience this may have caused.
We have discovered a problem on one of the servers running Queue jobs on the eu-central-1 stack (AWS EU). Jobs were terminated unexpectedly starting at 12:00 AM CET. We are investigating the cause of the problem.
Update 12:50 PM CET - The problem has been resolved and all jobs should now be running normally again. We are still looking for the root cause to prevent it from happening again in the future.
We apologize for any inconvenience this may have caused.
We have discovered a problem on one of the servers running Queue jobs on the eu-central-1 stack (AWS EU). Jobs were terminated unexpectedly between 13:00 UTC and 14:45 UTC. The problem has been resolved and all jobs should now be running normally again. We are still looking for the root cause to prevent it from happening again in the future. We apologize for any inconvenience this may have caused.
2022-12-21 14:05 UTC - We are investigating delayed job starts on the AWS EU Keboola Connection stack (https://connection.eu-central-1.keboola.com). Next update in 30 minutes.
2022-12-21 14:55 UTC - Services are now stable. Some jobs may have been delayed between 13:55 and 14:30 UTC. We are monitoring the situation and investigating the root cause. We apologize for any inconvenience this may have caused.
We have seen an increase in the number of errors in the Google Drive data source since 15:00 UTC. We are currently investigating the issue and rolling back the recent release.
The failing jobs can be identified by this error message:
Unrecognized options "sheets, outputBucket" under "root.parameters".
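For context, the rejected options correspond to entries under the configuration's "parameters" section. The fragment below is a hypothetical illustration based only on the error message above, not an exact configuration schema:

```python
# Hypothetical Google Drive data source configuration fragment showing the
# "parameters" options that the faulty release stopped recognizing.
configuration = {
    "parameters": {
        "outputBucket": "in.c-google-drive",  # illustrative output bucket name
        "sheets": [],                         # list of sheet definitions (contents omitted)
    }
}
```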
2022-11-29 17:55 UTC - We are investigating delayed orchestrations on Azure North Europe Keboola Connection stack (https://connection.north-europe.azure.keboola.com). Next update in 30 minutes.
Update 2022-11-29 18:40 UTC - We have deployed a fix and the orchestration schedules will gradually catch up. Next update in 1 hour.
Update 2022-11-29 19:27 UTC - Orchestration schedules are now on time. The incident is now resolved, but we'll keep monitoring the situation. We apologize for the inconvenience.
Nov 29 07:08 UTC - We are investigating multiple stuck jobs on the connection.eu-central-1.keboola.com stack. The affected jobs became stuck around 03:00 UTC; other jobs are processing and starting without issues. Next update in 30 minutes or when new information is available.
Nov 29 08:02 UTC - We have unblocked the stuck jobs and no longer see jobs queueing. We are investigating the root cause and impact of the incident. Next update when new information is available.
Nov 29 08:35 UTC - We're still seeing further symptoms of the outage and we're actively investigating.
Affected services are:
Nov 28 10:22 UTC - The platform is now fully operational. We're monitoring all systems closely.
We're sorry for the inconvenience. If you experienced any job failures, please run them again.
11-28 09:53 CET: We are facing an outage of the OAuth service on the connection.north-europe.azure.keboola.com stack. The problem is on our side and we are currently working to resolve it.
Update 09:30 CET: We have redeployed the previous version and everything should now work correctly.
We apologize for the inconvenience.
Since 2022-11-25 12:48 UTC, we have been investigating output mapping errors in which views cannot be found in a Snowflake schema. You may see an error similar to "Processing output mapping failed: Table "all_done" not found in schema "WORKSPACE_925215531"". We are rolling back to a previous version until we find a proper solution.
Update 2022-11-25 13:42 UTC: Rollback to a previous version has finished. Everything is running as expected and fully operational now. We are sorry for the inconvenience.
We are aware of a bug in the user interface that makes it impossible to undo the "collapse tables" action in IM.
We are sorry for the complications. A bug fix will be deployed within minutes.
Update 14:05 UTC The fix has been released to all stacks.
We have been seeing stalled jobs since approximately 16:20 UTC (Nov 23) on the https://connection.keboola.com/ stack. These are caused by the inaccessibility of Snowflake services in the region.
Update 17:15 UTC: One incident is marked as resolved on the Snowflake side, but another one is still ongoing. Most jobs in Keboola Connection should gradually return to their usual run times, though some customers using BYODB Snowflake might still see slower job run times. We keep monitoring the situation.
Update 17:55 UTC: The Snowflake incident is still ongoing. Some customers using their own Snowflake (BYODB) in the affected region may still see jobs executing slowly, being stalled in the processing or waiting phase, or occasionally failing with a message similar to "Load error: An exception occurred while executing a query: SQL execution internal error: Processing aborted due to error 300004:867554464; incident 3178266.".
Update 19:20 UTC: The Snowflake incident is still ongoing.
Update 20:25 UTC: The Snowflake incident is still ongoing.
Update 21:50 UTC: The Snowflake incident is still ongoing, but we see strong improvement in query processing. Jobs in the affected projects should be gradually returning to the usual processing.
Update 23:00 UTC: The Snowflake incident is marked as resolved. According to our monitoring, jobs and queries are already running as usual, so we consider the incident resolved. We'll keep monitoring the platform closely for any irregularities.
2022-11-21 19:32 UTC - Transformations using code patterns are showing blank details in the UI. We have found the root cause and a fix should be deployed within two hours.
2022-11-21 20:26 UTC - We have deployed a fix. All transformation details are now accessible from the UI.
Today, 2022-11-19, we are experiencing failures in the latest MongoDB extractor release (deployed 2022-11-18 13:40 UTC) when extracting larger amounts of data. We have rolled back to the previously working version and are now monitoring the status.
UPDATE: 2022-11-19 22:00 UTC - We additionally found that the rollback did not deploy the correct version. We have fixed the deployment and rolled back again; the previously working version is now deployed and running.
UPDATE: 2022-11-19 23:48 UTC - We've verified that the rollback was indeed successful and previously affected jobs are now running successfully.
2022-11-14 07:03 UTC - Transformations that support Dynamic Backends, such as Python and Snowflake transformations, are not respecting the selected backend size when triggered by an orchestration and have been running on the default small backend since Friday, Nov 09. We are working on the issue and will provide an update shortly.
2022-11-14 07:51 UTC - We rolled back the release. All new transformation jobs will be started with the correct backend size. Everything is running as expected and fully operational now.
2022-11-01 21:01 UTC - We are investigating a problem on the US stack that can cause tasks to get stuck. Next update in 30 minutes or when new information is available.
2022-11-01 21:30 UTC - We have not yet been able to find the cause of the problem. We are still investigating and will let you know when we have more information.
2022-11-02 06:06 UTC - We are still investigating issues with synchronous actions (the feature that allows, e.g., testing credentials of a database data source) and with starting and stopping Python/R workspaces. Job processing is not affected at the moment.
2022-11-02 11:42 UTC - We're working with AWS support engineers to find the root cause of the issue. They have acknowledged the issue and engaged their team to resolve it.
2022-11-02 13:32 UTC - The AWS team was able to resolve the issue on their side. Everything is running as expected and the platform is fully operational now. We are sorry for the inconvenience.
2022-10-31 11:47 UTC - We are investigating stuck job processing on multiple stacks. Next update in 30 minutes or when new information is available.
2022-10-31 12:14 UTC - The stuck jobs were caused by an earlier release. We rolled back the release, but that did not unblock the currently stuck jobs. We are working on a fix and it should be released within 90 minutes.
2022-10-31 14:23 UTC - We're fixing the stuck jobs stack by stack. All stacks should be fixed within 20 minutes.
2022-10-31 15:11 UTC - All jobs have been fixed and the platform has stabilized. Everything is running as expected and fully operational now. We are sorry for the inconvenience.
Today at 10:26 UTC, with the latest release of Job Runner, we introduced a bug causing all component jobs to end with an application error.
We have now reverted to the previous version.
We are very sorry for any inconvenience this might have caused.
We are investigating jobs stuck since Oct 8, 1:13 UTC on the https://connection.keboola.com/ stack.
Update Oct 8, 3:00 UTC - The stuck jobs were unblocked. We continue to monitor the situation.
We are very sorry for any inconvenience.
We are investigating jobs stuck since Oct 2, 11:00 UTC on the https://connection.eu-central-1.keboola.com/ stack.
18:47 UTC - The stuck jobs were unblocked. We continue to monitor the situation. Everything should now be operational and running as expected.
14:08 UTC: We are investigating newly created jobs that are stuck waiting to start processing in the AWS US and EU stacks. Next update in 15 minutes.
14:23 UTC: We found the root cause and rolled back to the previous working version; newly created jobs should now process as expected. We are still working to move jobs created within the last hour into the processing state. Next update in 15 minutes.
14:50 UTC: We have released the stuck jobs in the US stack and continue to release the stuck jobs in the EU stack. Next update in 30 minutes.
15:15 UTC: All stuck jobs in the EU stack have been released. Everything is now operational and running as expected.