We experienced a brief outage during which the platform was inaccessible due to a UI bug. We promptly addressed the issue by rolling back the latest change, and the fix has been deployed across all stacks.
It's important to note that no flows, running jobs, or workspaces were affected during this incident.
2023-11-26 3:19 UTC - The processing of jobs in the Azure NE stack (https://connection.north-europe.azure.keboola.com) has slowed down; we're investigating the issue.
2023-11-26 4:15 UTC - The slowdown in starting new jobs was caused by heavy load. The backlog of jobs has been cleared, and new jobs are processing with no delays. We apologize for the inconvenience.
2023-11-03 14:40 UTC We're experiencing slow job processing on the Azure NE stack (https://connection.north-europe.azure.keboola.com) due to heavy load. We apologize for the inconvenience.
Update 2023-11-03 15:10 UTC We have managed to find the cause and are working on a fix.
Update 2023-11-03 15:40 UTC We managed to mitigate the main cause. It will take some time for the platform to return to normal.
Update 2023-11-03 16:05 UTC The incident is now resolved, and the platform is stable again.
We apologize for the inconvenience.
2023-10-30 15:17 UTC We're experiencing log delays on running jobs on the AWS US stack (https://connection.keboola.com) due to heavy load.
Update 2023-10-30 17:34 UTC We have managed to find the cause and are working on a fix.
Update 2023-10-30 19:50 UTC The incident is now resolved, and the platform is stable again.
We apologize for the inconvenience.
2023-10-11 5:54 UTC - Some runs of components using the OAuth Broker are failing with the user error "OAuth Broker v2 has been deprecated". We are investigating the issue.
2023-10-11 6:30 UTC - We have identified the cause of the issue. Yesterday at 13:20 UTC, a new UI version was deployed that contained a bug in the configuration editor.
When editing configurations of components that use the OAuth Broker, the version parameter of broker v3 was inadvertently deleted.
It affected only configurations that were created or modified between 2023-10-10 13:20 UTC and 2023-10-11 6:00 UTC. Jobs of these configurations fail with the user error "OAuth Broker v2 has been deprecated on September 30, 2019."
The UI changes have been reverted, but your failing configurations need to be fixed manually.
Go to the version history of your configuration and use the "Restore" button to revert the changes you made. This will also revert the OAuth configuration back to the newer broker v3. New configuration modifications will no longer break the settings.
The second option to fix the issue is to reauthorize OAuth in your configuration.
We apologize for any inconvenience caused.
Due to the changes at Twitter, we are no longer able to provide Twitter as a Data Source component.
We will be introducing an X Data Source if demand calls for it.
If you have any questions or would like to push for an X Data Source, please let us know at support@keboola.com.
2023-09-13 12:55 UTC - The processing of jobs in the AWS EU stack has slowed down; we're investigating the issue.
2023-09-13 13:43 UTC - We see minor improvements. We're still investigating the issue; the next update will follow when new information is available.
2023-09-13 14:31 UTC - The backlog of jobs has been cleared, and job processing should be functioning properly once more. We had to temporarily disable the "Child Jobs" checkbox on the Jobs page to address the issue. We are continuing to monitor the situation.
13:10 UTC The processing of jobs in the AWS EU stack has slowed down; we're investigating the issue.
13:40 UTC The issue has been resolved; jobs are now being processed normally. We're still investigating the root cause of this issue.
14:30 UTC We're investigating the issue again, as jobs in the AWS EU stack have slowed down once more.
15:00 UTC We have found the root cause of the incident and are preparing a fix to mitigate the issue. Next update in 1 hour.
16:00 UTC We are still working on mitigating the issue. Next update in 1 hour.
16:55 UTC The issue was fixed and all systems are stable now. Several projects still have problems listing jobs, and we are working to fix this. Next update when new information is available.
11:40 UTC - We are investigating failing synchronous actions (check database credentials and similar) in all our stacks since 11:00 UTC.
11:50 UTC - We deployed the previous version of the affected service; all systems are now operational. We apologize for any inconvenience caused.
Since 03:35 UTC the processing of jobs in the AWS EU stack has slowed down; we're investigating the issue.
Update 8:00 UTC The issue has been resolved; jobs are now being processed normally. We're still investigating the root cause of this issue.
23:40 UTC: We are experiencing job failures in the AWS US stack when importing data to or from storage. It is caused by an incident at Snowflake; see https://status.snowflake.com/incidents/6d594mbq4v93
00:25 UTC: We are no longer experiencing the errors, although Snowflake hasn't closed the incident yet; see their last status update: "We've identified the source of the issue, and we're developing and implementing a fix to restore service."
(Resolved) 00:40 UTC: After further monitoring, we don't see any errors when importing data to or from storage, so we consider the platform operational.
The latest version (8.2.0) of the Microsoft SQL Server Extractor terminates with an internal error. This version was deployed yesterday, and we are currently performing a rollback. The next update will be available in 15 minutes.
[Resolved] UTC 07:56: We have rolled back to version 8.1.1, and the extractions are now functioning without any issues. We apologize for any inconvenience caused.
UTC 14:05: We're investigating delayed job starts in the AWS US stack (https://connection.keboola.com/). Jobs are "stuck" in the "created" state.
UTC 14:33: The incident is now resolved; jobs are now starting normally.
UTC 22:30: We are investigating too many storage jobs waiting to be executed. Next update in 30 minutes.
UTC 23:00: The excess of waiting storage jobs appears to be limited to one particular project and is not affecting the whole platform. Still, we continue the investigation. Next update in 30 minutes.
[Resolved] UTC 23:40: We mitigated the job creation in the affected project, double-checked the consequences, and concluded that the platform is operational.
UTC 12:30 We're investigating issues with the https://buffer.keboola.com/v1 endpoint in the us-east-1 region.
UTC 13:03 The internal database got overloaded; we're working on scaling it up and processing the backlog. We expect the endpoint to be restored within an hour.
UTC 14:23 The restore is unfortunately taking longer than expected. We're still working on it.
UTC 14:50 The restore is taking longer because of insufficient compute capacity of particular instance types in AWS. We're still working on it.
UTC 15:35 The endpoint is partially operational, but not fully replicated. We're still working on it.
UTC 15:56 The endpoint is operational and should be stable now.
We have discovered that some writer jobs in the projects that were migrated to the new job queue (Queue V2) after the beginning of May are missing information about the data transferred. That information is used to calculate the number of credits consumed by those jobs.
We will deploy a fix tomorrow (10th Aug), which will add missing credits to the jobs affected. For affected projects regularly using writers, the result may be that they have a higher recorded consumption of credits.
The issue is related solely to the telemetry and does not affect Keboola Connection in any way. Moreover, it affects the telemetry only for projects that were recently migrated to Queue V2.
UPDATE 2023-08-10 11:04 UTC: The fix was deployed and the affected writer jobs show consumed credits again.
When a project is migrated to Queue V2, any jobs created in the past several months are also migrated, so that the user can keep track of what is going on in their Keboola project UI. Jobs in both Queue V1 (the old queue) and Queue V2 contain information about the data transferred by these jobs as different metrics. However, this information is not passed from an original job to the corresponding migrated one during the migration process.
Generally, Queue V1 jobs take precedence over Queue V2 jobs. To prevent any issues, the original Queue V1 jobs, rather than the migrated copies, are used in the telemetry calculations, as they carry the original data.
In May, to speed up the telemetry calculations, the input mapping of Queue V1 jobs in a transformation was switched so that only data updated in the last 30 days was incrementally loaded for further processing.
As noted above, when a project was migrated to Queue V2, migrated jobs were also created. So, during processing, migrated jobs from the past several months were loaded, but only recently updated Queue V1 jobs (from the last 30 days) were loaded alongside them. Thus, the older Queue V1 jobs could not take precedence over the newer migrated Queue V2 jobs, and the latter were incorrectly used for the telemetry output. Because those migrated jobs lack the information about transferred data, the result was no credits.
As part of the bug fix, the transformation now always loads the entire history of Queue V1 jobs, preventing migrated jobs from incorrectly being used in the telemetry calculations.
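To make the precedence issue concrete, here is a minimal, hypothetical Python sketch of the deduplication logic described above. The function, field names, and sample records are assumptions made for illustration only; this is not Keboola's actual telemetry code.

```python
# Hypothetical illustration of the precedence logic described above.
# Field names ("id", "data_transferred") are assumptions for this example.

def pick_jobs_for_telemetry(queue_v1_jobs, migrated_v2_jobs):
    """Prefer original Queue V1 jobs over their migrated Queue V2 copies,
    because only the originals carry the data-transfer metrics."""
    jobs_by_id = {job["id"]: job for job in migrated_v2_jobs}      # copies first
    jobs_by_id.update({job["id"]: job for job in queue_v1_jobs})   # originals win
    return list(jobs_by_id.values())

# Originals with metrics (full history) and their migrated, metric-less copies.
v1_history = [
    {"id": 1, "data_transferred": 120, "source": "queue_v1"},
    {"id": 2, "data_transferred": 300, "source": "queue_v1"},
]
migrated = [
    {"id": 1, "data_transferred": None, "source": "queue_v2_migrated"},
    {"id": 2, "data_transferred": None, "source": "queue_v2_migrated"},
]

# Before the fix: only V1 jobs from the last 30 days were loaded, so an older
# original (id=1 here) was missing and its migrated copy won -> no credits.
recent_v1_only = [job for job in v1_history if job["id"] == 2]  # stand-in for the 30-day filter
print(pick_jobs_for_telemetry(recent_v1_only, migrated))

# After the fix: the entire V1 history is loaded, so the originals always win.
print(pick_jobs_for_telemetry(v1_history, migrated))
```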
2023-07-25 15:30 UTC - We are investigating jobs failing to start in the EU region. Next update in 30 minutes.
2023-07-25 16:00 UTC - We are still investigating the issue. Next update in 30 minutes.
2023-07-25 16:30 UTC - We are still investigating the issue. Next update in 30 minutes.
2023-07-25 17:00 UTC (Resolved) - We found that running jobs in a disabled project were causing other jobs to fail to start, and we took immediate action to resolve the problem. Jobs are now starting and the platform is operational. We will investigate further to find the root cause.
UTC 10:30 We have confirmed that new and old sandboxes are now being correctly displayed. There is a chance that workspaces created between approximately July 13 10:40 UTC and July 14 10:00 UTC may still be invisible. If you are missing a workspace in your workspace list, please contact us through support, and we'll fix these cases individually. We sincerely apologize for the trouble.
UTC 8:25 We're working on a fix; we expect it to be ready in approximately 2 hours. Next update in 2 hours.
UTC 7:20 We have identified the cause and are working on a fix. As a workaround, you can create a workspace in a development branch, where it should display correctly. We have confirmed that this is only an issue with listing the workspaces, so no data is lost. Next update in 1 hour.
UTC 6:40 We're investigating reports of users not being able to see new or recently created workspaces in the list of workspaces. Preliminary results show that this is only an issue with the listing; the workspaces do actually exist. Next update in 30 minutes.
UTC 10:30 We're again seeing an increased number of errors; this time they are reported as "Cannot import data from Storage API: Request body not valid". The first occurrence of this error was at 9:40 UTC. We're investigating the details. Next update in 20 minutes.
UTC 10:40 We have identified the approximate cause. Only jobs in projects not using Queue V2 are affected, and workspaces in all projects could have been affected. We're working on a fix. Next update in 20 minutes.
UTC 10:58 The fix was deployed, the issue is now resolved. We apologize again for the inconvenience.
UTC 9:40: We're seeing reports of an increased number of application errors in all stacks. It seems that mostly table exports are affected.
We're investigating the issue. Next update in 15 minutes.
UTC 9:55: The issue was caused by a temporary internal inconsistency during the deployment of one of our services. Approximately 30 jobs failed across all stacks. The issue is now resolved. We apologize for the inconvenience.
July 10th, 12:20 UTC: We are investigating slow job starts in the AWS EU stack, which we have been experiencing since July 5th at midnight CET.
July 10th 13:30 UTC: We have implemented certain measures that we believe could mitigate the issue; however, we have not yet identified the root cause. We will continue to closely monitor the situation and conduct further investigation. The next update will be provided tomorrow (July 11th) or as soon as new information becomes available.
July 11th 11:33 UTC: We are still experiencing intermittent slow job starts during peak times, and our investigation is ongoing. The next update will be provided as soon as new information becomes available.
July 13th 10:46 UTC: At 09:45 UTC, we deployed multiple optimizations to address and reduce job start delays. We will continue to closely monitor the situation, and we will provide the next update as soon as new information becomes available.
July 14th 06:34 UTC: Significant improvements have been achieved since the previous deployment, restoring performance to pre-July 5th levels. We continue to monitor the situation closely to maintain stability. Thank you for your patience and support.
July 17th 06:44 UTC: Performance is back to pre-July 5th levels, the issue is now resolved. We apologize for any inconvenience caused.
After the last telemetry update, incremental processing of the kbc_usage_metrics_values table might have caused higher credit usage to be shown for some projects and usage breakdowns.
2023-07-04 11:15 UTC We are investigating problems with UI loading on all stacks.
2023-07-04 11:35 UTC Project UI is now working. The root cause was a bug in the UI deployment.
We apologize for any inconvenience caused.
2023-06-30 12:20 UTC We are investigating problems with listing jobs on all stacks. The error manifests as the message Invalid configuration for path \"job.branchType\": BranchType must be one of dev, default.
Next update in 30 minutes.
2023-06-30 12:45 UTC [resolved] We have re-deployed the last functional version and the problem is now solved.
We apologize for any inconvenience caused.
We are observing an increased number of faulty storage jobs resulting in the error message "Cannot import data from Storage API" on connection.eu-central-1.keboola.com. The main cause has been identified and resolved, and all systems should now be running smoothly. We will continue to monitor the situation, and the next update will be provided in 30 minutes.
We apologize for any inconvenience caused.
UPDATE 7:20 UTC [resolved] All systems are functioning normally, and the incident has been resolved and closed.
This is a reminder that the deadline to update your whitelist for the new outbound IP addresses is approaching. It is crucial to act before June 30, 2023, to avoid any disruption to your connectivity.
If you are still seeing the following alert in your projects, then you have not yet migrated to the new IP addresses:
Please note that if you have not manually updated your whitelist by the deadline, Keboola will perform the switch globally. This means that your projects will be automatically switched to the new IP addresses after June 30, 2023.
If you have not yet migrated, please follow the actions required as described in the New Outbound IP Addresses announcement.
2023-06-20 13:10 UTC We are investigating problems with workspace creation on the Azure North Europe stack (connection.north-europe.azure.keboola.com).
2023-06-20 13:25 UTC Issue is now resolved. The root cause was a misconfiguration of one of our services.
We apologize for any inconvenience caused.
Today, June 16th, since 3:03 UTC we have been experiencing jobs stuck on importing and exporting data. It is due to a Snowflake incident in the Azure West Europe region (https://status.snowflake.com/), where the warehouse of the Azure North Europe stack is located.
We are monitoring the Snowflake incident and will keep you updated here.
UPDATE 6:15 UTC - The Snowflake incident is still ongoing, with the last update at 05:28 UTC: "We've identified an issue with a third-party service provider, and we're coordinating with the provider to develop and implement a fix to restore service. We'll provide another update within 60 minutes." The issue is most likely due to a problem in Azure, which has reported an incident in the West Europe region; see https://azure.status.microsoft/en-us/status.
UPDATE 7:00 UTC - We see progress: storage import/export jobs are being processed. However, the Snowflake incident is still open, and we continue to monitor it.
UPDATE 8:00 UTC [resolved] - Snowflake has resolved the incident, stating: "We've coordinated with our third-party service provider to implement the fix for this issue, and we've monitored the environment to confirm that service was restored. If you experience additional issues or have questions, please open a support case via Snowflake Community." We no longer see any stuck jobs, so we consider this incident resolved as well.