High error rate in AWS US stack

We're investigating a high error rate in the AWS US stack (https://connection.keboola.com). Next update in 60 minutes or when new information is available.

We're sorry for the inconvenience.

UPDATE 15:55 UTC It seems that the root cause is a service disruption in AWS. We're waiting for official confirmation of this issue. You may see intermittent errors (404 or 500); refreshing the page may help. Next update in 60 minutes or when new information is available.

UPDATE 16:25 UTC The service disruption in the AWS US region may also cause issues in other Keboola Connection stacks, e.g. when running component jobs. Next update in 60 minutes or when new information is available.

UPDATE 16:55 UTC AWS has acknowledged the service disruption and is actively working towards recovery. See https://status.aws.amazon.com/ for more details. Once the AWS service disruption is over, our services should start running smoothly again. Next update in 60 minutes or when new information is available.

UPDATE 18:27 UTC The service disruption in the AWS US region persists. The availability of our services has improved slightly, but we are still experiencing errors related to Workspaces. We continue to monitor the situation. Next update in 2 hours.

UPDATE 21:40 UTC The service disruption in the AWS US region is subsiding. Our affected services are showing significant improvement. Next update in 12 hours or as new information is available.

UPDATE Dec 8th, 07:12 UTC Most services in the affected AWS region have already recovered. Our services are operating normally. Next update in 4 hours or as new information is available.


UPDATE Dec 8th, 15:30 UTC We're sorry for the late update. AWS services have recovered. Everything should be running without any issues now.


Slow event processing in AWS US

We're investigating possible intermittent slowdowns in event processing in the AWS US stack (https://connection.keboola.com/). API responses can be delayed by up to 2 seconds. This may cause:

  • poor UI responsiveness and
  • slower jobs that write metadata (e.g. column data types).

This issue does not cause any job failures.

We're sorry for the inconvenience. Next update in 24 hours or as new information is available.

UPDATE Dec 4th, 15:00 UTC We have identified a few possible root causes of this issue and minimized the impact. The situation is now stable, but we're monitoring it closely. 

This is the last update; we'll reopen communication here only if the situation escalates.

Column, table or bucket metadata possibly overwritten

We’re investigating a possible issue with column, table, and bucket metadata in all stacks. We’re seeing suspicious behaviour when running output mapping from a workspace (e.g. a Snowflake transformation or SQL Workspace). Under conditions not yet known, column, table, or bucket metadata may have been overwritten. This should not affect any existing configurations or jobs.

Next update in 24 hours or when new information is available. 


UPDATE Dec 2 16:25 UTC: We can confirm the issue with metadata. It occurs when a column, table, or bucket has two (or more) metadata entries with the same key but different providers: if the metadata is then updated for one provider, the value changes for all of them (see the sketch below). We are still investigating the scope of the issue. Next update in 24 hours or when new information is available.
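
As a purely illustrative sketch of the condition described above (the metadata key and provider names are made-up examples, not values from any affected project, and this is not Keboola code):

    # Two metadata entries share the same key but belong to different providers.
    metadata = [
        {"key": "KBC.datatype.basetype", "value": "STRING",
         "provider": "user"},
        {"key": "KBC.datatype.basetype", "value": "INTEGER",
         "provider": "keboola.snowflake-transformation"},
    ]

    def buggy_update(entries, key, value, provider):
        # The defect matched on the key alone, so an update meant for one
        # provider overwrote the value of every entry with that key.
        for entry in entries:
            if entry["key"] == key:
                entry["value"] = value

    def intended_update(entries, key, value, provider):
        # Intended behaviour: only the entry of the given provider changes.
        for entry in entries:
            if entry["key"] == key and entry["provider"] == provider:
                entry["value"] = value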

UPDATE Dec 3 9:03 UTC: We have fixed the issue that caused the overwriting of metadata. The problem affected only buckets, tables, or columns within their own scope (no data was mixed between projects, buckets, tables, or columns). We are now investigating the scope of affected projects. We're also examining the option of backfilling the overwritten data from backups. Next update in a week or sooner if new information becomes available.

We are sorry for the inconvenience.

FTP Extractor redownloading files despite Only New Files flag

Release 1.7.0 of the FTP extractor, released on 30 Nov 2021 09:31 UTC, caused the Only New Files flag to misbehave, which led to a redownload of all matching files. If you encountered any problems with an FTP extractor configuration after this release, please contact Keboola support from within your project for assistance.

We are sorry for the inconvenience.

Failing Facebook Ads and Instagram extractors

Today, 26th November 2021, between 10:00 and 11:00 UTC, the Facebook Ads and Instagram extractors were failing with an internal error. We have fixed the problem and the extractors should be working as expected. If you ran jobs within the mentioned timeframe, please restart them. We are sorry for the inconvenience.

Increased error rate for components communicating with Google APIs

12.11.2021 10:43 CET

We are experiencing an increased error rate for components communicating with Google APIs.

Google is reporting several service disruptions.

We continue to monitor the situation.

12.11.2021 14:11 CET

We no longer see increased component failures; everything is working as expected.

Support for legacy state update in configuration update API will be removed

Updating state via the configuration update API call has been deprecated for some time and will soon be removed completely. Please make sure your integrations no longer use it.

The legacy state update was part of the configuration update API. Its behavior was inconsistent because updating the state didn't create a new version of the configuration. The state update has therefore been moved to a dedicated API call.
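
As a rough sketch of the migration, assuming the dedicated configuration state endpoint in the Storage API (the endpoint path, payload shape, and all identifiers below are assumptions and placeholders; please verify against the Storage API documentation before relying on them), an integration would move from sending state with the configuration update to a call like this:

    import json
    import requests

    # Sketch only; the endpoint path and payload shape are assumptions based
    # on the public Storage API documentation -- verify before use.
    STACK = "https://connection.keboola.com"
    TOKEN = "your-storage-api-token"        # placeholder
    COMPONENT_ID = "keboola.ex-db-mysql"    # hypothetical example component
    CONFIG_ID = "123456"                    # hypothetical configuration ID

    # Legacy (deprecated): state passed as part of the configuration update.
    # requests.put(
    #     f"{STACK}/v2/storage/components/{COMPONENT_ID}/configs/{CONFIG_ID}",
    #     headers={"X-StorageApi-Token": TOKEN},
    #     data={"state": json.dumps({"lastFetchedId": 42})},
    # )

    # Dedicated state update call: changes the state without creating a new
    # configuration version.
    response = requests.put(
        f"{STACK}/v2/storage/components/{COMPONENT_ID}/configs/{CONFIG_ID}/state",
        headers={"X-StorageApi-Token": TOKEN},
        data={"state": json.dumps({"lastFetchedId": 42})},
    )
    response.raise_for_status()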

Delayed processing of jobs in AWS eu-central-1 stack

2021-10-19 23:07 UTC - We are investigating job processing delays in connection.eu-central-1.keboola.com. Next update when new information is available or within an hour.

Update 2021-10-20 00:01 UTC We have identified the root cause and are working on a fix. Next update when new information is available or within an hour.

Update 2021-10-20 00:45 UTC Everything should be running without any issues now. We're sorry for the inconvenience.

Google AdWords extractor jobs consume all credits in PAYG

Some jobs of keboola.ex-google-adwords-reports-v201809 in the Azure North Europe stack fail immediately with an error. The job detail is missing the job start date, and for PAYG customers billing consumes all available credits.

We're investigating this issue and will update this status in 60 minutes or when an update is available. 

UPDATE 18:43 UTC: We have found the root cause and we're working on a fix. Next update in 60 minutes or when an update is available. 

UPDATE 19:45 UTC: We have fixed the billing stats in all affected projects. The root cause has not been fixed yet, which means new jobs of the keboola.ex-google-adwords-reports-v201809 component will again consume an invalid number of credits. We'll update the billing stats later tonight and early tomorrow morning to keep the projects running smoothly. Next update as soon as we have any news or at 07:00 UTC.

UPDATE Oct 11 06:55 UTC: Unfortunately, we're still seeing failing jobs of the affected component after releasing the fix. We're investigating the issue further and preparing a new fix. Next update in 6 hours (13:00 UTC) or when new information is available.

UPDATE Oct 11 11:40 UTC: We have deployed and verified the fix. We'll continue monitoring jobs closely to double-check for any recurrences.

Azure EU Maintenance

Oct 9, 2021, 10:00 CET - Azure EU is down for scheduled maintenance: https://status.keboola.com/azure-eu-maintenance-announcement-1

Oct 9, 2021, 10:38 CET - Azure EU is back and fully operational. Thanks for your patience.