PostgreSQL data destination writing no data

Today at 07:38 (UTC) a new version (1.9.1) of the PostgreSQL data destination component was released which introduced a bug causing some configurations to write no data. The component was rolled back to version 1.9.0 at 13:48 (UTC) and everything should now work as expected.

If you still encounter similar symptoms, please let us know.

We're sorry for this inconvenience. 

Microsoft Azure services planned maintenance

This applies only to Azure North Europe stack (https://connection.north-europe.azure.keboola.com).

Microsoft Azure will be performing maintenance on database services between 16:00 UTC on Feb 10, 2022 and 08:00 UTC on Feb 11, 2022. Each maintenance event will last 60 to 120 seconds and may happen multiple times within this window. This may cause

  • brief platform unavailability,
  • job retries,
  • job errors, and
  • very rarely, duplicate job execution.

As we do not know exactly when this will happen, we're unable to plan a maintenance window. If you encounter any issues due to this maintenance, please contact our support.

Linking buckets from Data catalog errors

Since Jan 20, 14:00 UTC, linking buckets from the Data catalog has been failing with this error message:

Invalid data - async: This field was not expected.

We're currently investigating this issue, next update in 30 minutes.

Update 13:50 UTC: The issue has been fixed and everything is working as expected. 

We're sorry for this inconvenience.

High error rate in AWS US stack

We're investigating a high error rate in the AWS US stack (https://connection.keboola.com). Next update in 60 minutes or when new information is available.

We're sorry for this inconvenience.

UPDATE 15:55 UTC It seems that the root cause is a service disruption in AWS. We're waiting for official confirmation of this issue. You may see intermittent errors (404 or 500), refreshing the page can help. Next update in 60 minutes or when new information is available.

UPDATE 16:25 UTC The service disruption in the AWS US region may also cause issues in other Keboola Connection stacks, e.g. when running component jobs. Next update in 60 minutes or when new information is available.

UPDATE 16:55 UTC AWS has acknowledged the service disruption and is actively working towards recovery. See https://status.aws.amazon.com/ for more details. Once the AWS service disruption is over, our services should start running smoothly again. Next update in 60 minutes or when new information is available.

UPDATE 18:27 UTC The service disruption in the AWS US region persists. The availability of our services has improved slightly, but we are still experiencing errors related to Workspaces. We continue to monitor the situation. Next update in 2 hours.

UPDATE 21:40 UTC The service disruption in the AWS US region is subsiding. Our affected services are showing significant improvement. Next update in 12 hours or as new information is available.

UPDATE Dec 8th, 07:12 UTC Most services in the affected AWS region have already recovered. Our services are operating normally. Next update in 4 hours or as new information is available.


UPDATE Dec 8th, 15:13 UTC We're sorry for the late update. AWS services have fully recovered. Everything should be running without any issues now.


Slow event processing in AWS US

We're investigating possible intermittent slower event processing in the AWS US stack (https://connection.keboola.com/). API responses can be delayed by up to 2 seconds. This may cause

  • poor UI responsiveness and
  • slower execution of jobs that write metadata (e.g. column datatypes).

This issue does not cause any job failures.

We're sorry for this inconvenience. Next update in 24 hours or as new information is available.

UPDATE Dec 4th, 15:00 UTC We have identified a few possible root causes of this issue and minimized the impact. The situation is now stable, but we're monitoring it closely. 

This is the last update and we'll reopen the communication here only if the situation escalates. 

Column, table or bucket metadata possibly overwritten

We’re investigating a possible issue with column, table, and bucket metadata in all stacks. We’re seeing suspicious behaviour when running output mapping from a workspace (e.g. a Snowflake transformation or SQL Workspace). Under as-yet-unknown conditions, column, table, or bucket metadata may have been overwritten. This should not affect any existing configurations or jobs.

Next update in 24 hours or when new information is available. 


UPDATE Dec 2 16:25 UTC: We can confirm the issue with metadata. It occurs when a column, table, or bucket has two (or more) metadata entries with the same key but different providers: if the metadata is then updated for one provider, the values change for all of them. We are still investigating the scope of the issue; next update in 24 hours or when new information is available.
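The confirmed behaviour can be illustrated with a small, purely hypothetical sketch (the function and field names below are ours for illustration, not Keboola's actual implementation): when an update matches metadata entries by key alone, an entry with the same key from a different provider is overwritten as well.

```python
# Illustrative sketch of the reported bug (names are hypothetical):
# metadata entries are identified by (provider, key), but the buggy
# update matches on key only, so entries from other providers are
# overwritten too.

def update_metadata_buggy(entries, provider, key, value):
    """Buggy: matches entries by key only; the provider is ignored."""
    for entry in entries:
        if entry["key"] == key:  # provider is not checked
            entry["value"] = value
    return entries

def update_metadata_fixed(entries, provider, key, value):
    """Fixed: an entry is updated only when provider AND key match."""
    for entry in entries:
        if entry["provider"] == provider and entry["key"] == key:
            entry["value"] = value
    return entries

entries = [
    {"provider": "storage", "key": "datatype", "value": "VARCHAR"},
    {"provider": "transformation", "key": "datatype", "value": "NUMERIC"},
]

# Buggy path: updating the "storage" entry also overwrites the
# "transformation" entry, because only the key is compared.
buggy = update_metadata_buggy([dict(e) for e in entries], "storage", "datatype", "DATE")

# Fixed path: only the "storage" entry changes.
fixed = update_metadata_fixed([dict(e) for e in entries], "storage", "datatype", "DATE")
```

In this sketch, both entries end up with the value "DATE" on the buggy path, while the fixed path leaves the "transformation" entry untouched.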

UPDATE Dec 3 9:03 UTC: We have fixed the issue that caused metadata to be overwritten. We found that the problem affected only buckets, tables, or columns within their own scope (no data was mixed between projects, buckets, tables, and columns). We are now investigating which projects were affected. We're also examining the option of backfilling the overwritten data from backups. Next update in a week, or sooner if new information is available.

We are sorry for the inconvenience.

Job delays and unsuccessful job terminations in all Azure stacks

Since 2021-07-20 17:00 UTC, some job processing may be delayed and job termination requests may fail in all Azure stacks. The total number of affected jobs and requests is very small.

This bug was introduced by a network settings change. The change has been reverted and the revert is currently being deployed to all Azure stacks. If you experience any of the mentioned symptoms, please get in touch with our support so we can mitigate the issue faster.

We're very sorry for this inconvenience. 

Increased error rate for Python transformations

We've been seeing an increased user error rate in Python transformation jobs since May 25, 16:08 UTC. The most common error message is "ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.", but other errors may appear as well.

We're sorry for this inconvenience; we're actively investigating the issue. Next update in 30 minutes.

Update 07:28 UTC: We have rolled back Python transformations from version 1.4.1 to version 1.4.0 and everything seems to be working again. If you encounter further issues please get in touch with us. We'll be monitoring this issue and post an update in one hour. 

Update 08:36 UTC: The situation is stable, we're not seeing any further errors since the rollback. Please accept our apology for the inconvenience. 

Job errors in Azure North Europe region

From 2021-05-18 21:13 UTC to 2021-05-19 05:06 UTC we experienced an increased rate of job failures in the Azure North Europe region.

These errors were caused by a faulty deploy on a single node. The deploy has been fixed, the situation is back to normal, and you can restart the failed jobs.

We're sorry for this inconvenience.