Azure EU Maintenance Announcement

Maintenance of the Azure EU Keboola Connection will take place on Saturday, Oct 9th, 2021, starting at 10:00 UTC, and should take less than two hours.

During the maintenance, you will not be able to access your data and projects. All network connections will be terminated with an "HTTP 503 - down for maintenance" status message.

Orchestrations and running component jobs should generally be delayed, not interrupted. However, feel free to reschedule your Saturday orchestrations to avoid the maintenance window.
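If you have your own scripts calling the platform's APIs during the window, a retry loop that backs off on HTTP 503 will ride out the maintenance. This is a minimal illustrative sketch, not part of any Keboola SDK; the retry count, delay, and the injectable `opener` parameter are our assumptions:

```python
import time
import urllib.request
import urllib.error

def get_with_retry(url, retries=5, backoff=60, opener=urllib.request.urlopen):
    """Fetch a URL, retrying while the service returns HTTP 503."""
    for attempt in range(retries):
        try:
            with opener(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            # Re-raise anything that is not "down for maintenance",
            # or a 503 on the final attempt.
            if err.code != 503 or attempt == retries - 1:
                raise
            time.sleep(backoff)  # wait out part of the maintenance window
```

Passing the opener in as a parameter keeps the helper testable without network access.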

Sending emails with empty body from Connection

2021-09-21 08:27 UTC - We have noticed that emails from our platforms are being sent with empty bodies (containing only placeholders such as email..title and email..body). Next update in one hour.

2021-09-21 11:14 UTC - We successfully rolled back to the previous working version.

We're sorry for this inconvenience.

Failures Saving Queries and Scripts

2021-09-13 08:15:10 UTC: A UI release broke the parsing of transformations into separate queries. Queries that were not edited this morning were not affected.

2021-09-13 11:48:29 UTC: We identified the broken version and applied a fix.

We advise all users to reload their browsers and re-save any affected transformations.

Deprecated Salesforce Extractor

To provide a better experience and support, we are taking over maintenance of the Salesforce components from the third-party developer. As a result, the original Salesforce (Bulk API) extractor (the htns.ex-salesforce component) is being deprecated and replaced with the new, Keboola-maintained Salesforce extractor.

Old extractor configurations will continue to work; however, you won't be able to create new ones. We highly recommend migrating to the new extractor version. We have recently added an option to perform most of the migration process automatically.

What’s new?

The new extractor comes with improved functionality:

  1. Improved dynamic UI

  2. Incremental fetching and primary key support

Now you have better control over how the data is loaded. In the previous version, users could only switch the `incremental` flag, which added a hardcoded primary key. This caused issues with incremental loading of objects such as *History, which have no Id column at all. Now it is possible to define the primary key manually (it defaults to Id) and choose between several options:

  • Full Load: Overwrite all data in the destination

  • Incremental: Upsert data in the destination

  • Incremental with incremental fetching: Bring in only data that has changed since the previous execution. The date column used for the comparison can be specified; this lets you use, for instance, CreatedDate for History objects, where LastModifiedDate is not available.

  3. Fetch deleted records

Now it is possible to control whether deleted records are also included.
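To make the options above concrete, here is an illustrative sketch of a single query configuration as a Python dict. The parameter names are hypothetical and do not reflect the component's actual configuration schema:

```python
# Hypothetical query configuration illustrating the load options above.
# Key names are our invention, not the extractor's real schema.
query_config = {
    "object": "Account",
    "primary_key": ["Id"],                # defaults to Id; set manually for
                                          #   *History objects, which have no Id
    "load_type": "incremental_fetching",  # or "full_load" / "incremental"
    "incremental_field": "LastModifiedDate",  # use CreatedDate for *History
                                              #   objects, which lack this field
    "fetch_deleted": False,               # include deleted records if True
}
```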

Migration process

We have released a migration script that will help you migrate any existing configurations directly from the project. 

Migration Behaviour:

  • Encrypted values (credentials) are NOT transferred. Because the component has a new ID, you must set up credentials in the new configuration manually.

  • Migration transfers the component state, so any incremental configurations will continue where they left off.

  • The output bucket is kept, so there is no need to change any downstream input mappings.

  • Orchestrations are not updated; this step is left to the user so it can be done safely.

Steps to migrate:

  1. Each affected configuration displays a migration notice. Click the “PROCEED TO MIGRATION” button.

  2. You will see a list of the affected configurations and orchestrations. Click the “Migrate” button.

  3. After a successful migration, you will find all the new configurations in your project.

  4. Replace the credentials.

  5. Inspect the configurations and test them if needed.

  6. Change affected orchestrations manually, replacing the old configuration with the new one.

Failing Output Mappings into Storage

2021-08-28 10:25 UTC - We have noticed an increased rate of failed jobs when importing data into Storage. They are failing with the user error "Some columns are missing in the csv file."

2021-08-28 10:50 UTC - We identified the problem and rolled back to the previous version of our Storage.

2021-08-28 11:50 UTC - The failed jobs were related to minor Storage changes released on 2021-08-27 between 8:00 and 9:00 AM UTC.

Table imports started after that release could finish with the user error "Some columns are missing in the csv file." when column names in the source file ended with non-alphanumeric characters.
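If you want to check whether any of your source files matched the affected pattern, a quick header scan like the following flags column names ending in a non-alphanumeric character. This is an illustrative helper, not a Keboola tool:

```python
import csv
import io

def risky_columns(csv_text):
    """Return header columns whose names end with a non-alphanumeric character."""
    header = next(csv.reader(io.StringIO(csv_text)))
    return [col for col in header if col and not col[-1].isalnum()]
```

For example, a file whose header row is `id,name,total%` would be reported as having one risky column, `total%`.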

We're sorry for this inconvenience.

Failing creation/restore of Jupyter workspaces

Since 7:30 UTC on 2021-08-17, some Jupyter-based workspaces have been failing to create or restore in all regions. We are now rolling back to the previous working version.

2021-08-17 9:35 UTC - We successfully rolled back to the previous working version, and creation/restoration of Jupyter workspaces now works as expected. As the root cause, we identified a mishandled process when scaling up server instances, so only creations/restorations that triggered an instance scale-up failed. We are working on a fix, which will be released soon.

Increased rate of API Errors

On 2021-08-15, between 22:40 and 11:10 CET, we experienced an increased rate of Storage API errors. We identified and resolved the problem. As a consequence, some jobs may have failed to start. We're very sorry for this inconvenience.

Delayed processing of jobs in AWS US stacks

2021-08-02 07:40 UTC - We are investigating component job delays in the AWS US stacks. Next update when new information is available, or within an hour.

UPDATE 2021-08-02 08:13 UTC - We have identified the root cause of the overload and added more capacity. The backlog is cleared and new jobs are being processed immediately. Jobs started before the incident might still run longer than usual. We will monitor the situation and keep you posted.

UPDATE 2021-08-02 10:14 UTC - The incident is resolved. All operations are back to normal. We're sorry for this inconvenience.

Corrupted telemetry data

We are currently investigating an issue regarding corrupted data obtained via our Telemetry Data component (keboola.ex-telemetry-data).

We have most probably identified the issue and we're working on a fix.

We are very sorry for any inconvenience this might have caused you.

Next update at 15:00 UTC.

Update: We have modified the component so that it now loads data using full loads only. To ensure that you have the correct telemetry data, all you need to do is run the extractor (or wait for your pipeline to run it). We will re-implement the incremental fetching in the following months.