Deprecated Salesforce Extractor

To provide a better experience and support, we are taking over the maintenance of the Salesforce components from the third-party developer. As a result, the original Salesforce (Bulk API) extractor (the htns.ex-salesforce component) is being deprecated and replaced with the new Keboola-maintained Salesforce extractor.


Existing configurations of the old extractor will continue to work; however, you will no longer be able to create new ones. We highly recommend migrating to the new extractor version. We have recently added an option to perform most of the migration process automatically.


What’s new?

The new extractor comes with improved functionality:

  1. Improved dynamic UI

  2. Incremental fetching and primary key support

You now have better control over how the data is loaded. In the previous version, users could only switch the `incremental` flag, which added a hardcoded primary key. This caused issues with the incremental loading of objects such as *History, which do not have an Id column at all. It is now possible to define the primary key manually (it defaults to Id) and to choose between several options:

  • Full Load: Overwrites all data in the destination

  • Incremental: Upserts data in the destination

  • Incremental with incremental fetching: Brings in only data that has changed since the previous execution. The date column used for this comparison can be specified, which allows you to use, for instance, CreatedDate for History objects where LastModifiedDate is not available.

  3. Fetch deleted records

It is now possible to control whether deleted records are included as well.
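To illustrate how incremental fetching works in principle, here is a minimal Python sketch (a simplified illustration, not the component's actual code): the extractor remembers the timestamp of the previous successful run in its state and, on the next run, filters the SOQL query by the chosen date column. The function name and parameters below are hypothetical.

```python
def build_incremental_soql(object_name, columns,
                           date_column="LastModifiedDate", since=None):
    """Compose a SOQL query for a (hypothetical) incremental fetch.

    `since` is the watermark kept in the component state from the
    previous run; None means a full fetch of the object.
    """
    # Build the SELECT clause from the requested columns.
    query = f"SELECT {', '.join(columns)} FROM {object_name}"
    if since is not None:
        # SOQL datetime literals are unquoted ISO 8601 values,
        # e.g. 2021-08-01T00:00:00Z.
        query += f" WHERE {date_column} > {since}"
    return query

# For a History object without LastModifiedDate, point the filter
# at CreatedDate instead:
build_incremental_soql(
    "AccountHistory", ["Id", "Field", "CreatedDate"],
    date_column="CreatedDate", since="2021-08-01T00:00:00Z")
```

Fetching deleted records additionally requires querying through the Salesforce `queryAll` endpoint, since regular queries omit rows where `IsDeleted` is true.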


Migration process

We have released a migration script that helps you migrate any existing configurations directly from within your project.


Migration Behaviour:

  • Encrypted values (credentials) are NOT transferred. Because the component has a new ID, users must set up the credentials in the new configuration manually.

  • Migration transfers the component state, so any incremental configurations will continue from where they left off.

  • The output bucket is kept, so there is no need to change any downstream input mappings.

  • Orchestrations are not updated; this step is left to the user so it can be performed safely.


Steps to migrate:


  1. Each affected configuration will display a migration notice.

  2. Click the “PROCEED TO MIGRATION” button.

  3. You will see a list of the affected configurations and orchestrations.

  4. Click the “Migrate” button.

  5. After a successful migration, you will find all the new configurations in your project.

  6. Replace the credentials.

  7. Inspect the configurations and test them if needed.

  8. Update the affected orchestrations manually, replacing the old configurations with the new ones.

Failing Output Mappings into Storage

2021-08-28 10:25 UTC - We have noticed an increased rate of failed jobs when importing data into Storage. They are failing with the user error "Some columns are missing in the csv file."

2021-08-28 10:50 UTC - We identified the problem and rolled back to the previous version of our Storage.

2021-08-28 11:50 UTC - The failed jobs were related to minor Storage changes released on 2021-08-27 between 8 and 9 AM UTC.

Table imports started after that change could finish with the user error "Some columns are missing in the csv file." when column names in the source file ended with non-alphanumeric characters.

We're sorry for this inconvenience.

Failing creation/restore of Jupyter workspaces

Since 2021-08-17 7:30 UTC, some Jupyter-based workspaces have been failing to create or restore in all regions. We are now rolling back to the previous working version.

2021-08-17 9:35 UTC - We successfully rolled back to the previous working version, and the creation/restore of Jupyter workspaces now works as expected. As the root cause, we identified a mishandled process when scaling up server instances, so only creation/restore operations that triggered an instance scale-up would fail. We are working on a fix, which will be released soon.

Increased rate of API Errors

On 2021-08-15, between 22:40 and 11:10 CET, we experienced an increased rate of Storage API errors. We identified the problem and resolved it. As a consequence, some jobs may have failed to start. We're very sorry for the inconvenience.

Delayed processing of jobs in AWS US stacks

2021-08-02 07:40 UTC - We are investigating component job delays in connection.keboola.com. The next update will follow when new information becomes available, or within an hour.

UPDATE 2021-08-02 08:13 UTC - We have identified the root cause of the overload and added more capacity. The backlog is cleared and new jobs are being processed immediately. Jobs started before the incident might still run longer than usual. We will monitor the situation and keep you posted.

UPDATE 2021-08-02 10:14 UTC - The incident is resolved. All operations are back to normal. We're sorry for the inconvenience.



Corrupted telemetry data

We are currently investigating an issue regarding corrupted data obtained via our Telemetry Data component (keboola.ex-telemetry-data).

We have most probably identified the issue and we're working on a fix.

We are very sorry for any inconvenience this might have caused you.

Next update at 15:00 UTC.

Update: We have modified the component so that it now loads data using full loads only. To ensure that you have the correct telemetry data, all you need to do is run the extractor (or wait for your pipeline to run it). We will re-implement incremental fetching in the following months.


Delayed processing of jobs in Azure North Europe stack

Since 2021-07-22 06:00 UTC, we have been experiencing a higher-than-usual number of jobs in the waiting state. We will monitor the situation and keep you posted.

UPDATE 2021-07-22 07:40 UTC - We replaced an unhealthy instance with a new one and the issue has been resolved. We're sorry for the inconvenience. We are continuing the analysis.

Job delays and unsuccessful job terminations in all Azure stacks

Since 2021-07-20 17:00 UTC, the processing of some jobs may be delayed and job termination requests may be unsuccessful in all Azure stacks. The total number of affected jobs and requests is very small.

This bug was introduced by a network settings change. The change has been reverted and the fix is currently being deployed to all Azure stacks. If you experience any of the mentioned symptoms, please get in touch with our support so we can mitigate the issue faster.

We're very sorry for this inconvenience. 

Delayed processing of jobs in Azure North Europe stack

Since 2021-07-21 07:00 UTC, we have been experiencing a higher-than-usual number of jobs in the waiting state. We will monitor the situation and keep you posted.

UPDATE 2021-07-21 08:00 UTC We replaced an unhealthy instance with a new one and the issue has been resolved. We're sorry for this inconvenience.