Pingdom Extractor

We've launched a new extractor for Pingdom.

This extractor fetches the data collected by the Pingdom service about your web application's uptime and performance. This lets you directly examine your application's performance and the effect it may have on your campaigns, sales activities, or other business metrics.

Detailed descriptions of the data structure and a guide to help get you started are available in the KBC User Documentation.



Week in Review -- June 6th, 2016

Let's start the week with a recap of the features and improvements introduced last week.

New/Updated Components

  • MongoDB extractor
  • Prague's stand-up comedian and Node.js developer Radek Tomasek created the FTP/FTPS Writer and added visual configuration to his SFTP/WebDAV Writer

UI Improvements

  • Primary key in input mappings is automatically populated when the destination table already exists
  • Performance improvements in the generic extractor (and in extractors based on it, e.g. all templated extractors) - parsing large JSON responses is significantly faster


New version of Geocoding Augmentation Extractor

The Geocoding Augmentation Extractor has been rewritten as a Docker component, and the old version will be switched off by the end of June. Until then, its users are kindly requested to migrate their configurations to the new version. Here is how to do it:

  1. Choose Geocoding v2 from the list of extractors
  2. Create a new configuration and name it as usual
  3. Add an extraction configuration
    1. Add one or more input tables to the input mapping. Please note that each table must have exactly one column with locations (or two columns with latitudes and longitudes in the case of reverse geocoding); otherwise, you have to map the one (or two) columns in the input mapping.
    2. Add exactly one table to the output mapping; it will be filled with the results of the geocoding.
    3. Fill in the parameters configuration. It is in JSON format and must contain the geocoding method, the data provider and other optional parameters such as an API key or locale; a sketch follows below. See https://github.com/keboola/geocoding-augmentation#configuration for more details
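
To make step 3.3 concrete, here is a minimal sketch of what the parameters JSON might look like. The key names (method, provider, apiKey, locale) and their allowed values are illustrative assumptions - please check them against the README linked above:

  {
    "method": "geocode",
    "provider": "google_maps",
    "apiKey": "YOUR_API_KEY",
    "locale": "en"
  }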

FTP/FTPS Writer

It's my pleasure to announce another addition to Keboola Connection - an FTP/FTPS Writer.

I was asked to extend the SFTP/WebDAV Writer and add support for the FTP/FTPS protocols. However, after some discussion with the Keboola developers, we decided to build a separate connector instead.

The use case is simple: you can upload your data from Keboola Storage to either an FTP or FTPS location. The great thing from my perspective is that this writer supports the new configuration schema, which makes passing the input configuration very convenient.
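
Purely as an illustration of that use case, a writer like this typically needs connection details and a target path in its configuration. The key names below (host, port, username, #password, path, secure) are illustrative placeholders rather than the exact schema - the documentation referenced below has the authoritative parameter names:

  {
    "host": "ftp.example.com",
    "port": 21,
    "username": "deploy",
    "#password": "YOUR_PASSWORD",
    "path": "/uploads",
    "secure": true
  }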

This writer is developed independently by Blue Sky Media. For more information on how to use the writer, please refer to the documentation. If you run into any issues or have further questions, please contact me directly (radek@bluesky.pro).

Redshift Incremental Load Issues (duplicate rows)

On Jun 2, 2016, at 3:40pm UTC+2, a new version of Storage API was released containing the following bug:

Incremental loads into Redshift tables with primary keys do not correctly deduplicate data - rows with duplicate primary keys may exist in the table.

We'll be deploying the original version shortly, and then we'll deduplicate all affected tables.

We're sorry for the inconvenience; we'll keep you updated in this post.

UPDATE Jun 3, 2016, 2:30pm UTC+2

The original version has been deployed. We're starting the recovery process; no service outage will be required.

UPDATE Jun 4, 2016, 9:15am UTC+2

The recovery process for all affected tables has finished, and all duplicate records should be removed. We're now investigating the root cause of the issue to prevent similar incidents in the future.

OAuth issues (Dropbox Writer, TDE Writer)

We're experiencing errors in our legacy OAuth component. The affected components are:

  • Dropbox Writer
  • TDE Writer (using Dropbox Writer)

Some configurations may fail to run. We're sorry for the inconvenience and are investigating the issue.

UPDATE 10pm UTC+2: The issue has been resolved and everything should be running smoothly again. Please let us know if you still see errors.



Incidents on June 1st 2016

Around 9pm UTC+2, we encountered two system-wide issues.

1) One of our API servers ran out of disk space. Requests running on this server might have finished with an error. This affected the UI, orchestrations, job listings, and worker jobs using our APIs.

2) AWS encountered increased API error rates. This might have affected all components of Keboola Connection, from the UI to orchestrations.

Both issues are now resolved and all operations have resumed. We're sorry for any inconvenience and thank you for your patience. We're currently going through failed orchestrations and restarting them.

Failed jobs

One of our metadata servers was restarted by AWS at 1:55pm UTC+2. 

This may have caused some jobs to end with an application error.

We're sorry for the inconvenience, and we'll restart all affected orchestrations.

MongoDB Extractor

There's a new extractor available in our group of Docker extractors - MongoDB Extractor.

This extractor allows you to fetch data from your MongoDB databases. By specifying a collection, query, sort, limit, and mapping, you can extract exactly the parts of your data you want.

The MongoDB extractor is also very similar to our new set of standard database extractors, so chances are you're already familiar with some parts of its UI.

Main features of the MongoDB extractor:

  • you can specify a query, sort, and limit to filter your data (as in the mongoexport command)
  • each export has to be named, which helps you identify your exports
  • there's a mapping section (actually the most important part) through which you specify how your data will be processed, how it will be split into multiple tables, and which columns will be exported (so you can join them with ease)
  • as always, there's an option to have multiple exports in one configuration

Here's a sneak peek of a sample configuration.
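
The sketch below is purely illustrative rather than taken from a real project - the collection and field names (orders, customer.name) are made up, and the exact mapping syntax is described in the guide mentioned below:

  {
    "exports": [
      {
        "name": "shipped-orders",
        "collection": "orders",
        "query": "{\"status\": \"shipped\"}",
        "sort": "{\"createdAt\": -1}",
        "limit": 100,
        "mapping": {
          "_id.$oid": "id",
          "status": "status",
          "customer.name": "customer_name"
        }
      }
    ]
  }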

For more information about its configuration, follow the guide at our help site.