On Jun 13, 2016, from 5:50am to 6:20am UTC+2, we encountered DNS issues that may have caused some jobs running at the time to fail. We are sorry for any inconvenience.
We've launched a new extractor for Pingdom.
This extractor fetches the uptime and performance metrics that the Pingdom service collects about your web application. This allows you to examine your application's performance directly and see what effect it may have on your campaigns, sales activities, or other business metrics.
Detailed descriptions of the data structure and a guide to help get you started are available in the KBC User Documentation.
Let's start the week with a recap of the features and improvements introduced last week.
New/Updated Components
- MongoDB extractor
- Prague's stand-up comedian and Node.js developer Radek Tomasek created the FTP/FTPS Writer and added visual configuration to his SFTP/WebDAV Writer
UI Improvements
- The primary key in input mappings is now automatically populated when the destination table already exists
- Performance improvements in the Generic Extractor (and in extractors based on it, e.g. all templated extractors): parsing large JSON responses is now significantly faster
The Geocoding Augmentation Extractor has been rewritten as a Docker component, and the old version will be switched off by the end of June. Until then, we kindly ask its users to migrate their configurations to the new version. Here is how to do it:
- Choose Geocoding v2 from the list of extractors
- Create a new configuration and give it a name as usual
- Add an extraction configuration
- Add one or more input tables to the input mapping. Please note that each table must have exactly one column with locations (or two columns with latitudes and longitudes in the case of reverse geocoding), or you have to map the one (or two) columns in the input mapping.
- Add exactly one table to the output mapping; it will be filled with the results of the geocoding.
- Fill in the parameters configuration. It is in JSON format and must contain the geocoding method, the data provider, and other optional parameters such as an API key or locale; a sketch follows below. See https://github.com/keboola/geocoding-augmentation#configuration for more details.
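To give a concrete idea, here is a minimal sketch of what such a parameters configuration might look like for forward geocoding. The key names and values below are illustrative assumptions only; the configuration reference linked above is authoritative.

```json
{
  "method": "geocode",
  "provider": "google_maps",
  "apiKey": "YOUR_API_KEY",
  "locale": "en"
}
```

For reverse geocoding, the method would change accordingly, with the input mapping carrying the two latitude/longitude columns described above.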
It's my pleasure to announce another addition to Keboola Connection - an FTP/FTPS Writer.
I was asked to extend the SFTP/WebDAV Writer with support for the FTP/FTPS protocols. However, after some discussion with Keboola developers, we decided to make it a separate connector.
The use case is simple: you can upload your data from Keboola Storage to either an FTP or FTPS location. The great thing from my perspective is that this writer supports the new configuration schema, which makes passing the input configuration very convenient; a sketch of what a configuration might look like follows below.
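Just to give a rough idea, a configuration might look something like this sketch. The field names here are illustrative assumptions on my part, not the writer's documented schema; please refer to the documentation linked below for the real one. (The # prefix marks a value that Keboola Connection stores encrypted.)

```json
{
  "host": "ftp.example.com",
  "port": 21,
  "username": "user",
  "#password": "your-password",
  "path": "/upload/data.csv",
  "ssl": true
}
```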
This writer was developed independently by Blue Sky Media. For more information on how to use the writer, please refer to the documentation. If you run into any issues or have further questions, please contact me directly (radek@bluesky.pro).
On Jun 2, 2016 at 3:40pm UTC+2, a new version of Storage API was released containing the following bug:
Incremental loads into Redshift tables with primary keys do not correctly deduplicate data - rows with duplicate primary keys may exist in the table.
We'll be redeploying the original version shortly and then deduplicating all affected tables.
We're sorry for the inconvenience; we'll keep you updated in this post.
UPDATE Jun 3, 2016, 2:30pm UTC+2
The original version has been deployed. We're starting the recovery process; no service outage will be required.
UPDATE Jun 4, 2016, 9:15am UTC+2
The recovery process for all affected tables is finished, and all duplicate records should be removed. We're now investigating the root cause of the issue to prevent similar incidents in the future.
We're experiencing errors in our legacy OAuth component. The affected components are:
- Dropbox Writer
- TDE Writer (using Dropbox Writer)
Some configurations may not run. We're sorry for the inconvenience and are investigating the issue.
UPDATE 10pm UTC+2: The issue has been resolved and everything should be running smoothly again. Please let us know if you still see errors.
The GoodData platform is experiencing problems with Amazon S3, which can cause data loads to fail in some projects. See their status page for more details: https://support.gooddata.com/hc/en-us/articles/220301407-Issues-with-Amazon-S3. Thanks for your patience.
Around 9pm UTC+2 we encountered two system-wide issues.
1) One of our API servers ran out of disk space. Requests handled by this server might have finished with an error. This affected the UI, orchestrations, job listings, and worker jobs using our APIs.
2) AWS encountered increased API error rates. This might have affected all components of Keboola Connection, from the UI to orchestrations.
Both issues are now resolved and all operations have resumed. We're sorry for any inconvenience and thank you for your patience. We're currently going through failed orchestrations and restarting them.
One of our metadata servers was restarted by AWS at 1:55pm UTC+2.
This may have caused some jobs to end with an application error.
We're sorry for the inconvenience and will restart all affected orchestrations.