Failing GoodData Writer uploads

After the last GoodData maintenance, some uploads to GoodData using our writer started failing.

Running the LoadData task again manually succeeds. If you're running multiple tasks in one job, you can disable the failing task so all the other tasks get through. We have discussed the error with GoodData support, and this is what they had to say:

Our R&D engineers have confirmed that we introduced the issue in release R119. We have prepared a fix, and the internal discussion about its delivery has already been triggered. Unfortunately, the fix requires a short (up to 10 minutes) outage of the GoodData upload subsystems, so our responsible managers are trying to find the best window for its delivery.

I can also confirm that temporarily switching OFF gzip in the command would serve as a workaround, but I also understand the concerns you mentioned that keep you from using it.

I'll share further information with you as soon as I have it.

Currently we cannot turn off gzipped transfers for all projects, as that would slow down all uploads. Let's wait to see if GoodData can release the fix during the weekend; if not, we'll temporarily disable gzip for the affected projects early next week. Please bear with us, and feel free to drop us a line at support@keboola.com if you're concerned about your project.
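For context on the trade-off: gzip typically shrinks repetitive CSV payloads dramatically, which is why turning it off makes uploads transfer far more bytes. A minimal Python sketch, using made-up sample data (nothing here reflects the actual writer internals):

```python
import gzip

# Illustrative only: the CSV content below is invented for demonstration.
csv_data = ("id,value\n" + "\n".join(f"{i},row-{i}" for i in range(10000))).encode("utf-8")
compressed = gzip.compress(csv_data)

# Repetitive CSV data compresses to a small fraction of its original size,
# so an uncompressed transfer moves many times more bytes over the wire.
print(f"raw: {len(csv_data)} bytes, gzipped: {len(compressed)} bytes")
```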

UPDATE 4:08pm CEST: GoodData has announced maintenance for August 20th, 2016. This bug should be resolved during that maintenance.

UPDATE Monday, August 22, 7:30am CEST: We're still occasionally experiencing this issue.

UPDATE Monday, August 22, 2:40pm CEST: We have deployed a workaround for the bug, so GoodData data loads should now work as expected.

We're sorry for this inconvenience.

Week in Review -- August 15, 2016

Since our last update, here's what happened in Keboola Connection:

Bugfixes, minor changes

  • Facebook Ads extractor was updated to API v2.7

CSV Import

Are you tired of loading CSV files into a table over and over again? Do you sometimes forget to set the incremental flag, or specify the wrong delimiter? Well, those days are over! Now there's a simple new way to load CSV files.

The new CSV Import allows you to create and save a configuration and just select the file next time. You'll find it under Extractors in your project and you can read a bit more in our documentation.


Job Failures

Today, July 12th, 2016, between 10:03 and 10:07 UTC+2, our Elastic cluster was unavailable.

Some jobs failed; we're restarting all affected orchestrations. We're sorry for this inconvenience.



Week in Review -- June 6th, 2016

Let's start the week with a summary of the features and improvements introduced last week.

New/Updated Components

  • MongoDB extractor
  • Prague stand-up comedian and node.js developer Radek Tomasek created an FTP/FTPS Writer and added visual configuration for his SFTP/WebDAV writer

UI Improvements

  • The primary key in input mappings is now automatically populated when the destination table already exists
  • Performance improvements in the Generic Extractor (and in extractors based on it, e.g. all templated extractors): parsing large JSON responses is significantly faster


Redshift Incremental Load Issues (duplicate rows)

On Jun 2, 2016, at 3:40pm UTC+2, a new version of Storage API was released containing the following bug.

Incremental loads into Redshift tables with primary keys do not correctly deduplicate data: rows with duplicate primary keys may remain in the table.

We'll redeploy the original version shortly and then deduplicate all affected tables.
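For illustration, deduplication here means keeping a single row per primary key, with the later (incrementally loaded) row winning. A minimal Python sketch with hypothetical column names; the actual recovery runs inside Storage API and differs from this:

```python
# Illustrative only: deduplicate rows by primary key, keeping the last
# occurrence of each key (later incremental loads overwrite earlier rows).
# The column names ("id", "value") are hypothetical.

def dedup_by_primary_key(rows, pk):
    """Return rows with duplicate primary keys removed, keeping the
    last occurrence of each key."""
    latest = {}
    for row in rows:
        latest[row[pk]] = row  # a later row with the same key replaces the earlier one
    return list(latest.values())

rows = [
    {"id": 1, "value": "a"},
    {"id": 2, "value": "b"},
    {"id": 1, "value": "c"},  # duplicate primary key from a later load
]

deduped = dedup_by_primary_key(rows, "id")
# keeps {"id": 1, "value": "c"} and {"id": 2, "value": "b"}
```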

We're sorry for this inconvenience, we'll keep you updated in this post. 

UPDATE Jun 3, 2016, 2:30pm UTC+2

The original version has been deployed. We're starting the recovery process; no service outage will be required.

UPDATE Jun 4, 2016, 9:15am UTC+2

The recovery process for all affected tables has finished, and all duplicate records should be removed. We're now investigating the root cause of the issue to prevent similar incidents in the future.

OAuth issues (Dropbox Writer, TDE Writer)

We're experiencing errors in our legacy OAuth component. The affected components are:

  • Dropbox Writer
  • TDE Writer (which uses Dropbox Writer)

Some configurations may not run. We're sorry for the inconvenience and are investigating the issue.

UPDATE 10pm UTC+2: The issue has been resolved and everything should be running smoothly again. Please let us know if you still see errors.



Incidents on June 1st 2016

Around 9pm UTC+2 we encountered two system-wide issues.

1) One of our API servers ran out of disk space. Requests handled by this server may have finished with an error. This affected the UI, orchestrations, job listings, and worker jobs using our APIs.

2) AWS encountered increased API error rates. This may have affected all components of Keboola Connection, from the UI to orchestrations.

Both issues are now resolved and all operations have resumed. We're currently going through failed orchestrations and restarting them. We're sorry for any inconvenience, and thank you for your patience.

Failed jobs

One of our metadata servers was restarted by AWS at 1:55pm UTC+2.

This may have caused some jobs to end with an application error.

We're sorry for this inconvenience and we'll restart all affected orchestrations.