GoodData Writer issues

We have fixed multiple errors in the handling of Saturday's GoodData maintenance, which could cause some Writer jobs to fail. All problems have been resolved and shouldn't appear again. We apologize for any inconvenience.

Docker bundle enhancements

We're excited to announce new features in the Docker bundle.

For those who don't know, the Docker bundle is a component that allows anyone to run apps encapsulated in Docker inside Keboola Connection.

Streaming Logs

If your app writes to stdout or stderr, these logs are immediately forwarded to Storage API Events, so you can report important events from your app live.
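As an illustration, a minimal app producing streamed events might look like the Python sketch below; the step names are made up and serve only to show that plain stdout/stderr output is what gets forwarded:

    import sys
    import time

    # Minimal sketch of an app running in the Docker bundle: anything it writes to
    # stdout or stderr ends up as a Storage API Event. The steps below are made up.
    steps = ["downloading input tables", "transforming data", "writing results"]

    for step in steps:
        print(f"Starting: {step}", flush=True)  # stdout -> informational event
        time.sleep(1)                           # stand-in for real work

    print("All steps finished", flush=True)
    sys.stderr.write("Warning: 3 rows were skipped\n")  # stderr is forwarded as well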

More about streamed logs in the documentation.

Incremental File Processing

In a scenario where you're processing an unknown number of files on a regular basis, incremental file processing comes in handy. Successfully processed files get tagged and are excluded from the next run.
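Conceptually, the cycle works like the sketch below; the in-memory file list and the "processed" tag name are illustrative stand-ins, not the actual Storage API objects or default tag:

    # Conceptual sketch of the tag-and-exclude cycle used by incremental file processing.
    files = [
        {"id": 1, "name": "orders_1.csv", "tags": []},
        {"id": 2, "name": "orders_2.csv", "tags": ["processed"]},  # handled in a previous run
        {"id": 3, "name": "orders_3.csv", "tags": []},
    ]

    def run_once(files, tag="processed"):
        """Process only files that do not carry the tag yet, then tag them."""
        todo = [f for f in files if tag not in f["tags"]]
        for f in todo:
            print("processing", f["name"])  # stand-in for the real work
            f["tags"].append(tag)           # mark as done so the next run skips it
        return len(todo)

    print(run_once(files), "files processed")  # first run: 2
    print(run_once(files), "files processed")  # next run: 0 (everything already tagged)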

More about incremental file processing in the documentation.

Development and troubleshooting API calls

We added sandbox, input and dry-run API calls to the Docker bundle. They are similar to their counterparts in the Transformation API and allow you to

  • prepare the data and a serialized configuration file for your application before you start developing the app, so you don't have to prepare the folder structure manually (sandbox)
  • see exactly what data comes into your application (input)
  • see the data input and output of your app (dry-run)

The data is compressed in a ZIP file and stored in File Uploads in the given project.
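A rough sketch of how these calls might be invoked over HTTP is below; the base URL, component id, header and payload shape are assumptions for illustration only, so please treat the linked documentation as the authoritative reference:

    import requests

    # Hypothetical sketch -- the exact endpoint paths and payload of the Docker bundle
    # API may differ; the documentation linked below has the real calls.
    BASE = "https://syrup.keboola.com/docker"       # assumed base URL
    COMPONENT = "my-docker-app"                     # placeholder component id
    HEADERS = {"X-StorageApi-Token": "YOUR_TOKEN"}  # Storage API token of your project

    config = {
        "configData": {
            "storage": {"input": {"tables": [{"source": "in.c-main.customers"}]}},
            "parameters": {"foo": "bar"},
        }
    }

    # Each call packages data + configuration into a ZIP file in File Uploads.
    for call in ("sandbox", "input", "dry-run"):
        resp = requests.post(f"{BASE}/{COMPONENT}/{call}", json=config, headers=HEADERS)
        print(call, resp.status_code)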

More about these API calls in the documentation.

Want to know more, or interested in developing your own apps in KBC? Read more in the documentation or get in touch at support@keboola.com.

GoodData Writer update

GoodData Writer has been updated to use the asynchronous job mechanism of Syrup instead of its own. This brings several changes. All jobs are now visible in the Jobs section of Keboola Connection, among the jobs of other extractors and writers. The logic of jobs clustered into batches had to be changed: a former batch now corresponds to a single job, and the former jobs of a batch now correspond to tasks of that job.
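To make the new hierarchy concrete, here is an illustrative, made-up shape of one job and its tasks (the real Syrup job payload will differ):

    # Illustrative shape only -- field names are invented to show the batch -> job
    # and job -> task mapping described above, not the actual Syrup job payload.
    job = {
        "id": 123456,                    # what used to be a batch is now one job
        "component": "gooddata-writer",
        "status": "success",
        "tasks": [                       # what used to be individual jobs are now tasks
            {"name": "uploadTable", "table": "out.c-main.orders", "status": "success"},
            {"name": "uploadTable", "table": "out.c-main.users", "status": "success"},
            {"name": "executeReports", "status": "success"},
        ],
    }

    for task in job["tasks"]:
        print(job["id"], task["name"], task["status"])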


All scheduled Orchestration tasks should work without change. If you call the GoodData Writer API in your own application, either directly or through our PHP client, you shouldn't notice any problems, but we recommend upgrading the client as soon as possible. Backwards compatibility is only temporary and will be removed within a few weeks. A new version of the client will be released today.

The switch unfortunately affected some running jobs; we will review them and restart them if necessary as soon as possible.


Storage Redshift dependencies

Redshift tables in Storage API cannot be deleted until their dependencies are removed. Dependencies include any Redshift alias in which the source table is involved, or any view in running transformations or sandboxes.

Deleting a table that had any of these dependencies used to trigger an internal error. This bug is now fixed, and the dependencies are listed in the error response message.



Storage table rows counts and size estimates

MySQL storage tables were displaying inaccurate row counts and size estimates. Tables loaded by full load often showed zero rows and a 32 KB size even if the table held many GB of data.

This bug is now fixed, and these estimates are updated every hour. The values are still not exact, however, as they are based on approximations provided by MySQL and may differ from the actual values by as much as 40 to 50%.

This bug does not affect Redshift storage; row counts and sizes are 100% accurate for Redshift and always have been.

SSL security improvement

Please review the entire post carefully to determine whether your use of the services will be affected.

As of 12:00 AM PDT on April 30, 2015, we will discontinue support of the RC4 cipher for securing connections to connection.keboola.com.

Requests made with the RC4 cipher will fail once we disable its support in Keboola Connection. To avoid interrupted access, you must update any client software (or ask your clients to update their software) that uses the RC4 cipher to connect to our API services.
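If you want to verify whether a connection still negotiates RC4, a quick check along these lines can help; this is a sketch using Python's standard ssl module and assumes your local OpenSSL build still offers RC4 suites at all:

    import socket
    import ssl

    HOST = "connection.keboola.com"  # endpoint mentioned above

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        ctx.set_ciphers("RC4")  # restrict the handshake to RC4 suites only
    except ssl.SSLError:
        print("This OpenSSL build no longer offers RC4 at all.")
    else:
        try:
            with socket.create_connection((HOST, 443), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                    print("Handshake succeeded with", tls.cipher())
        except ssl.SSLError:
            print("Server refused RC4 -- clients must use a modern cipher.")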

Security improvements

We're announcing a few security improvements:

  • All our client-facing servers use EV security certificates (what is EV?).
  • All our servers have disks encrypted using Amazon AWS KMS.
  • All our Elasticsearch clusters encrypt all events.
  • Amazon Redshift backends are encrypted by default. Existing customers can request to be moved to encrypted backends.
  • Storage API employs native Amazon S3 file encryption by default.
  • All our Multi-AZ RDS metadata servers have encrypted data by default.
  • New Amazon RDS servers are encrypted by default. Existing customers can request to be moved to encrypted backends.

Long story short: if you're connecting to Keboola Connection, the client-facing servers are covered by strong SSL encryption with the identity displayed in your browser's address bar, and all client data in Keboola Connection is encrypted by default.