Maintenance Announcement

On January 17th, 2015 we will perform a scheduled upgrade of our meta-data servers. This will cause a maintenance window from 2:00 pm to 3:00 pm (GMT+1), or 5:00 am to 6:00 am (PST).

During the maintenance you won't be able to access your data. All network connections will be terminated with an "HTTP 503 - down for maintenance" status message.
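
If you have your own scripts calling our APIs during that window, it's enough to treat the 503 response as a signal to back off and retry later. A minimal sketch, assuming a plain GET with the usual Storage API token header:

    import time
    import requests

    def get_with_maintenance_retry(url, token, retries=5, wait=300):
        """Retry a GET request while the API answers HTTP 503 (maintenance)."""
        headers = {"X-StorageApi-Token": token}
        response = requests.get(url, headers=headers)
        for _ in range(retries):
            if response.status_code != 503:
                break
            time.sleep(wait)  # wait out part of the maintenance window
            response = requests.get(url, headers=headers)
        return response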

All running tasks will be monitored by us and restarted in case of any interruption. Orchestrations and running transformations will generally be delayed, but not interrupted. However, feel free to reschedule your Saturday orchestrations to avoid this maintenance window.

GoodData LDM Visualizer

There's now a direct link to the GoodData LDM Visualizer on the model page in GoodData Writer, so in a single click you can compare what is defined in Keboola Connection with what has already been uploaded to GoodData, and see how the model is interpreted.

Transformation Descriptions

We're now showing bucket and transformation descriptions in the UI. Currently there's no way to change a bucket description - it is defined when the bucket is created - but we're working on a way to make them editable. Transformation descriptions can be changed easily in the transformation detail.

Editing in Storage Console

Due to a possible significant inconsistency between the real and estimated number of rows (and table sizes) on the MySQL Storage backend, which could lead to data loss when editing the data sample, we have turned off data sample editing for all IN and OUT MySQL buckets. SYS buckets are fully editable and will always show and edit all available rows.

Redshift works as expected - editing is allowed for all tables that contain fewer than 800 cells (excluding headers).

Job failures

Several projects may have experienced errors when processing extractor jobs. Database, Zendesk, Google Drive and some other extractors were affected. The issue is now resolved and we are investigating its cause.

We have restarted failed or waiting jobs. We're sorry for any inconvenience! 

Transformations EA Preview

We'll be releasing an updated version of the Transformation API (and a correspondingly modified UI) later this week. The changes will include:

  • All transformation and sandbox jobs are asynchronous.
  • Improved reliability, durability and scalability of both the UI and the API.
  • You can monitor all transformation jobs in the Jobs app; processing jobs will show detailed information (a minimal polling sketch follows this list).
  • Running a transformation or a bucket will no longer keep the modal window open; you'll get a link to the job detail instead. Sandbox jobs will keep the window with the details, though.
  • Custom credentials are deprecated. We're providing you with enough power to process your data; if you want to run transformations on your own database servers, please contact support@keboola.com.
  • Running a disabled transformation is disabled. It may sound weird, but until now you were allowed to run a disabled transformation from the transformation detail in the UI. If you want to separate a transformation, move it to a different phase or migrate it to another bucket.
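
To give an idea of what working with asynchronous jobs looks like, here is a minimal polling sketch. The job-detail URL shape and the status values are illustrative assumptions, not the documented API - the job detail in the Jobs app shows the real data.

    import time
    import requests

    def wait_for_job(job_url, token, poll_interval=5):
        """Poll an asynchronous job until it reaches a terminal status.

        The URL shape and the 'status' values are assumptions for illustration.
        """
        while True:
            job = requests.get(job_url, headers={"X-StorageApi-Token": token}).json()
            if job.get("status") in ("success", "error"):
                return job
            time.sleep(poll_interval)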

Testing period

If you want to make sure everything works fine in your project, you can try it out now. The new UI is available in the Applications app and is connected to the new version of the API.

You can try running certain transformations, creating sandboxes, etc. If you want to try the new API within an orchestration, you need to manually change the component name in the configuration table (sys.c-orchestrator.*) from transformation to transformation-new.

In the Orchestrations UI the task name will show as Transformations (EA Preview).

When your testing is done, please reset the value back to transformation. 
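
If you prefer to script the switch, the change boils down to rewriting the component value in the exported orchestrator configuration table. A rough sketch, assuming the table was exported as CSV and the column is called 'component' (that column name is an assumption - check your own sys.c-orchestrator.* table):

    import csv

    def switch_component(in_path, out_path,
                         old_value="transformation", new_value="transformation-new"):
        """Rewrite orchestrator task rows from one component name to another.

        Swap old_value and new_value to revert after testing.
        """
        with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                if row.get("component") == old_value:
                    row["component"] = new_value
                writer.writerow(row)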

Stay tuned for the release date and please report all bugs and concerns to support@keboola.com.


Transformation Events

As part of the ongoing Transformation API overhaul, we've changed transformation events. We tried to keep it simple, so there's one event for each:

  • Engine startup
  • Start of a phase
  • Database cleanup (happens at the beginning and end of a phase)
  • Input mapping
  • Input mapping that takes longer than 120s (configurable)
  • Transformation (as a whole, not a query in the transformation)
  • Query that takes longer than 120s (configurable)
  • Output mapping
  • Engine shutdown (success or error)

There are no longer any START or END events, and the engine produces fewer events overall. Further activity can be found in the related Storage events or jobs (e.g. table imports and exports).

Generic REST API Extractor

We've just developed a new extractor that allows you to export data from various APIs just by setting their URL and a few other parameters in a configuration bucket.

What it can do:

  • Export data from any REST API
  • Authenticate using HTTP Basic authentication
  • Authenticate using a generated signature in a query string
  • Scroll through result pages using offset or page number parameters
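
To illustrate what the extractor does under the hood for the Basic auth and offset pagination cases above, here's a rough, self-contained sketch. The endpoint, the offset/limit parameter names and the page size are made up for the example - a real API will define its own.

    import requests

    def fetch_all(base_url, endpoint, user, password, limit=100):
        """Page through a REST endpoint using offset/limit and HTTP Basic auth.

        Assumes the endpoint returns a JSON array of records per page.
        """
        rows, offset = [], 0
        while True:
            resp = requests.get(
                base_url + endpoint,
                params={"offset": offset, "limit": limit},
                auth=(user, password),  # HTTP Basic authentication
            )
            resp.raise_for_status()
            page = resp.json()
            if not page:
                break
            rows.extend(page)
            offset += limit
        return rows

    # Example (hypothetical API):
    # data = fetch_all("http://api.example.com", "/endpoint", "user", "secret")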

What it can't do:

  • Paginate using a value within API's response (eg. "next_page": "http://api.example.com/endpoint/nextSetOfResults")
    • This functionality will be enabled soon(TM)!
  • Authenticate using OAuth
    • This, however, won't be enabled anytime soon due to the nature of OAuth: the application has to be registered with the API provider, so the API can't be used without developer interaction
  • Simple configuration!
    • This extractor is designed to be as universal as possible, and we're working on an easy-to-understand user interface that won't take away from the extractor's abilities, but also won't require bleeding from your eyes, ears or nose to set it up!

How does it work?

You create a bucket, which defines which API the extractor should connect to and how, along with a nickname for the API. Then you can create a table within that bucket, and its name can be used as the config parameter.

Documentation:

https://developers.keboola.com/extend/generic-extractor/

See the attached images for an example (how to set up the Conductor API).

Feel free to contact support@keboola.com for help with configuring any API, to ask whether some API is (or could be) supported, or with any issues you encounter getting the extractor up and running!