Retrying Orchestration Jobs and Warning Notifications

We've heard your cries about how difficult it was to re-run failed jobs in the Orchestrator, so we did something about it:

You can now retry any failed job in your orchestration's job queue. On the (failed) job's detail page you'll see a "Job Retry" button in the upper right corner:

Just click on it and press "run" to re-run failed tasks:

If you need to run just a few tasks (failed or not), click on "Choose orchestration tasks to run" to show the task selection list. Select the ones you want by clicking the grey buttons in the middle of the window to activate or deactivate the desired tasks.

The run button creates new tasks, so everything runs in the original environment, under the same circumstances and with the same job parameters. Just note that the data underlying the configuration may have been modified by a different process (i.e. someone else working with it) between the last time the job ran and your re-run.

Notifications

If some tasks are prone to fail often (e.g. wrong credentials in a client's Google Analytics account), you'll want to activate the "Continue on Failure" flag for those "unstable" tasks. If it is activated, the Orchestrator will not send an error notification when that specific task fails. Instead, it will send a message to our new "Warnings" notification channel. Go ahead and subscribe to receive emails about all Warnings:
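If you manage your orchestrations through the Orchestrator API rather than the UI, a task with the flag switched on might look roughly like the sketch below. This assumes the UI label maps to a continueOnFailure boolean on the task definition; the other fields and values are illustrative only:

    {
      "component": "ex-google-analytics",
      "action": "run",
      "continueOnFailure": true,
      "active": true
    }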

Manual File Uploads Fixed

We've made some changes to file uploads.

Previously, manual file uploads behaved a little unreliably: occasionally a file appeared to have been uploaded when in reality it had not been. This has now been fixed, so when you see a file listed, you can be sure that it is really there.

Also, all uploaded files are now immediately encrypted for storage.

Adform Extractor

We've launched a new extractor for Adform. You can start using it right away -- the extractor's interface will guide you through configuration.

With the Adform extractor we are introducing the concept of configuration templates. Templates are predefined common configurations that help you set up the extractor quickly without going through tons of settings. Templates also reduce duplicated work and support knowledge sharing. Other extractors will gain template support soon, and we are also working on a mechanism for publishing templates.

Setting up the extractor from a predefined template:


You can then tune the extractor created from the template:

Feel free to use this extractor, and if you find any issues or have any questions or suggestions, let us know at support@keboola.com.

iTunes Connect Extractor

iTunes Connect helps manage the content sold on the Apple iTunes Store, iBooks Store and App Store. If you work with content for Apple devices, basic analytics are available in the iTunes Connect web application, where you can track the standard information. However, if you need more detailed information or want to use the data in a deeper context (e.g. to make a mashup with other data sources), it can be handy to use Keboola Connection, as we needed at DigitalAirways.tv, and that was the main reason for writing this extension.

The iTunes Connect Extractor is based on Apple's Autoingestion tool, written in Node.js and deployed with Docker.

For a successful login you need to pass the iTunes Connect username, password and vendor ID assigned to your iTunes account. How to pass these credentials and other parameters is described in the GitHub repository. Apple's official guide is also handy for a deeper understanding of all the parameters.

The limitation of the current version is that it is only possible to extract Sales data (with all available fields, also described in the documentation in the GitHub repository) and the configuration has to be passed as JSON. You can download data for a specified date range or use daily increments. The next major version will add functionality for downloading Earnings data and a proper user interface.
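For illustration only, such a configuration might look roughly like the sketch below. The parameter names here are hypothetical; the authoritative names (and the full list of Sales fields) are documented in the GitHub repository:

    {
      "username": "itunes-connect-user@example.com",
      "password": "***",
      "vendorId": "80012345",
      "dateFrom": "2015-06-01",
      "dateTo": "2015-06-30"
    }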

Feel free to use the iTunes Connect Extractor, and if you find any issues or have any questions or suggestions, don't hesitate to contact me at radek@digitalairways.tv.

Generic Extractor update

We have fixed an error in handling recursive calls in the new Generic extractor.

If you had a working configuration that uses recursion, i.e. the children parameter of a job (see https://github.com/keboola/generic-extractor#jobs), the format of the result data and the parent ID column might have changed, which could result in a failed import to Storage API.
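For reference, a recursive configuration is one where a job nests child jobs roughly as in the sketch below; the endpoints, field names and placeholder are made up for illustration, and the linked README describes the authoritative format. The child endpoint is called once for each record returned by the parent job, with the placeholder filled from the parent record, and each child row carries a generated parent ID column pointing back to that record:

    {
      "jobs": [
        {
          "endpoint": "users",
          "children": [
            {
              "endpoint": "users/{user_id}/orders",
              "dataField": "items",
              "placeholders": {
                "user_id": "id"
              }
            }
          ]
        }
      ]
    }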

Please let us know if such an error affects you and we'll take care of it!

Dropbox Writer

We've added Dropbox to our list of Writers. 

You can now easily upload any of your output (out.) tables to your Dropbox account: use the Writers menu in your project, add Dropbox and authorize Keboola Connection to write to your Dropbox folder. Once you're done with that, you can tick any of your output tables and they will be pushed to your Dropbox whenever you run the Writer, or you can just click the upload button next to a table in the list to do a one-time upload.


Loading binary files in R transformations

If you want to load some data in your R transformation, but the data can't be stored in a table, you can now use the Saved Files feature. It allows you to specify multiple tags, and for each tag the engine will download the latest file stored in File Uploads with that tag. If no file is found, the transformation will fail. The files are stored in /data/in/user/{tag} and the manifest files in /data/in/user/{tag}.manifest.

This comes in extremely handy when you externally pre-generate binary data for a transformation (a model, bucketing criteria). You just upload the file to File Uploads and assign it a tag, which you then use in the transformation.
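As a minimal sketch, assuming the binary file was produced with saveRDS() and uploaded to File Uploads with the tag "model" (the tag name and the jsonlite package are assumptions for this example, not requirements of the feature):

    # The engine has already downloaded the latest file tagged "model"
    # to /data/in/user/model before the transformation script starts.
    model <- readRDS("/data/in/user/model")

    # The manifest next to it is JSON metadata about the downloaded file;
    # jsonlite is assumed to be available in the transformation environment.
    manifest <- jsonlite::fromJSON("/data/in/user/model.manifest")
    print(manifest)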