New version of AdWords Extractor

We have just released a new version of AdWords Extractor. It works with AdWords API v201802 (see the Release notes).

The previous version of the extractor is deprecated. You can use our migration tool to migrate your AWQL queries; however, you will have to reauthorize the extractor and grant it access to your AdWords data again. The previous version uses AdWords API v201710, which will be switched off on 11 July 2018.

Week in Review -- March 19, 2018

New Components

Asana Extractor

We’re happy to welcome the Asana Extractor to our family. It can extract your projects and tasks from Asana, an application designed to help teams track their work. This component was developed by Leo Chan.

Thoughtspot Writer

We're likewise delighted to announce a new writer for Thoughtspot, now available for public use. Thoughtspot is a "search and AI-driven analytics platform".

DynamoDB Extractor

We also released a beta version of the DynamoDB extractor. It does not have any UI yet, and has to be configured via JSON. If you are feeling adventurous, please give it a try and let us know how it goes.

Marketing Miner Extractor

Last but not least, we have a new extractor for Marketing Miner that allows you to fetch your project's rank tracking data.

New Features

  • The project API Tokens section now shows when a token was refreshed.

Minor Improvements

  • We've modified the storage job polling to reduce component job run times. The greatest speedups will be observable in small to medium-sized data loads.
  • Artificial limits were removed from CSV file import. Previously, the upload had to complete within 10 minutes; now it's left to the discretion of your web browser. Please note that large files should still be uploaded through the API.

  • Further improvements to Output mapping. The destination bucket is now prefilled from the transformation name.

Fixes

  • The MSSQL extractor was updated to correctly handle databases with case-sensitive collations.

  • The Email Attachments extractor now supports incremental loads and addresses in angle brackets, e.g. `Joe <email@example.com>`.

  • Developer portal vendors can now approve requests to join via the request email.



New S3 Extractor

This one took us a while, but we believe it's worth it. We carefully gathered feedback and made the most commonly used features accessible through a new streamlined UI. And there's even more under the hood.

The original AWS S3 extractor was renamed to Simple AWS S3. It stays fully supported and is not being deprecated. There's no need to migrate your configurations.

There are several major differences between the original and the new extractor. The new AWS S3 extractor

  • can download multiple files/tables using a single set of credentials.
  • fully supports incremental loads.
  • is more flexible.

The UI of the new extractor supports many features, but the extractor is not limited by its UI: it is the first component that openly supports processors. Opening the JSON editor (aka Power User Mode) opens up the configuration to endless possibilities. The extractor itself does only a simple job – it downloads a set of files from S3. All other jobs (decompression, CSV fixing, setting the manifest file, etc.) are delegated to processors. You can order and configure the processors so that they handle the files as required. You can even develop your own processor if you're missing something. We're fully aware that this is not an easy concept to grasp, but it's intended for advanced users. Not advanced? Use the UI.
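To make the idea concrete, here is a toy sketch of processor chaining in Python. The processor names and file names below are hypothetical and do not reflect Keboola's actual processor API; the point is only that each configured processor receives the previous step's output files and transforms them in order.

```python
# Conceptual sketch only -- these processors are hypothetical stand-ins.

def decompress(files):
    """Hypothetical processor: unpack .gz archives into plain files."""
    return [f[:-3] if f.endswith(".gz") else f for f in files]

def fix_csv(files):
    """Hypothetical processor: normalize file names to .csv."""
    return [f if f.endswith(".csv") else f + ".csv" for f in files]

def run_pipeline(downloaded_files, processors):
    """Apply each configured processor to the output of the previous one."""
    files = downloaded_files
    for processor in processors:
        files = processor(files)
    return files

result = run_pipeline(["data1.csv.gz", "data2"], [decompress, fix_csv])
print(result)  # ['data1.csv', 'data2.csv']
```

Reordering the `processors` list changes the order in which the files are handled, which is exactly the kind of control the JSON editor gives you.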

The list of available processors will be kept and updated in the Developer Portal list of components. A full description of the extractor is available in our documentation.

This brings us one step closer to replacing the legacy Restbox. The HTTP extractor will follow shortly.

Week in Review -- March 5, 2018

Improvements

  • Schema is no longer a required connection parameter in the Snowflake extractor. If not set, the table selector allows you to select tables from the whole database.
  • The Snowflake extractor now supports importing these semi-structured data types: `VARIANT`, `OBJECT`, and `ARRAY`.

  • Updated two-factor authentication in the Keboola Developer Portal. SMS authentication is now deprecated; all new users will have to use either the Google Authenticator or Duo Mobile app.
  • The MySQL extractor now has an option to enable compression of data sent over the network.
  • Enhanced the Output Mapping selector.
  • R in Sandbox and Transformations has been updated to 3.4.3, and the Tidyverse package is now installed by default.

Bug fixes

  • Data Takeout was randomly failing when backing up your data to S3.
  • The task editor in Orchestrator produced errors when an orchestration had dozens of configured tasks.
  • In the Twitter extractor template, if a user mentioned your account, the details of that user's account weren't downloaded. Edit and save an existing configuration to remedy this issue.

New Email Attachments Extractor

There’s a new version of the Email Attachments extractor (previously known as the Pigeon extractor) available in the Keboola Connection Extractors tab. It imports CSV files into Storage by receiving them as attachments sent to a generated email address.

The email address for receiving CSV attachments is generated automatically, and the new extractor has a fresh, simpler UI.
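As an illustration, here is how sending a CSV attachment might look with Python's standard library. This is a minimal sketch: the recipient address below is a made-up placeholder (use the address generated in your extractor configuration), and the mail server is whatever your own email setup provides.

```python
from email.message import EmailMessage

def build_import_message(sender, generated_address, csv_bytes, filename):
    """Build an email message with a CSV file attached."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = generated_address
    msg["Subject"] = "Data import: " + filename
    # Attach the CSV data; it will be base64-encoded in transit.
    msg.add_attachment(csv_bytes, maintype="text", subtype="csv",
                       filename=filename)
    return msg

# "1234-abcd@import.example.com" is a hypothetical placeholder address.
msg = build_import_message("me@example.com",
                           "1234-abcd@import.example.com",
                           b"id,value\n1,42\n", "data.csv")

# To actually deliver the message, send it through your own mail server, e.g.:
#   import smtplib
#   with smtplib.SMTP("smtp.example.com") as server:
#       server.send_message(msg)
```

Any mail client that can attach a file works just as well; the extractor only cares that the attachment arrives at the generated address.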

The old version is deprecated and will be discontinued on April 6. Please migrate to the new version in the upcoming weeks. There is no automatic migration script because you need to generate new email addresses, but the switch should be very easy.

Farewell to Custom Science

Yes, we are going to deprecate the Custom Science application. We introduced it more than two years ago as an alternative to components. Unlike components, it was easy to implement and use. However, we've made a lot of progress in simplifying component development.

The latest additions are a simplified component creation workflow, a component generator tool, and rewritten developer documentation. See a 10-minute video (or this one for GitLab) on how to create a Hello World component. All of this means that creating a component is much easier than it was two years ago and is definitely worth the effort.

At the same time, Custom Science (CS) is causing more and more problems, specifically:

  • We have no trace of what code was actually executed. That means when something breaks, we don't know if the code was changed in the meantime or not. When something was successful, we don't know for sure which version it was. We can't run a configuration with a previous version of the code.
  • There is a direct dependency on the git repository, and while GitHub and Bitbucket outages are neither common nor long, they accounted for dozens of failed jobs last year.
  • Risk of loss: If you lose access to the git repository, the jobs immediately fail. There is nothing we can do about it. No grace period. No way back. This can easily happen when people change positions or leave their company.
  • Dependency: Typically, there is only one person who can fix broken CS. If an issue arises, we don't know who that person is and can't contact them. Even if we do know the person, they might not respond. In the meantime, we have no workaround (e.g., reverting to the last working state).
  • Poor security: If the repository is private, we need credentials to it. These should be dedicated robot credentials, but most people use their own. Plus, it's your code repository, so why should you give us credentials to it?
  • Poor performance: CS can easily spend 1–2 minutes on warm-up. If it installs packages, it takes even longer because they are installed on every run.

We are fully aware that there are some disadvantages of converting every CS into a component. Specifically:

  • It takes several minutes before the updated code is deployed in KBC.
  • The initial setup takes several minutes of your work.

The first issue is not going to change any time soon (we will work on shortening the delay, but there will always be some delay). We have tried to minimize the second issue – you can follow our migration guide, see a 10-minute video of the migration (done manually and using our tool), or see the new Component development tutorial.

Overall, CS is great for experimenting. The problem is that we are unable to draw the line between experimenting and production use, and CS in production usually causes countless problems. We are aware that creating components is not ideal for ad hoc work, and we're going to improve that too before the final demise of Custom Science on October 1, 2018.

Facebook and Instagram Extractor Failures

Some Facebook and Instagram extractor configurations are failing during import to Storage.

We are working on a fix and we'll update this status when the issue is resolved.


UPDATE 09:56 AM UTC - The issue has been resolved. All Facebook and Instagram extractor configurations should be working again.

Week in Review -- February 19, 2018

New components

Bug fixes and smaller improvements

  • Bug fix in the Currency extractor: exchange rates for the Danish Krone (DKK) and the Icelandic Króna (ISK) had not been updated for some time because of a bug in its configuration.
  • The Snowflake extractor now offers views in the table list, too.

Time Travel Restore

Snowflake has a wonderful feature they call Time Travel. It allows you to replicate your table from its state in the past. We're happy to announce initial support for this great feature in Keboola Connection.

To begin with, every project with a Snowflake backend has been set to retain data history for 7 days. That means you can restore a table to how it existed at any point within the last week. It is possible to increase the data history retention period, so if you're interested in doing that, please let us know by using the support button in your project.


We've added this restoration method to the Snapshots tab in the Storage console:


Restoring a table is very simple: just use the calendar to pick the date and time, give the new table a name, and choose which bucket to put it in.


We plan to extend this feature so that time travel replicas can be used as an input option for transformations, and to create a "Storage Trash".

Happy travelling!