Farewell to Custom Science

Yes, we are going to deprecate the Custom Science application. We introduced it more than two years ago as an alternative to components. Unlike components, it was easy to implement and use. However, we've made a lot of progress in simplifying component development.

The latest additions are a simplified component creation workflow, a component generator tool, and rewritten developer documentation. See a 10-minute video (or this one for GitLab) on how to create a Hello World component. All of this means that creating a component is much easier than it was two years ago and is definitely worth the effort.
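
If you're curious what such a Hello World component boils down to, here is a minimal sketch of a component's main script in Python. It assumes the standard /data folder layout of the component interface (input tables in /data/in/tables, output tables in /data/out/tables); the file names and the added column are purely illustrative.

    import csv
    import os

    # The runner mounts a data folder into the component's container; its path is
    # normally /data and can be overridden via the KBC_DATADIR environment variable.
    DATA_DIR = os.environ.get("KBC_DATADIR", "/data")
    IN_TABLE = os.path.join(DATA_DIR, "in", "tables", "source.csv")    # illustrative input
    OUT_TABLE = os.path.join(DATA_DIR, "out", "tables", "result.csv")  # illustrative output

    with open(IN_TABLE, newline="") as src, open(OUT_TABLE, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=list(reader.fieldnames) + ["greeting"])
        writer.writeheader()
        for row in reader:
            # The whole "business logic": copy each row and add one computed column.
            row["greeting"] = "Hello World"
            writer.writerow(row)

Anything the script writes to /data/out/tables is loaded back into Storage when the job finishes.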

At the same time, Custom Science (CS) is causing more and more problems, specifically:

  • We have no trace of what code was actually executed. That means when something breaks, we don't know whether the code was changed in the meantime. When something succeeds, we don't know for sure which version ran. We can't run a configuration with a previous version of the code.
  • There is a direct dependency on the Git repository, and while GitHub and Bitbucket outages are neither common nor long, they accounted for dozens of failed jobs last year.
  • Risk of loss: If you lose access to the Git repository, the jobs immediately fail. There is nothing we can do about it. No grace period. No way back. This can easily happen when people change positions or leave their company.
  • Dependency: Typically, there is only one person who can fix a broken CS configuration. If an issue arises, we don't know who that person is and can't contact them. Even if we do know the person, they might not respond. In the meantime, we have no workaround (such as reverting to the last working state).
  • Poor security: If the repository is private, we need credentials for it. These should be dedicated robot credentials, but most people use their own. Plus, it's your code repository, so why should you give us credentials to it?
  • Poor performance: CS can easily spend 1-2 minutes warming up. If it installs packages, it takes even longer because they are installed on every run.

We are fully aware that there are some disadvantages to converting every CS into a component. Specifically:

  • It takes several minutes before the updated code is deployed in KBC.
  • The initial setup takes several minutes of your work.

The first issue is not going to change any time soon (we will work on shortening the delay, but there will always be some delay). We tried to minimize the second one – you can follow our migration guide, see a 10-minute video of the migration (done manually and using our tool), or check the new Component development tutorial.

Overall, CS is great for experimenting. The problem is that we are unable to draw the line between experimenting and production use, and CS in production usually causes countless problems. We are aware that creating components is not ideal for ad hoc stuff, and we're going to improve that too before the final demise of Custom Science, which will be on October 1, 2018.

Facebook and Instagram extractor failures

Some configurations of the Facebook and Instagram extractors are failing during import to Storage.

We are working on a fix and we'll update this status when the issue is resolved.


UPDATE 09:56 AM UTC - The issue has been resolved. All Facebook and Instagram extractor configurations should be working again.

Week in Review -- February 19, 2018

New components

Bug fixes and smaller improvements

  • Bug fix in the Currency extractor - exchange rates for the Danish Krone (DKK) and Icelandic Krona (ISK) had not been updated for some time because of a bug in its configuration.
  • Snowflake extractor now offers views in the tables list too.

Time Travel Restore

Snowflake has a wonderful feature called Time Travel. It allows you to recreate a table from its state at a point in the past. We're happy to announce initial support for this great feature in Keboola Connection.

To begin with, every project with a Snowflake backend has been set to retain data history for 7 days. That means you can restore a table to how it existed at any point within the last week. It is possible to increase the data history retention period, so if you're interested in doing that, please let us know by using the support button in your project.


We've added this restoration method to the snapshots tab in the storage console:


Restoring a table is very simple: just use the calendar to pick the date and time, give the new table a name, and choose which bucket to put it in.
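
For the curious, the restore corresponds roughly to a Time Travel clone in Snowflake SQL. The sketch below shows the idea using the Python Snowflake connector; the connection parameters and table names are placeholders, and in Keboola Connection the platform runs this for you, so you don't need direct access to the backend.

    import snowflake.connector

    # Placeholder credentials -- this only illustrates the underlying SQL.
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="...",
        warehouse="MY_WH", database="MY_DB", schema="MY_SCHEMA",
    )
    try:
        conn.cursor().execute(
            # Clone the table as it looked 24 hours ago; any point within the
            # 7-day retention window works.
            "CREATE TABLE ORDERS_RESTORED CLONE ORDERS AT (OFFSET => -86400)"
        )
    finally:
        conn.close()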


We plan on extending this feature so that Time Travel replicas can be used as an input option for transformations and to create a "Storage Trash".

Happy travelling!

Week in Review -- February 12, 2018

New Components

  • Google Trends extractor: this component, developed by Leo Chan (cleojanten@hotmail.com), allows you to extract search trends for given keywords in a specified region.

Deprecations 

Indexed columns

With the deprecation/removal of the MySQL backend, we deprecated indexed columns because there is no longer any use for them. You can now search/filter through any column without the need to mark it as indexed.

The following attributes will be removed from manifest files by the end of March 2018 (a short example of reading a manifest without them follows this list):

  • indexed_columns – with the deprecation of the MySQL backend, there is no need to define indexes.
  • rows_count and data_size_bytes – these values are not (and never were) in sync with the input table data and are useless.
  • attributes – table attributes are replaced by table metadata.
  • is_alias – this is something that has nothing to do with the exported data.
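
If your code reads table manifests directly, here is a small sketch of how to stay on the safe side. The manifest path is illustrative, and only the fields named in this post are touched.

    import json

    # Illustrative path of an input table manifest mapped into a component.
    with open("/data/in/tables/source.csv.manifest") as f:
        manifest = json.load(f)

    # Fields that stay and should be used going forward.
    columns = manifest.get("columns", [])
    primary_key = manifest.get("primary_key", [])
    metadata = manifest.get("metadata", [])        # replaces the old "attributes"

    # Deprecated fields -- do not rely on them, they will disappear.
    for key in ("indexed_columns", "rows_count", "data_size_bytes",
                "attributes", "is_alias"):
        manifest.pop(key, None)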

Fixes

  • The Developer Portal is now available under a new URL: components.keboola.com (instead of apps.keboola.com). The main reason is that we used the word application in two different senses, and that was confusing. For example, there were applications of type Extractor, but also applications of type Application. From now on, everything is a Component. Components come in four types: Extractors (loading data from somewhere), Writers (writing data somewhere), Applications (manipulating data), and Processors (data processing helpers).

Week in Review -- January 30, 2018

Plantyst Extractor

If you collect data from production machines into Plantyst, you can now employ a new extractor made by BizzTreat and start doing complex data analysis.

Stories.BI writer

You can automatically push data to Stories.bi and get automatic insights instead of crunching business data by hand.


Updated Components

  • The Sklik extractor has a new variable, accountID.
  • The YouTube extractor has a new version based on Generic Extractor. The old extractor will be deprecated on March 1, 2018.
  • The Snowflake extractor is now a bit faster and has better error handling.
  • The Geneea NLP App is now available in the EU region.
  • The BingAds extractor is now available in the EU region.
  • The Facebook extractor, using the new Page Tokens, can now fetch Page Reviews.
  • The Twitter extractor is now available in the EU region.
  • The Snowflake and Redshift writers have fixed an occasional column mismatch.


Minor Improvements

  • Quick search in the component list was improved; it now has better accuracy.
  • A component name can finally be submitted by pressing ENTER.


Week in Review -- January 22, 2018

Linked/Source Buckets

From now on, you'll find source/linked bucket information in the Storage section of Keboola Connection. This is very helpful when you need to find out which projects are using (linking) your shared bucket and, vice versa, which bucket is the source of yours.

MFA required also for Google Login

If you have MFA (Multi-Factor Authentication) enabled, a confirmation code is now required when you use the "Login with Google" functionality. Please contact us if you have problems logging in.

Facebook extractor uses page access tokens for page/post insights retrieval

Due to breaking changes recently introduced by the Facebook API, our Facebook extractor has been updated to use a page access token instead of a user access token for page or post insights retrieval. This leads to slower extraction if more pages are included in a query. The user access token is still used for other data retrieval, such as feed, likes and comments. This change does not affect the Facebook Ads extractor.
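
In Graph API terms, the change means that insights requests are now made with the Page access token rather than the user token. A simplified sketch of such a request; the API version, page ID, metric and token values are placeholders:

    import requests

    PAGE_ID = "1234567890"   # placeholder page ID
    PAGE_TOKEN = "EAAB..."   # Page access token, not the user token

    resp = requests.get(
        f"https://graph.facebook.com/v2.11/{PAGE_ID}/insights",
        params={"metric": "page_impressions", "access_token": PAGE_TOKEN},
    )
    resp.raise_for_status()
    print(resp.json())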

Improvements

  • We slightly updated the UI of the recently published Tokens page.

Fixes

  • Project Power consumption is shown only for the current and two previous months. This is only a temporary limitation; it will soon be fixed and more data will be shown again.


Snowflake Outage in US Region

There was a short Snowflake outage between 10:30 and 10:35 CET (09:30 and 09:35 AM UTC) in the US region.

  • Sandboxes might have lost their data and worksheets
  • Transformation jobs might have finished with an error
  • Async data loads and exports were unaffected

We're investigating the impact and root cause and will update this post as soon as we know more. Snowflake is now fully operational again.

UPDATE Jan 30 2018: Snowflake released their RCA.

New UI section for API Tokens

We are glad to introduce a new UI for Storage API tokens, which can now be found under the Users & Settings section. We will be removing the old one found under the Storage section. The new UI covers the same functionality as the old one.

As a security measure, the token itself will no longer be shown, except once right after its creation. The only ways to see an existing token in the UI are to send it via email (a temporary link to the token is sent) or to refresh it and get a new token string. On the backend, the token can still be seen in the response of the tokens list API call, but it will be removed in the near future.
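
For reference, the tokens list call mentioned above is the Storage API tokens endpoint. A minimal sketch; the URL assumes the US stack and the token value is a placeholder:

    import requests

    resp = requests.get(
        "https://connection.keboola.com/v2/storage/tokens",
        headers={"X-StorageApi-Token": "your-master-token"},  # placeholder token
    )
    resp.raise_for_status()
    for token in resp.json():
        # The "token" field in each item is the string that will be removed
        # from this response in the near future.
        print(token.get("id"), token.get("description"))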