Orchestration Failures in the US Region

On March 15, 2019, from 16:34:15 UTC to 16:35:12 UTC, there were orchestration failures in the US region due to an internal system upgrade.

There were only around 20 failures, so very few projects were affected, but if you had an orchestration running at that time, please check that it was unaffected.

We are working to make sure this does not happen again during future upgrades.

Snowflake issues in EU region

We were affected by a brief outage of the Snowflake database in the EU region on March 7 between 17:45:00 UTC and 18:25:00 UTC. The outage affected extractors and transformations. Please check your orchestrations and re-run them if necessary. Projects in the US region were unaffected. We apologise for the inconvenience caused.

Weeks in review -- March 1, 2019

New Features

  • Oracle Writer - supports setting custom schema in credentials configuration
  • GitHub Extractor - adds organization to downloaded commits and issues
  • Storage API - supports multiple where filters and an order by statement in data preview and asynchronous table export, so you can quickly search your data with multiple conditions. We are going to add this feature to the UI soon.
  • BigQuery Extractor - supports extracting data from the EU region
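As a sketch of the new Storage API filtering, here is how a data-preview request URL with multiple where filters and ordering might be composed. The array-style `whereFilters`/`orderBy` parameter names and the URL shape are assumptions for illustration; verify them against the Storage API documentation for your stack.

```python
from urllib.parse import urlencode

def data_preview_url(table_id, filters, order_by):
    """Compose a data-preview URL with multiple where filters and ordering.

    filters:  list of (column, operator, value) tuples
    order_by: list of (column, direction) tuples
    Parameter names below are assumptions, not a confirmed API contract.
    """
    params = []
    for i, (column, operator, value) in enumerate(filters):
        params.append((f"whereFilters[{i}][column]", column))
        params.append((f"whereFilters[{i}][operator]", operator))
        params.append((f"whereFilters[{i}][values][]", value))
    for i, (column, direction) in enumerate(order_by):
        params.append((f"orderBy[{i}][column]", column))
        params.append((f"orderBy[{i}][order]", direction))
    return ("https://connection.keboola.com/v2/storage/tables/"
            f"{table_id}/data-preview?" + urlencode(params))

# Hypothetical table and columns, purely for illustration.
url = data_preview_url("in.c-main.orders",
                       [("status", "eq", "paid"), ("country", "eq", "US")],
                       [("created", "DESC")])
```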


UI

  • full page table preview for Storage tables



Bug fixes

  • OAuth Broker API - fixed missing component credentials
  • Oracle Extractor - fixed manifest for exported tables if there were more than one table with the same name in different schemas
  • Transformations 
    • unpaired closing comment tag ( */ ) in SQL query is now properly identified as a user error
    • added additional retries when creating a workspace


    Deprecation of GET method in GoodData SSO login

    GoodData SSO login using GET links will stop working on March 18. This means that the link property in the response of an SSO login using the GoodData Writer will stop working too.

    If you use any custom solution built upon our Writer, you need to migrate it to the new POST login: take the encryptedClaims property from our resource and call this GoodData API call: https://help.gooddata.com/display/doc/API+Reference#/reference/authentication/sso-pgp-login, which will log in your user. SSO links to GoodData in our Connection UI have already been migrated to the new method.
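For illustration, a minimal Python sketch of the new POST flow. It only builds the request, it does not send it, and the JSON field names (pgpMessage, ssoProvider, targetUrl) and endpoint path are assumptions; see the linked API reference for the exact contract.

```python
import json
import urllib.request

# Hypothetical endpoint path -- confirm against the GoodData SSO PGP login docs.
SSO_LOGIN_URL = "https://secure.gooddata.com/gdc/account/customerlogin"

def build_sso_login_request(encrypted_claims, sso_provider, target_url="/dashboards"):
    """Build (but do not send) the POST request that replaces the old GET link."""
    payload = {
        # encryptedClaims taken from our Writer's resource goes here:
        "pgpMessage": encrypted_claims,
        "ssoProvider": sso_provider,   # assumption: your registered SSO provider id
        "targetUrl": target_url,
    }
    return urllib.request.Request(
        SSO_LOGIN_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )

req = build_sso_login_request("-----BEGIN PGP MESSAGE----- ...", "sso.example.com")
```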


    Refined Storage Console

    We're happy to announce a small technology update of our Storage Console. Several months in the making, this will allow us to bring new features in the near future.

    Even though the primary purpose of this update is to bring the code up to date and align the design with the rest of the UI, we have already made some small improvements:

    • The additional loading page no longer appears when navigating to Storage from other pages.
    • Search in buckets (or tables) highlights the matched parts of your search query in yellow.

    • An active bucket is highlighted on the left side when its detail or a detail of its table is active.

    • Files and Jobs sections are automatically reloaded every 20 seconds.
    • Event sections have predefined searches, so you can filter events faster.

    • Buttons Create Bucket, Link Bucket and Reload are now bigger.

    • There's an option to create Table Alias directly from a table detail (in Actions).

    • Other minor cosmetic improvements to navigation, buttons, etc.

    Troubles with new OAuth Broker API

    UPDATE 2019-02-25 9:00 AM UTC
    A fix has been deployed and the problem no longer occurs. We estimate that around 50 jobs suffered from this issue.
    We do sincerely apologize for the trouble this may have caused to you. Don't hesitate to contact our support for help.


    2019-02-24 10:00 PM UTC
    We're experiencing issues with the new OAuth Broker API.

    In some cases it might not return the authorised credentials for a component's job. Re-running the job might be successful.

    The fix will be deployed very soon.

    If you haven't migrated to the new version, please wait until the fix is deployed.

    We're terribly sorry for any inconvenience.

    Migrate to new version of OAuth Broker API

    We have just released a new version of our OAuth Broker API.

    OAuth Broker is a KBC service, which handles the authorisation flow for all KBC components (extractors, writers, ...) using OAuth authorisation and also stores the credentials (tokens) for them.

    The new version was needed to simplify integration with KBC, and it allows us to implement new features into this API more easily.
    The features we are preparing include, for example, automatic refreshing of OAuth tokens when needed and using more than one OAuth client ID for better handling of quota limits.

    The old OAuth Broker is now deprecated, and we ask you to migrate the affected configurations' credentials before May 1, 2019. We can't migrate these credentials automatically because we cannot modify configurations in your project without your consent.

    In the project Overview, you can see whether your project contains any configurations needing migration:


    Proceed to the migration page where you can migrate all the affected configurations in one click:


    Some of the components - GitHub Extractor, Twitter Ads Extractor and ZOHO CRM Writer - need to be reauthorised manually in order to be migrated to the new version.

    From now on, new configurations are created with the new version of the OAuth API. Also, if you reset the authorisation of an existing configuration, it will be re-created with the new version.

    R/Python sandboxes security update

    We need to apply an important OS-level security update to the R/Python sandbox environment. Because of that, the existing sandboxes cannot be extended. This means the following:

    • R/Python sandboxes created prior to 2019/02/12 will be terminated no later than 2019/02/17 14:00 UTC, even if you try to extend them.
    • If you wish to keep the contents of a sandbox created prior to 2019/02/12 14:00 UTC, please save them manually and recreate the sandbox.
    • R/Python sandboxes created after 2019/02/12 14:00 UTC are unaffected.
    • SQL sandboxes are unaffected.

    Weeks in review -- February 8, 2019

    Component Updates

    • Python Transformations - now use the same Python version (3.7.2) as the transformation sandbox.
    • R Transformations - have a new backend (v 3.5.2), and we added docs on how to opt in to the new version.
    • Storage Writer - now supports the `recreate` mode that will drop and create the target table.
    • Processor Decompress - supports graceful decompression; it will skip files that fail to decompress.
    • MySQL/MSSQL extractors - allow any numeric or datetime type for incremental fetching.
    • PostgreSQL Extractor - has automatic incremental fetching. The UI has to be migrated to the new version (via the green button in the config overview).
    • Generic Extractor - now supports the usage of deeply nested functions.
    • Zendesk Extractor - fixed extraction of custom ticket value fields; existing configurations need to be re-saved (switch to a template, scroll to the bottom, select a template again, and save).
    • New component for Mailgun (sending emails).


    UI Updates

    • Generic Snowflake sandbox - now uses the CLONE TABLE load type. It's way faster, and it only loads complete tables (no row sampling).
    • You can choose a backend version of R/Python transformations.
    • Snowflake Writer - adding a new table now autoloads column datatypes if present (usual for tables originating from database extractors).
    • Transformations Output - shows a warning when there are two output mappings with the same destination table within one phase.
    • PostgreSQL Extractor - the query editor now supports PostgreSQL-specific syntax.


    Storage and Project Management Updates

    • All newly created tables in Storage have a 16 MB cell size limit instead of 1 MB.
    • The 110-column limit in data preview was removed; contents of wider tables are displayed normally.
    • Organization invitations are now working similarly to project invitations - an invited user has to accept the invitation.



    Speeding up transformation outputs in your projects

    We're working hard every day to make Keboola Connection faster and minimize the time you spend waiting for your results. This effort includes a wide variety of components of the platform architecture. Some of the changes are straightforward and transparent to the end users, but others are unexpectedly complicated. 

    This is the case with transformations. We rolled out an update earlier this year and were forced to immediately roll back to the previous version, as it broke the data flow in a few projects. This time, we're better prepared. We have identified the source of the incompatibility, and we'll be rolling out the update silently, only for those projects that will not be adversely affected. The projects that would be affected will not be updated. Instead, they will be notified to take steps to fix the incompatibilities; then they'll become eligible for the update as well.

    Parallel output processing

    In the original system, once all transformations in a phase have been executed, output processing starts. It takes the transformations sequentially and processes their outputs one by one. The order of execution is not defined, but it is predictable and, most importantly, it doesn't change between runs. Some projects rely on a specific order of output processing to achieve certain goals.

    To speed up output processing, we have decided to queue all outputs at once and let Storage handle all jobs as fast as possible. But, as you may have already noticed, this means the outputs can be processed in any order, or even in parallel. This may affect the result of the output if you relied on a particular order.
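The ordering hazard can be illustrated with a toy Python model (not Keboola code): two outputs targeting the same table give a deterministic last-writer-wins result when processed sequentially, but a race when all outputs are queued at once.

```python
from concurrent.futures import ThreadPoolExecutor

# Two transformation outputs writing to the same (hypothetical) Storage table.
outputs = [("out.c-main.report", "rows from transformation A"),
           ("out.c-main.report", "rows from transformation B")]

def process_sequentially(outputs):
    """Original behaviour: outputs applied one by one, so B deterministically wins."""
    table = {}
    for destination, rows in outputs:
        table[destination] = rows
    return table

def process_in_parallel(outputs):
    """New behaviour: all outputs queued at once; completion order is undefined."""
    table = {}
    def write(item):
        destination, rows = item
        table[destination] = rows  # last writer wins, but "last" is now a race
    with ThreadPoolExecutor(max_workers=len(outputs)) as pool:
        list(pool.map(write, outputs))
    return table

sequential = process_sequentially(outputs)
parallel = process_in_parallel(outputs)
# `parallel` may hold A's or B's rows -- the fix is to stop writing two
# outputs to the same destination table, not to rely on ordering.
```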

    Updating the project

    The project overview and all affected transformations will notify you about multiple outputs being written to the same table in Storage. You will be navigated easily to the places that need to be fixed. Please contact our support if you need any help doing that.

    Once you have fixed all instances where multiple outputs are written to the same table in your project, you can contact us immediately using the support button, and we will turn on the update in your project. Or wait until we update your project automatically (we check all projects regularly).