Deprecation of GET method in GoodData SSO login

GoodData SSO login using GET links will stop working on March 18. This means that the `link` property in the response of the SSO login resource in GoodData Writer will stop working too.

If you use any custom solution built upon our Writer, you need to migrate it to the new POST login: take the `encryptedClaims` property from our resource and call the GoodData SSO PGP login API (https://help.gooddata.com/display/doc/API+Reference#/reference/authentication/sso-pgp-login), which will log in your user. SSO links to GoodData in our Connection UI have already been migrated to the new method.
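
For a custom solution, the POST login can look roughly like the following Python sketch. The endpoint and field names (`pgpLoginRequest`, `targetUrl`, `ssoProvider`, `encryptedClaims`) follow the linked GoodData documentation as we understand it; treat them as assumptions and verify against the API reference:

```python
import requests

# Placeholder value; take encryptedClaims from the Writer's SSO resource.
claims = "<encryptedClaims from the GoodData Writer SSO resource>"

response = requests.post(
    "https://secure.gooddata.com/gdc/account/customerlogin",
    json={
        "pgpLoginRequest": {
            "targetUrl": "/gdc/account/token",
            "ssoProvider": "your-sso-provider",  # assumption: your provider's name
            "encryptedClaims": claims,
        }
    },
)
response.raise_for_status()  # on success, the user's session cookies are set
```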


Refined Storage Console

We're happy to announce a small technology update of our Storage Console. Several months in the making, this will allow us to bring new features in the near future.

Even though the primary purpose of this update is to bring the code up to date and align the design with the rest of the UI, we have already made some small improvements:

  • You no longer see an additional loading page when navigating to Storage from other pages.
  • Search in buckets (or tables) highlights the matched parts of your search query in yellow.
  • An active bucket is highlighted on the left side when its detail or a detail of one of its tables is open.
  • The Files and Jobs sections are automatically reloaded every 20 seconds.
  • The Events sections have predefined searches, so you can filter events faster.
  • The Create Bucket, Link Bucket, and Reload buttons are now bigger.
  • There's an option to create a Table Alias directly from a table detail (in Actions).
  • Other minor cosmetic improvements to navigation, buttons, etc.

Troubles with new OAuth Broker API

UPDATE 2019-02-25 9:00 AM UTC
A fix has been deployed and the problem no longer occurs. We estimate that around 50 jobs were affected by this issue.
We do sincerely apologize for the trouble this may have caused to you. Don't hesitate to contact our support for help.


2019-02-24 10:00 PM UTC
We're experiencing issues with the new OAuth Broker API.

In some cases it might not return the authorised credentials for a component's job. Re-running the job might be successful.

The fix will be deployed very soon.

If you haven't migrated to the new version, please wait until the fix is deployed.

We're terribly sorry for any inconvenience.

Migrate to new version of OAuth Broker API

We have just released a new version of our OAuth Broker API.

OAuth Broker is a KBC service that handles the authorisation flow for all KBC components (extractors, writers, ...) using OAuth authorisation, and it also stores their credentials (tokens).
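
For context, a component doesn't call the broker directly at job runtime; the authorised credentials are injected into its configuration file. A minimal Python sketch of reading them, assuming the documented `authorization.oauth_api.credentials` structure (details may vary for your component):

```python
import json

# /data/config.json is the standard configuration path inside a
# component's container; the structure follows the developer docs.
with open("/data/config.json") as f:
    config = json.load(f)

credentials = config["authorization"]["oauth_api"]["credentials"]
tokens = json.loads(credentials["#data"])  # serialized access/refresh tokens
app_key = credentials["appKey"]            # OAuth client (application) id
```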

The new version was needed to simplify integration with KBC, and it allows us to implement new features into this API more easily. For example, we are preparing automatic refreshing of OAuth tokens when needed, and support for more than one OAuth client ID for better handling of quota limits.

The old OAuth Broker is now deprecated, and we ask you to migrate the credentials of affected configurations before May 1, 2019. We can't migrate these credentials automatically because we cannot modify configurations in your project without your consent.

In the project Overview, you can see whether your project contains any configurations needing migration:


Proceed to the migration page where you can migrate all the affected configurations in one click:


Some of the components - GitHub Extractor, Twitter Ads Extractor and ZOHO CRM Writer - need to be reauthorised manually in order to be migrated to the new version.

New configurations will use the new version of the OAuth API from now on. Also, if you reset the authorisation of an existing configuration, it will be recreated using the new version of the API.

R/Python sandboxes security update

We need to apply an important OS-level security update to R/Python sandboxes environment. Because of that, the existing sandboxes cannot be extended. This means the following:

  • R/Python sandboxes created prior to 2019/02/12 14:00 UTC will be terminated no later than 2019/02/17 14:00 UTC, even if you try to extend them.
  • If you wish to keep the contents of a sandbox created prior to 2019/02/12 14:00 UTC, please save them manually and recreate the sandbox.
  • R/Python sandboxes created after 2019/02/12 14:00 UTC are unaffected.
  • SQL sandboxes are unaffected.

Weeks in review -- February 8, 2019

Component Updates

  • Python Transformations - now use the same Python version (3.7.2) as the transformation sandbox.
  • R Transformations - have a new backend (v3.5.2), and we added docs on how to opt in to the new version.
  • Storage Writer - now supports the `recreate` mode, which drops and recreates the target table.
  • Processor Decompress - supports graceful decompression and will skip files that fail to decompress.
  • MySQL/MSSQL extractors - allow any numeric or datetime type for incremental fetching.
  • PostgreSQL Extractor - now has automatic incremental fetching (see the sketch below this list); the UI has to be migrated to the new version (via the green button in the config overview).
  • Generic Extractor - now supports deeply nested functions.
  • Zendesk Extractor - fixed extraction of custom ticket value fields; existing configurations need to be re-saved (switch to a template -> scroll to the bottom -> select the template again and save).
  • New component for Mailgun (sending emails).
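
Incremental fetching boils down to remembering the highest value of a chosen numeric or datetime column and using it as the lower bound on the next run. A simplified, hypothetical Python sketch of the idea (the real extractors persist the value in the component state and properly escape it):

```python
def build_query(table: str, column: str, last_value=None) -> str:
    """Build the next extraction query for incremental fetching."""
    if last_value is None:
        # First run: fetch the whole table.
        return f"SELECT * FROM {table} ORDER BY {column}"
    # Subsequent runs: fetch only rows newer than the stored maximum.
    return f"SELECT * FROM {table} WHERE {column} > '{last_value}' ORDER BY {column}"

print(build_query("orders", "updated_at", last_value="2019-02-01 00:00:00"))
```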


UI Updates

  • Generic Snowflake sandbox - now uses the CLONE TABLE load type (see the sketch below this list). It's way faster, and it only loads complete tables (no row sampling).
  • You can choose the backend version of R/Python transformations.
  • Snowflake Writer - adding a new table now autoloads column datatypes if present (usual for tables originating from database extractors).
  • Transformations Output - shows a warning when there are two output mappings with the same destination table within one phase.
  • PostgreSQL Extractor - the query editor now supports PostgreSQL-specific syntax.
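
For illustration, Snowflake's zero-copy clone duplicates a table without physically copying its data, which is why the sandbox load is so much faster. A sketch using the Snowflake Python connector (connection parameters and table names are placeholders; we're not showing the exact statements the sandbox provisioning issues):

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="...",
)
# CLONE creates a zero-copy snapshot of the whole table -- no row
# sampling, and no data is duplicated at clone time.
conn.cursor().execute(
    'CREATE TABLE "SANDBOX"."my_table" CLONE "WORKSPACE"."my_table"'
)
```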


Storage and Project Management Updates

  • All newly created tables in Storage have a 16MB cell size limit instead of 1MB.
  • The 110-column limit in data preview was removed; the contents of wider tables are displayed normally.
  • Organization invitations now work like project invitations - an invited user has to accept the invitation.



Speeding up transformation outputs in your projects

We're working hard every day to make Keboola Connection faster and minimize the time you spend waiting for your results. This effort includes a wide variety of components of the platform architecture. Some of the changes are straightforward and transparent to the end users, but others are unexpectedly complicated. 

This is the case with transformations. We rolled out an update earlier this year and were forced to immediately roll back to the previous version, as it broke the data flow in a few projects. This time, we're better prepared. We have identified the source of the incompatibility, and we'll be rolling out the update silently, only to those projects that will not be adversely affected. The projects that would be affected will not be updated; instead, they will be notified to take steps to fix the incompatibilities. Then they'll become eligible for the update as well.

Parallel output processing

In the original system, output processing starts once all transformations in a phase have been executed. It takes the transformations sequentially and processes their outputs one by one. The order of execution is not defined, but it is predictable and, most importantly, it doesn't change between runs. Some projects rely on a specific order of output processing to achieve certain goals.

To speed up the output processing, we have decided to queue all outputs at once and let Storage handle all jobs as fast as possible. But, as you may have already noticed, this means the outputs can be processed in any order or even in parallel. This may affect the result if you relied on a particular order.
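
Schematically, the change looks like this (illustrative Python only, not our actual implementation; `process_output` stands for sending one transformation's output tables to Storage):

```python
from concurrent.futures import ThreadPoolExecutor

def process_output(transformation):
    ...  # send this transformation's output tables to Storage

phase = ["transformation A", "transformation B", "transformation C"]

# Before: outputs processed one by one, in a stable (if undefined) order.
for transformation in phase:
    process_output(transformation)

# After: all outputs queued at once; Storage works through them as fast
# as possible, so they can finish in any order, or in parallel.
with ThreadPoolExecutor() as pool:
    for future in [pool.submit(process_output, t) for t in phase]:
        future.result()
```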

Updating the project

The project overview and all affected transformations will notify you about multiple outputs being written to the same table in Storage, and they will navigate you directly to the places that need to be fixed. Please contact our support if you need any help with that.

Once you have fixed all instances where multiple outputs are written to the same table in your project, contact us using the support button and we will turn on the update in your project immediately. Or wait until we update your project automatically (we review all projects regularly).



KBC is not accessible in all regions

[2019-01-22 1:21 UTC]

Snowflake has just announced that disabling the OCSP check circumvents the error. KBC is fully working; you shouldn't experience any issues now!
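
For illustration only: in Snowflake's Python connector, the equivalent workaround is the `insecure_mode` flag, which skips OCSP revocation checks (our fix was applied in the platform's own drivers; don't leave this enabled permanently):

```python
import snowflake.connector

# Temporary workaround during an OCSP responder outage: skip the
# OCSP certificate revocation check. Credentials are placeholders.
conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="...",
    insecure_mode=True,
)
```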


[2019-01-22 00:51 UTC]

We have removed all OCSP validation, and the KBC platform is now working OK in both (US/EU) regions.

At this time, we have no further updates from the Snowflake support team. You shouldn't have any issues with Keboola Connection. In case of any hiccups, please open a ticket directly from your KBC project. Once we have the RCA report from Snowflake, this post will be updated.

We're very sorry for this inconvenience and thank you so much for your patience with us and Snowflake engineers.


[2019-01-22 00:34 UTC]

SQL Sandboxes are fully working. Note that all existing credentials were discarded - use the new combination of username and password.


[2019-01-22 00:16 UTC]

Just a few components are still having issues.

To make up for this outage, we're going to add additional resources and run your jobs in Keboola Connection on Warp Drive for the next few hours.


[2019-01-22 00:08 UTC]

Almost everything is working now. The last remaining issues are in Transformation Sandboxes.


[2019-01-22 00:02 UTC]

We're very close to a fully working platform. Bear with us!


[2019-01-21 23:50 UTC]

Component jobs are still receiving errors from the Snowflake DWH. We're disabling OCSP checks in other places in our infrastructure.


[2019-01-21 23:37 UTC]

Snowflake has just confirmed an SSL validation issue in their ODBC driver (https://community.snowflake.com/s/group/0F90Z000000U8d9/alerts-awsus-west).


[2019-01-21 23:35 UTC]

US region is working.


[2019-01-21 23:34 UTC]

EU region is working.


[2019-01-21 23:32 UTC]

We're building an app version with temporarily modified OCSP checks.


[2019-01-21 23:17 UTC]

This issue seems to be connected with OCSP certificate validation on the Snowflake side. We're still working on it.


[2019-01-21 22:50 UTC]

Starting at 2019-01-21 22:33 UTC, all customers are seeing error messages throughout their accounts. We're aware of the issue and are working on it urgently.

We’re really sorry to be holding you up today! Please know our engineering and operations teams are working hard to get everything up and running and we will update you right here in 30 minutes with the latest information.

December Failed Jobs Postmortem

In December 2018, we had two incidents (2018-12-14 and 2018-12-19) that resulted in a number of failed jobs. The first caused 0.8% of jobs to fail (within a 24-hour window) and the second caused 1.2% (also within a 24-hour window).

Both incidents were caused by the unavailability of the Docker container registry (Amazon ECR). During the first incident, we were receiving exceeded-quota errors, which we initially thought were related to higher infrastructure load. A thorough investigation showed that we were nowhere near the limits, and we have now received confirmation from Amazon that this was an error on their side. The second incident was caused by the complete unavailability of the ECR for approximately 30 minutes.

Technical background:

The Docker container registry stores the executable code for each component running in Keboola Connection. It is accessed on every job run to make sure the job runs with the most recent version of the component code. During 2017, we moved most of our components to Amazon ECR, which has proved to be very reliable. The outage mentioned above is the first one since 2016, when we began using it.

Most of Keboola's infrastructure is duplicated, with automatic fail-safe mechanisms in place. That means minor outages in the underlying services are not noticeable to end users. Duplicating the Docker container registry, however, is not an easy task, because Docker is not really ready for it yet. So this remains a single point of failure.

Measures already taken and yet to be taken:

  • We have immediately implemented a retry mechanism in our code to handle short outages (see the sketch below this list); the retry mechanism will also be further improved.
  • We had already started (prior to the incident) reworking the component code validation tooling so that the number of queries to the ECR is reduced by several orders of magnitude. This will help reduce the impact, should a similar incident happen again.
  • We'll use a dedicated ECR for each Keboola Connection region, which will reduce the scope of any similar incident in the future.
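
The retry mechanism from the first point is conceptually simple: retry registry calls with exponential backoff so that short ECR outages don't fail the job. A minimal Python sketch (hypothetical helper, not our production code):

```python
import random
import time

def call_with_retry(operation, attempts=5, base_delay=1.0):
    """Run `operation` (e.g. an ECR image pull), retrying on failure
    with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt + random.random())
```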


Job errors

Between 2019-01-15 15:58 UTC and 2019-01-16 8:25 UTC, a bug in our platform caused some jobs to fail with the user error "Some columns are missing in the csv file". The bug affected jobs where data was imported to Storage with a non-default delimiter (the default is a comma). It is also possible that in some cases an extra column was created in the table. The column contains no data, but it needs to be deleted manually, otherwise any subsequent jobs will fail.
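
The extra column can be removed either in the Storage Console or via the Storage API. A Python sketch, assuming the documented delete-column endpoint (the table and column names are placeholders):

```python
import requests

# Delete the empty extra column from the affected table.
response = requests.delete(
    "https://connection.keboola.com/v2/storage/tables/in.c-main.my-table/columns/extra_column",
    headers={"X-StorageApi-Token": "YOUR_STORAGE_API_TOKEN"},
)
response.raise_for_status()
```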

We do sincerely apologize for the trouble this may have caused to you. Don't hesitate to contact our support for help.