We experienced a higher error rate in the Connection API and an issue with starting new Python sandboxes between 17:45 and 17:55 UTC. The issue is now resolved; job processing wasn't affected.
We're investigating an issue with Snowflake in the US region that causes some Storage table operations to get stuck in a processing state. This can cause jobs to run longer than expected, or seemingly "forever". Terminating and restarting the job does not help in such cases. Only certain projects are affected.
Next update in 1 hour or as new information becomes available.
Update 10:50 UTC: The stuck jobs are now unblocked and should be finishing; we're monitoring the situation in case the issue reappears. A post-mortem will be published once we receive an RCA from Snowflake.
When configuring a Snowflake or Redshift database writer, you can use a Keboola-provided database.
In the past, when you selected this option, the credentials were stored in the configuration in plain text. Storing the credentials this way allowed you to copy the password and use it in your favorite database client (or another system) even if you hadn't copied it right after creation.
To improve overall security, we decided to show you your password only once and store it encrypted. From now on, when you create a new Keboola-provided database (Snowflake or Redshift), you will see the password only once, right after its creation.
Backward compatibility
The existing credentials will remain untouched. However, if you delete them, there is no way to recreate them the old way.
New Components
- LiveRamp Identity Resolution application - solves some of the main challenges with customer and prospect data by returning people-based identifiers and metadata for your consumer records
- KBC Project Metadata extractor - downloads metadata about all objects in your Keboola project
- Avro2CSV processor - Avro is a row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format.
Updated Components
- MySQL extractor - the "Transaction Isolation Level" setting is now configurable (documentation)
- Oracle extractor / Oracle writer - Tnsnames can now be used to provide login credentials for an Oracle database (documentation)
- Generic extractor - added the "caCertificate" option, which allows you to configure a custom certificate authority bundle in CRT/PEM format (documentation)
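As a rough sketch of how the new option might look in a Generic Extractor configuration (the key placement follows the linked documentation; the base URL and certificate contents are placeholders), the CA bundle is supplied inline as part of the "api" section:

```json
{
  "api": {
    "baseUrl": "https://example.com/api/",
    "caCertificate": "-----BEGIN CERTIFICATE-----\nMIIB...placeholder...\n-----END CERTIFICATE-----"
  }
}
```

This is useful when the target API uses a certificate signed by an internal or otherwise non-public certificate authority.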
Minor Improvements
- Google BigQuery - updated google-cloud-bigquery package
- Python updated to 3.8.5
- Julia updated to 1.5.0
We are investigating a slight performance degradation of Snowflake in the US region. There are no job failures or increased queue backlog, but everything seems to run slightly slower. The degradation started around 00:00 UTC. Next update in 120 minutes or as new information becomes available.
UPDATE 2020-09-02 14:59 UTC: We still see slight performance degradation of some queries. We are in touch with Snowflake support. Next update tomorrow or as new information becomes available.
UPDATE 2020-09-03 06:31 UTC: The issue is now resolved, performance went back to normal around 2020-09-03 00:00 UTC. We are waiting for more details about the issue from Snowflake.
On September 1st, between 8:25 PM UTC and 9:48 PM UTC, there was an incident with the Snowflake service that led to Storage job failures.
The issue is now resolved and all systems are operational. We apologize for the inconvenience caused by this incident.
Since 2020-08-25 8:35 UTC, we have been experiencing Storage errors in the US region due to a reported Snowflake incident. We will monitor the situation and keep you posted within 90 minutes.
UPDATE 2020-08-25 9:20 UTC: Snowflake incident update:
We have identified the problem with the Snowflake Service that is interrupting the following services:
1. Access to Snowflake UI
2. Cannot execute queries
Incident Start Time: 01:20 PT Aug 25, 2020
We will provide an update within 30 minutes or as soon as we have more details on the status of the issue.
UPDATE 2020-08-25 9:50 UTC: The problem seems to have disappeared; we haven't seen any errors since 9:30 UTC. However, Snowflake hasn't updated the incident yet, so we are still monitoring the situation.
Updated Components
- AWS S3 Extractor supports Authentication with an AWS role (documentation)
- Twitter Extractor supports Direct Messages
To extract direct messages, you must reauthorize the account since an additional permission (DMs) is needed.
- MongoDB Extractor supports custom URI connection
- MSSQL Extractor supports encrypted (SSL) connection
UI Improvements
- Storage job detail now has a permalink
To get a permalink for the job detail, click on the job ID. A popup with the job detail will appear, and the URL in your browser will change.
Minor Improvements
- The Storage API List files endpoint now returns only valid (not expired) files. Expired files can be retrieved with the showExpired option; more in our documentation: https://keboola.docs.apiary.io/#reference/files/list-files/list-files
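As a minimal sketch of how the option appears on the wire (assuming the US-region Connection host from the linked documentation), showExpired is just a query-string parameter on the List files endpoint:

```python
from urllib.parse import urlencode

# Assumed US-region endpoint from the Storage API documentation linked above.
BASE_URL = "https://connection.keboola.com/v2/storage/files"

def list_files_url(show_expired=False):
    """Build the List files URL; pass show_expired=True to include expired files."""
    if show_expired:
        return BASE_URL + "?" + urlencode({"showExpired": "true"})
    return BASE_URL  # default: only valid (non-expired) files are returned
```

An actual request would also need a valid Storage API token sent in the request headers, as described in the documentation.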
There has been a problem with the GoodData API since about 01:00 CEST (23:00 UTC), causing job timeouts. We are investigating the problem.
Next update in 60 minutes or as new information becomes available.
13 Aug 2020 10:30 UTC We are investigating the problem with GoodData support. So far, it looks like the problem is in the WebDAV integration on the GoodData side. Next update in 60 minutes or as new information becomes available.
13 Aug 2020 11:30 UTC GoodData support is still investigating the issue. Next update in 60 minutes or as new information becomes available.
13 Aug 2020 12:05 UTC According to our investigation, all GoodData jobs now seem to be working normally. We will continue to monitor the situation. If you have a GoodData job that is running unusually long, try restarting it; if that doesn't help, please contact our support and we will hand your PID over to GoodData for further investigation. We are still waiting for GoodData support to provide detailed information about the incident. Next update in 2 hours or as new information becomes available.
13 Aug 2020 14:07 UTC We are still waiting for a resolution from GoodData. The next update will come when new information becomes available. If you want detailed information about your project, contact GoodData support with your PID and the import time of the last CSV before the failure.
13 Aug 2020 16:40 UTC The job failures were caused by slower GoodData DWHs. GoodData took action and fixed the problem, but several jobs timed out during this period. All services are stable now.
04 Aug 2020 04:28 UTC We're seeing a higher load and longer execution times for Snowflake warehouse queries in the EU and US. We are investigating the cause. Next update in 60 minutes or as new information becomes available.
04 Aug 2020 04:53 UTC We have added processing power to the Snowflake warehouses in both regions, but the backlog is still present. Snowflake has identified a problem with an interruption that may be causing the processing slowdown. Next update in 60 minutes or as new information becomes available.
04 Aug 2020 05:20 UTC Backlogs have cleared in both regions and the situation seems normal, but we're monitoring it closely for the next couple of hours. Next update in two hours or as new information becomes available.
04 Aug 2020 06:25 UTC Snowflake marked the incident as resolved; everything should be back to normal. We'll keep monitoring our platform closely.
04 Aug 2020 07:27 UTC Query times are back to normal and all backlogs have been clear for some time. The incident is completely resolved.