Snowflake Issues

Between Oct 26 10:40am UTC and Oct 26 12:50pm UTC we encountered issues connecting to Snowflake.

Some jobs may have ended with an Internal error or one of these exceptions:

  • The error message SFExecuteQueryFailed could not be found in the en-US locale. Check that /en-US/SFMessages.xml exists., SQL state S1000 in SQLPrepare
  • Failed to execute query. Error code=-1, GS error code=601, GS error message=Service temporarily unavailable. Retry the request later.

We're investigating the root cause with Snowflake support, and we're identifying and restarting the affected orchestrations.


Failed Jobs: Database Server Restart (UPDATE)

On October 21, 2016 at 11:43:58 PM CEST (UTC+2), one of our database servers was restarted due to a hardware failure. Jobs running at that time failed when they later tried to reconnect to the database during processing.

Update October 22, 2016 at 01:20 AM CEST (UTC+2): Jobs/orchestrations affected by this restart didn't stop processing and finished successfully, but they may show an incorrect job result of error instead of success.

We're sorry for this inconvenience, and we're restarting all failed orchestrations.

Stalled Transformations

Our transformation MySQL server was under heavy load between October 20th 12:00am and 3:00am UTC. Transformation processes were slowed down or halted. 

We have identified the blocking processes, and all operations have returned to normal. In a few cases we terminated stalled transformations and restarted the affected orchestrations.

We're sorry for this inconvenience.

OpenRefine Transformations: Public Beta

We're opening OpenRefine transformations to the public. You can now use OpenRefine in your transformation pipeline.

There's no more need to write long string replaces in SQL or to study how to open CSV files in Python or R. OpenRefine excels at data cleanup and many other data wrangling tasks.

To create an OpenRefine transformation, choose OpenRefine (beta) when creating a new transformation.

Learn more about OpenRefine and its functions, and about the OpenRefine integration in Keboola Connection.

Strict Input/Output Validation

Over the last few days we have turned on strict input/output mapping validation. Each input/output mapping is checked against the table in Storage to verify that

  • all columns exist
  • the primary keys match
  • datatype/index/distkey/sortkey and filter column names have the same letter case

For example, an input mapping that filters on a column named Date will now fail validation if the Storage table column is named date.

Although we tried to detect all breaches of this ruleset beforehand and contact project owners, some have unfortunately slipped through. We're closely monitoring all errors and fixing/restarting all failed orchestrations.

If your project is affected by more than a single failure, we can temporarily remove the validation for you. Please contact us at support@keboola.com with any further questions or requests.

We're deeply sorry for any inconvenience. 

Snowflake Issues (UPDATED)

We're currently investigating issues with Snowflake workspaces (sandboxes, transformations). We'll keep this post updated.

Update 7:36pm CEST: We have passed the information to the Snowflake support team and they're investigating the issue.

Update 10:36pm CEST: The Snowflake team has identified the issue and is working on a fix, which should be deployed later tonight.

Update 05:55am CEST: The issue has been resolved and all operations are back to normal. Due to the high number of affected transformations/orchestrations, we won't be restarting them ourselves, to prevent system overload. Please restart your orchestrations manually if needed.

Thanks for your patience and understanding.

Redshift Transformation/Sandbox Provisioning

We're changing the way Redshift transformations and sandboxes are provisioned.

During the week of September 26th to September 30th, all projects will be migrated. What does that mean for you?

Faster with fewer errors

In certain situations (e.g. table locks), creating a sandbox could take a long time. After the migration, provisioning will no longer depend on any locks anywhere in the Redshift cluster.

Provisioning and data loading

We're offloading the input mapping work to Storage API. Storage API is now in charge of creating the sandbox or transformation workspace for you, as well as loading the data into it. Storage API will decide the fastest way to load the required data into your workspace.

Credentials change

The username, password, and schema for your Redshift sandbox will change, and new sandboxes will be created. Your current sandboxes will be deleted 7 days after the migration. The UI will no longer serve credentials for your current sandbox, only for the newly generated one. You will not be able to load data into your current sandbox.

No direct access to bucket data

Using data directly from a Redshift bucket schema in your transformation will no longer be supported. Queries like

SELECT * FROM "in.c-mybucket"."table" 

will no longer have access to the in.c-mybucket Redshift schema. If your transformation contains such queries, please adjust them to use a table specified in the input mapping, as shown below.
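For illustration, suppose in.c-mybucket.table is listed in the input mapping with the destination name table (a hypothetical mapping). The query above would then become:

SELECT * FROM "table"

Storage API copies the mapped data into your workspace before the transformation runs, so the query no longer touches the bucket schema directly.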

Transparent and hassle-free migration

There will be no service interruption and no expected delay in your transformations. Unless you're using direct access to bucket data, no action is required. In case of any problems, the whole migration is reversible.

If you are concerned about your operations, please get in touch with us at support@keboola.com. We can try out the migration in advance or change your migration date.