We are experiencing some job failures and timeouts in the US region due to a disruption in one availability zone of the Amazon Elastic Compute Cloud (EC2) service. We are monitoring the situation and will keep you posted.
UPDATE 15:48 CEST: The issue apparently extends to other AWS services as well, because logins to the Developer Portal (which uses AWS Lambda and Cognito) are timing out intermittently.
UPDATE 15:54 CEST: AWS confirms that some EC2 instances are impaired and some EBS volumes are experiencing degraded performance within a single Availability Zone in the US-EAST-1 Region. Some EC2 APIs are also experiencing increased error rates and latencies. AWS is working to resolve the issue.
UPDATE 16:37 CEST: Work on resolving the issue is still in progress.
UPDATE 17:06 CEST: The impaired instances and EC2 APIs are recovering. AWS continues to work towards recovery for all affected EC2 instances.
UPDATE 18:04 CEST: Recovery is in progress for the instance impairments and degraded EBS volume performance. On our side, it looks like the problems largely subsided about an hour ago and the platform is back to normal.