Amazon Web Services - Reporting US East 1 Connectivity Issues
Incident Report for STEPpay
Resolved
Since 7:56 AM PDT / 9:56 AM CDT, the Coalesce Payments STEPpay gateway has been fully operational.

Below is the latest message we have received from Amazon Web Services. Again, please note that they are reporting in Pacific time:

At 4:33 AM PDT one of ten data centers in one of the six Availability Zones in the US-EAST-1 Region saw a failure of utility power. Our backup generators came online immediately but began failing at around 6:00 AM PDT. This impacted EC2 instances and EBS volumes in the Availability Zone. Power was fully restored to the impacted data center at 7:45 AM PDT. By 10:45 AM PDT, all but 1% of instances had been recovered, and by 12:30 PM PDT only 0.5% of instances remained impaired. Since the beginning of the impact, we have been working to recover the remaining instances and volumes. A small number of remaining instances and volumes are hosted on hardware which was adversely affected by the loss of power. We continue to work to recover all affected instances and volumes and will be communicating to the remaining impacted customers via the Personal Health Dashboard.
Posted Aug 31, 2019 - 18:01 CDT
Update
We are continuing to monitor for any further issues.
Posted Aug 31, 2019 - 17:50 CDT
Monitoring
Since 9:56:42 AM CDT, the Coalesce Payments STEPpay gateway has not experienced any additional issues.

Since Amazon Web Services is still addressing the underlying issues, we cannot rule out further intermittent problems, and we are monitoring closely.
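
If your integration encounters a transient gateway error while AWS recovers, retrying with exponential backoff is a reasonable safeguard. The sketch below only illustrates that pattern in Python; the endpoint URL and payload are hypothetical placeholders, not our actual API.

```python
import time
import requests

# Hypothetical endpoint -- substitute your actual STEPpay integration details.
# This only illustrates the retry-with-backoff pattern.
GATEWAY_URL = "https://example.invalid/steppay/charge"

def post_with_backoff(payload, attempts=4, base_delay=1.0):
    """POST to the gateway, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            resp = requests.post(GATEWAY_URL, json=payload, timeout=10)
            if resp.status_code < 500:
                return resp  # success, or a client error that should not be retried
        except requests.RequestException:
            pass  # network-level failure; treat as transient
        time.sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, 8s
    raise RuntimeError("gateway still unavailable after retries")
```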

Below are the most recent updates from AWS, which you can also find at https://status.aws.amazon.com/. Keep in mind that they are reporting in PDT.

8:06 AM PDT We are starting to see recovery for instance impairments and degraded EBS volume performance within a single Availability Zone in the US-EAST-1 Region. We are also starting to see recovery of EC2 APIs. We continue to work towards recovery for all affected EC2 instances and EBS volumes.

8:25 AM PDT We are starting to see recovery for connectivity issues impacting some single-AZ instances in a single Availability Zone in the US-EAST-1 Region. We continue to work towards recovery for all impacted instances.

8:33 AM PDT We are starting to see recovery of WorkSpaces instance impairments within a single Availability Zone in the US-EAST-1 Region. We continue to work towards recovery for all affected WorkSpaces instances.
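
If you run your own workloads in US-EAST-1 and want to track recovery alongside these updates, the following is a minimal sketch of checking EC2 status checks for a single Availability Zone with boto3. The zone name is a placeholder on our part, since AWS has not publicly named the affected zone.

```python
import boto3

# Placeholder AZ -- AWS has not named the affected zone publicly.
AFFECTED_AZ = "us-east-1a"

ec2 = boto3.client("ec2", region_name="us-east-1")

# List status checks for every instance in the zone, including stopped ones.
paginator = ec2.get_paginator("describe_instance_status")
for page in paginator.paginate(
    Filters=[{"Name": "availability-zone", "Values": [AFFECTED_AZ]}],
    IncludeAllInstances=True,
):
    for status in page["InstanceStatuses"]:
        print(
            status["InstanceId"],
            status["InstanceState"]["Name"],
            status["InstanceStatus"]["Status"],  # e.g. "ok" or "impaired"
            status["SystemStatus"]["Status"],
        )
```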
Posted Aug 31, 2019 - 10:57 CDT
Update
Below is additional information from Amazon Web Services. Please note that the timezone is Pacific.

7:16 AM PDT We are investigating connectivity issues affecting some single-AZ RDS instances in a single Availability Zone in the US-EAST-1 Region.
7:37 AM PDT We can confirm that some instances are impaired and some EBS volumes are experiencing degraded performance within a single Availability Zone in the US-EAST-1 Region. We are investigating increased error rates for new launches within the same Availability Zone. We are working to resolve the issue.
7:50 AM PDT We can confirm that some WorkSpaces instances are impaired within a single Availability Zone in the US-EAST-1 Region. We are working to resolve the issue.
Posted Aug 31, 2019 - 10:00 CDT
Identified
Our engineers are working to isolate the affected instances and move them out of the impacted Availability Zone.
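
For readers who want to do something similar in their own environment, the sketch below shows one way to enumerate running instances in a single Availability Zone with boto3, as a first step before draining or relaunching them elsewhere. It is not a description of our internal tooling, and the zone name is a placeholder.

```python
import boto3

# Placeholder AZ -- substitute the zone your resources actually run in.
AFFECTED_AZ = "us-east-1a"

ec2 = boto3.client("ec2", region_name="us-east-1")

# Collect the IDs of running instances in the affected zone so they can be
# drained, replaced, or relaunched elsewhere.
affected = []
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[
        {"Name": "availability-zone", "Values": [AFFECTED_AZ]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            affected.append(instance["InstanceId"])

print(f"{len(affected)} running instances in {AFFECTED_AZ}: {affected}")
```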

From AWS:
6:22 AM PDT We are investigating connectivity issues affecting some instances in a single Availability Zone in the US-EAST-1 Region.
6:54 AM PDT We can confirm that some instances are impaired and some EBS volumes are experiencing degraded performance within a single Availability Zone in the US-EAST-1 Region. Some EC2 APIs are also experiencing increased error rates and latencies. We are working to resolve the issue. https://status.aws.amazon.com/
Posted Aug 31, 2019 - 09:21 CDT
This incident affected: Application and Database Servers (Application Servers, Vault, Backend Microservices).