2019-March-11 Service Incident
Incident Report for Sauce Labs US West Data Center
Postmortem

Dates:

March 11th, 2019, 2:53 PM - 4:06 PM PDT

What happened:

The ondemand and API endpoints for our US-based product were unreachable.

Why it happened:

A subtle error in our configuration automation caused a key proxy service to stop working in our US West environment. The issue had not surfaced in our other production environments, even though the same change had been deployed there using the same automation.
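To illustrate the general failure mode (the report does not name the tooling involved, so every name below is hypothetical), the sketch shows how identical templating automation can produce a valid proxy configuration in one environment and an invalid one in another, purely because of a per-environment variable difference:

```python
# Hypothetical illustration only: the same templating code yields a valid
# proxy config in one environment and a broken one in another, depending
# solely on per-environment variables. Names and structure are assumptions,
# not Sauce Labs' actual automation.

ENV_VARS = {
    "us-west": {"upstreams": []},  # one environment's variable file is empty
    "eu-central": {"upstreams": ["10.0.2.5:8080", "10.0.2.6:8080"]},
}


def render_proxy_config(env):
    """Render a minimal upstream block for the given environment."""
    servers = ENV_VARS[env]["upstreams"]
    body = "\n".join(f"    server {s};" for s in servers)
    # Subtle bug: an empty server list still renders an upstream block,
    # which a proxy such as nginx rejects at reload time.
    return f"upstream backend {{\n{body}\n}}\n"


for env in ENV_VARS:
    print(f"--- {env} ---")
    print(render_proxy_config(env))
```

In a scenario like this, the proxy in the affected datacenter refuses to load the rendered configuration while the other datacenters keep working, which matches the "same change, same automation, one broken environment" pattern described above.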

How we fixed it:

We reverted the change.

What we are doing to prevent it from happening again:

We have refactored the configuration automation so that this class of error can no longer occur.
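As a hedged sketch of what such a safeguard could look like (the actual refactor is not described here, and all names below are assumptions), a pre-deployment check might validate the rendered configuration for every environment and fail the pipeline before anything reaches a live proxy:

```python
# Hypothetical pre-deployment guard, not the actual refactor: validate the
# rendered proxy configuration for every environment before rollout, so an
# invalid render fails the pipeline instead of reaching a live proxy.
# The rendered snippets and the validation rule are assumptions for
# illustration only.

RENDERED_CONFIGS = {
    "us-west": "upstream backend {\n}\n",  # invalid: no servers rendered
    "eu-central": "upstream backend {\n    server 10.0.2.5:8080;\n}\n",
}


def validate(config):
    """Reject configurations a real proxy would refuse to load."""
    if "server " not in config:
        raise ValueError("upstream block contains no servers")


def preflight(rendered):
    """Run in CI before deployment; fail fast if any environment is broken."""
    failures = []
    for env, config in rendered.items():
        try:
            validate(config)
        except ValueError as exc:
            failures.append(f"{env}: {exc}")
    if failures:
        raise SystemExit("config preflight failed:\n" + "\n".join(failures))


if __name__ == "__main__":
    preflight(RENDERED_CONFIGS)  # exits non-zero: us-west renders no servers
```

The point of this kind of design is that a configuration a proxy would reject fails fast in CI for the specific environment, rather than being discovered only when that environment's proxy stops serving traffic.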

Posted Mar 18, 2019 - 10:54 PDT

Resolved
We have recovered, and all systems should have returned to normal.
Posted Mar 11, 2019 - 16:04 PDT
Monitoring
We have taken remedial action and are monitoring.
Posted Mar 11, 2019 - 15:38 PDT
Update
We are continuing to investigate this issue.
Posted Mar 11, 2019 - 15:01 PDT
Investigating
Our Virtual Desktop, Emulator, and Simulator services are inaccessible right now. We are taking remedial action.
Posted Mar 11, 2019 - 14:58 PDT
This incident affected: Manual Testing (Manual VM Testing), REST API (REST API VMs), Sauce Connect (Sauce Connect VM), and Automated VM Testing (Automated PC Testing, Automated Mac Testing, Automated iOS Simulator Testing, Automated Android Emulator Testing).