Microsoft Cloud Services – We were just having a bad day. Oops, days.

Although Microsoft operates multiple datacenters worldwide, with globally redundant, enterprise-grade infrastructure, automatic failover, a strict privacy policy, and a financially backed 99.9% Service Level Agreement (SLA), accidents still occur.

Moving to cloud-based services is still considered risky, but the benefits seem to outweigh the drawbacks. Those drawbacks can't be ignored, though. Consider hackers first: hacking is on the rise, with nefarious gangs springing up globally and targeting the largest organizations. Major breaches at Google, Twitter, and Amazon have all shown that compromised data carries hidden costs and repercussions.

But some outages and problems can be attributed directly to carelessness within the organization. Not to pick on Microsoft, but February 23, 2013, was not a good day inside the walls of Redmond. First the worldwide Azure cloud service went offline because of an expired security certificate. If that wasn't bad enough, Microsoft then discovered that the same malware infection already found and fixed by Twitter, Facebook, and Apple had crept into its in-house systems. Add to that the continuing security holes in Java: although Java 7 was supposed to address them, they apparently persist.

A week or so later, Microsoft users went through a similar ordeal, one that mostly affected Hotmail and SkyDrive, among Microsoft's more essential cloud services. The services "suffered from a service interruption caused by a firmware update which failed 'in an unexpected way'", according to Microsoft Vice President Arthur de Haan. The failed firmware update occurred in a "core part" of the physical plant in one of Microsoft's datacenters, leading to a "substantial temperature spike in the datacenter". The heat was "significant enough" to trigger "safeguards" for "a large number of servers in this part of the datacenter" — the area where Microsoft houses parts of the Hotmail and SkyDrive infrastructure.

In a just-released update, Microsoft reports that Office 365/Outlook users and administrators are experiencing transition problems. The "fix" for this supposedly non-disruptive transition is to verify the configuration on every device, which could be extremely cumbersome. Microsoft's guidelines to Partners state: "In an effort to keep your service upgrade experience as seamless as possible, we suggest that you take a few minutes to validate that your current environment is configured properly. Prior to upgrade it is important that you verify your Autodiscover and MX records in DNS, as well as ensuring that your clients are up-to-date with the versions listed below. Failure to do so could result in your Outlook clients being unable to connect to their mailboxes after the Service Upgrade occurs."
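For administrators wondering what that DNS verification actually looks like, here is a minimal sketch of a spot-check using the common `dig` utility. The domain `example.com` is a placeholder, not a real tenant — substitute your own domain, and compare the returned values against the records your provider's guidance specifies.

```shell
# Hypothetical spot-check of the DNS records called out in the upgrade guidance.
# "example.com" is a placeholder -- substitute the domain configured for your tenant.
DOMAIN="example.com"

echo "MX records for $DOMAIN:"
dig +short MX "$DOMAIN" || echo "(lookup failed -- is dig installed?)"

echo "Autodiscover CNAME for autodiscover.$DOMAIN:"
dig +short CNAME "autodiscover.$DOMAIN" || echo "(lookup failed -- is dig installed?)"
```

Running this across every federated domain before the upgrade window is far quicker than troubleshooting disconnected Outlook clients afterward.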

None of this is unique to Microsoft's cloud services; it is in the same boat as other providers. Should we just chalk it up to the organization having a few bad days? Or is oversight somewhat lax? Do these types of potential issues affect your decision when selecting a cloud application vendor?

Concept Searching