Looking Ahead: A Cloud Report from 2015 – or – A Look into the Crystal Ball from Someone Who Ought to Know

Mark Settle, CIO, BMC Software

As a rule, it is the analysts and market researchers who regularly venture a look into the crystal ball and then tell us what we will be doing in two, five, or ten years. In the following contribution, for once, one of the people "affected" by such forecasts, an IT practitioner, dares to look the other way: back from the year 2015 at the five years of cloud computing that will by then have passed. That practitioner is Mark Settle, Chief Information Officer of BMC Software.

In the foreword to the letter, BMC Software explains that the company spends a great deal of thought on the future of cloud computing, and asks itself what this means for IT vendors, for user companies, and above all for CIOs, who have to make sure that IT supports the business. For this reason, the company asked its own CIO, Mark Settle, to write a "letter from the year 2015" describing how, in his view, cloud computing will have evolved by then and how CIOs will therefore have to change their thinking in order to use this technology as efficiently as possible.

Musings from 2015…

With the benefit of 20/20 hindsight, I have to ask myself how I — and most other CIOs — deluded ourselves into thinking that hardware was cheap for so long. In the old days, we would let different functional groups in the company fall in love with individual business applications, each of which had its own unique hardware requirements and middleware architectures. Because hardware expenses were always the smallest fractional piece of any major system acquisition, we would continually oversize the hardware to guard against potential system performance problems. To be perfectly blunt, we didn’t really perform capacity management because we didn’t want to know the answer: we all intuitively knew that our server and storage farms were significantly underutilized.

When public cloud providers started to provision IT infrastructure-on-demand on a “pay-as-you-go” basis, we all awoke from our reveries and realized that these new on-demand options would soon displace our in-house operations with their efficiency and reliability. So, we started to act like public cloud providers within our own companies. We laid down the law with our business clients and told them which hardware and middleware architectures we would support and which ones we would not. We told them: “If it doesn’t run on one of our standard architectures, you can’t buy it.”

Then we had our day of reckoning with our CFOs. We had to go in and admit that all of that hardware they had allowed us to purchase over the past decade was really just sitting down in the data center running below 50 percent of capacity on any given day (excepting the mainframe, of course). We told them that if they would just let us buy capacity in advance of demand in the future, and avoid incremental purchases tied to the acquisition of major systems, we could achieve much higher rates of return on our IT hardware. We just had to agree to start reporting on capacity utilization in exchange for the new policy of purchasing capacity in advance of demand.
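To make that utilization reporting concrete, here is a minimal sketch of a fleet-wide report in Python; the sample servers, their utilization figures, and the 50 percent threshold are all invented for illustration.

```python
# Minimal sketch of a fleet-wide capacity utilization report.
# Sample data and the 50% threshold are illustrative assumptions.

servers = {
    "erp-db-01":   0.38,   # average utilization over the reporting period
    "crm-app-02":  0.22,
    "web-farm-03": 0.61,
    "bi-batch-04": 0.17,
}

THRESHOLD = 0.50  # flag anything running below 50% of capacity

fleet_average = sum(servers.values()) / len(servers)
underutilized = {name: u for name, u in servers.items() if u < THRESHOLD}

print(f"Fleet average utilization: {fleet_average:.0%}")
for name, u in sorted(underutilized.items(), key=lambda kv: kv[1]):
    print(f"  UNDERUTILIZED {name}: {u:.0%}")
```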

It’s amazing how long we fussed and fretted about security concerns in the public clouds back then. Now, we routinely burst out to public cloud providers to handle specific types of workloads. We characterize the security requirements of each computing workload and use the appropriate encryption or data aliasing techniques to ensure the security of data being passed to the external providers. Although some forms of data are too sensitive to ever leave our company data center, those workload allocation decisions are now handled through our automated provisioning processes. We don’t have philosophical debates about what can and can’t be passed outside our firewall. We solved that problem a long time ago.

Ten years ago, it could take three-to-six weeks of meetings, emails, hardware evaluations, and pricing negotiations just to get a set of servers provisioned for a development group. By 2010, using standard configurations and automated scripts in a virtual environment, CIOs could get those same servers up in either a public or private cloud in three-to-four hours. Today, we can do it in minutes.
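As a sketch of what those “standard configurations and automated scripts” might look like, here is a minimal template-driven provisioning routine in Python; the server types, sizes, and the provision_vm stub are assumptions standing in for whatever virtualization or cloud API is actually in use.

```python
# Minimal sketch of template-driven server provisioning.
# STANDARD_CONFIGS and provision_vm() are hypothetical stand-ins for
# the standard architectures and provider API actually in use.

STANDARD_CONFIGS = {
    "development": {"cpus": 2, "ram_gb": 4,  "disk_gb": 50},
    "qa":          {"cpus": 2, "ram_gb": 8,  "disk_gb": 100},
    "web":         {"cpus": 4, "ram_gb": 8,  "disk_gb": 50},
    "application": {"cpus": 8, "ram_gb": 32, "disk_gb": 200},
}

def provision_vm(name: str, config: dict) -> None:
    # Placeholder: in practice this would call an internal
    # virtualization layer or a public cloud provider's SDK.
    print(f"provisioning {name}: {config}")

def provision(server_type: str, count: int = 1) -> None:
    """Provision `count` servers of a standard type; reject anything else."""
    if server_type not in STANDARD_CONFIGS:
        raise ValueError(f"{server_type!r} is not a supported standard architecture")
    for i in range(1, count + 1):
        provision_vm(f"{server_type}-{i:02d}", STANDARD_CONFIGS[server_type])

provision("qa", count=3)   # minutes, not weeks
```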

Having standard, automated scripts ready to provision different types of servers (such as development, QA, Web, or applications) in the cloud not only saved huge amounts of staff effort but also kept the business flexible. These days, I manage a hybrid cloud computing platform consisting of both internal, on-premise infrastructure and a collection of third-party providers that I use on an “as-needed” basis to handle specific computing workloads. Come to think of it, that’s no different from the way in which we have used strategic offshore outsourcing partners to help develop and maintain many of our business applications.
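The hybrid platform described above implies an allocation policy: each workload’s security requirements decide whether it stays in-house or bursts out to a public provider, and with what protection. Here is a minimal sketch of such a policy; the sensitivity labels, protections, and routing rules are purely illustrative, not an actual BMC process.

```python
# Minimal sketch of policy-driven workload allocation in a hybrid cloud.
# Sensitivity labels, protections, and routing rules are illustrative.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    sensitivity: str  # "public", "confidential", or "restricted"

def allocate(w: Workload) -> str:
    if w.sensitivity == "restricted":
        # Too sensitive to ever leave the company data center.
        return f"{w.name}: run on internal cloud"
    if w.sensitivity == "confidential":
        # May burst out, but only with encryption or data aliasing applied.
        return f"{w.name}: encrypt/alias data, then burst to public provider"
    # Non-sensitive workloads go wherever capacity is cheapest.
    return f"{w.name}: burst to public provider"

for w in [Workload("payroll-batch", "restricted"),
          Workload("crm-analytics", "confidential"),
          Workload("load-test", "public")]:
    print(allocate(w))
```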

Following the Y2K scare in the late 1990s, I spent five years learning how to virtualize my application and development teams by leveraging IT resources situated in Asia and Eastern Europe. I’ve spent the past five years doing the same thing with my infrastructure, blending the use of in-house computing assets and external assets to meet the needs of my business clients as cost effectively as possible. I guess history does repeat itself after all.

It’s a good thing we were smart enough to hop on the cloud computing bandwagon back in 2010, or else we would never have had the flexibility and resources required to leverage the explosion in mobility applications that has occurred over the past five years. In fact, we’d still be spending 60 percent of each IT dollar “keeping the lights on” for our legacy business systems. Back in 2010, I had a good year if I was able to deliver two-to-three major new business applications and churn out the quarterly and monthly releases needed to support the “rat’s nest” of legacy systems I had to maintain. I found myself competing with Apple and Android developers who were churning out 10,000+ new apps per quarter!

It’s kind of the same story as cloud computing. I gave up trying to compete with these mobile developers and became one of them instead. My team started developing much more granular business services that could be delivered on tablet platforms much more frequently. In effect, our application development teams evolved into small, internal SaaS vendor teams, mashing together granular services to improve the productivity of our end users and respond to competitive threats to our business.

We had started to use SaaS versions of our standard business applications in the 2005–2010 timeframe. Our users were delighted with the rapid introduction of new functionality furnished by the SaaS vendors, and we were delighted with the lower support costs relative to our legacy systems. Over the past five years, we’ve become SaaS providers ourselves. Most of the applications that we employ in the company today are really just “mash-ups” of various business services. The majority of our users don’t know if a specific service is being delivered from our ERP system, our customer support system, our eCommerce system, etc. They merely subscribe to the services they need to do their jobs. We don’t try to push functionality at them anymore. They just pull what they want and need, and then they share it more broadly across different functional teams and geographic groups than we ever had before.
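To illustrate the “mash-up” idea, here is a minimal sketch of an application composed of granular services that users subscribe to, with the source system hidden behind a catalog; all service names and backend systems are invented for illustration.

```python
# Minimal sketch of an application as a mash-up of granular services.
# Service names and backend systems are invented for illustration.

SERVICE_CATALOG = {
    "customer-360":   ("crm",      lambda cid: f"profile for {cid}"),
    "open-invoices":  ("erp",      lambda cid: f"invoices for {cid}"),
    "support-status": ("helpdesk", lambda cid: f"open tickets for {cid}"),
}

class Subscription:
    """A user pulls the services they need; IT never pushes functionality."""
    def __init__(self, services: list[str]):
        unknown = set(services) - SERVICE_CATALOG.keys()
        if unknown:
            raise ValueError(f"not in catalog: {unknown}")
        self.services = services

    def dashboard(self, customer_id: str) -> dict:
        # The user sees one mashed-up view; the source system stays hidden.
        return {name: SERVICE_CATALOG[name][1](customer_id)
                for name in self.services}

sales_rep = Subscription(["customer-360", "open-invoices"])
print(sales_rep.dashboard("ACME-0042"))
```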

One of the big payoffs of cloud computing for our organization is that it freed up a lot of the time we used to spend managing hardware. We redirected that time to managing the data and information that feed our business applications, and this has had a much bigger impact on the effectiveness of our day-to-day business operations.

We live in a world today that is radically different from the world that existed just five years ago. We no longer spend 60 cents of every IT dollar on application maintenance, data center operations, facility expenses, etc. Instead, we spend 60 cents of our IT dollars on delivering new application services and new forms of “clean data” to our end users. The IT staff is actually a lot happier: they realize that we are generating real value for our business partners and not just running around frantically trying to maintain a bunch of underutilized hardware and software. We’re also a lot closer to achieving the mythical state of “IT and Business Alignment” that still shows up on Gartner’s Top Research Topics for 2015.

Now if they could just build a coffee cup warmer into my tablet …

Conclusion: Perhaps someone can start working on the “coffee cup warmer” problem today and become “Entrepreneur of the Year” in five years … The editors of the Cloud Computing Report will then gladly be available to negotiate a commission on the business idea 🙂

