Cloud Management Demands an Organizational Shake-Up


The cloud is here. Most organizations now have contracts that allow them to build applications in a cloud environment. The cloud promises lower costs, greater efficiency, and greater security. But those savings cannot be realized without simplifying the organizational structure.

The Rules Have Changed

Every new technology creates new processes. When the personal computer arrived, it forced established information technology departments to change or die. Departments that refused to change saw their work supplanted by others that found cheaper, more efficient ways to operate. Why pay a big IT department for time on a mainframe when you can buy a PC that does the job for less?

The same scenario is playing out again with the cloud. Cloud computing lets you pay only for the processing and storage you need, on demand. It also removes the need for staff to install and configure servers, since cloud providers offer web interfaces for provisioning a new server instantly.
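
That on-demand model reduces "standing up a server" to a single API request. As a rough sketch, here is what such a request might look like with the AWS SDK for Python (boto3); the image id, instance type, and counts below are placeholders, not values from this article:

```python
# Sketch only: the AMI id is a placeholder, and the actual boto3 call is
# shown commented out since it requires AWS credentials and incurs cost.

def launch_params(instance_type="t3.micro", count=1):
    """Build the parameter set for one on-demand server launch request."""
    return {
        "ImageId": "ami-0abcdef1234567890",  # placeholder machine image id
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

# import boto3
# boto3.client("ec2").run_instances(**launch_params())
```

One function call replaces what used to be a hardware purchase and a manual install, which is the organizational point: the bottleneck is no longer technical.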

We do not manage client/server networks the same way we managed mainframes. Clinging to old processes and labor categories will hamper the cloud just as badly, and ultimately make it as inefficient as the old mainframe.

Let’s use companies that build internet applications as a guide. A small department can create an innovation so powerful that it supersedes much larger organizational groups. This happens when those small departments use the cloud, purchasing only the computing power they need. The first group that figures out how to bypass the old processes, policies, and rules builds something so important that those policies, rules, and organizational structures get redefined.

I have fallen into the trap of rigidly following old policies myself. There is a great technology called Hadoop, which processes big data by dividing it into small pieces and spreading the work across many servers. Because Hadoop distributes code and data across many computers, I rejected it, citing an organizational policy against automatic remote code execution. But it turned out that other divisions were building tools with Hadoop and proving that the technology could change the way data analysis happens. Once they demonstrated its power, the policy changed.
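
The idea behind Hadoop can be shown with a toy word count, the classic MapReduce example. This sketch simulates the split locally in plain Python; real Hadoop runs the map and reduce steps as separate processes across a cluster:

```python
from collections import Counter
from itertools import chain

def map_chunk(text):
    """Map step: emit (word, 1) pairs for one chunk of the input."""
    return [(word.lower(), 1) for word in text.split()]

def reduce_pairs(pairs):
    """Reduce step: sum the counts for each word."""
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

chunks = ["the cloud is here", "the cloud scales"]          # data split into pieces
mapped = chain.from_iterable(map_chunk(c) for c in chunks)  # each chunk maps independently
counts = reduce_pairs(mapped)                               # partial results combine
```

Because each chunk is mapped independently, the work parallelizes across as many machines as the data demands; that independence is exactly what required code to execute on remote servers, and what the old policy forbade.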

For revolutionary technology, there is a way to mitigate risk and modify policies. The adoption of Hadoop has revolutionized the way big data is processed in my organization and made many small groups the most powerful and efficient in meeting new mission sets.

Change the Roles

Right now, many teams are divided into these roles: users (analysts, statisticians, etc.), decision makers, requirements analysts, system administrators, database administrators, systems engineers, programmers, testers, project managers, and security engineers. With cloud systems, where almost anyone on this list can learn to start up a new virtual machine and install software on it, why are all these roles needed?

I have found that it works best to employ one or more technical generalists who know system administration, databases, programming, systems engineering, testing, and requirements. This technical generalist can get a new application running very easily. After sitting with the actual users, technical generalists can collect ideas, build, and show progress quickly.

The cloud is part of what makes this possible. People who build the solution that users need can instantly start up new servers and compute clusters on demand. If the application is successful, they can instantly scale up without having to wait on another department to start up the servers.

Teamwork?

DevOps

Consider combining development with operations and maintenance. The cloud forces this issue anyway, and you gain great efficiencies.

In the cloud, developers and testers are concerned about real production issues. The cloud makes it easier to deploy new software because the developers think of it up front and build code to automatically deploy the software. Developers need to learn how to do system administration in order to write better code, and system administrators need to learn how to code to deploy applications better.
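
"Building code to automatically deploy the software" can be as simple as expressing the runbook as a script. A minimal sketch, with placeholder step names and commands rather than any real pipeline:

```python
# Deployment as code instead of a manual runbook. The steps and commands
# are illustrative placeholders.

def deploy(steps, run):
    """Run each deployment step in order; stop at the first failure."""
    for name, command in steps:
        if not run(command):
            return f"failed at: {name}"
    return "deployed"

steps = [
    ("run tests", "pytest"),
    ("build image", "docker build ."),
    ("start service", "systemctl restart app"),
]

# In production, `run` would shell out (e.g. via subprocess.run); passing
# any callable that returns True/False also makes the logic itself testable.
```

Once deployment is a function, developers and system administrators are literally working in the same artifact, which is the DevOps point being made above.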

If a developer or analyst can make the decision to implement a new service on a new virtual machine, why should they wait for someone else to click a few buttons on a web interface to start up a new instance? In the worst cases, there is a long list of departments and boards that must approve the action.

It’s even better if you are able to find a motivated user who can actually build or prototype the application. They know exactly what is needed, and if they have the right technical team supporting them, they will find a way to get it done. I have seen analysts and mathematicians with access to computational power manipulate it on demand as the mission shifted. They were able to gain the technical title that allowed them to be analyst, system administrator, and programmer all in one; they changed the way things work.

Automate

Instead of hiring more people, make sure the people you have are writing code and scripts to automate the work.

  • Testers should write code to test applications repeatedly.
  • System administrators can write code to deploy patches and code.
  • Programmers can write code to test and deploy systems and set up monitoring tools to watch everything.
  • Analysts and mathematicians can write code to filter and sift the data.
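
Taking the first bullet as an example, a tester scripts a check once and reruns it forever instead of clicking through the application by hand. The function under test here is a stand-in, not code from any real system:

```python
def apply_discount(price, percent):
    """Stand-in for the application code being tested."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Runs identically every time: in CI, on a schedule, or before a deploy.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
```

The same pattern applies to the other bullets: patching, deployment, monitoring, and data filtering all become code that runs on demand instead of labor that scales with headcount.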

Everyone can rely on others in the group to come up with solutions together. But if the key players are in different departments, they will not be able to work together effectively. In my experience, it’s better to have a small team together than a large, disparate one.

With cloud computing, it is possible to automate the scaling up and down of servers. The right code makes it possible to have servers deploy themselves automatically as load goes up or when there are server failures. Work toward building systems that can utilize this feature of the cloud.
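
The core of such a scaling rule is small enough to sketch. Cloud autoscalers evaluate policies of roughly this shape; the CPU thresholds and server bounds below are illustrative, not recommendations:

```python
# A minimal scale-out/scale-in rule of the kind cloud autoscalers evaluate.
# Thresholds and bounds are placeholder values.

def desired_servers(current, avg_cpu, min_servers=2, max_servers=10):
    """Add a server above 75% average CPU, remove one below 25%."""
    if avg_cpu > 75:
        return min(current + 1, max_servers)
    if avg_cpu < 25:
        return max(current - 1, min_servers)
    return current
```

The floor also covers the failure case mentioned above: if a server dies and the fleet drops below the minimum, the same rule brings a replacement online without anyone filing a ticket.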

Accomplish the Mission

The key is to get everyone closer to the actual mission – solving the problem with software. Too often, work is tightly controlled by functional divisions. In these cases, the system administrator is able to keep the server running but has no power or responsibility to keep the network, database, or application up. But a running server with no network isn’t very useful to the mission.

The use of the cloud puts more power than ever before in the hands of people with technical skills. Anyone with an internet connection can write an application and deploy it on a cloud server for almost nothing. But within large organizations, we get stymied by processes and labor categories. We lack the access to develop and deploy new technology without impediments.

My recommendation is to find ways to collapse job roles and give technical generalists direct access to the resources they need. The cloud will only live up to its promise if we can control it directly.

What is your experience in deployment of solutions to the cloud? Does your bureaucracy get in the way?

To learn more about Volume Labs and Volume Integration, please follow us on Twitter @volumeint and check out our website.
