Predictions come true

Now, it’s all very easy to claim things that have already happened as something you predicted. But that’s exactly what I’m about to do. I ask your indulgence to suspend disbelief as you read the rest of this blog, past and future, and to trust that what I’m about to share really were things I predicted.

First up, virtualisation and why I say Vmware is legacy.

“Vmware is legacy virtualisation” – REALISED

Now, before I start, Vmware vSphere is a decent virtualisation platform.  But that’s all it is.  It isn’t cloud, it isn’t Mode 2.  So that you understand I’m not being biased here because I work for Red Hat, let me share some of my history with Vmware’s virtualisation suite.

I first started using Vmware virtualisation back in 2004, but it wasn’t until 2005 that I really started using it at scale in an enterprise environment, rolling out hundreds of Vmware ESX 2.5 hosts across a number of government departments, in both remote branches and centralised datacentres.

After that, my next major use of Vmware’s virtualisation came in 2006, using Vmware GSX on Windows Server 2003 to virtualise Windows NT 4 across hundreds of bank branches.

Following on from those two primary projects, I designed, built and sold Vmware virtualisation platforms of various sizes over the next six years.  These included technology refreshes of aged hardware onto virtual deployments, cross-site disaster recovery with numerous storage vendors both before and after SRM, ASD-level secure virtualisation platform standards, and significant Microsoft SQL Server 2005 migrations.  I even spent three years working for a reseller that sat on the prestigious Vmware Partner Council.

So, when I label Vmware’s virtualisation products as legacy, it’s not through ignorance or disdain.  It really is a decent virtualisation solution.  But as I said, that’s all it is.

When you compare Vmware vSphere to Red Hat Enterprise Virtualization, Red Hat OpenStack Platform or Red Hat Cloud Infrastructure, you can really see the difference.  While Red Hat Enterprise Virtualization is still a traditional virtualisation solution, it includes significant additional functionality that makes it next-generation rather than legacy virtualisation: a User Portal that is included rather than an add-on as it is for Vmware, enterprise-grade Virtual Desktop Infrastructure (VDI), and integration with Neutron software-defined networking and the Glance image repository when combined with Red Hat OpenStack Platform via Red Hat Cloud Infrastructure.
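To give a feel for what that OpenStack integration looks like from the consumer side, here is a minimal sketch using the openstacksdk Python library to list Glance images and Neutron networks. The cloud name “engineering” and the assumption that credentials live in clouds.yaml are mine for illustration, not details from the platforms discussed above.

```python
# Minimal sketch: querying Glance images and Neutron networks with openstacksdk.
# Assumes a cloud named "engineering" is defined in clouds.yaml (an illustrative name).
import openstack

# Connect using the named cloud entry (or OS_* environment variables).
conn = openstack.connect(cloud="engineering")

# Glance: list the images available to this project.
for image in conn.image.images():
    print("image:", image.name, image.status)

# Neutron: list the software-defined networks this project can attach to.
for network in conn.network.networks():
    print("network:", network.name, "shared:", network.is_shared)
```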

If that isn’t enough, consider that x86 virtualisation is now commoditised, with KVM, Red Hat Enterprise Virtualization, Microsoft Hyper-V, Xen and Oracle OVM all readily available.  One must ask the question, “If all I’m getting is a basic virtualisation hypervisor, why am I paying so much for it?”

The catch-cry of Red Hat Enterprise Virtualization is “double the performance at half the cost”.  The performance claims are backed up by various SPECvirt_sc2013 results.  Not only do KVM and Red Hat Enterprise Virtualization significantly outperform Vmware vSphere on similar hardware, Vmware don’t even submit results for 8-way servers.

These and more are all reasons Red Hat Cloud Infrastructure was placed in the Visionaries quadrant of Gartner’s April 2016 Magic Quadrant.

These are just a few reasons why I claim that Vmware vSphere is “legacy virtualisation”.

But don’t just take my word for it.  Around Vmworld 2016, various press stories made similar claims.

http://www.theregister.co.uk/2016/08/31/thoughts_from_vmworld_2016_is_vmware_becoming_synonymous_with_legacy/

http://diginomica.com/2016/09/02/it-diehards-seek-shelter-at-vmworld-as-the-future-rushes-past/

Avoid the risk of public cloud lock-in, be agnostic – REALISED

Public cloud vendors offer some amazing advantages across a wide range of dimensions, and there is quite a bit of differentiation between the leading providers.  Organisations are rapidly adopting public cloud, startups and established enterprises alike.

With so many advances available from public cloud providers, in both the services they offer and their easily adopted APIs, it’s understandable that developers and operations staff alike are tempted to consume them directly.

But that’s where you run into trouble.  All of those public cloud vendors are trying to differentiate themselves with their services, and they want you to tightly couple your applications to their interfaces.  It makes the vendor sticky, and that much harder for the customer to extricate themselves from such a tangled web.

The pointy end of this problem arises when the chosen public cloud vendor no longer meets the customer’s requirements.  It may be related to cost, missed SLAs, reliability, geographic availability, change in public cloud vendor direction, functionality, a change in customer management or direction, commercial terms, market changes or simply a decision to make a change.

If your application stack and cloud implementation are built directly on native consumption of the public cloud vendor’s services and interfaces, moving to another vendor is likely to be a world of pain.  Unless the alternate public cloud vendor offers 100% fidelity with the incumbent’s functionality and APIs, you will need to rewrite a considerable amount of the application, which may well negate the benefits you expected from adding or changing to the new vendor.

This is where a cloud-agnostic approach allows for easy migration between clouds, both public and private.

Think of it another way.  When you change your electricity provider or telephone company, you don’t rip out all of your wiring.  So why would you do it for your cloud implementations?
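To make “agnostic” a little more concrete at the infrastructure layer, here is a minimal sketch using Apache Libcloud, where the cloud provider is a configuration parameter rather than something baked into the application. The providers, credentials, region, image and size names below are placeholders of mine, not recommendations from this post.

```python
# Minimal sketch: provider-agnostic provisioning with Apache Libcloud.
# Swapping clouds becomes a configuration change, not an application rewrite.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver


def get_compute_driver(provider_name, key, secret, **kwargs):
    """Return a compute driver for the named provider (e.g. 'EC2', 'OPENSTACK')."""
    cls = get_driver(getattr(Provider, provider_name))
    return cls(key, secret, **kwargs)


def launch_node(driver, name, image_name, size_name):
    """Create a node by looking up an image and size by name, the same way on any cloud."""
    image = next(i for i in driver.list_images() if image_name in (i.name or ""))
    size = next(s for s in driver.list_sizes() if s.name == size_name)
    return driver.create_node(name=name, image=image, size=size)


# Placeholder credentials and names; the functions above stay the same whether the
# driver points at AWS EC2, OpenStack or another supported provider.
driver = get_compute_driver("EC2", "ACCESS_KEY", "SECRET_KEY", region="ap-southeast-2")
# driver = get_compute_driver("OPENSTACK", "user", "password",
#                             ex_force_auth_url="https://keystone.example.com:5000",
#                             ex_force_auth_version="3.x_password",
#                             ex_tenant_name="demo")
node = launch_node(driver, "web-01", "Ubuntu", "m4.large")
print(node.name, node.state)
```

The key design choice is that only the configuration names the provider; the rest of the code talks to a common abstraction, which is the wiring you get to keep when you switch suppliers.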

We broke the lock-in stack of proprietary hardware, proprietary operating systems and proprietary business software.  IBM, Intel, Microsoft and Linux allowed us to break free from the shackles of vendor lock-in with open systems and standards.

I touched on this in my post on contestability and I’ll write more on it in the near future.

At the recent Oracle OpenWorld event, king of lock-in Larry Ellison ironically opined that using native public cloud services amounts to lock-in:

“Amazon is more closed than an IBM mainframe”

http://www.businessinsider.com.au/larry-ellison-on-oracle-cloud-and-amazon-web-services-2016-9

“Build an app on Redshift and you will be running it forever on Amazon – you are locked in, baby”

http://www.theregister.co.uk/2016/09/21/larry_ellison_amazon_databases/

Which, to be fair, is very different to Larry’s September 2008 three-minute rant on “What the hell is cloud computing?”

Public cloud landrush will recede and balance out – IN PROGRESS

Ever since the first public cloud vendor sprang up, there have been claims that public cloud will swallow all customer datacentres and that every service will end up hosted in the public cloud.

Of course, public cloud vendors tout this as self-evident.  And why wouldn’t they?  They’re the primary beneficiaries, and their business model depends on it.

By no means do I begrudge them advocating this position, after all they’re a business.

The reality however is far from peachy for full public cloud adoption.

It definitely makes sense for start-ups to use a pure public cloud approach until they reach a critical mass where they re-evaluate their strategy.

However, the story is vastly different for enterprises with significant investment in existing technology.  Aside from technical challenges such as endianness, organisations just aren’t going to move their large POWER and Z Series applications to a public cloud running on x86.  IBM Softlayer does have an advantage here, but it is more of a managed service provider than a true public cloud.  The number of times I’ve worked with IBM cloud architects who explained delays in customer deployments with “we haven’t ordered the servers yet” still astounds me.

What I do see as highly likely, at least for the foreseeable future, is that low-rent services such as web services, the presentation layer and some simple business logic systems as well as cloud native applications make sense to place in a public cloud.

Larger systems, such as those running on POWER and Z Series, or even the large, monolithic beasts running on Intel x86 platforms, do not make economic or technical sense to move to a public cloud in the majority of cases.  At least not until they’re redeveloped to better suit a cloud architecture.

What I also see as highly likely is that organisations will increasingly move out of their own built and managed facilities and into third-party managed facilities.  Building and running your own datacentres makes little sense for the majority of enterprises, so why reinvent the wheel for something someone else can do better and cheaper?  But those organisations will still run their own hardware and systems.

Jason Forrester, former Global Datacenter Network Manager for Apple, echoes these sentiments.  It is one of the reasons he went out on his own, taking most of his infrastructure team with him, to start a software-defined networking company.

“Most enterprise applications are highly customized for the company’s needs, which means they don’t fit neatly into the public cloud mold”

Cloud doesn’t require virtualisation – REALISED

For a very long time there was a false reality pushed upon everyone that to have cloud, you also had to have virtualisation.  This, quite frankly, drove me nuts.  Nowhere in any of the leading definitions of cloud does it say the compute resource has to be virtualised.

The accepted authority for the definition of cloud computing is the National Institute of Standards and Technology (NIST), whose definition is published in Special Publication 800-145.  Its five essential characteristics are:

On-demand self-service

A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.

Broad network access

Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).

Resource pooling

The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.

Rapid elasticity

Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.

Measured service

Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

As you can clearly see, the one and only mention of “virtual” in the above definition is in reference to resource pooling, where physical resources are listed as equally valid.

Nowadays a number of public cloud vendors offer dedicated physical hosts with all the benefits of their cloud service offerings.  Amazon Web Services and IBM Softlayer are two easy examples.
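To illustrate, here is a minimal, hedged sketch of consuming dedicated physical capacity through the same self-service API used for virtual machines, using boto3 against AWS EC2 Dedicated Hosts. The region, availability zone, instance type and AMI ID are placeholders of mine.

```python
# Minimal sketch: dedicated physical hosts consumed through the same on-demand,
# API-driven workflow as virtual instances. Region, AZ, type and AMI are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Allocate a Dedicated Host: a physical server reserved for this account.
response = ec2.allocate_hosts(
    AvailabilityZone="ap-southeast-2a",
    InstanceType="m4.large",
    Quantity=1,
)
host_id = response["HostIds"][0]

# Launch an instance pinned to that physical host, exactly as you would any other.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="m4.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
```

The point is simply that on-demand self-service, resource pooling and measured service all apply here, yet nothing in the workflow requires a hypervisor.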

Wrap up

So, there are quite a few items here.  It feels good to finally get these written down.  In future, I’ll post predictions ahead of time to make this easier.

By my score, that’s 3.5 / 4 or 87.5% accuracy.  Not too bad.

 
