DS Galecki Enterprises Ltd - Blog

Can I avoid Cloud Vendor lock-in?
Tue, 17 Nov 2020 - http://galecki.ca/blog/can-i-avoid-cloud-vendor-lock-in

One of the most contentious topics when talking about adopting Cloud Services is Vendor Lock-In. Most discussions revolve around how to avoid it. So, is Vendor Lock-In a myth or a reality?

A History Lesson:
Before I answer this question, let's take a look at history... Vendor Lock-In is not a new concept - it has been a concern and a reality for many years. When companies invest in hardware and, especially, in software, they become dependent on that vendor's technology over time. This is less of an issue with hardware, although even in traditional data centers some Vendor Lock-In occurs. It definitely occurs with software. When companies adopt a particular database or operating system, they effectively begin to lock themselves into those technologies and become dependent on those vendors. I have talked to many organizations who, over time, wanted to move away from Oracle DB or IBM DB2 but found the transition very difficult and costly in terms of time and resources. This is because IT used those technologies to build IT Services. And, in many cases, moving away from those services means re-writing existing applications.
The same happened with other technologies. As new technologies were adopted (e.g. OS, Virtualization, etc.), they promised new capabilities but often resulted in lock-in with a particular technology. In the case of Virtualization, for example, while the workloads themselves became portable, the skillsets and toolsets involved left IT locked into particular virtualization technologies.

Reality in the Cloud Era:
As companies adopt Cloud Services, they often think of Vendor Lock-In and ways to avoid it. It's easy to get started with a Cloud - all you need is an email address and a credit card. You can spin up a virtual server in seconds and start using it in minutes. Easy! As you spend more time in a Public Cloud environment, you start taking advantage of more capabilities.
What about Vendor Lock-In? In my opinion, Vendor Lock-In is a reality - even more so than in the past. And it's a view shared by others. I spoke with a product manager from one of the major Public Cloud Vendors and he agreed - people who think they won't get locked in with their chosen Cloud Provider will find that their expectations aren't met. Sure, there are ways to minimize Vendor Lock-In, but at what cost?
Should I Fear Vendor Lock-In?
The short answer is NO! Vendor Lock-In is a reality. Let's accept it. Let's understand it. Let's PLAN how to deal with it.
You can minimize Vendor Lock-In, but that may not be your best option. Why? To minimize Vendor Lock-In, you will need to restrict users from consuming some of the Cloud-native capabilities, like PaaS, Code-as-a-Service or Machine Learning.
I propose that you should not try to PREVENT Vendor Lock-In, but rather PLAN for how to deal with it by defining end-of-service requirements for every Cloud-based Service. Every IT Service has an expected lifespan. It's true that in some instances IT Services last long past their expected lifespan, but we need to treat those as exceptions rather than the rule. In the age of Cloud, it is imperative that we define requirements for end of service life as part of the core product requirements. This allows organizations to use the best available technologies to create a service today, and as the capabilities of Cloud (and beyond) evolve, IT can plan to take advantage of them and properly transition to "next generations" of IT Services. Of paramount importance is making sure that you plan how to preserve any data at the end of service life. This is true for IT-built services on Public and Private Cloud platforms as well as for SaaS services.
In summary, I believe Vendor Lock-In is inevitable. It is a consequence of adopting the advanced technologies available, and it allows us to take advantage of the best capabilities on offer to help drive business advantage. But we need to make sure that we understand the areas of lock-in and plan an exit strategy even as we prepare to create the current set of services.

Minimizing Cloud spend requires change of habits
Tue, 17 Nov 2020 - http://galecki.ca/blog/minimizing-cloud-spend-requires-change-of-habits

One of the most often discussed topics around Cloud Management is Spend Optimization.  It's covered by major analyst firms like Gartner and there are many products out in the market that can help you control Cloud spend.  Public Cloud vendors also typically offer tools that help organizations track their spend.
So it seems the topic is well understood and there is no problem, right?

Not quite...

Analyzing your bills is reactive - you don't get the information until after the expense has already occurred.
Many Cloud Spend Management tools allow you to set up policies that help reduce unnecessary spending. But again, this happens after the fact, once the expense has already been incurred.

You may ask - isn't that the best we can do? Some "waste" will always occur.
You are partially correct. While I believe some "waste" will always happen, we can minimize it.

Tools are great to have, but what about human behaviour?  Tools by themselves will not solve the problem.   They will show you waste and work to minimize it.  But, you also need to educate people about efficient use of Cloud resources  and define and apply Policies.
Remember the old concept of using a combination of "People-Process-Technology"?  It still applies.

You need Technology to ensure that spend data is collected and broken down on a regular basis, and to apply automated Processes that prevent excessive waste.
That will take care of much of the problem, but you can do much better without creating the impression that users are "always being watched".

You also need to ensure that you have appropriate Processes in place to PREVENT waste from happening. Technology will minimize waste that has already happened; well-defined Processes allow you to stop much of that waste from occurring in the first place.
By defining and implementing standard Processes, you can ensure that users follow a standard approach to request and obtain a Cloud Service (whether it's a virtual machine, access to Machine Learning/AI capabilities, a Cloud DB or any other service). By doing so, the company will be aware of all activities and can quickly catch unexpected or unauthorized usage attempts.
Companies should employ a standard catalog to give users easy access to Authorized Cloud Resources and to control which resources employees are able to access. This helps ensure that the company knows who is using what.
Service catalogs can also ensure that Cloud Services are decommissioned easily - preventing unnecessary expenses.

But, perhaps the most important aspect of minimizing waste is user behaviour.
People can ensure that they use Cloud Services appropriately, in a way that minimizes waste. But we all have habits that we have built over the years. Historically, we used company-owned resources - they were acquired and available to be used by anyone who had the authority to do so. Having a VM or a server that sat idle wasn't a big deal.
But, in the age of Cloud Services, this behaviour will result in waste.  We need to learn to:
  • Select right-sized instances,
  • Shut down services when not in use,
  • Delete any resources that are not needed
If I have a Cloud Instance that I use for my own testing, shutting it down when I go for lunch or into meetings WILL save money.  Deleting the instance after I complete my testing WILL save money.  Deleting no longer needed Storage WILL save money.
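
To make this concrete, here is a minimal sketch of the "shut it down when not in use" habit, assuming AWS and the boto3 SDK (the "AutoStop" tag name is hypothetical - your organization would define its own convention). A script like this could run on a schedule every evening; Azure and GCP offer equivalent APIs.

# A minimal sketch, assuming AWS and the boto3 SDK. The "AutoStop" tag name is
# hypothetical - use whatever tagging convention your organization defines.
import boto3

def stop_tagged_test_instances(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)

    # Find running instances that are tagged as safe to stop outside working hours.
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:AutoStop", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]

    # Stopping (not terminating) keeps the disks but stops the compute charges.
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Stopped:", stop_tagged_test_instances())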

Education is a big part of the move to Cloud Services. Companies need to ensure that users are aware of when expenses occur. Companies should provide users with tools that make it easy to stop or delete Cloud Services when they are not needed.

With Cloud, more than your own "traditional" data center, it is critical that you provide users with:
  • Education on how to responsibly use Cloud Services and when costs are incurred
  • Policies governing responsible use of Cloud Resources
  • Technology to ensure that users can access Cloud Services easily, and to monitor resource usage so you can identify and minimize waste

In my opinion, education is critical. We all must change how we use Cloud Services compared to "traditional" IT resources. The fact that Cloud employs a pay-as-you-use model is a fundamental shift from the old buy-then-use model, and it will take time for our behaviours to change.

What do you need to do to track Containers?
Mon, 26 Oct 2020 - http://galecki.ca/blog/what-do-you-need-to-do-to-track-containers

Containers are being used in companies – I can bet on it, and you should too.

How do you track them to make sure use of Containers doesn’t create license compliance risk?
How and where are containers deployed?

Let me start by describing how Containers can be deployed before talking about different ways to track the software running in them.

Did I mention there are different Container technologies? Yes, there are, although we can focus on Docker as it has the lion's share of the market. If you want to learn more about the options, this article in Container Journal describes some alternatives.

Containers themselves come from multiple sources. There are public registries (like Docker Hub or Red Hat Marketplace) from which you can download ready-to-deploy images into your organization (think of it as downloading media – a license is NOT included for commercial software). Companies also create their own Container images using Docker tools.

They can be deployed in your own environment (private datacenter) or in Public Cloud environments.

When an organization is deploying Containers in their private datacenter, they are likely to use an orchestration engine like OpenShift, Docker Swarm or Kubernetes (but that's not a requirement!). Images can be deployed just like any other file and executed using the "docker run" command (meaning they can only run wherever the Docker service is installed).
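
As a small illustration of this deployment model, here is a sketch using the Docker SDK for Python, which does the same thing as "docker run" on the command line (the nginx image and container name are just placeholders):

# A minimal sketch using the Docker SDK for Python (pip install docker).
# Equivalent to running "docker run -d --name demo-nginx nginx:latest".
import docker

client = docker.from_env()              # connects to the local Docker service
container = client.containers.run(
    "nginx:latest",                     # placeholder image pulled from a registry
    detach=True,                        # run in the background
    name="demo-nginx",
)
print(container.short_id, container.status)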

Containers can also be deployed in Public Cloud (e.g. AWS, Azure, GCP). Here, you have a choice as well. You can deploy a Container into Cloud Instances – essentially treating Public Cloud like an extension of your private datacenter. You can also use Containers as a Service – where you simply deploy containers without having to manage the underlying infrastructure (like VMs or Cloud Instances) separately.

So, when it comes to tracking Containers, no single approach is foolproof.

You can depend on inventory tools (ones that can collect software information from Containers), but that will only cover your private datacenter and the Cloud Instances where the agent is deployed. If the Containers are short-lived, the agent may not be able to collect the information. Having said that, the approach I am familiar with "talks" to the Container (Docker) service and will know about every Container that is deployed.
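
As an illustration, here is a minimal sketch (assuming the Docker SDK for Python) of how a tool that "talks" to the Docker service can enumerate every deployed Container and the image it came from:

# A minimal sketch, assuming the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()

# Include stopped containers too - short-lived containers still matter for licensing.
for c in client.containers.list(all=True):
    tags = c.image.tags or ["<untagged>"]
    print(f"{c.name:20} {c.status:10} image={tags[0]} image_id={c.image.id[:19]}")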

But, what about the Container as a Service scenario?

In this situation, you need to connect to the Container service to get information about what Containers are deployed. You can also use the same approach in your private datacenter, connecting to the orchestration engine in your environment. The good news is that the orchestration engine will know about every container and image that it has deployed.
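
For example, with Kubernetes as the orchestration engine, a sketch using the official Python client (an assumption on my part - OpenShift and Docker Swarm have their own APIs) can ask the engine for every running container and its image:

# A minimal sketch, assuming Kubernetes and its official Python client
# (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()            # or load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

# The orchestration engine knows about every container it has scheduled.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for status in (pod.status.container_statuses or []):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
              f"image={status.image} image_id={status.image_id}")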

But you will get different information based on the approach you use.

When you use an agent that is capable of scanning the Container file system, you will collect inventory just as you do on any host machine.  Raw inventory will be collected and normalized by the inventory or asset management tool you have.
When you are talking to the orchestration engine, you get image information – something like this:

REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
<none>                    <none>              77af4d6b9913        19 hours ago        1.089 GB
committ                   latest              b6fa739cedf5        19 hours ago        1.089 GB
<none>                    <none>              78a85c484f71        19 hours ago        1.089 GB
 
Can you tell what software is in each image?  Neither can I.

The good news is that image IDs are static – for example, if I have an image with Oracle DB 12c R2 Enterprise Edition, the image ID for that image is static.  If I apply patches to that image, I need to save it as a new image, resulting in 2 images (with 2 image IDs) for Oracle DB 12c R2 – one for the base package, another for the base + patches.

This provides a way to use that information for asset management. But you need to have a lookup table of image IDs to match them with the software running in those images. At this time, I am not aware of any IT Asset Management/Software Asset Management tools that can provide that mapping out of the box, so at least in the short term you will have to roll up your sleeves and manually allocate license consumption based on the Image IDs of the Containers.
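
As a sketch of what that manual mapping could look like (the image IDs below are taken from the example listing above; the product names and license metrics are purely illustrative and would come from your own records):

# A minimal sketch of a manually maintained lookup table that maps image IDs to
# licensable products. IDs, product names and license metrics are illustrative only.
from collections import Counter

IMAGE_TO_PRODUCT = {
    "b6fa739cedf5": ("Oracle Database 12c R2 Enterprise Edition", "Processor"),
    "77af4d6b9913": ("IBM Db2 Advanced Edition", "PVU"),
}

def allocate_licenses(deployed_image_ids):
    """Count license consumption per product from a list of deployed image IDs."""
    consumption = Counter()
    unknown = []
    for image_id in deployed_image_ids:
        product = IMAGE_TO_PRODUCT.get(image_id)
        if product:
            consumption[product] += 1
        else:
            unknown.append(image_id)   # flag for manual investigation
    return consumption, unknown

# Example: image IDs gathered from the Docker service or the orchestration engine.
counts, unknown = allocate_licenses(["b6fa739cedf5", "b6fa739cedf5", "78a85c484f71"])
for (name, metric), count in counts.items():
    print(f"{name} ({metric} metric): {count} container(s)")
print("Needs investigation:", unknown)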

Containers are here to stay – I believe they will become the norm for deploying software (Microsoft's MSIX packaging format is a form of Container as well) as we continue to evolve towards a software-defined future and DevOps becomes the standard way to deliver updates. They create new challenges, but the industry as a whole is evolving to support them.

Public Cloud requires we change our mindset
Tue, 20 Oct 2020 - http://galecki.ca/blog/public-cloud-requires-we-change-our-mindset

For decades, we have become used to buying IT assets and using them whenever we want. Does it really matter that the server we have isn't running at full capacity 50% of the time? In terms of overall efficiency, yes. But we already bought it, so the only thing we pay for is electricity, which is not top of mind for almost anyone.
Cloud computing changes that.
When you start using Cloud computing services (e.g. AWS, Azure, Google, etc.), you pay for what you use. This means that anytime the compute resource is not running at full capacity, you are overpaying. Yet this is not something we are used to tracking, and unless your organization has deployed Cloud Spend Management and/or Cloud Management Platform solutions, it is probably something we are unaware of.
But, what’s the big deal?  Aren’t these cloud compute services really inexpensive?  That depends on what specific Cloud Instance you are subscribing to and how long it’s running.
Let me give you a couple of examples.
Azure's General Purpose B1S instance (1 vCPU, 1GB RAM, 4GB temporary storage) costs $0.0104 per hour (that's less than a quarter per day!) running Linux or Windows. Even after a year, that's only $91.10. So, how much can I save? Not much. If I have a hundred of these running, that's still not a lot.
But, how many applications can you run on such a system?
OK, let’s beef up the specs.
How about B4MS (4 vCPU, 16GB RAM, 43GB temporary storage)?  The price goes up to $0.166 per hour, or $1,454.16 per year.  If we multiply this by a hundred, the numbers are starting to get bigger.  But, that’s a hundred virtual machines, and the cost is less than having multiple physical servers + VMs (and virtualization software licenses and OS licenses).  But let’s say you can save 20% of that cost – for a hundred of these Cloud instances that’s about $29,000 per year.  What if you have a thousand of these?  Numbers start adding up quickly.
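
For reference, here is the arithmetic behind those numbers as a small script, using the hourly rates quoted above, so you can plug in your own rates and instance counts:

# The arithmetic behind the numbers above, using the hourly rates quoted in the post.
HOURS_PER_YEAR = 24 * 365   # 8,760 hours

def annual_cost(hourly_rate, instance_count=1, utilization=1.0):
    """Yearly cost if instances run `utilization` fraction of the time."""
    return hourly_rate * HOURS_PER_YEAR * utilization * instance_count

b1s  = annual_cost(0.0104)                       # ~$91.10 per instance per year
b4ms = annual_cost(0.166, instance_count=100)    # ~$145,416 for a hundred instances

print(f"B1S, one instance:       ${b1s:,.2f}")
print(f"B4MS, hundred instances: ${b4ms:,.2f}")
print(f"20% savings on the B4MS fleet: ${0.20 * b4ms:,.2f}")   # ~$29,083
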
My recommendation is to get a Cloud Spend Management tool. They really aren't that expensive and they will pay for themselves. There are many choices in the market. You can even use the native tools offered by Cloud Providers, but they are not designed to manage "other clouds", so they are not a great choice unless you are using a single Public Cloud vendor.

Aren't Containers great?!
Sat, 19 Sep 2020 - http://galecki.ca/blog/arent-containers-great

I love new technologies. I also loathe the impact they have on IT programs.
Software Containers are great - I believe they will become the standard way to deploy software and applications, just as virtual machines became the standard way to deploy computing in organizations.
But, not all is well in the world of Containers.
Operationally, IT loves them.  I won't go into their benefits, but they are a really great way to deploy applications.
But, if I put my IT Asset Management hat on, they quickly turn into a nightmare. As an IT Asset Manager, I need to know what software is installed on which computer. Millions of dollars could be at risk here. But when IT deploys software in Containers, standard inventory tools cannot tell me what software is in those Containers. This can create financial exposure from license non-compliance and create security risk if the software has vulnerabilities.
When I first heard about Containers, I was at the Oracle OpenWorld and Java conferences (I believe it was in 2017). I asked one of the architects of Docker Containers about collecting inventory of software in a Container (unfortunately I do not recall his name). His eyes glazed over and he started thinking about why I would ask such a question... I explained how deploying, let's say, Oracle DB in a container can quickly cause millions of dollars in potential license compliance fees. Once he understood the problem, he gave me a technical explanation - if I provide a local connection from the host OS to the Container (not necessarily the way this is typically done), then I can use Linux commands to get a list of files in the Container.
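
For what it's worth, a rough sketch of that idea today might use the Docker SDK for Python to run a package query inside each running container from the host (this assumes RPM-based images; "dpkg -l" would be the equivalent for Debian-based ones):

# A minimal sketch of the kind of approach described above: from the host, run a
# command inside each container to list installed packages. Assumes the Docker SDK
# for Python and RPM-based images.
import docker

client = docker.from_env()

for c in client.containers.list():
    result = c.exec_run("rpm -qa")          # list installed RPM packages
    if result.exit_code == 0:
        packages = result.output.decode().splitlines()
        print(f"{c.name}: {len(packages)} packages, e.g. {packages[:3]}")
    else:
        print(f"{c.name}: could not enumerate packages")
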
Needless to say, I wasn't thrilled with that answer.
IT Asset Managers continue to struggle with collection of inventory from Containers even today (late 2020).  IT Asset Management tools are slowly starting to provide some capabilities in this space, but examples are still few and far between.
So, if you are an IT Asset Manager, go talk to your internal development teams and software distribution teams (some commercial vendors are beginning to ship their software in Containers) so you can start to at least manually track software deployed using Containers.
And, reach out to your inventory tool vendors - push them to provide solutions, before you are faced with large compliance fees because the person managing OpenShift or Kubernetes (or whatever Container Orchestration tool your organization has) decided to deploy Oracle DB, or IBM DB2 or some other application to hundreds of devices in your datacenter.
