The Cloud offers literally all of your current IT services, plus tons more, some of which most of your IT department has never even heard of. You can quickly and easily move all your existing workloads there, but you'll pay dearly, as many companies are finding out. Or you can take the time to train your IT staff and meticulously plan the most efficient path to the cloud, but that's not quick.
Moving to the cloud correctly truly requires rethinking how you do everything in IT. In all cases, the best route is to move only a subset of workloads or capabilities to the cloud, and different clouds may be better for different workloads. Some things are easier and cheaper to run on-prem. For example, in many cases it can be cost effective to arm your IT and development staff with laptops with 64GB of RAM, so each of them can run their own private multi-cloud to test against. A brand-new laptop with a warranty, 64GB of RAM and 6 cores (12 threads) can be found for under $1,700 on eBay and has a useful life of 4 years. A comparable VM in the cloud (AWS EC2 r5.2xlarge) costs $0.20 per hour, which comes to roughly $5,300 over a three-year term, and doesn't offer the flexibility of a local system running VMware Workstation.
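For reference, here's the quick arithmetic behind that three-year figure (a sanity check, assuming the instance runs 24x7 at that effective hourly rate):

# three years of a $0.20/hour instance running around the clock
echo "0.20 * 24 * 365 * 3" | bc    # prints 5256.00, i.e. roughly $5,300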
That's a very specific example, but it illustrates why each and every workload needs to be analyzed or audited before simply moving it to the cloud.
Some workloads are more suited to specific public clouds. WebSphere applications are a big example: if you want to "lift and shift" those workloads to the cloud, the IBM Cloud should be your first choice. If you have apps that run under SharePoint, you should absolutely run those applications on Microsoft's Azure cloud. Many other workloads may run equally well on any cloud, and for those, the analysis needs to take other factors into account.
The point I want to get across is that moving to the cloud requires analysis by a qualified team of experts. I think the best approach is to hire one or more experts and simultaneously train your own team, so the experts can bring your people up to speed. The combination of the two is extremely important, because you don't want newly-trained people responsible for your entire cloud migration. You want an expert who can guide the team and let them take over responsibilities over time.
Monday, November 4, 2019
Thursday, March 7, 2019
Full service cloud providers deliver better, faster, cheaper services (and more of them) than your IT department
Isn't The Cloud just "running my stuff on someone else's hardware"?
In a word, NO! If you're in IT at any level, from individual contributor all the way up to CIO, you need to take the time to really look in-depth at the cloud offerings available today. If you aren't blown away by what's available, then you need to spend more time looking at the capabilities on offer. That's not meant as an insult - it's my honest opinion as a deeply technical consultant who has been in IT for a very long time. Personally, I would recommend that you look at AWS because they're the leader and have been since Cloud became a thing. They simply have more resources available (documents, videos, tutorials, use cases, case studies, third party tools, etc.) to really show you what they offer, and it's absolutely incredible.
OK, I'm impressed, but my IT department can provide everything we need
You may be getting everything you currently think you need, but I promise you there are more capabilities out there that you just haven't thought of yet because of limitations that exist in your current environment. For example, what reply would you get from this request:
I would like to see a topology graph of all of the server, network, database and security resources associated with Application X.
I've worked with thousands of companies of all sizes, and I've only seen this question answered a couple of times, and only for very well-known, small applications. The tools to answer it are out there, and many companies own several of them, but there are technical and political obstacles that prevent this information from being displayed on a single pane of glass. With AWS, however, there are several third-party vendors whose products can provide this information within minutes. Specifically, all of the components are registered centrally within AWS, so their metadata can be retrieved with the AWS API. These third-party tools pull the data and display it on a graph to make it easier to consume (with filtering so you can include/exclude the appropriate components based on name or tag).
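As a rough illustration, if the resources for Application X are tagged consistently, even the plain AWS CLI can pull that inventory in one call (a sketch; the tag key and value depend entirely on your own tagging standard):

# list every resource tagged as belonging to Application X
aws resourcegroupstaggingapi get-resources \
    --tag-filters Key=Application,Values=AppX \
    --query 'ResourceTagMappingList[].ResourceARN'

The third-party graphing tools are essentially doing this same metadata retrieval and then rendering the relationships visually.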
This central repository of configuration information is, essentially, a built-in CMDB. There are companies that have been working for years and years just to end up with a partial CMDB, and the big cloud providers offer one from day one. And in AWS, this central repository is audited BY DEFAULT, meaning you can see exactly who changed exactly what, and when. That's incredible.
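For example, CloudTrail's event history lets you ask "who touched this resource recently?" straight from the CLI (a sketch; the resource name is a placeholder):

# show the five most recent recorded API calls against a specific resource
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=ResourceName,AttributeValue=my-example-bucket \
    --max-results 5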
I still don't see what's so great about The Cloud
That means you still haven't spent enough time trying to understand what's available. What I would recommend is that you go through one of the AWS workshops available on GitHub. Specifically, go through the Website workshop available here:
It will step you through the creation of a serverless web application, including a user self-registration component. That's normally a HUGE obstacle in enterprise application creation, and AWS offers it directly via their Cognito service. And, as I mentioned, everything is defined centrally, so you can see what your applications look like.
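To give a feel for how little plumbing is involved, the self-registration piece boils down to a single call against a Cognito user pool (a sketch; the app client ID, username and password are placeholders):

# register a new end user against an existing Cognito user pool app client
aws cognito-idp sign-up \
    --client-id example-app-client-id \
    --username nancy@example.com \
    --password 'Example-Passw0rd!'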
Now I'm impressed, but it looks pretty difficult to set up
It is definitely complicated, but it can be done. You need to define policies around things like naming, tags and usage, you need to restrict who can perform which actions, and there are a multitude of other policies that must be defined for your specific enterprise. The good news is that there are several education and certification tracks available to get people certified as AWS architects, and there are lots of AWS architects available for hire. It's like any new initiative - it just needs to be approached incrementally.
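As one small example of what those policies can look like in practice, here's a sketch of an IAM policy that refuses to launch EC2 instances unless they carry an Owner tag (the policy name, tag key and file name are just examples):

# write out a deny-if-untagged policy and register it as a managed IAM policy
cat > require-owner-tag.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyRunInstancesWithoutOwnerTag",
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {"Null": {"aws:RequestTag/Owner": "true"}}
  }]
}
EOF
aws iam create-policy --policy-name RequireOwnerTag --policy-document file://require-owner-tag.json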
Wednesday, January 2, 2019
Integrating systems today is both easier and more complex than ever
Integrating IT systems used to require a LOT of sweat and tears just to get the plumbing configured (think of updating a SharePoint site when a new z/OS dataset is created). Today, thankfully, all of the plumbing is available and there are tons of different options for integrations. So the problem now is surveying your specific environment to identify all of the tools that people use and then architecting and implementing a solution that works well for everyone.
As an example, you may use Salesforce for CRM, ServiceNow for service desk, Maximo for asset management, Oracle Cloud for financials, AWS for some applications, Grafana for operations dashboards and SharePoint for internal web sites (just to name a few). All of these solutions have workflow engines and connectors that can allow you to integrate them all together. But you first need to answer a few questions that are similar to those associated with custom application development:
Who are the people and personas that we're trying to help?
This is the most important question because the personas you identify will directly shape the solution you're implementing. And answering this question with specific personas, like "Nancy the regional sales manager", will allow you to refine additional data down the road.

What data am I interested in and which systems are the golden sources of record for that data?
We spend quite a bit of time with customers simply finding all of the systems that are being used. Normally we start small, maybe with a single department, and then we work on getting a larger and larger picture. All of our clients use numerous systems that usually have some number of overlapping functions. We try to find everything in use so we can intelligently identify the ones that may be best suited to different tasks, also taking into account the number of users who have familiarity with the different applications.

Now that you've got some questions answered, what are the options available?
This is where things get messy in a hurry, and why you want to enlist the help of an experienced enterprise architect. It used to be that you could only get a workflow engine from an expensive enterprise application. Now, most companies are already paying for multiple workflow engines that they aren't even using. For example, Microsoft offers several: Flow, Business Process Flows (in Dynamics 365), and Azure Logic Apps. Those are all separate (though very similar and intertwined) workflow engines just from Microsoft. AWS has Simple Workflow Service and Step Functions. IBM has Business Process Automation, plus the workflow engine in Maximo. ServiceNow has a workflow component. (As of this writing, Google Cloud doesn't offer a generic workflow engine; they have Cloud Composer, but that's a completely different animal.) And each of those has a large set of connectors, triggers and actions that allow you to automate anything you need.
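To show how lightweight these engines can be, here's a minimal AWS Step Functions example (a sketch; the state machine name, role ARN and file name are placeholders, and a real integration would use Task states that call Lambda functions or other connectors rather than a single Pass state):

# define a trivial one-step state machine and create it
cat > hello-workflow.json <<'EOF'
{
  "Comment": "Minimal example workflow",
  "StartAt": "SayHello",
  "States": {
    "SayHello": { "Type": "Pass", "Result": "Hello from Step Functions", "End": true }
  }
}
EOF
aws stepfunctions create-state-machine \
    --name hello-workflow \
    --definition file://hello-workflow.json \
    --role-arn arn:aws:iam::123456789012:role/ExampleStepFunctionsRole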
So which components do you use?
This is where knowledge, experience and collaboration come together. There is no one answer that generically fits the requirements for all customers. The answer has to be developed and refined based on the needs of the customer and the project. We use an iterative approach to our implementations, where we develop/customize a little at a time, while gathering feedback from stakeholders. This is commonly referred to as the Agile Methodology, and we've found that it works very well, especially for complex integrations.
The eventual solution depends on a large set of factors and is often complex. That's why we always document our solutions in a format that's easily consumed. Sometimes that means a Word document with Visio diagrams, and other times it's a full SharePoint site with attached documents - it really depends on the client.
What's the point of this post?
While it's easier than ever to connect systems together, there's still a lot of hard work that has to go into implementing solutions. And this is exactly what we at Gulfsoft Consulting do: we help customers solve complex business problems by leveraging the appropriate knowledge, processes, people and tools. No matter what software you're working with, if you need help solving a complex problem, contact us. We've got decades of experience and we keep up to date on the latest technologies, patterns and strategies.

Tuesday, December 4, 2018
If you run Kubernetes in the cloud, the first major vulnerability found isn't a huge issue
The first major Kubernetes (aka K8s) vulnerability was found yesterday:
https://www.zdnet.com/article/kubernetes-first-major-security-hole-discovered/
It's a pretty big deal and quite scary, but patches were available immediately upon disclosure. What's even better is that the managed Kubernetes services running on AWS, Azure and Google Cloud Platform have all been patched already. If you're managing your own K8s clusters, however, you need to patch them yourself, which takes time and know-how.
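If you do run your own clusters, the first step is simply to check what version your API server reports and compare it against the patched releases listed in the article (a quick check, nothing more):

# print the client and API server versions; the server version is the one that matters here
kubectl version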
In my eyes, this is another data point that shows how proper use of cloud resources can be extremely beneficial to a company. Specifically, the big cloud players, especially AWS, are very similar to a highly competent and agile outsourced IT department. They have offerings that are years ahead of services that you would want to have onsite, and they've got testing methodologies in place to ensure that they're available 99.9% of the time.
It's true that there can be some issues in moving to the cloud, but many of the problems of the past now have very robust solutions that are included in the offerings. And those offerings are available on a pay-as-you-go basis in many cases. So you can easily keep tabs on exactly how much you're spending even on a per-application basis.
To ensure a successful digital transformation, contact us to get the experienced help that will put you on the right path.
Tuesday, November 10, 2015
ICO 2.5: Azure deployment
Here's a video I created going over some of the details of deploying through IBM Cloud Orchestrator 2.5 to the Microsoft Azure cloud.
Thursday, September 24, 2015
There's an updated cloud in town Part 2: Still Installing ICO 2.5
A few more hurdles overcome as I get closer to getting ICO 2.5 installed.
Some RHEL 7 notes
The firewall in RHEL 7 (and 7.1) is not iptables. Instead, it's the firewalld service that's controlled by systemd. I'm not sure which install option causes it to be configured, because it wasn't running on all of my RHEL 7.1 systems. Anyway, to turn it off, you can run:
systemctl stop firewalld
systemctl disable firewalld
In my case, it was blocking port 53 (dns), which I needed open to configure the vCenter server (next section). I first just used Applications->Sundry->Firewall to open port 53, then realized that I could just turn it completely off in my test environment so I don't hit any more problems with it.
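For what it's worth, if you'd rather just open port 53 than disable the firewall entirely, the command-line equivalent should be something like this (a sketch using firewalld's own tooling):

# permanently allow DNS through firewalld, then reload the rules
firewall-cmd --permanent --add-service=dns
firewall-cmd --reload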
Installing vSphere 6.0 Without a Windows Machine
I decided to also install vSphere 6.0 to use that as a testbed, and that has a few challenges. Specifically, the vCenter Server Appliance (VCSA) no longer ships directly as a .OVA file. It is now an ISO file that you're supposed to mount and run on a Windows machine to remotely install the vCenter Server Appliance on a remote ESXi server.

I didn't want to get a Windows machine involved if at all possible, and it turns out to be fairly straightforward to do this. You will find 99% of the instructions in this great article:
http://www.unixarena.com/2015/05/how-to-deploy-vcsa-6-0-on-vmware-workstation.html
Specifically, the .OVA file can be found inside the .ISO file that you download from VMware. It just doesn't have a .ova extension. So you need to extract the file, change the name to include the .ova extension, and then you're mainly off to the races. HOWEVER, you have to do ONE MORE THING to actually get it working: add this additional line to the end of the .vmx file after you import the .ova file:
guestinfo.cis.appliance.net.dns.servers="172.16.30.8"
Set the value appropriately for your network. If you don't add this, the VM will start up, but will have the error:
Failed to configure network
And I couldn't find a way to fix that in the VM as it stood. I updated the DNS settings, rebooted the server, did lots of other things, etc., and it still just showed that error. So I knew I would have to recreate the VM from the OVA file, but needed to figure out how to set the DNS server of the VM from the VMX file.
So I mounted the VCSA ISO file on Linux and ran the following command at the root of it:
grep -r guestinfo.cis *
Somewhat amazingly, that came back within seconds and I found all of the settings from the linked article, and then I searched for "dns" and found the above REQUIRED setting.
I didn't have a "good" DNS server on my network, so I quickly created a DNS server on one of my RHEL7.1 systems. It's REALLY easy to do this if you have all of your hosts in the /etc/hosts file. You just need to run the command:
systemctl start dnsmasq
(Edit 9/26: I changed the above command, which was originally "service dnsmasq start", to use the systemd mechanism for starting the service)
And that's it. You now have a DNS server.
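A quick way to verify it's answering (the hostname here is just an example entry from my /etc/hosts), and to make sure it comes back after a reboot:

# ask the local dnsmasq for a name defined in /etc/hosts
dig @localhost vcenter.example.local +short
# and have it start automatically on boot
systemctl enable dnsmasq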
You still NEED a Windows machine for vSphere
I thought that the vSphere Web Client would allow me to just use a browser, but that's not quite right. The web interface requires Flash, and really only supports Windows or MacOS clients. So I've had to bring a Windows machine into the mix anyway.
Wednesday, September 23, 2015
There's an updated cloud in town Part 1: Installing ICO 2.5
IBM Cloud Orchestrator 2.5 was recently released, so I'm installing it in my lab. This new release is based on IBM Cloud Manager with OpenStack 4.3, so quite a bit has changed. This series of posts will discuss some of the issues with installation and overall thoughts as I go through.
Wrong Version of IBM Cloud Manager
IBM Cloud Manager is included in the eAssembly for ICO, but it's not the required level! The Cloud Manager files included with ICO are JUST version 4.3. However, version 4.3 with at least fixpack 2 is what's required. So after installing Cloud Manager, you will need to go to IBM Fix Central to download Cloud Manager 4.3 fixpack 3, which was released after ICO 2.5.

Cloud Manager: Additional YUM Repository Needed for KVM
If you're deploying a KVM cloud to Red Hat 7 or 7.1, you will need to enable the Red Hat "Optional" repository to have access to several python packages, including python-zope-interface and python-pyasn1-modules.

If your target machines don't have access to the Red Hat Subscription Network, you can get around this by downloading Fedora 19 and adding it as a Yum repository. But the Fedora ISO doesn't include the python-pyasn1-modules rpm, so you'll need to download that and add it to a local repository on the ICM Deployer machine. I used the 'createrepo' command to create the repository under /opt/ibm/cmwo/yum-repo/operatingsystem/redhat7.1/x86_64/optional.
I downloaded the Fedora 19 ISO from here: http://www.itsprite.com/free-linux-download-fedora-19-iso-cddvd-images/
And I downloaded the python-pyasn1-modules RPM from here: ftp://ftp.icm.edu.pl/vol/rzm5/linux-fedora-secondary/releases/19/Everything/armhfp/os/Packages/p/python-pyasn1-modules-0.1.6-1.fc19.noarch.rpm I think a better link is: https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/19/Everything/x86_64/os/Packages/p/python-pyasn1-modules-0.1.6-1.fc19.noarch.rpm
I didn't need any other packages - the Fedora 19 ISO and the python-pyasn1-modules RPM gave my install everything it needed.
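Putting the local-repository step together, the commands look roughly like this (a sketch; the RPM filename matches the link above and the path is the one mentioned earlier):

# copy the downloaded RPM into the deployer's "optional" repo directory and index it
mkdir -p /opt/ibm/cmwo/yum-repo/operatingsystem/redhat7.1/x86_64/optional
cp python-pyasn1-modules-0.1.6-1.fc19.noarch.rpm /opt/ibm/cmwo/yum-repo/operatingsystem/redhat7.1/x86_64/optional/
createrepo /opt/ibm/cmwo/yum-repo/operatingsystem/redhat7.1/x86_64/optional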