Wednesday, January 2, 2019

Integrating systems today is both easier and more complex than ever

Integrating IT systems used to require a LOT of sweat and tears just to get the plumbing configured (think of updating a SharePoint site when a new z/OS dataset is created). Today, thankfully, all of the plumbing is available and there are tons of different options for integrations. So the problem now is surveying your specific environment to identify all of the tools that people use and then architecting and implementing a solution that works well for everyone.

As an example, you may use Salesforce for CRM, ServiceNow for service desk, Maximo for asset management, Oracle Cloud for financials, AWS for some applications, Grafana for operations dashboards and SharePoint for internal web sites (just to name a few). All of these solutions have workflow engines and connectors that allow you to tie them together. But you first need to answer a couple of questions that are similar to those associated with custom application development:

Who are the people and personas that we're trying to help?

This is the most important question because the personas you identify will directly shape the solution you're implementing. Answering it with specific personas, like "Nancy the regional sales manager," will make it much easier to refine requirements and data needs down the road.

What data am I interested in and which systems are the golden sources of record for that data?

We spend quite a bit of time with customers simply finding all of the systems that are being used. Normally we start small, maybe with a single department, and then we work on getting a larger and larger picture. All of our clients use numerous systems that usually have some number of overlapping functions. We try to find everything in use so we can intelligently identify the ones that may be best suited to different tasks, also taking into account the number of users who have familiarity with the different applications.

Now that you've got some questions answered, what are the options available?

This is where things get messy in a hurry, and why you want to enlist the help of an experienced enterprise architect. It used to be that you could only get a workflow engine from an expensive enterprise application. Now, most companies are already paying for multiple workflow engines that they aren't even using. For example, Microsoft offers several: Flow, Business Process Flows (in Dynamics 365), and Azure Logic Apps. Those are all separate (though very similar and intertwined) workflow engines just from Microsoft. AWS has Simple Workflow Service and Step Functions. IBM has Business Process Automation, as well as the workflow engine in Maximo. ServiceNow has a workflow component. (As of this writing, Google Cloud doesn't offer a generic workflow engine; they have Cloud Composer, but that's a completely different animal.) And each of those has a large set of connectors, triggers and actions that allow you to automate just about anything you need.
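
To give a feel for how little plumbing these engines require today, here's a minimal sketch that starts an AWS Step Functions execution from Python using boto3, as you might do when a record is created in another system. The state machine name, ARN and payload are made up purely for illustration:

```python
# Minimal sketch: trigger an AWS Step Functions workflow from Python.
# The state machine ARN and the input payload are hypothetical placeholders.
import json
import boto3

sfn = boto3.client("stepfunctions")

def notify_new_ticket(ticket_id, summary):
    """Start a (hypothetical) integration workflow when a new ticket appears."""
    response = sfn.start_execution(
        stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:TicketSync",
        input=json.dumps({"ticketId": ticket_id, "summary": summary}),
    )
    return response["executionArn"]

if __name__ == "__main__":
    print(notify_new_ticket("INC0012345", "New asset added in Maximo"))
```

The hard part isn't the half-dozen lines above; it's deciding which engine owns the workflow and which system remains the source of record.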

So which components do you use?

This is where knowledge, experience and collaboration come together. There is no one answer that generically fits the requirements for all customers. The answer has to be developed and refined based on the needs of the customer and the project. We use an iterative approach to our implementations, where we develop/customize a little at a time, while gathering feedback from stakeholders. This is commonly referred to as the Agile Methodology, and we've found that it works very well, especially for complex integrations.

The eventual solution depends on a large set of factors and is often complex. That's why we always document our solutions in a format that's easily consumed. Sometimes that means a Word document with Visio diagrams, and other times a full SharePoint site with attached documents - it really depends on the client.

What's the point of this post?

While it's easier than ever to connect systems together, there's still a lot of hard work that has to go into implementing solutions. And this is exactly what we at Gulfsoft Consulting do: we help customers solve complex business problems by leveraging the appropriate knowledge, processes, people and tools. No matter what software you're working with, if you need help solving a complex problem, contact us. We've got decades of experience and we keep up to date on the latest technologies, patterns and strategies.

Sunday, December 9, 2018

JIRA can easily be used incorrectly

This is a great article about how JIRA can easily be weaponized for all the wrong purposes:

TechCrunch: JIRA is an antipattern. https://techcrunch.com/2018/12/09/jira-is-an-antipattern/

Like everything related to Agile, it needs to be used at the appropriate stage(s) of a project; otherwise, it's simply the wrong tool for the job.

Someone needs to have a view of the overarching goal, and that's where we fit in. Gulfsoft Consulting is a group of people with decades of experience dealing with all of the details of data centers and application development, and we can help you make the right decisions. Contact us to start the conversation about your digital transformation.

Wednesday, December 5, 2018

With new avenues to make money come new ways for others to steal that money

I just read this article about Defy Media abruptly closing:

https://www.theverge.com/2018/12/5/18125657/defy-media-youtube-logan-paul-ryland-adams-anthony-padillo-smosh-network

I wanted to share this as a warning to all entrepreneurs out there to be diligent in vetting your partners and backers. Make sure you know what you're getting into before signing anything. And try to find a trusted adviser who you can turn to with questions about business and finances.

Tuesday, December 4, 2018

If you run Kubernetes in the cloud, the first major vulnerability found isn't a huge issue

The first major Kubernetes (aka K8s) vulnerability was found yesterday:

https://www.zdnet.com/article/kubernetes-first-major-security-hole-discovered/

It's a pretty big deal and quite scary, but patches were immediately available upon disclosure. What's even better is that the managed Kubernetes services running on AWS, Azure and Google Cloud Platform have all been patched already. If you're managing your own K8s clusters, however, you need to patch them yourself, which just takes time and know-how.
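
If you do run your own clusters, the first step is simply confirming what version each cluster is on so you know whether it needs the fix (the flaw is CVE-2018-1002105, patched in 1.10.11, 1.11.5 and 1.12.3). Here's a minimal sketch using the official Kubernetes Python client, assuming you already have a working kubeconfig:

```python
# Quick sketch: report the Kubernetes API server version so you can check it
# against the patched releases (1.10.11, 1.11.5, 1.12.3 for CVE-2018-1002105).
from kubernetes import client, config

config.load_kube_config()                 # uses your existing kubeconfig/context
version = client.VersionApi().get_code()  # asks the API server for its build info
print(f"Cluster is running {version.git_version}")
```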

In my eyes, this is another data point that shows how proper use of cloud resources can be extremely beneficial to a company. Specifically, the big cloud players, especially AWS, act very much like a highly competent and agile outsourced IT department. Their offerings are years ahead of the services most companies could stand up onsite, and they have the testing and operational practices in place to keep those services available 99.9% of the time.

It's true that there can be some issues in moving to the cloud, but many of the problems of the past now have very robust solutions that are included in the offerings. And those offerings are available on a pay-as-you-go basis in many cases. So you can easily keep tabs on exactly how much you're spending even on a per-application basis.
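
As one concrete example of that per-application visibility, here's a rough sketch that pulls a single application's monthly spend from AWS Cost Explorer with boto3. The "Application" cost-allocation tag and the application name are assumptions for illustration; your tagging scheme will differ:

```python
# Rough sketch: per-application monthly spend via AWS Cost Explorer.
# Assumes resources are tagged with a (hypothetical) "Application" cost tag.
import boto3

ce = boto3.client("ce")

def monthly_spend(app_name, start="2018-11-01", end="2018-12-01"):
    result = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        Filter={"Tags": {"Key": "Application", "Values": [app_name]}},
    )
    amount = result["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"]
    return float(amount)

print(monthly_spend("order-portal"))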

To ensure a successful digital transformation, contact us to get the experienced help that will put you on the right path.

Thursday, November 29, 2018

A really interesting AWS DevOps job opening

I just received this email, and the job looks incredibly interesting to me. If you've got AWS and DevOps experience, please contact Bhaskar directly (contact details below):

Direct Client: In-person interview is needed. No Skype/WebEx/phone.

Location: Boston, MA
Duration: 12+ months (37.5 hrs/week)
Rate: Open

Responsibilities:
• Help with production issues and deployments and any analytics/data science products
• Move the team closer to continuous deployment, improving tooling (i.e. automation) and use of infrastructure (i.e. try-test-iterate faster)
• Set up deployment infrastructure for Mayflower, our style guide and visual component library
• Set up deployment infrastructure for analytics and data science products that are AWS Lambda and Docker based (Goal: reduced time to go from engagement to receiving data)
• Create and implement a strategy to monitor Digital Services supported applications and accurately notify engineers of problems
• Construct and maintain a Threat Model for Digital Services supported applications, and implement solutions for gaps in our security based on it
• Develop a reusable infrastructure playbook and tooling for rapidly deploying Data Team packages to internal customers
  o Quickly stand up a standard environment and/or distribute or hand off work deliverables
• Mentor the team on good DevOps practices
• Development and deployment of RESTful APIs, including documentation
• Set up and facilitate processes and environments for the creation of new services; this would include the creation of processes and deployments to dev, test and prod environments for the proof-of-concept services developed throughout the agency.

Skills Needed
• 5-8 years of experience in the below categories
• Experience with the AWS cloud platform, specifically with the following services (or equivalent services within alternative cloud-based platforms):
  o AWS CLI
  o CloudFormation
  o CloudFront
  o S3 management
  o RDS
  o DynamoDB
  o SNS & SQS
  o EC2 management
  o Elastic Beanstalk and other auto-scaling services
  o Lambda functions (Python & Node)
  o API Gateway
  o AWS Route 53 and AWS Certificate Manager
• Linux terminal
• Experience with an IaaS (preferably Amazon Web Services)
• Virtual machines
• Monitoring production web applications
• People and technical process improvement/re-engineering
• Communicating effectively
• Continuous integration/deployment
• Conducting technical and behavioral interviews
• Infrastructure security practices
• Documenting in plain language
• RESTful APIs
• Infrastructure automation tools
• Amazon Web Services
• "Serverless" architecture such as AWS Lambda
• Microservices architecture
• Bonus points for experience with:
  o Acquia Cloud
  o PHP
  o Drupal
  o Agile/iterative development
  o User Experience (UX) practice
  o Python
  o Ansible
  o Docker
  o Other coding experience


Thanks
Bhaskar

Bhaskar Nainwal
Software People Inc.
bhaskar.nainwal@softwarepeople.us
Ph: 631-739-8915 | Fax: 631-574-3122

Wednesday, November 28, 2018

QRadar has a low-cost Data Store option that lets you store and search as much data as you want

It looks like this has been around since April, but I just ran across it today. The QRadar Data Store option allows you to store as much log data as you want, without having to pay the normal EPS price. Here's more information on it:


And a short video that talks about it:


It does have a cost, but it's MUCH cheaper than the normal QRadar licensing, and it allows you to use the same QRadar interface to search all of your log data (rather than only your security-related information).
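
If you ever want to pull that data programmatically rather than through the console, QRadar also exposes its search capability through the Ariel REST API. Here's a rough sketch of running an AQL query that way; the host, token and AQL statement are placeholders, and error handling is omitted:

```python
# Sketch: run an AQL query against QRadar's Ariel REST API.
# Host, SEC token and the AQL statement are hypothetical placeholders.
import time
import requests

QRADAR = "https://qradar.example.com"
HEADERS = {"SEC": "your-authorized-service-token", "Accept": "application/json"}
AQL = "SELECT sourceip, COUNT(*) FROM events GROUP BY sourceip LAST 24 HOURS"

# Kick off the search...
search = requests.post(
    f"{QRADAR}/api/ariel/searches",
    headers=HEADERS,
    params={"query_expression": AQL},
    verify=False,  # only if you haven't imported the appliance certificate
).json()

# ...poll until it completes, then fetch the results.
search_id = search["search_id"]
while requests.get(f"{QRADAR}/api/ariel/searches/{search_id}",
                   headers=HEADERS, verify=False).json()["status"] != "COMPLETED":
    time.sleep(2)

results = requests.get(f"{QRADAR}/api/ariel/searches/{search_id}/results",
                       headers=HEADERS, verify=False).json()
print(results["events"])
```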

Tuesday, November 27, 2018

Istio and transaction topology for serverless applications

I just watched this short video on Istio:


Basically it's microservice plumbing for Kubernetes that adds security and telemetry. So I wondered if that telemetry included topology information and found that it DOES (or it can with a plugin):


So in some way-off future, you'll be able to "automatically" obtain topology data without having to install and manage data collectors and agents (and without asking developers to instrument their code). Kinda neat.
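
In the meantime, here's roughly what pulling that kind of topology data can look like if you already have Istio's standard telemetry flowing into Prometheus. This is just a sketch: the Prometheus URL is a placeholder, and it assumes the default istio_requests_total metric and its source/destination workload labels:

```python
# Sketch: derive a rough service-to-service topology from Istio's
# standard Prometheus metrics (istio_requests_total).
# The Prometheus URL is a placeholder; labels assume default Istio telemetry.
import requests

PROMETHEUS = "http://prometheus.istio-system:9090"
QUERY = "sum(rate(istio_requests_total[5m])) by (source_workload, destination_workload)"

resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": QUERY}).json()

# Each series is one edge in the service graph, with its current request rate.
for series in resp["data"]["result"]:
    labels = series["metric"]
    rps = float(series["value"][1])
    print(f'{labels.get("source_workload")} -> '
          f'{labels.get("destination_workload")}: {rps:.2f} req/s')
```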