Tuesday, December 4, 2018

If you run Kubernetes in the cloud, the first major vulnerability found isn't a huge issue

The first major Kubernetes (aka K8s) vulnerability was found yesterday:

https://www.zdnet.com/article/kubernetes-first-major-security-hole-discovered/

It's a pretty big deal and quite scary, but patches were available immediately upon disclosure. What's even better is that the managed Kubernetes services running on AWS, Azure and Google Cloud Platform have all been patched already. If you're managing your own K8s clusters, however, you need to patch them yourself, which takes time and know-how.
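
If you're in that self-managed camp and aren't sure where you stand, here's a minimal sketch that checks your API server's version against the fixed releases published for this CVE (CVE-2018-1002105). It assumes the official Python Kubernetes client and a working kubeconfig; verify the version numbers against the advisory for your release line.

# pip install kubernetes
import re

from kubernetes import client, config

# Fixed releases published for CVE-2018-1002105 -- verify these against
# the official advisory before relying on them.
FIXED_PATCH = {(1, 10): 11, (1, 11): 5, (1, 12): 3}

config.load_kube_config()  # uses your current kubeconfig context
git_version = client.VersionApi().get_code().git_version  # e.g. "v1.12.3"
major, minor, patch = map(
    int, re.match(r"v?(\d+)\.(\d+)\.(\d+)", git_version).groups()
)

fixed = FIXED_PATCH.get((major, minor))
if fixed is None:
    print(f"{git_version}: release line not in table; check the advisory")
elif patch >= fixed:
    print(f"{git_version}: includes the fix")
else:
    print(f"{git_version}: vulnerable; upgrade to v{major}.{minor}.{fixed} or later")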

In my eyes, this is another data point showing how proper use of cloud resources can be extremely beneficial to a company. The big cloud players, especially AWS, effectively act as a highly competent and agile outsourced IT department. They have offerings that are years ahead of anything you would want to build onsite, and they've got testing methodologies in place to ensure those offerings are available 99.9% of the time.

It's true that there can be some issues in moving to the cloud, but many of the problems of the past now have very robust solutions that are included in the offerings. And those offerings are available on a pay-as-you-go basis in many cases. So you can easily keep tabs on exactly how much you're spending even on a per-application basis.
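
If your resources are tagged consistently, that per-application view is a single API call away. Here's a minimal sketch using boto3 and the AWS Cost Explorer API; the "application" cost-allocation tag is an assumption, so substitute whatever tag scheme you actually use.

# pip install boto3
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-11-01", "End": "2018-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    # Assumes an "application" cost-allocation tag on your resources.
    GroupBy=[{"Type": "TAG", "Key": "application"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag = group["Keys"][0]  # comes back as "application$<value>"
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{tag}: ${cost:.2f}")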

To ensure a successful digital transformation, contact us to get the experienced help that will put you on the right path.

Thursday, November 29, 2018

A really interesting AWS DevOps job opening

I just received this email, and the job looks incredibly interesting to me. If you've got AWS and DevOps experience, please contact Bhaskar directly (contact details below):

Direct client: An in-person interview is required. No Skype/WebEx/phone.

Location: Boston, MA
Duration: 12+ months (37.5 hrs/week)
Rate: Open

Responsibilities:
• Help with production issues, deployments, and any analytics/data science products
• Move the team closer to continuous deployment, improving tooling (i.e., automation) and use of infrastructure (i.e., try-test-iterate faster)
• Set up deployment infrastructure for Mayflower, our style guide and visual component library
• Set up deployment infrastructure for analytics and data science products that are AWS Lambda and Docker based (goal: reduce the time to go from engagement to receiving data)
• Create and implement a strategy to monitor Digital Services supported applications and accurately notify engineers of problems
• Construct and maintain a threat model for Digital Services supported applications, and implement solutions for the security gaps it identifies
• Develop a reusable infrastructure playbook and tooling for rapidly deploying Data Team packages to internal customers
  o Quickly stand up a standard environment and/or distribute or hand off work deliverables
• Mentor the team on good DevOps practices
• Develop and deploy RESTful APIs, including documentation
• Set up and facilitate processes and environments for the creation of new services; this includes creating processes and deployments to dev, test, and prod environments for the proof-of-concept services developed throughout the agency.

Skills Needed
• 5-8 years of experience in the below categories
• Experience with the AWS cloud platform, specifically with the following services (or equivalent services within alternative cloud platforms):
  o AWS CLI
  o CloudFormation
  o CloudFront
  o S3 management
  o RDS
  o DynamoDB
  o SNS & SQS
  o EC2 management
  o Elastic Beanstalk and other auto-scaling services
  o Lambda functions (Python & Node.js)
  o API Gateway
  o Route 53 and AWS Certificate Manager
• Linux terminal
• Experience with an IaaS (preferably Amazon Web Services)
• Virtual machines
• Monitoring production web applications
• People and technical process improvement/re-engineering
• Communicating effectively
• Continuous integration/deployment
• Conducting technical and behavioral interviews
• Infrastructure security practices
• Documenting in plain language
• RESTful APIs
• Infrastructure automation tools
• Amazon Web Services
• "Serverless" architecture such as AWS Lambda
• Microservices architecture
• Bonus points for experience with:
  o Acquia Cloud
  o PHP
  o Drupal
  o Agile/iterative development
  o User Experience (UX) practice
  o Python
  o Ansible
  o Docker
  o Other coding experience


Thanks
Bhaskar

Bhaskar Nainwal
Software People Inc.
bhaskar.nainwal@softwarepeople.us
Ph: 631-739-8915 | Fax: 631-574-3122

Wednesday, November 28, 2018

QRadar has a low-cost Data Store option that lets you store and search as much data as you want

It looks like this has been around since April, but I just ran across it today. The QRadar Data Store option allows you to store as much log data as you want without having to pay the normal per-EPS (events per second) licensing price. Here's more information on it:


And a short video that talks about it:


It does have a cost, but it's MUCH cheaper than normal QRadar licensing, and it allows you to use the same QRadar interface to search all of your log data (rather than only your security-related information).

Tuesday, November 27, 2018

Istio and transaction topology for serverless applications

I just watched this short video on Istio:


Basically it's microservice plumbing for Kubernetes that adds security and telemetry. So I wondered if that telemetry included topology information and found that it DOES (or it can with a plugin):


So in some way-off future, you'll be able to "automatically" obtain topology data without having to install and manage data collectors and agents (and without asking developers to instrument their code). Kinda neat.
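
Part of why no developer instrumentation is needed is that Istio injects its Envoy sidecar proxy into your pods automatically once a namespace is labeled for injection. Here's a minimal sketch using the Python Kubernetes client; the namespace name is just an example.

# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Label the namespace so Istio's admission webhook injects the Envoy
# sidecar into every new pod -- no application code changes required.
v1.patch_namespace(
    "my-app",  # example namespace name
    {"metadata": {"labels": {"istio-injection": "enabled"}}},
)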

Monday, November 26, 2018

Every enterprise is already using serverless applications in some form or another

If you have an application that makes a call to an external application, then you're on the calling side of a serverless application. Here's a high-level graphic to illustrate my point:
You essentially have no insight into how the Results are generated by the "cloud" you're accessing via IP address or hostname. So you're accessing a service, but the actual server part of that interaction is abstracted away from you.
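
To make the calling side concrete, here's a tiny sketch; the hostname and endpoint are made up, and the point is that nothing in this code knows or cares what generates the response.

# pip install requests
import requests

# We know the hostname and the contract -- nothing about the backend.
response = requests.get(
    "https://api.example.com/v1/results",  # hypothetical external service
    params={"query": "recent-orders"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # the "Results" coming back from the cloud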

Here's a great article on the concept of "Servicefull Serverless" to go into more detail about this:

https://www.infoq.com/articles/serverless-sea-change

Now, the current definition of "serverless" leverages all kinds of possible technologies like AWS Lambda or OpenWhisk or even Cloudflare Isolates, on top of containers and Kubernetes running in VMs (or bare iron in the case of Isolates). So it's extremely important for you to understand those components at some point, but from your view as a consumer, you're already using serverless technology.
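
And for a feel of how little "server" is left on the provider side, here's a minimal sketch of an AWS Lambda handler sitting behind an API Gateway proxy integration. The statusCode/headers/body shape is the proxy-integration contract; the greeting logic is just an example.

import json

def handler(event, context):
    # API Gateway's proxy integration delivers the HTTP request in `event`
    # and expects this statusCode/headers/body shape in return.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }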

Wednesday, November 7, 2018

Why employees hate their computers

I just read this article in slashdot about why doctors hate their computers:

https://science.slashdot.org/story/18/11/06/162201/why-doctors-hate-their-computers

The article really shows JUST how much it can cost to implement software incorrectly. Specifically, the process we follow includes the following questions/components to ensure that our customers have useful software once it's in production:

- Identification of ALL users of the system and their frequency of use. Once we know all of the users and how often they interact with the system, we can define priorities for each use case. For example, we would have identified doctors as high priority frequent users and ensured that their interactions with the system were the smoothest possible. There are several ways to ensure this, but one that we always require is an actual run-through of the screens with the user. This is normally difficult to schedule with the busiest users, but it MUST be done or you'll simply be burning money.

- Identification of all data to be migrated. In the case of moving to a new system (whether it's medical records, insurance claims, or anything else), ALL of the existing data must be found and made available in the new system in some way or another. This normally takes time, but that time is a lot less expensive BEFORE a new system goes live. Issues in a software implementation get more and more expensive to fix the later they're found, so they need to be caught early.

- For enterprise applications, "good enough" isn't. Some of the current thinking in application development and deployment says that you should get something in front of users and fix problems as they arise. This attitude is fine for a new game or small application, but it can cost money and lives in enterprise software. The people leading the implementation need to have experience in business critical applications to truly understand the cost of even a minor failure. When the cost of one minute of downtime can be measured in tens of thousands of dollars (or more!), every possible scenario has to be addressed before a production rollout.

At Gulfsoft, all of our consultants have over 15 years of experience in mission critical situations. We've worked with 911 emergency systems, satellite communications companies, large financial companies and everything in between. We know how to successfully implement large scale enterprise solutions to ensure that your employees and customers are delighted, and we can help you.

Tuesday, October 16, 2018

You can now use Vega to create custom graphs in Kibana

Prior to Kibana 6.2, you had to write a custom plugin to add custom visualization types. Now, however, support for Vega is included. Vega is a declarative JSON (HJSON, actually, in Kibana) grammar that you can think of as a wrapper around the D3 visualization toolkit, letting custom D3-style visualizations display in Kibana. Here's a video with the highlights:

https://www.youtube.com/watch?v=lQGCipY3th8
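
To give you a feel for it, below is a minimal sketch of the kind of spec you'd paste into Kibana's Vega editor, written here as a Python dict and dumped to JSON for easy tweaking. The index name, aggregation, and field names are assumptions; as I understand the integration, Kibana lets the spec's data.url describe an Elasticsearch query like this.

import json

# Assumed index, aggregation, and field names -- adjust for your data.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v2.json",
    "data": {
        "url": {
            "index": "logstash-*",
            "body": {
                "size": 0,
                "aggs": {
                    "per_day": {
                        "date_histogram": {"field": "@timestamp", "interval": "day"}
                    }
                },
            },
        },
        "format": {"property": "aggregations.per_day.buckets"},
    },
    "mark": "bar",
    "encoding": {
        "x": {"field": "key", "type": "temporal"},
        "y": {"field": "doc_count", "type": "quantitative"},
    },
}

print(json.dumps(spec, indent=2))  # paste the output into Kibana's Vega editor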