Using the standard configuration for the QRadar/ServiceNow integration gives you some great capabilities, but some of our customers have asked for more information in the generated ServiceNow incidents. Specifically, they've asked to have the payloads of the events associated with the offense added to the Description of the incident in ServiceNow. This puts extensive detail about the events that triggered the offense in a single pane of glass, so the SOC engineer doesn't have to separately open QRadar to get this information.
This can be accomplished by making some configuration changes in both QRadar and ServiceNow. I'll provide the overview here. If you would like more details, please contact me.
1. Add the offense start time to the incident description in the mapping within QRadar.
2. Create a ServiceNow business rule to parse the offense id and start time from the description whenever a new incident is created from QRadar.
3. In that same business rule, use the offense id, the start time, and a stop time (equal to the start time + 1) to submit an Ariel query to QRadar via REST.
4. In that same business rule, parse the results of the previous REST call to get the results id, then make a second REST call to obtain the actual results, which will be the payloads of the events that caused the offense (and resulting incident) to be created.
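A minimal sketch of the logic in steps 2 and 3, written here in Python rather than ServiceNow's server-side JavaScript so it can be run standalone. The description labels ("Offense ID:", "Start Time:") are assumptions; they must match whatever your QRadar mapping actually writes into the Description field.

```python
import re

def parse_offense_fields(description):
    """Pull the offense id and start time that the QRadar mapping
    embedded in the incident description (step 2).
    NOTE: the labels below are hypothetical -- adjust the patterns
    to match your own mapping."""
    oid = re.search(r"Offense ID:\s*(\d+)", description)
    start = re.search(r"Start Time:\s*(\d+)", description)
    if not (oid and start):
        return None
    return int(oid.group(1)), int(start.group(1))

def build_aql(offense_id, start_ms):
    """Build the Ariel (AQL) query for step 3: fetch the event
    payloads for the offense, with stop time = start time + 1."""
    return ("SELECT payload FROM events "
            f"WHERE INOFFENSE({offense_id}) "
            f"START {start_ms} STOP {start_ms + 1}")
```

The two REST calls in steps 3 and 4 then go to QRadar's Ariel endpoints: POST /api/ariel/searches with the query above returns a search id, and once the search completes (poll GET /api/ariel/searches/&lt;search_id&gt; for status), GET /api/ariel/searches/&lt;search_id&gt;/results returns the payloads, authenticating with a SEC token header.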
The solution doesn't tax either system very much at all and makes life easier for the security engineer researching the issue.
Wednesday, January 16, 2019
Thursday, January 10, 2019
Install IBM's QRadar Community Edition 7.3.1 on CentOS 7.5 instead of RHEL 7.5
IBM offers a QRadar Community Edition for free available here:
https://developer.ibm.com/qradar/ce/
The documentation states that it runs on "CentOS or Red Hat 7.5 with a Minimal install". If you're installing the OS from scratch, I would recommend that you use CentOS 7.5 (officially CentOS 7 1804) because it works much better than Red Hat. Specifically, I downloaded CentOS 7.5 from here:
http://repos-lax.psychz.net/centos/7.5.1804/isos/x86_64/CentOS-7-x86_64-Everything-1804.iso
There are smaller downloads in that same directory, but I wanted to get everything I might need. I then installed it with 16GB RAM and 8 cores and selected the "Minimal Install" option (this is the default option). I did this install under VMware Workstation 14 Pro running on a Windows 10 laptop.
I could then directly follow the install instructions from IBM:
https://developer.ibm.com/qradar/wp-content/uploads/sites/89/2018/08/b_qradar_community_edition.pdf
What doesn't work very well or at all (guess how I know these):
The QRadar install will 100% fail if you try to install it on CentOS 7.6 (1810). The prerequisite checker will tell you that 7.5 is REQUIRED.
Trying to install on CentOS 7.5 using the "Server with GUI" option fails on glusterfs* package problems.
Installing on RHEL 7.5 requires that you configure your RHEL instance to be registered with the Red Hat Subscription Manager.
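Since the prerequisite checker insists on exactly 7.5, it's worth verifying the release string before you kick off the long installer run. A toy sketch of that check (on a real host you'd feed it the contents of /etc/redhat-release):

```shell
check_release() {
  # The QRadar CE 7.3.1 prerequisite checker demands exactly release 7.5
  case "$1" in
    *"release 7.5"*) echo "OK to install" ;;
    *)               echo "unsupported: this QRadar CE build needs 7.5" ;;
  esac
}

# On a real host: check_release "$(cat /etc/redhat-release)"
check_release "CentOS Linux release 7.5.1804 (Core)"
check_release "CentOS Linux release 7.6.1810 (Core)"
```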
Wednesday, January 2, 2019
Integrating systems today is both easier and more complex than ever
Integrating IT systems used to require a LOT of sweat and tears just to get the plumbing configured (think of updating a SharePoint site when a new z/OS dataset is created). Today, thankfully, all of the plumbing is available and there are tons of different options for integrations. So the problem now is surveying your specific environment to identify all of the tools that people use and then architecting and implementing a solution that works well for everyone.
As an example, you may use Salesforce for CRM, ServiceNow for service desk, Maximo for asset management, Oracle Cloud for financials, AWS for some applications, Grafana for operations dashboards and SharePoint for internal web sites (just to name a few). All of these solutions have workflow engines and connectors that can allow you to integrate them all together. But you first need to answer a couple of questions that are similar to those associated with custom application development:
Who are the people and personas that we're trying to help?
This is the most important question because the personas you identify will directly shape the solution you're implementing. And answering this question with specific personas, like "Nancy the regional sales manager", will allow you to refine additional data down the road.
What data am I interested in, and which systems are the golden sources of record for that data?
We spend quite a bit of time with customers simply finding all of the systems that are being used. Normally we start small, maybe with a single department, and then we work on getting a larger and larger picture. All of our clients use numerous systems that usually have some number of overlapping functions. We try to find everything in use so we can intelligently identify the ones that may be best suited to different tasks, also taking into account the number of users who have familiarity with the different applications.
Now that you've got some questions answered, what are the options available?
This is where things get messy in a hurry, and why you want to enlist the help of an experienced enterprise architect. It used to be that you could only get a workflow engine from an expensive enterprise application. Now, most companies are already paying for multiple workflow engines that they aren't even using. For example, Microsoft offers several: Flow, Business Process Flows (in Dynamics 365), and Azure Logic Apps. Those are all separate (though very similar and intertwined) workflow engines just from Microsoft. AWS has Simple Workflow Service and Step Functions. And IBM has Business Process Automation or the workflow engine in Maximo. ServiceNow has a workflow component. (As of this writing, Google Cloud doesn't offer a generic workflow engine; they have Cloud Composer, but that's a completely different animal.) And each of those has a large set of connectors, triggers and actions that allow you to automate anything you need.
So which components do you use?
This is where knowledge, experience and collaboration come together. There is no one answer that generically fits the requirements for all customers. The answer has to be developed and refined based on the needs of the customer and the project. We use an iterative approach to our implementations, where we develop/customize a little at a time, while gathering feedback from stakeholders. This is commonly referred to as the Agile Methodology, and we've found that it works very well, especially for complex integrations.
The eventual solution depends on a large set of factors, and the solution is often complex. That's why we always document our solutions in a format that's easily consumed. Sometimes that means it's a Word document with Visio diagrams, and other times it's a full Sharepoint site with attached documents - it really depends on the client.
What's the point of this post?
While it's easier than ever to connect systems together, there's still a lot of hard work that has to go into implementing solutions. And this is exactly what we at Gulfsoft Consulting do: we help customers solve complex business problems by leveraging the appropriate knowledge, processes, people and tools. No matter what software you're working with, if you need help solving a complex problem, contact us. We've got decades of experience and we keep up to date on the latest technologies, patterns and strategies.
Sunday, December 9, 2018
JIRA can easily be used incorrectly
This is a great article about how JIRA can easily be weaponized for all the wrong purposes:
TechCrunch: JIRA is an antipattern. https://techcrunch.com/2018/12/09/jira-is-an-antipattern/
Like all things related to Agile, it needs to be used at the appropriate stage(s), otherwise it is just wrong.
Someone needs to have a view of the overarching goal, and that's where we fit in. Gulfsoft Consulting is a group of people who have decades of experience dealing with all of the details of data centers and application development, and we can help you make the right decisions. Contact us to start the conversation about your digital transformation.
Wednesday, December 5, 2018
With new avenues to make money come new ways for others to steal that money
I just read this article about Defy Media abruptly closing:
https://www.theverge.com/2018/12/5/18125657/defy-media-youtube-logan-paul-ryland-adams-anthony-padillo-smosh-network
I wanted to share this as a warning to all entrepreneurs out there to be diligent in vetting your partners and backers. Make sure you know what you're getting into before signing anything. And try to find a trusted adviser who you can turn to with questions about business and finances.
Tuesday, December 4, 2018
If you run Kubernetes in the cloud, the first major vulnerability found isn't a huge issue
The first major Kubernetes (aka K8s) vulnerability was found yesterday:
https://www.zdnet.com/article/kubernetes-first-major-security-hole-discovered/
It's a pretty big deal and quite scary, but patches were immediately available upon disclosure. What's even better is that the managed Kubernetes services running on AWS, Azure and Google Cloud Platform have all been patched already. If you're managing your own K8s clusters, however, you need to patch them yourself, which just takes time and know-how.
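For self-managed clusters, the first step is simply knowing whether your server version includes the fix. Per the upstream advisory for that vulnerability (CVE-2018-1002105), the patched releases were 1.10.11, 1.11.5 and 1.12.3; earlier minor lines were out of support and never patched. A quick sketch of the kind of check you could run against the gitVersion reported by `kubectl version`:

```python
import re

# Minimum patched release per 1.x minor line, per the CVE-2018-1002105
# advisory at the time (assumption -- verify against the advisory itself).
PATCHED_MIN = {10: 11, 11: 5, 12: 3}

def is_patched(git_version):
    """Return True if this server version includes the fix."""
    m = re.match(r"v?1\.(\d+)\.(\d+)", git_version)
    if not m:
        raise ValueError(f"unrecognized version: {git_version}")
    minor, patch = int(m.group(1)), int(m.group(2))
    if minor >= 13:                # released after the fix landed
        return True
    if minor not in PATCHED_MIN:   # 1.9 and earlier never got a patch
        return False
    return patch >= PATCHED_MIN[minor]
```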
In my eyes, this is another data point that shows how proper use of cloud resources can be extremely beneficial to a company. Specifically, the big cloud players, especially AWS, are very similar to a highly competent and agile outsourced IT department. They have offerings that are years ahead of services that you would want to have onsite, and they've got testing methodologies in place to ensure that they're available 99.9% of the time.
It's true that there can be some issues in moving to the cloud, but many of the problems of the past now have very robust solutions that are included in the offerings. And those offerings are available on a pay-as-you-go basis in many cases. So you can easily keep tabs on exactly how much you're spending even on a per-application basis.
To ensure a successful digital transformation, contact us to get the experienced help that will put you on the right path.
Thursday, November 29, 2018
A really interesting AWS DevOps job opening
I just received this email, and the job looks incredibly interesting to me. If you've got AWS and DevOps experience, please contact Bhaskar directly (contact details below):
Direct Client: In-person interview is needed. No Skype/WebEx/phone.
Location: Boston, MA
Duration: 12+ months (37.5 hrs/week)
Rate: Open
Responsibilities:
• Help with production issues and deployments and any analytics/data science products
• Move the team closer to continuous deployment, improving tooling (i.e. automation) and use of infrastructure (i.e. try-test-iterate faster)
• Setup deployment infrastructure for Mayflower, our style guide and visual component library
• Setup deployment infrastructure for analytics and data science products that are AWS Lambda and Docker based (Goal: reduced time to go from engagement to receiving data)
• Create and implement a strategy to monitor Digital Services supported applications and accurately notify engineers of problems
• Construct and maintain a Threat Model for Digital Services supported applications, and implement solutions for gaps in our security based on it
• Develop a reusable infrastructure playbook and tooling for rapidly deploying Data Team packages to internal customers
o Quickly standup standard environment and/or distribute or handoff work deliverables
• Mentor the team on good DevOps practices
• Development and deployment of RESTful APIs, including documentation
• Setup and facilitate processes and environments for the creation of new services; this would include the creation of processes and deployments to dev, test and prod environments for the proof of concepts services developed throughout the agency.
Skills Needed
• 5-8 years of experience in the below categories
• Experience with AWS cloud platform specifically with the following services (or equivalent services within alternative cloud-based platforms)
o AWS CLI
o Cloud Formation
o Cloud Front
o S3 management
o RDS
o DynamoDB
o SNS & SQS
o EC2 management
o Elastic beanstalk and other auto-scaling services
o Lambda Function (python & node)
o API Gateway
o AWS Route 53 and AWS Cert Manager
• Linux terminal
• Experience with an IaaS (preferably: Amazon Web Services)
• Virtual machines
• Monitor production web applications
• People and technical process improvement/re-engineering
• Communicating effectively
• Continuous integration/deployment
• Conducting technical and behavioral interviews
• Infrastructure security practices
• Documenting in plain language
• RESTful APIs
• Infrastructure automation tools
• Amazon Web Services
• “Serverless” architecture such as AWS Lambda
• Microservices architecture
• Bonus points for experience with:
o Acquia Cloud
o PHP
o Drupal
o Agile/iterative development
o User Experience (UX) practice
o Python
o Ansible
o Docker
o Other Coding Experience
Thanks
Bhaskar
Bhaskar Nainwal
Software People Inc. | bhaskar.nainwal@softwarepeople.us
Ph: 631-739-8915 | Fax: 631-574-3122