What I have learned over the last 40 years…

  • My hope is that you will read these and either relate to them or learn from and apply them in the future. They aren’t listed in any particular order.
  • 1) Trust and follow your own hunches and personal intuition; they are usually right
  • 2) You can’t please or say yes to everyone, so stop trying
  • 3) Remove or distance yourself from situations or people who don’t add positive value to you, your life, family, career, etc.
  • 4) Don’t measure your success by comparing yourself to others. We are all at different places in our lives
  • 5) Each of us has a special talent or ability that’s unique to us
  • 6) Always take the time to pay it forward to someone; at some point, someone took time for you
  • 7) Having grit can be the difference between your success and failure
  • 8) One of the greatest skills to develop and master is ownership and responsibility
  • 9) Some of the most rewarding situations and experiences will require some level of risk and personal sacrifice
  • 10) Time, not money, is the most valuable thing
  • 11) Whether people acknowledge it or not, there is a little truth in every joke
  • 12) Whether it’s said or not, people do things because they have something to gain personally from them
  • 13) When faced with a challenge or other dilemma, start by asking why
  • 14) When faced with an ethical dilemma, always try to choose the solution or alternative that offers the greater good or the lesser evil
  • 15) Everyone is busy with life; always find time for your family, especially your spouse and kids
  • 16) If you made a wrong along the way, make a right
  • 17) As an individual, be real and be consistent
  • 18) Start with the end state in mind before you begin something new
  • 19) Be self-taught; learn how to be a continuous learner
  • 20) It may not seem fair, but no matter what you do or how hard you try, there will always be people who simply will not like you
  • 21) Listen to good music
  • 22) Whether hard copies or digital, read books, new releases, and the classics
  • 23) Continuous, incremental improvement of, say, 5% daily may seem like going nowhere fast until you realize that 5% a day over 30 days and 12 months adds up to far more than 0
  • 24) Show up early and always be on time
  • 25) You will not always be the smartest person in a room, situation, or conversation

Cloud Security Audit using Scout Suite

Scout Suite is a Python-based tool published and maintained by NCC Group for use in cloud security assessments.

Install and Run Scout Suite

Depending on your own environment, you may decide to use virtualenv or, as in my example, Docker to help avoid package issues.

Docker installation via Homebrew

    $ brew install docker
    $ docker --version

Launch Docker

    $ open -a Docker

Running The Container

    $ docker run -it rossja/ncc-scoutsuite bash
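If you want the container to reuse AWS credentials already configured on your host, one option (a sketch; the mount path assumes a standard AWS CLI setup) is to mount your credentials directory into the container:

```shell
# Mount the host's AWS CLI credentials read-only into the container
# so Scout Suite can use an existing profile (~/.aws is the default CLI location)
docker run -it -v ~/.aws:/root/.aws:ro rossja/ncc-scoutsuite bash
```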

Running Scout Suite

Once the CLI for the environment has been configured and the appropriate credentials set up, you can run Scout Suite in the container.

You can verify that the installation is working by using the command scout --help, which should provide help for the tool.

Using an AWS IAM role

If you or your team plans to use Scout Suite against a specific AWS IAM role, you’ll have to switch to that role.

    scout aws --profile my-aws-cli-profile

Using the default AWS CLI profile

Check the current identity you’re on using the AWS CLI.

    aws sts get-caller-identity

If you need to manually restart the virtual environment, you can do this using the activate script.

    root@9564f9:~# source scoutsuite/bin/activate

Running a Test — with some optional parameters

    scout aws --profile user01 --no-browser --report-dir /root/scout-report

Reading the HTML Report

Scout Suite does take some time to run; while it gathers data from the APIs and pulls information on the various resources and cloud services, you will see live status logs of the activity. In this example, I am running an audit on my AWS environment, but keep in mind we could also audit an Azure, GCP, or Oracle Cloud environment.

Once Scout Suite has finished auditing the environment, an HTML report will be available in the current working directory, or in another directory if you specified one in the additional parameters.
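If the report was generated inside the Docker container, one way to pull it onto the host (the container name below is a placeholder, not from the original run) is docker cp:

```shell
# Copy the report directory out of the container to the host;
# "scout-container" is a placeholder for your actual container name or ID
docker cp scout-container:/root/scout-report ./scout-report
```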

HTML Report Example

If there are findings that need attention, it’s pretty simple to understand where the potential issue is and why it was flagged for review and remediation.

If you click on a service, you’ll see that Scout Suite has grouped the findings into three simple levels: Good, Warning, and Danger.

As an example, in my development AWS account that was audited, the ACM service had four total resources checked, and two of the four were flagged because the Transparency Logging Preference was set to DISABLED. This makes it simple enough to go in and address the findings, then run the audit again to ensure the findings in question are closed.

You may even consider Amazon EventBridge, which lets your AWS services respond automatically to various events. Rules can also be leveraged for specific events where you decide some automated action should be taken.
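As a sketch of that idea (the rule name, SNS topic ARN, and event pattern here are assumptions for illustration), an EventBridge rule could route AWS Config noncompliance events to a notification topic:

```shell
# Create a rule that matches AWS Config compliance-change events
# where a resource was evaluated as NON_COMPLIANT
aws events put-rule \
  --name config-noncompliance-alert \
  --event-pattern '{"source":["aws.config"],"detail-type":["Config Rules Compliance Change"],"detail":{"newEvaluationResult":{"complianceType":["NON_COMPLIANT"]}}}'

# Send matching events to an SNS topic (ARN is a placeholder)
aws events put-targets \
  --rule config-noncompliance-alert \
  --targets 'Id=sns-notify,Arn=arn:aws:sns:us-east-1:123456789012:security-alerts'
```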

Distributed Load Testing on AWS

Background
Using Distributed Load Testing on AWS can help you automate testing for your apps and find bottlenecks or other performance issues related to scale.

Use Case
I was looking for something that could be deployed using containers. I was also interested in something with which I could schedule load tests in the future.

Architecture

source: aws.amazon.com

Let’s Build

Launch in the AWS Console using CloudFormation


source: aws.amazon.com


I entered my desired admin name and email address but left everything else at the default settings.

Once the CloudFormation stack is created, we can select the stack and then select Outputs at the top; this will have a link to access the load-testing console.

Since this solution leverages Amazon Cognito, you can set up and configure additional user access there.

Once logged in, we will see a screen like the one below.


We can create a new load test using either a single HTTP endpoint or JMeter with an uploaded test file.


CloudFormation Template

https://solutions-reference.s3.amazonaws.com/distributed-load-testing-on-aws/latest/distributed-load-testing-on-aws.template

Additional Information
https://aws.amazon.com/blogs/architecture/ensure-optimal-application-performance-with-distributed-load-testing-on-aws

What is your AWS tagging strategy?

If this concept is new, no worries, just head over and read this.

I would also recommend that you spend some time thinking about how you want to approach this before just jumping in. It’s not impossible to change course once you start down a path, but it is much easier to take some time and think through how best to go forward. If you really aren’t sure, try starting with technical and business tags, for example. A technical tag could be the environment or an application ID; a business tag could be a project name or a product type. The actual names will be entered as tag values.
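As a small illustration (the tag keys and values below are examples, not an AWS standard), a script can check that a baseline set of technical and business tag keys is present before tagging a resource:

```shell
# Hypothetical baseline tag set; keys and values are examples only
TAGS='Environment=dev Owner=platform-team CostCenter=1234 Application=checkout'

# Local sanity check: collect any required keys that are missing
MISSING=""
for key in Environment Owner CostCenter Application; do
  echo "$TAGS" | grep -q "${key}=" || MISSING="$MISSING $key"
done
echo "missing:${MISSING:-none}"

# With the keys verified, tags could then be applied, e.g. via the AWS CLI:
# aws ec2 create-tags --resources i-0123456789abcdef0 \
#   --tags Key=Environment,Value=dev Key=Owner,Value=platform-team
```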

Some of the benefits, or more practical reasons, for using tags include resource organization, cost allocation (P&L, projects, customers, business unit/team), automation, access control (IAM conditions), and security or risk identification.

As your environment(s) mature, grow, and become more complex, it will become extremely helpful to have a solid, well-thought-out tagging plan.

Remember, it doesn’t need to be super complicated or complex from the start. For me, this is one of those situations where something is better than nothing, and the sooner the better!

Both Dev and DevOps teams often struggle with tagging consistency, both in the frequency and in the tagging content itself. The eventual way to address this may be automation. A great option to consider is Yor.

Yor is an open-source tool that automatically adds tags to infrastructure configurations. Yor currently supports Terraform, AWS CloudFormation, and the Serverless Framework.

By default, Yor will add a number of tags to each resource block, including the name of the git organization, the repository, and the file that contains the template that created the resource. Another really powerful feature of Yor is that it adds a unique identifier, which allows for quick searches in GitHub to locate the code, for example.
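A minimal run of Yor against a local IaC directory might look like this (the Homebrew tap and the directory path are assumptions; check the Yor README for your platform):

```shell
# Install Yor via Homebrew (other install methods exist)
brew tap bridgecrewio/tap
brew install bridgecrewio/tap/yor

# Tag every resource block found under the given IaC directory
yor tag -d ./terraform
```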

Your Best Engineers Can Also Be Great Business People

It can take some people longer than others to arrive at this conclusion, which is simply that a team’s approach matters more than the technical details themselves. If you don’t agree with that statement, that’s ok, but perhaps after reading this article you will feel differently.

Over the last 20+ years of my career, which includes technology, leadership, security, and business, I have done, and still occasionally do, hands-on technical work. Over the years I have come to disagree with the view that technical leaders or heads of engineering know more, or know better, than others outside the respective technology or engineering fields. Those others, mostly outside of technology or engineering, bring a slightly different mindset or angle, which I think helps us as leaders develop teams from good to great.

To go a little deeper, when I say good to great, I mean those individuals who are asking me, our team, or others smarter questions, seeing what’s ahead and not just what’s in front of them, prioritizing, and often figuratively separating themselves from those around them. As leaders, this is how we spot potential talent or the beginnings of a great leader. The timing can be critical: this is when a leader needs to make themselves available to guide and support this person so they develop rather than burn out, give up, become frustrated, or feel abandoned.

Let’s dive a bit deeper into the concepts I mentioned above: asking smarter questions, and evaluating not only what’s in front of you but also what’s further down the road that we should prepare for, avoid, or delay until we get there.

Business-focused engineers and technical people are thinking about the following:

  • Be focused, aligned, and prioritized on the work that will pay off sooner rather than later. With any type of work there is a cost for doing that work; the sooner we see a return or reward, the better.
  • Before jumping into a new project, upgrade, or migration, take the time to estimate or calculate whether the effort is worth an individual’s or team’s time.
  • No organization, team, or individual has an infinite budget; we often need to look at work or projects in terms of opportunity costs. In other words, the same person or resources can’t be used for multiple deliverables at the exact same time; to DO one task will always mean we are NOT doing another task. This is perfectly fine, but this is where it’s critical that the task we decide to do is aligned with business outcomes and has the expected financial return or reward.

Looking at the first bullet item, business-minded engineers and tech staff should constantly be asking when this work will pay off, and when and where the return is.

Engineering and technology work has a time value: projects that are done sooner are worth more than those done later. So we try to avoid the work items that pay off too far into the future.

Engineering and tech projects such as upgrades and migrations carry a huge burden of guaranteed upfront costs, and honestly, the rewards of these efforts are usually unknown or unclear and often a ways out into the future. The other reality is that the returns or rewards often take longer than a business or its leaders may want or even realize. In fact, a typical one-year upgrade or migration may not provide business owners or stakeholders with any reward or return until the second year or later! This may seem obvious, but another consideration is that the return or reward from the work done must exceed the cost of the initial work and upfront investment. No business wants to spend one year on a project only to save one year; the return or reward needs to be more compelling than that!

Looking at the next bullet item, the business-minded engineer or Technical leader evaluates whether the work, project, or opportunity is worth the time.

This is often difficult, but this is where time and attention go: deciding what work should be done and what provides the most value to an organization. Every project or potential piece of work has value to someone, which is why there is an almost nonstop, continuous inflow of requests for some output or deliverable.

As another rule, leaders and staff should be continuously asking themselves whether a project is worth the time required to complete it. There will always be exceptions or special situations such as information security or end-of-life/support scenarios; I am referring to everything else outside of that.

Let me share a quote from Warren Buffett that says, “A good management record is far more a function of what business boat you get into than it is how effectively you row. There is no extra credit for the degree of difficulty, lower your degree of difficulty.”

The same thing applies to engineering and technology. Our teams working on the right projects is more important than the minutiae, such as the tech stack we select or the lines of code we write.

We need to be able to decide when it makes sense for us to be builders versus when we should simply purchase something off the shelf that is ready to go or that requires some light integration with existing business processes. If off-the-shelf doesn’t fit, and the amount of customization needed outweighs building, then maybe building makes sense. This is often another area where a business may take the time to evaluate the business requirements or processes and decide whether those can change and be more flexible, versus building a system or solution around them.

Switching gears a little, more into infrastructure, whether we decide to host apps/systems in the cloud or keep them on-premises, we need hard data to look at, and we need to calculate the costs and benefits before deciding to go either way. Some questions we may want to ask are below, there are probably many more:

  • If we purchased a solution off the shelf, can the team we have today onboard, integrate, and maintain it?
  • If we build instead, can we accurately estimate the costs of the building project, to help ensure the expected return is greater than the building effort and time?

The last bullet or concept is whether the project or work will move the organization forward.

This is related of course to the other concepts discussed above, but it’s important the teams and resources are spent on those things which move the business forward and are aligned with other business priorities and objectives. As already mentioned, we all have a continuous flow of requests, but often we don’t have the proper justification or the proper financials to show us the expected return on the work done and from what upfront cost.

With many engineering and technology projects, we often have or observe a level of technical debt that needs to be considered in terms of our opportunity cost. In other words, we will need to decide whether we are willing or able to give up having or doing something else. We may find that cleaning up one system, app, or database means we simply will not be able to clean up another. This is not unique or special by any means; all businesses and tech teams face these challenges at some point. But it does become important to focus on the specific cleanup efforts and, even more critically, on which one helps move the business forward by providing the most value or the greatest impact. Updating or cleaning up a critical business system that most of an organization utilizes is a specific piece of tech debt to focus resources and time on, versus a smaller internal app or system that is rarely used or is used by a smaller internal group.

As a rule, teams should always consider the opportunity cost of their project work. Remember that by doing one thing, we are always making a choice not to do another thing. Our time is very limited; we can’t go back and reclaim time lost, so we need to be aware of what, where, and how we are spending it.

Are you an Owner or a Contributor?

Maybe said differently, do you find yourself more in leadership or individual contributor situations?

I think either of these is perfectly fine, but take time to reflect and ask yourself which fits you. If you are still unsure, look at some of the primers below.

  • Do you or the team need specific direction for all or most of the work or projects?
  • When you or the team get stuck, can you keep going or do you stop and wait to be told what to do next, what to try, what to stop doing?
  • Do you do only what you are asked, or do you take on extra work and go above and beyond?
  • If you have concerns, or respectfully disagree with an approach or idea do you keep quiet and keep your head down?
  • Do you propose solutions?
  • Are you the one executing, deploying, implementing, and testing solutions?

From an engineering, DevOps, and support perspective, especially at the senior or principal levels, staff members all know the work is never done; there is always a backlog, and there is always something that needs a little improvement, review, updating, etc. The owners know this, and they automatically move on to this extra work when their scheduled sprint or project work is complete.

Sometimes within DevOps and other areas of support, we find situations where there are gaps in ownership, or the classic case where everyone owns or shares something when in reality no one actually owns it. I think we see even more of this in small, lean companies or teams, maybe even in proofs of concept or early-stage products, where much of the focus is on continuous customer deliveries and feedback loops to ensure solid traction and value. When we get to production or to more mature, critical workloads, these ownership issues, or the lack of ownership entirely, create a huge risk for the organization and its teams.

Going back to the start of this post: if you went through the list above and found you answered yes to all or most, it may mean that right now you are more comfortable as an individual contributor and less as an owner. Depending on your own personal or career journey, this may be what you want, or it may be something you wish to change going forward. The good news: the decision and the change are yours to make, but you must first have the awareness to see and acknowledge it, then put in the time and make the necessary changes!

Business/Enterprise Spending on Cybersecurity

Organizations have continued to invest in cybersecurity. Aside from the budget or the actual amounts, the focus needs to be on whether the funds were properly allocated for a particular year. The security investment made in 2021 or 2022 may look much different from what businesses have planned and budgeted for 2023. Organizations of all sizes will either maintain or increase their security budgets for 2023.

Business verticals, industries, and sectors are concerned about cybersecurity breaches, but compliance and risk management, and other mandates are additional areas where focus, priority, and budget are increasing.

With the start of the global pandemic in 2020, organizations rethought their overall cybersecurity and technology investment priorities, with some projects and innovations being pushed out for months and even years. Organizations have a finite pool of resources, whether that’s people, software, cloud, or cybersecurity. The effects of the pandemic and other ongoing economic and political issues forced many organizations to prioritize operations, support ongoing remote work, keep up customer deliverables, and protect and retain the company brand and overall reputation. Cybersecurity spending for some may still take a back seat, even if only temporarily.

Cybersecurity attack strategies and vectors continue to evolve, and threat actors have access to the same cloud technologies that many businesses leverage, which allows them to evolve and expand their capabilities as well. Even with increased cybersecurity budgets, some organizations continue to use the same tools, techniques, and software to defend their systems and data. There have been many recent advancements in AI/ML, and the security solutions that leverage this technology help position a business to keep pace with today’s threats.

Technology leaders with a business background who head cybersecurity and/or IT understand these are cost centers, not revenue generators, for companies. The goal for some organizations is to effectively manage risk and satisfy all security compliance requirements and mandates, while being thoughtful and prescriptive about what and where they spend those security budget dollars. As I have said in other articles, our IT and cybersecurity budgets are not infinite; it’s critical to allocate budget thoughtfully and on the specific roadmap work, aligned with business objectives, that protects and moves the organization forward. If not managed well, the budget, especially the cybersecurity budget, could easily be spent on objectives and initiatives which don’t reduce exposure or risk compared to others which, as I mentioned, could have had a much greater impact.

Organizations both large and small typically think about cybersecurity as software, tools, services, etc. Don’t forget the human elements, such as security awareness training and continuing education for employees. Even with all the latest security and technology in place, there will exist very low-tech entry points that opportunistic threat actors will take advantage of.

The pillars of “good” that lead us to eventually great software and products

I have a pretty long list of books to read this year, along with a never-ending stack of books on my nightstand to get through.

One of the books I started reading is by Dr. Martin Kleppmann, called Designing Data-Intensive Applications. Amazon Web Services focuses on the various pillars of a well-architected framework. With some overlap, I want to cover pillars such as reliability, scalability, maintainability, and security, and why it’s critical to get them right in an organization’s software and products. To be clear, there are many important pillars; this is by no means an exhaustive list, just what I am choosing to focus on in this specific writing. Let’s go through each pillar, starting with reliability.

Reliability

We as businesses, and our customers, need our systems to be available, responsive, and working correctly all the time, even when there is some unplanned or unexpected situation with infrastructure, software, process, people, security, or a region-specific issue with a public cloud provider. The bottom line here is that it’s critical to get ahead of these inevitable events and to be thoughtful in the design of systems that need to be fault-tolerant or otherwise resilient.

Team reviews, or more random methods such as Chaos Monkey, may help teams find opportunities to ensure a system is as resilient as it can be, provided the budget and other potential requirements and constraints allow.

Infrastructure/Hardware

One of the flexibilities realized with virtualization, cloud, and containers, among their many benefits, was the ability for a system to remain online, even in a possibly degraded state. As the abstracted hardware or physical layers are remedied behind the scenes, apps and services can be restored to a normal operational state and ultimately stay resilient and online. Extending this idea further, the concept of redundancy takes the stage. With redundancy, we are talking about building and deploying an app, service, or capability on multiple target machines or nodes. The idea again is that if one node or machine has an issue, the app or service doesn’t go offline or dark. When I think of examples of critical services, orders or authentication come to mind.

Software/Coding

Having clear visibility into logs/events is critical, giving engineers and other stakeholders insight into commits and helping identify, correlate, and hopefully resolve errors or other conditions which may be the cause of failures or poor performance.

As systems become more distributed and are designed with scale in mind, it may become much more difficult to find and correlate issues with and to other services and technology in a stack which may be leading to errors or other app and service failures.

If an app or service does go down or offline, the specific request(s) need to be stored elsewhere, off the server, so they can be handled eventually when the system comes back online, preventing the request from being lost or having someone manually re-enter it later. One possible solution is to use a message queue to address this.
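As a sketch of the message-queue idea using Amazon SQS (the queue name and message body are examples, not from the original article), requests can be buffered so they survive an outage of the consuming app:

```shell
# Create a queue to buffer incoming requests (name is an example)
aws sqs create-queue --queue-name order-requests

# Producers enqueue requests instead of calling the app directly
QUEUE_URL=$(aws sqs get-queue-url --queue-name order-requests \
  --query QueueUrl --output text)
aws sqs send-message --queue-url "$QUEUE_URL" --message-body '{"orderId":"123"}'

# When the app comes back online, it drains the queue
aws sqs receive-message --queue-url "$QUEUE_URL" --wait-time-seconds 10
```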

The human element

There have been plenty of cases where a person has updated or pushed a change and later finds it contributed to an app or service going offline or running in an otherwise degraded state. People make mistakes, we are human after all, and not machines. Even after an entire team has reviewed an update or change we still manage at times to skip over something.

One opportunity, method, or practice is for a Dev, Ops, or (InsertTeamName or separate platform TeamName) team to build and deploy systems using Infrastructure as Code rather than building and deploying systems manually. One other quick note: regardless of whether a team follows Agile, Scrum, or Kanban, if the chosen method is too restrictive, doesn’t work well, or there is simply a lack of training for one individual or group, team members may resort to manual updates or changes to the infrastructure, which then means the running infrastructure will be out of sync with the code that has been checked into the repository.
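As a minimal illustration of the Infrastructure as Code idea (the stack name and template path are examples), deploying from a checked-in CloudFormation template might look like:

```shell
# Deploy (create or update) a stack from a template stored in version control,
# instead of making manual changes in the console
aws cloudformation deploy \
  --stack-name web-tier \
  --template-file infrastructure/web-tier.yaml \
  --capabilities CAPABILITY_IAM
```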

Scalability

When we think about scalability, we may think about how a system will respond, both positively and negatively, when we increase the load, the number of requests, or the number of users accessing an app or service. During load testing, a team may set the number of total users to simulate, set the spawn rate, etc. Teams can take the data from a load test and analyze it to identify potential failures and make updates as needed to accommodate scaling up.

Some systems have batch and queue processing: some jobs may need to complete quickly or interactively and be tightly aligned with a business process, while other jobs may run longer or even be scheduled to run overnight. With either job type, we are interested in how long it takes to process the job(s), how many jobs can be completed per minute or per hour, and whether throughput decreases as the number of concurrent jobs increases. Maybe you have business requirements where specific financial jobs must finish within 24 hours for month-end processing, for example. No matter what other ad hoc jobs run, those scheduled or overnight jobs must complete within 24 hours; perhaps they need their own dedicated queue or system resources allocated at specific times.

Locust.io is an open-source Python tool that allows a website, API, or app to be load-tested. Locust provides statistics in terms of the type of request, name, and number of failures, as well as the median, average, min, and max response times in milliseconds.
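As a quick sketch of how a Locust run might be set up (the locustfile contents and target host are examples, not from the original article):

```shell
# Write a minimal locustfile inline; the single task just GETs the root path
cat > locustfile.py <<'EOF'
from locust import HttpUser, task

class WebsiteUser(HttpUser):
    @task
    def index(self):
        self.client.get("/")
EOF

# Then run Locust headless against a target (host below is a placeholder):
# pip install locust
# locust -f locustfile.py --headless -u 50 -r 5 --run-time 1m --host https://example.com
```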

Maintainability

The maintainability pillar should be about following best practices, which include documentation stored in a central location and current SMEs identified and assigned to various portions of a system. This matters for critical, and often legacy, systems: many if not all of the original team may no longer be there, or they have been reassigned to other projects and work. The Dev or DevOps teams still need to keep the system up and running and keep performance at an optimal and desired level. I mentioned earlier, when I covered logs, that monitoring the health of the system is really important; sometimes you can predict, then be alerted to, a situation that is developing and could cause an unplanned outage or downtime. We also want to be proactive, which includes keeping a system up to date with regular security patches and updates.

Security

The security pillar is a very important one; in fact, it’s part of the Amazon Web Services Well-Architected Framework. This pillar focuses heavily on protecting information, systems, and assets while still delivering business value, along with various mitigation strategies.

Taking a deeper dive into the AWS security pillar, there are five main areas: Identity & Access Management, Data Protection, Detective Controls, Infrastructure Protection, and Incident Response. I’m focusing heavily on Amazon Web Services here in this section, but some of the fundamentals apply outside of AWS.

Starting with Identity & Access Management- This covers AWS services such as AWS IAM, AWS Directory Service, and AWS Organizations.

Data Protection- Covers AWS KMS and AWS HSM.

Detective Controls- This covers AWS CloudTrail, AWS Config, AWS Security Hub, and Amazon GuardDuty.

Infrastructure Protection- Covering Amazon VPC, AWS WAF, and AWS Systems Manager.

Incident Response- Includes AWS CloudTrail, Amazon SNS, and Amazon CloudWatch.

Expanding further on the five areas of the AWS Security pillar, each has some important, but simple best practices.

Identity & Access Management

  • When an AWS account is established, a root account is created. The root account should not be used for day-to-day work; this reduces the overall attack surface
  • Enable MFA on the root account and on all IAM user accounts
  • Use IAM permissions boundaries: regardless of the permissions assigned to a role or user, the permissions boundary caps the effective permissions
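As a sketch of those last two points (the user name, policy ARN, MFA serial, and codes are placeholders), the AWS CLI can attach a permissions boundary and enable MFA:

```shell
# Cap a user's effective permissions with a permissions boundary policy
aws iam put-user-permissions-boundary \
  --user-name deploy-user \
  --permissions-boundary arn:aws:iam::123456789012:policy/DeveloperBoundary

# Enable a virtual MFA device for the same user (serial and codes are placeholders)
aws iam enable-mfa-device \
  --user-name deploy-user \
  --serial-number arn:aws:iam::123456789012:mfa/deploy-user \
  --authentication-code1 123456 \
  --authentication-code2 789012
```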

Detective Controls

  • Helps identify security misconfigurations
  • Identify threats, threat actors, or other unexpected behavior
  • Alerting, Metrics and event notifications

Enable and use AWS Security Hub to collect security data from across AWS accounts, services, and supported third-party partner products, and to help analyze the findings for trends. The AWS services include the following:

  • Amazon GuardDuty
  • Amazon Macie
  • AWS Firewall Manager
  • AWS Config
  • Amazon Inspector
  • IAM Access Analyzer
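Enabling Security Hub and pulling findings can be done from the CLI; a minimal sketch (run in the account and region you want to monitor):

```shell
# Turn on Security Hub with the default security standards enabled
aws securityhub enable-security-hub --enable-default-standards

# Pull a first page of aggregated findings for review
aws securityhub get-findings --max-results 10
```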

Incident Response on AWS


When it comes to services, more specifically security services, AWS delivers many. With an incident response (IR) plan or strategy, you may be considering many AWS services but may not be sure which services provide what benefit or help you or your organization on your security journey.

Challenge/Objective:

As an AWS customer, I was looking for an overview or mapping of each AWS service and where it could be used within a people, process, technology, and partners framework, with the primary focus on IR. I am a visual learner, someone who learns best by doing, not just by seeing or hearing others do and explain.

Solution(s):

After some reading and further research, I will walk you through the various AWS services, mapped against functions from the NIST Cybersecurity Framework: identification, detection/analysis, response/containment/eradication, and recovery. I haven’t included all of the NIST CSF functions; my focus here is on IR.

These are the various steps, or functions, I will take you through, mapping back to and including some AWS services. As I give example actions, remember that this isn’t an exhaustive list, just some ideas to get you thinking about what you might do in your own environments or AWS accounts.

Preparing

  • People: Does the team have the proper training in terms of AWS or the cloud in general? Performing a skill gap analysis per engineer or team will provide you with the areas which may need some additional training.
  • Process: Create, and iterate on, an IR plan and a strategy that spans two or three years. A good plan only becomes better, and maybe even best, with continuous testing, reviewing, and updating. IR plans, like disaster recovery or business continuity plans, are not plans we create once and then store away for the eventual need.
  • Technology: From an AWS cloud security perspective, have you followed the AWS Well-Architected Framework, specifically the Security Pillar? Enable various AWS security services, use security findings and other security data to make data-driven, informed decisions, and act swiftly on alerts or trends. Follow AWS security best practices, such as setting up and configuring dedicated AWS accounts for logging and log archives, or an account used for security operations, for example.
  • Partners: I included this because those familiar with ITIL will relate: various AWS services, including the security services, have integrations that can be leveraged. Think about what existing AWS or third-party services you already own and use today. Back to your IR plan and strategy: do you want to continue to use a mix of services and partners, or are you looking for more of an “ALL IN” state? What security findings or security data are you sending to AWS native security services, and are you looking to send findings or data out to third parties?
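The skill-gap analysis mentioned under People can be as simple as comparing required skills against what each engineer has today. A minimal sketch with illustrative skill and engineer names:

```python
def skill_gaps(required, team):
    """Return the missing skills per engineer, as sorted lists."""
    return {name: sorted(set(required) - set(skills))
            for name, skills in team.items()}

# Hypothetical IR skill requirements and team inventory.
required = ["IAM", "CloudTrail", "GuardDuty", "Incident response"]
team = {
    "alice": ["IAM", "CloudTrail", "GuardDuty", "Incident response"],
    "bob": ["IAM", "CloudTrail"],
}

print(skill_gaps(required, team))
```

The per-engineer gaps then point directly at where to target training budget first.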

Identification

  • What systems, accounts, data, processes, and regions do we need to protect? Think in terms of the various processes and assets if you aren’t sure and are just starting to develop a plan and strategy.
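One way to answer that question concretely is a tagged asset inventory. A minimal sketch, with made-up asset names and fields, that pulls out the critical assets and the regions they live in:

```python
# Illustrative inventory; in practice this might be built from
# resource tags or a CMDB export.
assets = [
    {"name": "orders-db", "type": "RDS", "region": "eu-west-1",
     "data": "customer", "critical": True},
    {"name": "static-site", "type": "S3", "region": "us-east-1",
     "data": "public", "critical": False},
]

def protection_scope(inventory):
    """Return the critical assets and the regions they occupy."""
    critical = [a for a in inventory if a["critical"]]
    regions = sorted({a["region"] for a in critical})
    return critical, regions

critical, regions = protection_scope(assets)
print([a["name"] for a in critical], regions)
```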

Detection

This is probably one of my favorite functions. Among the AWS security services, there are a few I enable and use in just about every AWS account that I set up and configure. Of course, AWS services cost money, so it’s important to have a clear understanding of your own or your customer’s requirements. You are looking for the right balance. With quantitative risk analysis, for example, we have security findings and verifiable data, and we can analyze the effects of the risk. By ranking the severity of security findings, we can use qualitative risk analysis to estimate probability and prioritize the various risks in terms that people outside of security and technology can understand. Also consider where automation can safely be used, perhaps Amazon CloudWatch with Lambda, or AWS Security Hub findings with remediation actions, for example.
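As a hedged sketch of that automation idea: a Lambda-style handler that receives an EventBridge event carrying a GuardDuty finding and decides whether to quarantine or just log. The severity threshold and the commented remediation call are assumptions for illustration:

```python
def handler(event, context=None):
    """Decide a response action based on GuardDuty finding severity."""
    finding = event.get("detail", {})
    severity = finding.get("severity", 0)
    if severity >= 7:
        # Example remediation (left as a comment): isolate the instance
        # by swapping its security group for a quarantine group, e.g.
        # ec2.modify_instance_attribute(InstanceId=..., Groups=[quarantine_sg])
        return {"action": "quarantine", "severity": severity}
    return {"action": "log-only", "severity": severity}

# Simulated EventBridge event with the "GuardDuty Finding" detail-type.
sample_event = {"detail-type": "GuardDuty Finding", "detail": {"severity": 8.0}}
print(handler(sample_event))
```

Keeping the decision logic pure like this makes it easy to unit test before wiring it to real remediation calls.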

Certifications, who really benefits and why?


As head of a Global Technology and Cybersecurity team, I am no longer as hands-on as I once was as an engineer. In fact, most of my time is spent on planning, strategy, forecasting, budgets, motivating, securing, removing blocks, hiring, communicating, and being available to help teams and individuals solve unexpected tech and non-tech issues.

For me in my role(s), having a general view of what the teams I lead are working with is truly essential. If you are working with any of the three public cloud providers, SaaS solutions, etc., you see all the new technologies being pushed out every day.

I had someone tell me last year (2020), “At your level as a director, why would you take the AWS Solutions Architect associate-level or Microsoft Azure fundamentals-level certifications? Aren’t those for front/backend engineers, software engineers, or those trying to break into IT and cybersecurity careers?” My answer was simple…