
Tyto Athene on The Ex Terra Podcast: Securing Space Operations Through IT Modernization

Cybersecurity, network modernization, and systems integration are essential across every industry, and space is no exception. In a recent episode of The Ex Terra Podcast, host Tom Patton sat down with Victoria Da Poian, Tyto Athene’s Lead Data Scientist, and Peter O’Donoghue, Chief Technology Officer at Tyto Athene, to explore how Tyto’s capabilities are shaping the future of space operations.

“Everything is a set of either always connected or ephemerally connected things that actually produce or create data,” says O’Donoghue. The key to mission success is mastering how to turn that complex data into clear, actionable decisions, leveraging real-time insights and advanced analytics to gain a strategic advantage.

As O’Donoghue explains, “We need to be able to understand that data and exploit it to actually generate better mission outcomes.”

With a team of highly cleared experts and cutting-edge modeling & simulation (M&S) capabilities, Tyto Athene enhances satellite operations, situational awareness, and mission execution. By integrating robust cybersecurity, AI-driven analytics, and automation, Tyto accelerates innovation in space operations, helping to strengthen national security and maintain U.S. dominance in the final frontier.

Listen to the full conversation on The Ex Terra Podcast

Finding Agility in Post-Quantum Encryption

Read the latest from Tyto Athene CTO, Peter O’Donoghue, in Cyber Defense Magazine, featuring insights on the urgent need for organizations to embrace post-quantum cryptographic solutions. In his article, “Finding Agility in Post-Quantum Encryption,” Mr. O’Donoghue explores how these advanced encryption methods are essential for future-proofing cybersecurity.

Organizations today face the very real threat of “harvest now, decrypt later,” where adversaries collect encrypted data now to decrypt it once quantum computing becomes viable. Mr. O’Donoghue explains how the latest NIST PQC standards are paving the way for stronger cybersecurity by providing a robust framework for implementing post-quantum cryptographic algorithms. These standards are crucial for safeguarding national security and ensuring compliance with evolving cybersecurity regulations.

Mr. O’Donoghue also addresses the practical challenges of implementing PQC in complex systems and offers strategies to overcome these hurdles. From integrating new cryptographic solutions into existing infrastructures to managing the transition period, his insights are invaluable for organizations looking to stay ahead of the curve.

Read more in Cyber Defense Magazine.

EDR and Cyber Logging: Preparing for the Next Big Cybersecurity Guidance

Tyto Athene’s Group President of Federal Civilian, Patti Chanthaphone, details how cyber logging and EDR can prepare organizations for the next phase of federal compliance requirements on Nextgov/FCW.

Federal agencies are under immense pressure to modernize their cybersecurity defenses. One of the key aspects of this modernization is the implementation of comprehensive enterprise logging. However, the costs associated with hardware, software, and labor for effective logging can be staggering, with estimates reaching nearly $200 million for large agencies. Despite these challenges, proper log retention and analysis are indispensable for detecting intrusions, mitigating ongoing threats, and conducting thorough post-incident investigations.

Automation plays a pivotal role in enhancing EDR processes. By integrating automated triggers and sophisticated data management platforms, agencies can significantly improve their incident response capabilities. This proactive approach not only strengthens cybersecurity defenses but also ensures timely information sharing, which is crucial for staying ahead of potential threats and complying with evolving government mandates.
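The kind of automated trigger described above can be illustrated with a simplified, hypothetical sketch: scan log events for a burst of failed logins on any one host and alert when a threshold is crossed. The log format, field names, and threshold here are invented for demonstration; real EDR platforms operate on far richer telemetry.

```python
# Simplified, illustrative EDR-style trigger: alert on a burst of failed
# logins from a single host. Log lines and threshold are hypothetical.
from collections import Counter

logs = [
    "host=ws-12 event=login_failed user=admin",
    "host=ws-12 event=login_failed user=admin",
    "host=ws-12 event=login_failed user=admin",
    "host=ws-07 event=login_ok user=jdoe",
]

def failed_login_alerts(lines, threshold=3):
    """Return hosts whose failed-login count meets the alert threshold."""
    counts = Counter(
        line.split()[0].split("=", 1)[1]          # extract the host field
        for line in lines
        if "event=login_failed" in line
    )
    return [host for host, n in counts.items() if n >= threshold]

print(failed_login_alerts(logs))  # ['ws-12']
```

In practice a trigger like this would feed a data management platform rather than print, but the shape is the same: collect, count, compare against policy, escalate.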

As the cybersecurity threat landscape continues to grow in complexity, federal agencies must prioritize the adoption of advanced logging and EDR solutions. By doing so, they can enhance their ability to detect and respond to cyber threats effectively, ultimately protecting sensitive information and maintaining the integrity of federal networks.

Read more on Nextgov/FCW.

Learning Automation Via Compliance

Well-orchestrated automated cybersecurity dramatically increases efficiency and reduces human error, especially when it comes to adhering to regulatory standards such as CMMC, HIPAA, and SOC 2. These compliance frameworks have clear, structured processes that can be optimized and consistently applied across various systems, making them the perfect hands-on lesson for mastering automation tools.

Why automation matters in meeting compliance frameworks

Security compliance frameworks are ideal for automation. Government contractors are up against the rigorous standards of CMMC or FedRAMP; financial systems dealing with payment information must adhere to PCI DSS. Meeting these requirements can be extremely complex, involving frequent audits, up-to-date documentation, and strict adherence to protocols and procedures. Failure to comply risks the loss of cyber insurance, damage to an organization’s reputation, and hefty fines.

Automated compliance processes allow organizations to continuously monitor and enforce security policies. Manual processes are prone to inconsistency and human error, leading to compliance drift as systems gradually deviate from established security baselines over time. Automated tools, on the other hand, consistently apply and verify compliance standards across all systems. This combination of requirements and benefits makes well-automated compliance a crucial element of resilient cybersecurity.

The path to automation through compliance

Setting compliant configurations involves numerous repetitive tasks: imagine needing to apply patches across hundreds of servers to adhere to the stringent security standards of CMMC or SOC 2. Traditional infrastructure teams would manually check and update each server, a time-consuming and error-prone process. Today, tools like Ansible can automate this task, consistently applying controls, configurations, and patches across the entire environment.
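The verification half of that workflow can be sketched in a few lines. This is a sketch only: the package baseline and inventory below are hypothetical, and a real run would gather installed versions from each host via Ansible facts or a similar tool.

```python
# Hedged sketch of a patch-compliance audit. REQUIRED_VERSIONS and the
# inventory are hypothetical; real data would come from the hosts themselves.

REQUIRED_VERSIONS = {"openssl": (3, 0, 13), "openssh": (9, 6)}

def parse_version(v):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def audit(installed):
    """Return (package, version) pairs that fall below the baseline."""
    drift = []
    for pkg, minimum in REQUIRED_VERSIONS.items():
        have = installed.get(pkg, "0")
        if parse_version(have) < minimum:
            drift.append((pkg, have))
    return drift

inventory = {
    "web-01": {"openssl": "3.0.13", "openssh": "9.6"},
    "db-01":  {"openssl": "1.1.1",  "openssh": "8.2"},
}
for host, packages in inventory.items():
    drift = audit(packages)
    print(host, "compliant" if not drift else f"drift: {drift}")
```

The manual version of this check is exactly what compliance automation replaces: the same comparison, run consistently on every host, every time.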

For individual automation experts, experience in compliance automation offers a wealth of learning opportunities. For example:

  • Working with scripting languages such as PowerShell and Bash to automate workflows
  • Mastering configuration management tools like Ansible for server management
  • Using Terraform to automate infrastructure provisioning

This multi-disciplinary approach not only enhances technical skills but provides a comprehensive understanding of how different automation tools can be integrated to achieve compliance.

Successfully automating compliance controls has an immediate business impact, as well. Automation saves time, reduces errors, and enables quicker audits. It also simplifies the process of demonstrating compliance to regulators, requiring minimal manual intervention. This efficiency translates into faster, more reliable compliance—making automation a valuable skill directly contributing to your organization’s success. Learning to streamline high-volume tasks and manage compliance drift via automation enhances operational efficiency and minimizes risk across the organization. That’s a win-win, in our book.

Resources, tools, and technologies for learning automation

Tools and technologies that make automation of compliance processes efficient and reliable are essential for automation experts looking to develop their skills.

  • Ansible is a powerful tool for configuration management and automation. It handles repetitive, manual tasks such as patch management, system configuration, and enforcing security policies across infrastructures.
  • Lockdown provides pre-built security-focused playbooks to automate compliance with standards such as CIS and STIG. This open-source software tool with enterprise-level support is particularly useful for organizations that need to achieve compliance quickly, or that want to dive deeper into enhanced compliance automation. Lockdown simplifies the process of implementing and maintaining compliance controls, making it easier for teams to meet regulatory requirements.
  • CI/CD pipelines can play a critical role in automating compliance standards. By integrating compliance checks directly into these pipelines, security policies can be enforced automatically with each deployment. This prevents non-compliant code from reaching production environments, enabling consistent compliance without manual intervention.
  • Splunk and Qualys offer deep monitoring and vulnerability management solutions. Splunk collects and analyzes security logs from your infrastructure in real time, helping to detect compliance violations and security incidents as they occur. Qualys automates in-depth vulnerability scanning and provides remediation solutions for endpoints, keeping systems secure and up to date.

Together, these tools create a comprehensive ecosystem for compliance, automating everything from server management and control enforcement to real-time monitoring and vulnerability management. Learning to leverage these technologies empowers automation experts to significantly enhance their skills and contribute meaningfully to their organization’s overall cybersecurity strategy.
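As a concrete (and deliberately simplified) illustration of a pipeline compliance gate, the sketch below validates a deployment configuration against a small policy before release. The policy names, thresholds, and configuration keys are all hypothetical; a real gate would run as a pipeline job and fail the build on any violation.

```python
# Hedged sketch of a CI/CD compliance gate. Policy keys and thresholds are
# hypothetical stand-ins for an organization's real security baseline.

POLICY = {
    "tls_min_version":    lambda v: v is not None and v >= 1.2,  # no legacy TLS
    "root_login_enabled": lambda v: v is False,                  # no direct root access
    "log_retention_days": lambda v: v is not None and v >= 90,   # retention mandate
}

def check(config):
    """Return the policy keys this deployment config violates."""
    return [key for key, ok in POLICY.items() if not ok(config.get(key))]

deploy_config = {"tls_min_version": 1.2, "root_login_enabled": False,
                 "log_retention_days": 30}

violations = check(deploy_config)
# A real pipeline job would exit nonzero here to block the deployment.
print("blocked" if violations else "compliant", violations)
```

Because the check runs on every deployment, non-compliant settings are caught before they reach production rather than discovered during an audit.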

Case study: Automating CMMC 2.0

To illustrate the process, let’s consider automating a compliance standard like CMMC 2.0, which is essential for government contractors. CMMC 2.0 requires strict security controls over systems that handle sensitive government data, including server patching, correct configuration, and continuous monitoring. Manually maintaining compliance across numerous servers is time-consuming and prone to errors, making automation an effective solution.

Using Ansible, we can create playbooks to automatically check if all servers meet specific CMMC 2.0 configurations, such as disabling weak encryption protocols, enforcing strict password rules, and setting default permissions for users and folders. Lockdown further enhances this capability by providing pre-configured security policies aligned to CMMC 2.0 standards. These playbooks can be executed across the organization’s entire infrastructure, automatically updating configurations, applying patches, and reporting back on compliance status.
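To make that concrete, here is a hedged sketch of one such check — not an official CMMC or Lockdown artifact. It scans an sshd_config-style text for permitted root login and weak ciphers; the sample configuration and the weak-cipher list are invented for illustration.

```python
# Illustrative compliance scan of an sshd_config-style file. The sample
# config and WEAK_CIPHERS set are hypothetical, for demonstration only.

sample_config = """\
PermitRootLogin yes
PasswordAuthentication yes
Ciphers aes256-ctr,3des-cbc
"""

WEAK_CIPHERS = {"3des-cbc", "arcfour", "blowfish-cbc"}

def findings(config_text):
    """Return a list of non-compliant settings found in the config."""
    issues = []
    for line in config_text.splitlines():
        key, _, value = line.partition(" ")
        if key == "PermitRootLogin" and value.strip() == "yes":
            issues.append("root login permitted")
        if key == "Ciphers":
            weak = WEAK_CIPHERS & set(value.strip().split(","))
            if weak:
                issues.append(f"weak ciphers enabled: {sorted(weak)}")
    return issues

for issue in findings(sample_config):
    print("non-compliant:", issue)
```

An Ansible playbook would go one step further and remediate each finding, then report compliance status back to a central dashboard.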

Learning to automate these tasks empowers infrastructure teams to focus on improving the overall security posture rather than wasting time on manual, repetitive work.

Next steps to learning automation via compliance frameworks

If you’re ready to dive into the inner workings of automation in your environment, there’s no better time to begin than now, when many learning tools and resources are freely available. Online courses and certifications in cybersecurity and automation are available via Coursera and (ISC)2, while articles on AI-driven compliance and cybersecurity automation appear on reputable sites like CSA and Springer.

Additionally, automation experts may benefit from reaching out to communities such as Reddit and LinkedIn groups for one-on-one troubleshooting and mentoring. Developing relationships with more skilled automators can have benefits ranging from infrastructure review and brainstorming to crisis assistance and problem-solving.

And finally, engaging with automation tools can help you access detailed knowledge and improve your capabilities, all while benefitting the continued compliance of your organization. Automating tasks with Ansible and taking advantage of the hands-on support from Lockdown is one way to start streamlining compliance efforts, reducing errors, and saving time. Learning new skills and developing your expertise in automation via compliance is a powerful way to add immediate business value and minimize risk. Start small in a testing environment or sandbox to experiment with controls, build successful automation trees, and set your team up for future success.

4 Types of Phishing and How to Protect Your Organization


What is Phishing?

Phishing is a prevalent type of social engineering that aims to steal data from the message recipient. Typically, this data includes personal information, usernames and passwords, and/or financial information. Phishing is consistently named as one of the top five types of cybersecurity attacks. So just how does phishing typically work?

When executing a phishing attempt, attackers send a message whose origin is spoofed to appear authentic. The message (whether via email, phone, SMS, etc.) succeeds when the user trusts it as a valid request from a trustworthy sender. The attacker’s objective is to get the target to click on a link that redirects the user to a fake website or triggers a malicious file download. An illegitimate link will try to trick users into handing over personal information such as account credentials for social media or online banking.
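As a toy illustration of one way defenders flag such links, the snippet below compares a link’s domain against a list of trusted domains and reports near-misses. The trusted list and similarity threshold are hypothetical, and real phishing detectors are far more sophisticated than simple string similarity.

```python
# Toy lookalike-domain check (illustrative only). TRUSTED and the 0.8
# threshold are hypothetical; production detectors use much richer signals.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED = {"paypal.com", "microsoft.com", "tytoathene.com"}

def lookalike(url, threshold=0.8):
    """Return the trusted domain this URL appears to imitate, if any."""
    domain = (urlparse(url).hostname or "").lower()
    if domain in TRUSTED:
        return None                      # exact trusted domain: not spoofed
    for good in TRUSTED:
        if SequenceMatcher(None, domain, good).ratio() >= threshold:
            return good                  # close but not equal: suspicious
    return None

print(lookalike("https://paypa1.com/login"))   # paypal.com
print(lookalike("https://paypal.com/login"))   # None
```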

The majority of phishing attempts are not targeted but rather sent out to millions of potential victims in hopes that some will fall for the generic attack. Targeted phishing attempts are a bit more complex and require that the bad actor plan the attack and strategically deploy the phishing attempts.  Below we look at a few types of phishing attacks and the differences between them.

Types of Phishing Attacks

Spear Phishing

A Spear Phishing attack occurs when a phishing attempt is crafted to trick a specific person rather than a group of people. The attackers either already know some information about the target, or they aim to gather that information to advance their objectives. Once personal details are obtained, such as a birthday, the phishing attempt is tailored to incorporate those details in order to appear more legitimate. These attacks are typically more successful because they are more believable. In other words, this type of attack has much more context (as outlined by the NIST Phish Scale) that is relevant to the target.

Whaling

Whaling is a sub-type of Spear Phishing and is typically even more targeted. The difference is that Whaling is targeted to specific individuals such as business executives, celebrities, and high-net-worth individuals. The account credentials of these high-value targets typically provide a gateway to more information and potentially money. 

Smishing

Smishing is a type of phishing attack deployed via SMS message. This type of phishing attack gets more visibility because of the notification the individual receives and because more people are likely to read a text message than an email. With the rising popularity of SMS messaging between consumers and businesses, Smishing has become increasingly common.

Vishing

Vishing is a type of attack carried out via phone call. The attackers call the victim, usually with a pre-recorded message or a script. In the 2020 Twitter breach, a group of hackers pretending to be “IT Staff” convinced Twitter employees to hand over credentials entirely through phone conversations.

How to avoid attacks on your organization

Organizations cannot assume users are knowledgeable and capable of detecting these malicious phishing attempts — especially as phishing attacks continue to get more sophisticated. Users should be regularly trained on the types of attacks they could be susceptible to and taught how to detect, avoid, and report the attacks. The following are two simple methods of educating employees and training them to be more vigilant.

  1. Regular Security Awareness & Phishing Training
  2. Internal Phishing Campaigns and Phishing Simulations

Tyto Athene has extensive experience in both training areas.  Our team of experts can help your organization fully understand what types of attacks they are most vulnerable to, who in the organization might need additional phishing training, and additional best practices you can implement to improve your overall cybersecurity posture. We focus on helping you understand the vulnerabilities your organization faces and identify areas for improvement BEFORE they become an issue. Contact Tyto Athene to learn more.

Sources:

What is phishing? How this cyber attack works and how to prevent it

Data Breach Investigation Report (Verizon-DBIR)

8 types of phishing attacks and how to identify them

The Attack That Broke Twitter Is Hitting Dozens of Companies

12 Most common types of Cyberattacks

FedRAMP, FISMA, and SOC 2… What’s the Difference?

FISMA, FedRAMP, and SOC 2 are foundational cybersecurity compliance frameworks, often misunderstood or used interchangeably by those unfamiliar with their specific requirements, scopes, and implications. Many people want to understand the differences between these laws and accreditations. The audits are somewhat similar at face value, but the target audience, requirements, and procedures are substantially different.

While they each serve distinct audiences and regulatory needs, all three frameworks share the core objective of safeguarding sensitive information and ensuring trust in digital environments.

What is FedRAMP?

Purpose

FedRAMP provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud services. FedRAMP is a mandatory security authorization program for cloud services used by Federal agencies. It is both a compliance benchmark for Cloud Service Providers (CSPs) and a gateway to market access within the federal cloud ecosystem. The initiative encourages government agencies to move from traditional datacenter applications into cloud services wherever possible. Through its “Do Once, Use Many Times” principle, FedRAMP streamlines reusability of security assessments across agencies, reducing duplication of effort for CSPs and accelerating government-wide cloud adoption.

Target

Cloud Service Providers for the United States Federal Government.

History

In December of 2010, the Office of Management and Budget (OMB) released the 25 Point Implementation Plan to Reform Federal Information Technology Management, which established the Cloud First policy requiring federal agencies to use cloud-based solutions.

FedRAMP Certification Requirements

The FedRAMP Security Assessment Framework (SAF) is based on the Risk Management Framework (RMF) that was developed by the National Institute of Standards and Technology (NIST). The only real difference is that the six steps outlined by NIST combine into four process areas: 

  • Document 
  • Assess
  • Authorize
  • Monitor

The Document process area combines steps 1 through 3 of the NIST RMF, and the rest of the process areas are a direct mapping to process steps outlined by NIST. FedRAMP compliance involves tailored implementation of NIST 800-53 controls, supported by tools like the Control Tailoring Workbook (CTW) and System Security Plan (SSP) templates. Additionally, the FedRAMP Baseline (Low, Moderate, High, and LI-SaaS) defines the specific controls applicable based on data sensitivity.

What is FISMA?

Purpose

FISMA (or the Federal Information Security Modernization Act) requires every federal agency to develop, document, and implement an agency-wide program to provide information security for the data and systems that support the operations and assets of the agency. These include those provided or managed by another agency, contractor, or other sources. This means that if you sell services to the Federal Government, your services will need to satisfy their FISMA compliance as well.

Target

The US Government.

History

In the wake of 9/11 and a rapid acceleration in security incidents, the Federal Government passed the E-Government Act of 2002, which provided initial guidance for securing its IT systems. That law was updated by the Federal Information Security Modernization Act of 2014, which introduced more robust reporting requirements with which federal agencies must comply.

FISMA Certification Requirements

The Risk Management Framework (RMF) you must follow will depend on whether you’re an agency or a contractor supporting that agency. Contractors handling Controlled Unclassified Information (CUI) must comply with NIST SP 800-171 under FAR 52.204-21 and DFARS 252.204-7012, whereas federal agencies are required to follow NIST SP 800-53 for internal systems. FISMA compliance is verified through annual audits, reporting to OMB, and increasingly leverages Continuous Diagnostics and Mitigation (CDM) tools.

What is SOC 2?

Purpose

SOC 2 is a framework for information security that organizations willingly submit to prove to their clients that they have an acceptable level of internal security when it comes to storing sensitive customer information. SOC 2 is frequently aligned with regulatory requirements such as HIPAA, GDPR, and CCPA, offering assurance to clients and stakeholders about an organization’s data handling and privacy practices.

Target

SaaS vendors and any other organization storing customer data in the cloud.

History

Originating from the American Institute of CPAs (AICPA) Trust Services Criteria, SOC 2 evolved to evaluate non-financial controls around security, availability, processing integrity, confidentiality, and privacy (the five TSCs).

SOC 2 Certification Requirements

SOC 2 compliance centers around the Trust Services Criteria (TSC), which govern the required policies, procedures, and operational controls. While less prescriptive than FedRAMP or NIST frameworks, SOC 2 requires regular independent audits to maintain certification and instills confidence in commercial clients.

Security and Compliance Expertise

Understanding the terminology is the first step to getting started with compliance certifications and frameworks. With over a decade of experience supporting federal agencies and commercial enterprises, Tyto Athene provides compliance-driven cybersecurity solutions that align with frameworks like FISMA, FedRAMP, SOC 2, and CMMC 2.0. Our team specializes in Gap Assessments, Control Implementation, Documentation Support, and Continuous Monitoring, helping you achieve and sustain compliance while elevating your security posture. 

Need compliance support? Tyto Athene’s Risk-Based Compliance experts can help you navigate certifications with confidence and efficiency. Connect with us to get started.

A Quick Guide to NIST SP 800-53, NIST SP 800-171, CMMC, and FedRAMP

Before I started working in cybersecurity more than a decade ago, I had no idea what NIST (National Institute of Standards and Technology) was, what risk management frameworks were, who they applied to, or what distinguished one set of standards from another. That changed quickly. Today, individuals working in cybersecurity know that NIST policies heavily dictate your daily activities. If you are new to cybersecurity or are looking to build a risk management program, this article provides guidance on the basics of federal cybersecurity frameworks and the programs to be on the lookout for.

Risk Management Frameworks (RMF)

A Risk Management Framework (RMF) is a roadmap and set of instructions used to continually minimize security risks. When it comes to an organization’s digital footprint and those that service IT systems, NIST’s Special Publication (SP) 800 series provides an unequivocal source of truth for cybersecurity best practices. This third-party guidance from NIST is used by government programs like FedRAMP and CMMC to certify their constituents.

Here is a quick-hit reference guide and mapping of NIST SP’s to the government programs that rely on them so you can understand what RMF to follow for the certification you’re seeking for your organization. 

NIST SP 800-53 

What is NIST SP 800-53?

  • NIST SP 800-53, Security and Privacy Controls for Information Systems and Organizations, is NIST’s catalog of security and privacy controls, and it underpins both FISMA and FedRAMP authorizations.

Who is NIST SP 800-53 intended for? 

  • Originally, federal government agencies and their IT systems.
  • Companies that may be required to meet many of the controls to work as a contractor (Rev 5 removed the word “federal” to indicate that the controls should be applied by all organizations).
  • FedRAMP CSPs (Cloud Service Providers) are required to provide a NIST SP 800-53 compliant service (plus cloud-specific overlay controls) to federal agencies.

How is NIST SP 800-53 enforced? 

  • NIST 800-53 is enforced primarily through compliance requirements for federal agencies and contractors. Organizations must implement its security controls as part of their risk management framework.
  • FISMA – Federal Information Security Management Act of 2002 is legislation that relies on NIST special publications to enforce its mandate.
  • Federal government agencies and CSPs are required to assess their compliance with the NIST 800-53 controls and obtain authorization to operate (ATO) from designated officials. This involves a rigorous evaluation of whether the implemented controls are effective.
  • Federal government agencies and CSPs may also integrate NIST 800-53 controls into their broader organizational policies, including incident response plans, security policies, and risk management strategies.

What sets NIST SP 800-53 apart?

  • NIST SP 800-53 is the most technical and prescriptive RMF (Risk Management Framework) of the bunch. If you have never thought about security before and face NIST SP 800-53 compliance requirements, buckle up. As of Revision 5, it is broken up into 20 control families that dictate everything from the way your systems must be configured to the processes and procedures that make up your organization’s risk management program.

CMMC 

Why does CMMC exist? 

  • The CMMC (Cybersecurity Maturity Model Certification) program evolved as part of DoD efforts to enforce effective measures set out in the Defense Federal Acquisition Regulation Supplement (DFARS). CMMC requires that government contractors protect their Controlled Unclassified Information (CUI) by implementing the NIST SP 800-171 controls and having them verified by a certified third-party assessment organization (C3PAO).
  • CMMC exists to enhance the cybersecurity posture of organizations within the Defense Industrial Base (DIB) and to ensure that sensitive government information is protected. It aims to standardize cybersecurity practices across contractors, improve the security of defense supply chains, and mitigate the risks of cyber threats.
  • CMMC aims to safeguard CUI that is shared with contractors and subcontractors in the defense supply chain, helping to mitigate the risk of data breaches and cyber threats. Prior to CMMC, there was no uniform standard for cybersecurity across the DoD supply chain, leading to inconsistencies in how organizations approached cybersecurity. CMMC provides a standardized framework that organizations must adhere to, ensuring a baseline level of security.
  • CMMC establishes a trust framework between the DoD and its contractors, ensuring that organizations are held accountable for their cybersecurity practices. This fosters a culture of security within the defense industrial base. By implementing CMMC, the DoD aims to deter cyber threats and reduce the likelihood of successful cyber-attacks against defense contractors and their systems. Overall, CMMC exists to create a more secure environment for handling sensitive defense information and to ensure that all entities within the supply chain are equipped to handle and protect that information effectively.

Who is CMMC intended for? 

The Cybersecurity Maturity Model Certification (CMMC) is specifically intended for organizations that are part of the Department of Defense (DoD) supply chain and within the Defense Industrial Base (DIB).

These organizations handle Controlled Unclassified Information (CUI) related to U.S. Department of Defense (DoD) contracts. This includes prime contractors and subcontractors at all levels who provide products or services to the DoD. Here are some categories of organization types:

  1. Prime Contractors: Companies that have direct contracts with the DoD to provide products or services. They are required to comply with CMMC to protect sensitive information.
  2. Subcontractors: Organizations that provide goods or services to prime contractors. They must also meet CMMC requirements, as they may handle Controlled Unclassified Information (CUI) related to DoD contracts.
  3. Defense Industrial Base (DIB) Companies: This encompasses a wide range of companies that support defense efforts, including manufacturers, software developers, logistics providers, and other service providers.
  4. Organizations Handling CUI: Any organization that processes, stores, or transmits Controlled Unclassified Information as part of their work with the DoD must comply with CMMC requirements to ensure the protection of that information.
  5. Foreign Entities: In some cases, foreign companies that work with the DoD or its contractors may also need to comply with CMMC if they handle sensitive information related to defense contracts.
  6. Vendors – Defense Department Contractors and Subcontractors
  7. Purchasers – Defense Department Agencies 

NIST SP 800-171 

What is NIST SP 800-171? 

  • NIST SP 800-171 is another SP (Special Publication) developed by NIST to standardize how federal agencies define Controlled Unclassified Information (CUI) and the IT security standards for those that have access to it.

Who is NIST SP 800-171 intended for? 

  • CMMC requires Government contractors, their third-party vendors, and service providers who store and share Controlled Unclassified Information (CUI) from the Federal Government to comply with NIST SP 800-171 guidance.

How is NIST SP 800-171 enforced? 

  • In order to do business with the federal government, the Defense Federal Acquisition Regulation Supplement (DFARS) Clause 252.204-7012 now requires that defense contractors show proof of compliance with NIST SP 800-171.

What sets NIST SP 800-171 apart?

  • Compared to other SPs, NIST SP 800-171 is more high-level and less prescriptive. Therefore, there is more latitude on behalf of the organization to defend their control environment.

FedRAMP

Why does FedRAMP exist?

  • Each Federal Agency must grant an Authority to Operate (ATO) before utilizing a CSP. The FedRAMP program maintains an online marketplace of authorized cloud services that Federal Agencies can browse and select from. If a CSP is on the FedRAMP marketplace, an Agency shopping for a particular technology can be assured that the CSP has complied with the NIST SP 800-53 RMF with additional overlay controls.

Who is FedRAMP intended for? 

  • Vendors – Any Cloud Service Provider (CSP) who sells SaaS, PaaS, or IaaS products to the United States Federal Government. 
  • Purchasers – United States Federal Government 

Compliance with a NIST RMF at your organization is voluntary unless you are a Federal Government agency or working with the Federal Government. That said, I highly recommend striving for NIST compliance because it is the foundation that all major regulatory bodies adhere to. If you can prove you are compliant with all the major NIST publications, you will not have any problems satisfying an audit down the road.

If you need an experienced cybersecurity consultant to assess your cybersecurity posture and advise you on your security program, Tyto Athene is here to help you. Our cybersecurity experts hold over a decade of experience helping Federal Government Agencies deploy secured software solutions on-prem and in the cloud. Contact us to learn more.


10 Steps to Successful Privileged Access Management

Privileged Access Management (PAM) is a field of growing concern for IT security professionals as the threat posed by trusted insiders is on the rise. When granting privileged access, we’re effectively opening the door to our most sensitive infrastructure and data, and the potential for data breaches, espionage, and accidental damage can be tremendous. Privileged access is a vital requirement for any IT system as there must be users with the ability to troubleshoot, maintain, and deploy new hardware and software. Rather than accept the risk of insiders with elevated access, a successful PAM strategy can provide your administrators the access they need without exposing your organization to undue risk. In this article, my colleagues and I identify strategies and safeguards organizations can employ to reduce the risks posed by users with elevated permissions.

1. Least Privilege Principle

The basis of PAM is the principle of least privilege, which is defined as the practice of reducing the rights of an agent – whether a human user or non-human account. These rights within the system should be the absolute minimum necessary for the system to operate and the agent to complete its tasks. This principle requires the designers and managers of a system to determine an appropriate level of access for each user and to only adjust the access as absolutely necessary. Enforcing this principle, however, requires a careful balance between operational efficiency and security.

Consider that granting an individual full administrative rights or root access would allow the individual to operate at peak efficiency but with unlimited access to the system. Then consider revoking all privileges and requiring that individual to request access on a case-by-case basis. The first case demonstrates a complete lack of the least privilege principle, while the latter is the most restrictive case of no privilege. A high level of scrutiny is required to find a place between these two extreme cases to allow for operational efficiency while maintaining an acceptable level of risk. This scrutiny would involve a risk assessment of the privileges, the data, code, files, and resources being accessed, as well as the frequency of use. The assessment must also consider the impact of the privileges on the operation of the system. As an example, if an individual needs to update a file daily, it may be cumbersome to require a daily request for access. Alternative solutions could include giving the user the ability to change the file without approval if the risk from changes is sufficiently low, assigning the update task to someone else with a more privileged account, or redesigning the system.

Common industry solutions for least privilege include Role-Based Access Control (RBAC), which groups individuals who share the same logical access attributes based on a common role or responsibility. Another solution is to incorporate a privileged access tool that makes decisions about privileged and non-privileged access in a partially or fully automated manner, based on predetermined criteria (see Password Vaulting & Emergency Access Procedures).
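
As a rough illustration, the RBAC grouping described above can be sketched in a few lines of Python. All role, user, and permission names here are hypothetical; real deployments would back this with a directory service rather than in-memory dictionaries.

```python
# Roles map to permission sets; users are assigned to roles, never to
# individual permissions (illustrative names only).
ROLES = {
    "db_admin":    {"db:backup", "db:restore", "db:tune"},
    "app_support": {"app:restart", "app:view_logs"},
    "auditor":     {"app:view_logs"},   # read-only overlap with app_support
}

USER_ROLES = {
    "alice": {"db_admin"},
    "bob":   {"app_support", "auditor"},
}

def is_permitted(user: str, permission: str) -> bool:
    """Grant a request only if one of the user's roles carries the permission."""
    return any(permission in ROLES[r] for r in USER_ROLES.get(user, ()))
```

Adjusting someone's access then means moving them between roles, not editing individual grants, which keeps the review in step 9 below tractable.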

2. Planning for Privileged Access Management at the Enterprise Platform Level

Companies often find it advantageous to centralize responsibility for the management of their servers and databases in a single internal infrastructure operations team rather than several stove-piped teams spread around the organization. A central team encourages consistency in the configuration of these resources and in the level of service delivered. However, it can also increase the risk from privileged access violations, as administrators can now have privileged access across an entire enterprise rather than just within an individual business unit.

A successful PAM program needs to understand the potential risks of this increase in access by classifying the types of agents that exist within the enterprise, the scope of their access, the operational processes used to manage the accounts for these agents, and the potential impact if their accounts should be compromised. This holistic view of the privileged accounts on an enterprise’s application-hosting platforms can help the organization determine the best places to apply corporate resources and implement security controls.

3. Planning for Privileged Access Management at the Application Level

Consideration of how privileged access will be granted, revoked, and monitored should begin as early as possible in the design phase of the Software Development Life Cycle (SDLC). The benefit is most easily recognized by observing the maintenance phase of applications that failed to do so: extensive redevelopment, and sometimes complete re-engineering, is often required when privileged access controls are applied too late.

Consider as an example an application that requires users to manually drop files into a common share location, which in turn requires the users to have update permission (write/change/full, etc.). From a development perspective this may be the easiest way to make the application function, but a risk assessment of that privileged access may determine that a more restrictive process is required. This process might entail building a method for uploading files into the application via a web front end. That takes more development effort but reduces the ability of application users to purposefully or accidentally misuse their privileges. Clearly, it is much easier to explore these logical access controls in pre-production: building a web-enabled application GUI requires drastically more effort than leveraging existing OS filesystem abilities, but it may be necessary to meet the organization's risk management goals.

There are many alternatives that can be explored to reduce the requirement for privileged access. Examples include:

  • Use of Emergency Access procedures instead of permanent update access for non-administrators.
  • Use of a Production Turnover/Promotion Management program to move changes, patches, and updates into production without granting developers permanent update access to production.
  • Using service accounts with restricted access or disabled direct login ability to automate tasks like data transfers and scheduled tasks.
  • Building functions into the Front End or User Interface of the application, allowing for privileges to be controlled at the Application layer and removing any user interaction at the OS layer or infrastructure backend.
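
To make the front-end alternative from the file-share example concrete, here is a minimal sketch of an upload handler that validates the file and writes it under the application's own service account, so end users never need write access to the share. The allowed types, size limit, and function name are all assumptions for illustration.

```python
from pathlib import Path

ALLOWED_SUFFIXES = {".csv", ".txt"}   # hypothetical whitelist
MAX_BYTES = 1_000_000                 # hypothetical size cap

def mediated_upload(dest_dir: Path, filename: str, data: bytes) -> Path:
    """Write an upload on the user's behalf under the application's own
    service account; users hold no filesystem permission on the share."""
    name = Path(filename).name                    # strip any path components
    if Path(name).suffix not in ALLOWED_SUFFIXES:
        raise ValueError(f"file type not allowed: {name}")
    if len(data) > MAX_BYTES:
        raise ValueError("file too large")
    target = dest_dir / name
    target.write_bytes(data)
    return target
```

The privilege now lives in the application layer, where it can be validated and logged, instead of in broad OS-level write access for every user.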

4. Control Selection and Layering

The defense-in-depth model is commonly used when choosing security controls at an enterprise level, and it applies to PAM as well, since controls can be layered at both the platform and application levels. Access management does not have to be an all-or-nothing affair, yet many organizations assume access must be binary: privileged or non-privileged. Administrators may need elevated access to do their jobs, but they likely don't need unlimited access to all the information stored on the systems for which they are responsible. Some partitioning is therefore necessary to keep administrative duties separate from the information being processed.

To achieve layered security in your access management model, consider the value of the information privileged users might be able to access and the potential risk if this information were compromised. If the risk of a breach is too high, additional technical or operational controls can be added, such as more frequent activity monitoring, use of a rights management system, or even file-level encryption. As an example, organizations using SharePoint as a collaboration platform might consider Microsoft’s Information Rights Management (IRM) to provide this second level of control. Administrators can be given elevated permission on the Windows servers supporting SharePoint but should not be in an IRM group which grants them the ability to see document content.
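
A hedged sketch of the layering idea: platform rights and content rights are evaluated independently, so server administrators are not automatically content readers. The group names below are hypothetical stand-ins for, say, an AD admin group and an IRM readers group.

```python
# Two independent control layers (illustrative group memberships).
SERVER_ADMINS = {"carol"}      # may patch and restart the server
CONTENT_READERS = {"dave"}     # may open document content

def can_manage_server(user: str) -> bool:
    """Platform-layer check: infrastructure duties only."""
    return user in SERVER_ADMINS

def can_read_content(user: str) -> bool:
    """Content-layer check: rights-management style control."""
    return user in CONTENT_READERS
```

Carol can maintain the server hosting the documents without ever being able to open them; compromising her account breaches one layer, not both.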

File-level encryption is even simpler and provides an easy way to obfuscate the contents of documents even if a user has the ability to download them. To encrypt a Microsoft Office document, click the File tab at the top left, then choose Info –> Protect Document –> Encrypt with Password. Your SharePoint administrator will still be able to see the file, but without the password its contents remain inaccessible.

5. Account Provisioning

It is important that all of the preparation during the planning phase is properly implemented on an organization’s systems to ensure access controls operate as intended. In less controlled environments, users are often given privileges on an ad hoc basis. The user may start out in an overly general user role and then have additional rights and privileges added to their account over time. Unfortunately, when privileges are granted in this fashion they are rarely removed; individual users can accumulate nearly administrator-level access over time without anyone being aware.

To combat this issue, account provisioning must be conducted in a regimented process:

  1. Roles must be clearly defined at the platform and application level to help determine appropriate privileges for users within those roles. The roles should clearly align with major functions within the system, with an understanding that some duties may also need to be spread across multiple roles for security reasons.
  2. A management authority should be notified of all new user requests and provide an approval for those requests. These approvals are necessary to ensure all requests are properly vetted and are a key artifact for auditors governing the PAM process.
  3. Users should be assigned only to appropriate roles. If a user’s privileges need to change, they should be moved to the role that grants those privileges only with the proper approvals (management, system owner, etc.).
  4. Users and system managers should be required to review access privileges and attest to the need for access so that unnecessary access does not linger and accumulate.

The preceding steps can all be conducted through manual processes, but as the complexity of an organization increases it can also be increasingly hard to keep track of access requests and necessary privileges across the enterprise. Privileged identity management (PIM) tools can aid in the provisioning and management of accounts in complex environments by providing automated mechanisms for correctly provisioning access. The tools can be integrated with the organization’s platforms and applications so that new accounts are created consistently and uniformly. Workflows can be generated to automatically compile access requests and collect necessary approvals. Finally, these tools can provide metrics for analysis. A manual audit of accounts provisioned is still necessary, though it can be done on a sample of systems just to verify the PIM tool is properly configured and working consistently.
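
The regimented process above can be sketched as a small workflow. This is an illustrative model, not a real PIM tool's API: the class names, the role catalog, and the audit log format are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccessRequest:
    user: str
    role: str
    approved_by: Optional[str] = None   # management approval (step 2)

class Provisioner:
    """Sketch of regimented provisioning: defined roles only, approval
    required, and every grant recorded for auditors."""

    def __init__(self, valid_roles):
        self.valid_roles = set(valid_roles)   # step 1: roles defined up front
        self.assignments = {}                 # user -> set of assigned roles
        self.audit_log = []                   # artifact for the audit program

    def provision(self, req: AccessRequest) -> None:
        if req.role not in self.valid_roles:
            raise ValueError(f"undefined role: {req.role}")
        if not req.approved_by:               # step 2: no approval, no access
            raise PermissionError("request lacks management approval")
        self.assignments.setdefault(req.user, set()).add(req.role)  # step 3
        self.audit_log.append(
            f"{req.user} -> {req.role} (approved by {req.approved_by})")
```

Because every grant passes through one chokepoint, the audit log directly answers the reviewer's question in step 4: who has what, and who approved it.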

6. Implement Password Vaulting

Password management can be a chore for complicated environments and can motivate some users to circumvent rules by writing down their passwords or sharing them with others. This is of particular concern for privileged users as they may have privileged access to numerous systems. In this situation a password manager/vault may be necessary. The password manager tool can simplify user interaction by holding the password for target systems and issuing some type of token/ticket for the target system when administrative duties need to be performed. The user may or may not even see the password, as some tools can login on behalf of the user. Either way, this places a gate between users and their use of elevated permissions on a system, which can reduce unintended actions and provide another hurdle to a malicious user.

To further control the use of privileged access the password vault may integrate with a work ticketing system and require a cross reference or pre-authorization from that ticketing system before granting a user access. The vault could also be the method of implementing an emergency access procedure; if no work ticket exists, users could be presented an option to gain emergency access that would trigger alerts that emergency privileged access has been used.

A password vault can also have the added benefit of reducing user complexity by eliminating the disparate passwords required for various platforms. The vault becomes a single sign-on point that handles authentication to the various backend systems without requiring the user to memorize multiple passwords. Fewer passwords to remember means less chance of passwords being written down, enhancing overall password security.
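
The token/ticket pattern described above can be sketched as follows. This is a toy model to show the control flow, not production credential storage: a real vault would encrypt credentials at rest and expire sessions.

```python
import secrets

class PasswordVault:
    """Sketch of a vault that issues short-lived session tokens so users
    never handle the target system's password directly."""

    def __init__(self):
        self._passwords = {}   # system -> credential; never leaves the vault
        self._sessions = {}    # token -> (user, system), for audit trails

    def store(self, system: str, password: str) -> None:
        self._passwords[system] = password

    def checkout(self, user: str, system: str) -> str:
        if system not in self._passwords:
            raise KeyError(f"no credential held for {system}")
        token = secrets.token_hex(16)        # user receives a token, not a password
        self._sessions[token] = (user, system)
        return token
```

The vault itself performs the login on the user's behalf, so the checkout is the gate between the user and their elevated permissions.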

7. Utilize an Emergency Access Process

Always-on privileged access poses two threats. First, a user could accidentally perform an action without realizing they’ve logged in using their privileged credentials. Second, and more dangerous, a privileged user could be taking malicious action on a system such as downloading copies of important documents for exfiltration, and this action likely wouldn’t raise any type of alarm.

An emergency access process can be helpful in addressing this threat. A break-the-glass style procedure where users must formally request access to privileged credentials provides a useful gate for controlling privileged access. The formal request can be used as an alert trigger: when a user initiates the procedure (breaks the glass, so to speak) an email is automatically generated to the user’s manager, relevant IT support department, etc. If the access is part of planned maintenance or known issue troubleshooting/support, the worst that’s happened is an unneeded email. If the access is not legitimate, the alert is a useful tool to kickstart an incident response and hopefully limit the impact of a malicious inside user.
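
The break-the-glass flow can be sketched in a few lines. The alert list below is a hypothetical stand-in for the automatic email or SIEM notification; the function name and ticket format are assumptions.

```python
alerts = []   # stand-in for the e-mail / SIEM notification channel

def request_privileged_access(user: str, system: str, ticket: str = "") -> bool:
    """Grant access against an open work ticket; with no ticket the user may
    still break the glass, but an alert always fires for review."""
    if ticket:
        return True                    # planned maintenance: quiet approval
    alerts.append((user, system))      # glass broken: notify manager and IT
    return True                        # access is granted, but now visible
```

Legitimate emergency use costs nothing but an unneeded notification; illegitimate use leaves an immediate signal to kickstart incident response.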

8. Managing Production Code Promotion

Production promotion is the process of moving changes to an application or software into the operational environment and is often the source of unrealized privileged access to applications. A developer does their coding in an offline version of the application or software that resides in a test or quality assurance environment. In these environments, developers require the highest level of privilege, and rightfully so. Considering that developers can build into the application almost any function or process imaginable, it is extremely important to manage the process of promoting their code to production. A disgruntled employee could build logic bombs or backdoors or build processes that quietly steal information or even fraudulently change data.

A properly implemented Secure Development Life-Cycle is the first line of defense against this type of threat, and properly implemented procedures for promoting code into the production environment are an important component.

The first step in Production Promotion Management is to ensure that developers do not retain their level of privileged access in the production environment. Any and all privileged access for developers in the production environment should be heavily scrutinized and eliminated where possible. Code reviews should also be conducted to specifically target the “service hooks” which developers often use to legitimately test their own code but can act as backdoors if not removed when testing is complete.

Promoting code, like all other IT system duties, should be properly separated among multiple users/roles. This might involve requiring system administrators to implement new code or run promotion scripts, rather than granting developers privileges in the production environment. It could also be implemented via temporary privileged access for members of a dedicated Application Support/Development team to promote the code or implement the changes, provided the access is removed immediately after use.

The important principle is that the person who developed the change should not be the same person who promotes this code to production.
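
That separation-of-duties rule is simple enough to express directly; a promotion tool or pipeline gate might enforce it roughly like this (names are illustrative):

```python
def promote_to_production(change_id: str, author: str, promoter: str) -> str:
    """Enforce separation of duties: the developer who authored a change
    may never be the one who promotes it to production."""
    if promoter == author:
        raise PermissionError(
            f"{promoter} authored {change_id} and cannot promote it")
    return f"{change_id} promoted by {promoter}"
```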

9. Audit, Audit, Audit

Access controls are only as good as the oversight you have to ensure they’re working properly. Periodic reviews or audits are an essential part of any organization’s security governance; because proper access management is crucial to safeguarding data, it should be an integral part of your audit program. The scope of audits should include the following at a minimum:

  • Review a random sample of access authorizations: To ensure users’ access is being properly reviewed and approved, pull the access request forms/tickets that were submitted to gain access. Improperly authorized users can present a serious risk if they’re given access beyond what’s required for their job duties.
  • Review all access to critical infrastructure: It can be an arduous task, but it’s crucial that key servers and network devices get a full access review to ensure all users still have a valid access need and that they’ve got appropriate permissions.
  • For large environments, using groups can help cut down the administrative and audit burden (e.g., creating a “Windows Production Support” group allows you to review the users in the group just once and then review that group’s permissions on the various servers). Truly complex environments with heterogeneous infrastructure can benefit from an automated access management tool capable of generating audit reports or even performing automated checks, such as identifying users whose accounts are deactivated but still have access provisioned.
  • Review a sample of privileged access to non-critical infrastructure: Given limited resources, auditing every server or network device could be an impossible task. Prepare a representative sample to review and look for any trends that could be extrapolated back to your infrastructure as a whole.
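
Two of the audit tasks above are easy to automate. As a hedged sketch (the record layout is an assumption), the first helper draws a reproducible random sample of authorizations for manual review, and the second flags the deactivated-but-still-provisioned accounts an automated tool might catch:

```python
import random

def sample_for_audit(authorizations, k, seed=None):
    """Draw a random sample of access authorizations for manual review."""
    rng = random.Random(seed)            # fixed seed -> reproducible sample
    return rng.sample(list(authorizations), min(k, len(authorizations)))

def find_stale_access(accounts):
    """Flag accounts that are deactivated but still have access provisioned."""
    return [a["user"] for a in accounts
            if not a["active"] and a["permissions"]]
```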

10. Integrating PAM Into Other Parts of Enterprise Operations

PAM is vital in combating insider threat as well as reducing the impact of intrusive malware, system infiltration, and account compromise. Looking at the big picture of an organization’s cyber security program there are many areas where Privileged Access Management should be integrated or at least considered:

  • Configuration Management: The goal of configuration management is to reduce known vulnerabilities in an information system by implementing a standard set of controls, ensuring security patches are implemented, and ensuring all changes to hardware and software conform to these standards. By monitoring compliance and conducting vulnerability testing, this process results in a relatively accurate picture of the vulnerabilities present in a given information system.
  • Incident Response: It is vital to any forensic endeavor that access at any level to logs, monitoring software, and forensic software resides solely with authorized personnel. Any administrator having access to these logs and software could partially or completely hamper an incident response or investigation, so privileged access to log files or logging functions should receive extra scrutiny.
  • Awareness and Training: As mentioned before, training and awareness on the topic of PAM can be vital. A developer who is aware of the driving policies behind their organization’s Privileged Access Management program can engineer applications or software to comply with the program’s mandates. Managers and information owners should be aware of their responsibilities with regards to authorizing different levels of access.
  • Risk Assessment: The PAM program needs to address the subset of risks related to access controls, but it should also be aware of other risks to the organization, as they are all interconnected. The business impact of greater or lesser levels of access control should be evaluated as part of the overall assessment. An internet-based retailer has a greater need for rapid response to production application issues, and therefore requires a larger administrative team, than does a consulting company whose internal collaboration platform holds highly confidential client data.