Especially because quite often the changes cannot be rolled back once they are implemented. A couple of years ago at my workplace I saw a bug that was actually named after the coworker who deployed it: a round of Microsoft security patches bricked dozens of user laptops. They had to contact customer support and explain the issue, and apparently it was a common problem for specific hardware affected by the patch. It took weeks before the issue was fixed, which meant users did not have access to their information for that entire period.
It's crucial to test before fielding, but even then it sometimes isn't enough when a variety of hardware can conflict, and not all hardware can be tested equally.
Looking through NIST 800-123, something that I found interesting was section 6.2.2, 'Server Backup Types'. There are three primary types of backups: full, incremental, and differential. Full backups are advantageous because they allow a server to be restored exactly to the state it was in when the backup was performed; their disadvantage is the considerable amount of time required to perform them. Incremental backups reduce this impact by only backing up data that has changed since the previous backup. Finally, differential backups copy all data changed since the last full backup, which means they require more time and storage the longer it has been since that full backup. Full backups are typically performed less often than incremental or differential backups, and backup frequency in an organization is determined by many factors, such as the criticality of the data and the threat level faced by the server.
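The selection logic behind these three backup types can be sketched in a few lines of Python. This is only an illustration keyed on file modification times; real backup tools track state in catalogs rather than trusting mtimes:

```python
import os

def files_to_back_up(paths, backup_type, last_full, last_backup):
    """Pick which files a backup job would copy, keyed on modification time.

    backup_type: 'full', 'incremental', or 'differential'
    last_full:   timestamp of the last full backup
    last_backup: timestamp of the most recent backup of any type
    """
    if backup_type == "full":
        return list(paths)           # everything, every time
    if backup_type == "incremental":
        cutoff = last_backup         # changed since the last backup of any type
    elif backup_type == "differential":
        cutoff = last_full           # changed since the last FULL backup
    else:
        raise ValueError(backup_type)
    return [p for p in paths if os.path.getmtime(p) > cutoff]
```

You can see from the `cutoff` choice why differential backups grow over time: their reference point stays fixed at the last full backup, while an incremental's reference point advances with every run.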
What I find interesting about this reading is the overlap of resources mentioned, such as other NIST and FIPS publications. I am always intrigued when I see FIPS 199, Standards for Security Categorization of Federal Information and Information Systems, which has such a good layout of relevant information. The more the class progresses, the more I see this publication referenced, which reinforces the importance of this documentation and the others similar to it.
Miray Bolukbasi says
As server administrators install security patches and other measures, it is essential that security testing is performed periodically. The NIST guide for general server security recommends vulnerability scanning, the most common form of security testing, to assist server administrators. Automated scanning tools help identify active hosts, active ports, applications, operating systems, vulnerabilities, and misconfigurations of hosts. Because policies evolve with new regulations and compliance rules, vulnerability scanning also helps organizations test their security policies.
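As a toy illustration of the port-discovery part of what those scanning tools do (real scanners such as Nmap add service fingerprinting, OS detection, and timing controls, so this is only a sketch):

```python
import socket

def scan_tcp_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Needless to say, this should only ever be pointed at hosts you own or are explicitly authorized to test.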
Matthew Bryan says
Section 6.4.1 discusses vulnerability scans and how they can help to identify vulnerabilities and misconfigurations of hosts. I found the discussion of vulnerability scan weaknesses interesting. The publication notes that scans identify only surface vulnerabilities and often have a high false positive rate. Scans are good at identifying well-known vulnerabilities and providing an overview of the environment, but administrators must review the results to interpret the true risks facing the system.
The weaknesses in vulnerability scans are a reminder of why defense in depth is important when administering servers. Combining scans with penetration testing, frequent log review, and effective patching provides a multi-layer approach in which each area helps cover the shortcomings of the others.
Yangyuan Lin says
The purpose of the NIST 800-123 Guide to General Server Security is to help organizations understand how to secure their primary servers. Protecting the operating system includes applying patches, configuring strong authentication, and hardening the host. When a patch is released, it signals that unpatched systems may be more vulnerable to attackers, so administrators should apply the patch as soon as possible. Maintaining a server requires regular backups, audit logging, and regular testing of server security.
Jason Burwell says
Hello Yangyuan,
You managed to sum up the reading nicely for someone who may not have understood its overall content.
Oluwaseun Soyomokun says
A key takeaway from this reading is that to secure a server, it is essential to first define the threats that must be mitigated and the costs those threats would incur through negligence, so the organization can be proactive rather than reactive. Knowledge of potential threats is important to understanding the reasons behind the various baseline technical security practices presented in the NIST SP 800-123 documentation.
Determining how strongly a system needs to be protected is based largely on the type of information that the system processes and stores. Federal Information Processing Standards (FIPS) Publication (PUB) 199, Standards for Security Categorization of Federal Information and Information Systems, defines three security categories—low, moderate, and high—based on the potential impact of a security breach involving a particular system, and establishes criteria for determining the security category of a system.
During the planning stages of a server, the following items should be considered:
Identify the network services that will be provided on the server, such as Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), and Network File System (NFS).
Determine which server applications meet the organization’s requirements. Consider servers that may offer greater security, albeit with less functionality in some instances. Some issues to consider include—
– Cost
– Compatibility with existing infrastructure
– Knowledge of existing employees
– Existing manufacturer relationship
– Past vulnerability history
– Functionality.
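The low/moderate/high categorization lends itself to a quick sketch. Assuming the common "high-water mark" reading, where the overall system category is taken as the highest impact level across confidentiality, integrity, and availability, it could look like:

```python
# Ordering of the FIPS 199 impact levels
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def security_category(confidentiality, integrity, availability):
    """Return the overall system category as the highest of the
    three per-objective impact levels (the high-water mark)."""
    impacts = (confidentiality, integrity, availability)
    return max(impacts, key=lambda level: LEVELS[level])
```

So a system with moderate confidentiality impact but low integrity and availability impact would be categorized as moderate overall; note that the actual standard also requires categorizing each information type the system handles before rolling levels up.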
Ryan Trapp says
Reading through NIST Special Publication 800-123, I found a key takeaway in the section on maintaining the security of the server. I found the first part, on logging, to be especially important. Having correct and up-to-date logs is vital to ensuring the security of the server; although reviewing logs can be mundane, they are often the only record of suspicious behavior. Because log review is so important to a server's security, it is equally important to retain the logs for an appropriate amount of time. Firms can better manage the review and maintenance of logs by using a log analysis tool. Given the number of logs that one would have to parse through (even one system can generate many log entries per day), it is extremely advisable for a large firm to have these tools in place. Utilizing a SIEM for log analysis makes it more likely that noteworthy events in the logs are seen and recognized.
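As a minimal illustration of the kind of pattern matching a log analysis tool automates, here is a sketch that counts failed SSH logins per source address. It assumes an OpenSSH-style auth log format; a real SIEM would normalize many log formats and correlate events across hosts:

```python
import re
from collections import Counter

# Matches lines like: "Failed password for root from 203.0.113.9 port 4711 ssh2"
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins_by_source(log_lines, threshold=3):
    """Count failed SSH logins per source IP and return the noisy sources."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            user, source_ip = match.groups()
            counts[source_ip] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

Even a trivial threshold like this surfaces brute-force attempts that would be invisible to someone skimming raw log files by hand.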
Jason Burwell says
In this reading, section 2.4, Server Security Principles, stood out to me.
It explains that when addressing server security issues, it is an excellent idea to keep the general information security principles in mind.
I found Psychological Acceptability to be very interesting, it read “Users should understand the necessity of security. This can be provided through training and education. In addition, the security mechanisms in place should present users with sensible options that give them the usability they require on a daily basis. If users find the security mechanisms too cumbersome, they may devise ways to work around or compromise them. The objective is not to weaken security so it is understandable and acceptable, but to train and educate users and to design security mechanisms and policies that are usable and effective.”
I found this to be very true. In all the years I have worked in this field, I see this on a daily basis, especially the part about users finding security too cumbersome, so they devise ways to work around it. I sincerely believe it is crucial that users know and understand why security measures are in place.
Ryan Trapp says
Hi Jason,
I also found the Psychological Acceptability principle to be interesting. If users do not understand the necessity of security, it is likely that they will see security measures as an inconvenience, and users who see them that way are more likely to try to circumvent the controls, increasing the chance that a user causes a security breach. It is important for users to be involved in the security process and to know they are part of the “security team”.
Elizabeth Gutierrez says
This reading highlights the importance of patch management on servers and upgrading the operating system on a timely basis. It is typically the administrator’s responsibility to make sure the servers are adequately protected during the patching process. Every so often we see Microsoft release a new patch after identifying a vulnerability in their systems, so that customers are protected against the latest threats. Should a vulnerable service be exploited, the attacker could acquire administrative privileges on the machine. This reminded me of the Equifax data breach, in which the hackers exploited a software vulnerability in the Apache Struts web application framework. Despite being aware of the risks associated with not updating their software, Equifax failed to apply the patch that would have eliminated the vulnerability.
Yangyuan Lin says
Hi Elizabeth,
You are right: maintaining the server’s operating system and patching its bugs and vulnerabilities are key to keeping information secure. I also think processing and analyzing log files is another way, since monitoring logs can detect failed and successful intrusions from the log data. Organizations should immediately raise an alert and investigate when a problem is discovered.
Shubham Patil says
The main goal of this reading is understanding the fundamental activities performed as part of securing and maintaining the security of servers that provide services over network communications as a main function. What I took away from this document is the server security planning: the most critical aspect of deploying a secure server is careful planning before installation, configuration, and deployment. Careful planning will ensure that the server is as secure as possible and in compliance with all relevant organizational policies. Many server security and performance problems can be traced to a lack of planning or management controls. The importance of management controls cannot be overstated. In many organizations, the IT support structure is highly fragmented; this fragmentation leads to inconsistencies, and these inconsistencies can lead to security vulnerabilities and other issues.
Mohammed Syed says
This guideline describes how to secure an organization’s servers and assists organizations in installing, configuring, and maintaining secure servers. Attackers often target an organization’s servers because of the value of the data and the services they provide. Organizations need a security plan that addresses the security aspects of a server deployment. NIST recommends the use of secure hash algorithms, and recommends that organizations stay aware of cryptographic requirements and plan to keep their servers up to date. Information security defines three objectives: maintaining confidentiality, integrity, and availability. Basic server security steps:
• The organization should ensure that the server application is deployed, configured, and managed to meet the security requirements of the organization’s activities.
• Patch and upgrade server applications regularly.
• Back up critical information.
• Configure server user authentication and access controls.
• Test and apply patches in a timely manner.
• Test security periodically.
• Monitor and maintain the server.
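One way to act on the integrity objective and the secure-hash recommendation above is a simple file-integrity baseline. This is only a sketch; dedicated tools such as AIDE or Tripwire do this properly, including protecting the baseline itself from tampering:

```python
import hashlib
import os

def hash_file(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(paths):
    """Record a known-good hash for every monitored file."""
    return {path: hash_file(path) for path in paths}

def changed_files(baseline):
    """Return files whose current hash no longer matches the baseline."""
    return [path for path, digest in baseline.items()
            if not os.path.exists(path) or hash_file(path) != digest]
```

Run `build_baseline` over critical configuration files right after hardening the server, store the result somewhere the server itself cannot modify, and any later mismatch from `changed_files` is worth investigating.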
Michael Duffy says
NIST 800-123, Guide to General Server Security, helps readers understand the fundamental activities for securing and maintaining the security of servers. Something I found interesting is the section discussing vulnerability scanning: scanning is surface-level and can easily report false positives. Nor does vulnerability scanning give context about the system, such as predisposing conditions or mitigations that are already in place. Another thing to consider is that vulnerability scanners generally only work with well-known operating systems and software; proprietary operating systems or software will not yield any vulnerability analysis that lets the organization determine the current risk of the system. To me, vulnerability scanners are definitely a best practice, but for systems where the picture is unclear there should be a “red team” exercise, or at a minimum a tabletop exercise for high-impact systems, to determine whether additional mitigations are necessary and to understand what is truly vulnerable within the system.
Michael Galdo says
After reading NIST 800-123, one major point that I took from the reading is that it’s important to address security from the start of a project. By planning out the installation and deployment of servers at the beginning of the project, you will save time and money, and you will have a better understanding of the location and purpose of the servers. This sets your project up with the best possible security at the lowest potential cost. Assigning employee roles and responsibilities at the beginning of the project also avoids confusion and leaves less chance of error.
Bryan Garrahan says
Thanks for sharing, Michael. In my personal experience it’s pretty shocking how little security requirements are considered throughout the project lifecycle. Typically the emphasis is on getting the system up and running in production as quickly as possible in order to add value to the business. This is especially the case when a company implements a third-party cloud application such as ADP or PeopleSoft. There are instances where management assumes the third-party cloud provider is responsible for managing security, but this is often not the case. An organization’s security team needs to be able to identify gaps in the cloud provider’s controls and apply mitigating controls internally to ensure a system is protected at an appropriate level.
Ornella Rhyne says
Hi Michael,
I agree with you; servers should be protected as much as the network, because both contain sensitive information. Planning at the beginning is very important to get an idea of what you need, how much you should spend, and how many people will be part of the project.
Hang Nu Song Nguyen says
This document is intended to assist system and security administrators of organizations in understanding the fundamental activities performed as part of securing and maintaining the security of servers. Based on this document, an organization can create its security policies and system hardening checklist to secure and maintain the security of its servers. Section 4.2.1, ‘Remove or disable unnecessary services, applications, and network protocols’, helped me understand how to manage data assets to increase the security of servers.
Ornella Rhyne says
Hi Hang,
I agree with you; sometimes companies fail to think about securing their servers. They are more focused on their network and leave out server protection, which is not good, since servers should also be secured. Failure to install patches could result in servers being compromised by unauthorized people. It is always good to know what assets are in your possession and to keep a checklist of which ones need more security than others.
Elizabeth Gutierrez says
Hi Hang,
Great analysis! I think people overlook the importance of removing or disabling unnecessary services and applications. After all, services are often used by attackers as a form of persistence, and removing unused services also reduces the volume of logs generated on the machine. Services should not run under an elevated or superuser account unless absolutely necessary, considering that exploitation of a vulnerable service could result in the attacker gaining administrative privileges on the system.
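On a systemd-based server, running a service under a dedicated low-privilege account takes only a few lines of unit configuration. The `reportgen` service name and paths below are hypothetical, purely for illustration:

```ini
# /etc/systemd/system/reportgen.service -- illustrative only
[Unit]
Description=Nightly report generator

[Service]
# Run as a dedicated unprivileged account, not root
User=reportgen
Group=reportgen
# Refuse privilege escalation even if the binary is compromised
NoNewPrivileges=yes
ExecStart=/usr/local/bin/reportgen --once
```

With `User=` set, a compromise of the service yields only that account's privileges, and `NoNewPrivileges=` blocks setuid-style escalation paths from there.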
Amelia Safirstein says
This NIST publication reviews the importance of securing publicly accessible servers and inward-facing servers that primarily provide services over network communication. I found Section 3.1 on planning to be interesting. This section highlights the importance of planning ahead rather than just paying for various security controls in an unorganized manner. You should consider the purpose of the server, the network services that will be provided, the users who will access it, etc., before taking the next steps.
Ornella Rhyne says
NIST SP 800-123 is designed to help an organization secure its servers. Servers are frequently exploited because they contain sensitive information (such as PII) that most hackers are looking for, and a breach can affect the organization badly if servers are not protected. System administrators are responsible for making sure a server has all means of protection to avoid breaches, and this publication helps them know what must be included in their plan in order to secure a server.
Corey Arana says
The purpose of NIST 800-123 is to help organizations understand the fundamental activities that are performed in securing and maintaining security of servers that provide services over the network as its main function.
The point of the document that I wanted to talk about was Section 6.3, recovering from a security compromise. Some of the key steps in responding to a successful breach include:
Report the incident to the organization.
Isolate the compromised system.
Investigate to see if the attack has spread.
Consult management and legal.
Analyze the intrusion.
Restore the server before redeploying.
There are a lot more steps in this process, but I thought this was a really good key point that I wanted to share.
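As a rough sketch, the steps above can be captured as an ordered checklist that a responder walks through in sequence (hypothetical code, just for illustration):

```python
# Sketch: the Section 6.3 recovery steps as an ordered checklist.
RECOVERY_STEPS = [
    "Report the incident to the organization",
    "Isolate the compromised system",
    "Investigate whether the attack has spread",
    "Consult management and legal",
    "Analyze the intrusion",
    "Restore the server before redeploying",
]

def run_recovery(steps):
    """Walk the steps in order, returning a simple completion log."""
    return [f"step {i}: {step} - done" for i, step in enumerate(steps, 1)]

for entry in run_recovery(RECOVERY_STEPS):
    print(entry)
```

The point of keeping the steps ordered is that isolation and investigation should happen before restoration, so evidence isn't destroyed and the attacker's foothold isn't redeployed.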
Bryan Garrahan says
I found the section of the reading that discusses patching at the operating system (OS) level interesting. Administrators don't just install an OS on a server; they are also responsible for periodically applying upgrades and security patches to ensure the system isn't vulnerable to attacks. Security patches and upgrades shouldn't be applied directly to the production server; rather, they should first be applied to a test server that closely mirrors production to ensure nothing breaks. If the patch is applied to the test environment without issue, the system administrator can gain reasonable assurance that the same or a similar result will occur in production.
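The test-then-promote workflow described above can be sketched roughly as a gate: the patch only reaches production if health checks pass in test. The `apply_patch` and `run_health_checks` functions here are hypothetical stand-ins, not a real patching API:

```python
# Sketch of a test-before-production patching gate.
# apply_patch and run_health_checks are hypothetical stand-ins;
# in reality they would invoke the OS package manager and a test suite.

def apply_patch(environment, patch_id):
    return f"{patch_id} applied to {environment}"

def run_health_checks(environment):
    # In reality: service checks, smoke tests, regression suite.
    return True  # pretend the checks passed

def promote_patch(patch_id):
    """Apply a patch to test first; only touch production if tests pass."""
    apply_patch("test", patch_id)
    if not run_health_checks("test"):
        return "held back: health checks failed in test"
    apply_patch("production", patch_id)
    return "patch promoted to production"

print(promote_patch("KB-example-1234"))
```

The design choice is simply that production is never the first environment to see a patch, which mirrors the reasoning in the reading.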
Michael Duffy says
Especially because quite often the changes cannot be rolled back once they are implemented. I saw a bug a couple of years ago that was actually named after a coworker, because applying Microsoft security patches bricked dozens of user laptops at my workplace. They actually had to contact customer support and explain the issue, and apparently it was a common problem for specific hardware related to the patch. It took weeks before the issue was fixed, which meant users did not have access to their information for that period of time.
It’s crucial to test before fielding, but even then it sometimes isn’t enough when a variety of hardware can conflict – and not all hardware can be tested equally.
Alexander William Knoll says
Looking through NIST 800-123, something that I found interesting was Section 6.2.2, 'Server Backup Types'. There are three primary types of backups: full, incremental, and differential. Full backups are advantageous because they allow a server to be restored fully to the state it was in when the backup was performed. They are disadvantageous, however, due to the considerable amount of time required to perform them. Incremental backups reduce this impact by only backing up data that has changed since the previous backup. Finally, differential backups work by backing up all data changed since the last full backup; this means differential backups require more time and resources as time progresses, because they grow in size. Full backups are typically performed less often than incremental or differential backups, and backup frequency in an organization is determined by many factors, such as the criticality of the data and the threat level faced by the server, among others.
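The difference between the three types can be illustrated with a small sketch: given file modification times, a full backup copies everything, an incremental copies what changed since the most recent backup of any kind, and a differential copies what changed since the last full backup. The file names and timestamps below are made up for illustration:

```python
# Sketch of which files each backup type would copy,
# based on modification timestamps (illustrative numbers only).

files = {"a.txt": 100, "b.txt": 250, "c.txt": 400}  # name -> mtime

LAST_FULL = 200   # time of the last full backup
LAST_ANY = 300    # time of the most recent backup of any kind

def full_backup(files):
    return sorted(files)  # everything

def incremental_backup(files, since=LAST_ANY):
    return sorted(f for f, m in files.items() if m > since)

def differential_backup(files, since=LAST_FULL):
    return sorted(f for f, m in files.items() if m > since)

print(full_backup(files))          # ['a.txt', 'b.txt', 'c.txt']
print(incremental_backup(files))   # only c.txt changed since the last backup
print(differential_backup(files))  # b.txt and c.txt changed since the last full
```

This also shows why differentials grow over time: their reference point (the last full backup) stays fixed, while an incremental's reference point advances with every backup.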
Joshua Moses says
What I find interesting about this reading is the overlap of resources mentioned, such as other NIST and FIPS publications. I am always intrigued when I see FIPS 199, Standards for Security Categorization of Federal Information and Information Systems, which has such a good layout of relevant information. The more the class progresses, the more I see that publication referenced, which reinforces the importance of this documentation and the others similar to it!