I appreciate the amount of prep work and planning that this NIST document highlights in the hardening process. I know many people who don’t start thinking about hardening until after they are done installing software. Planning to have the right people and skills in place is important so that the system will be properly administered and maintained for the long run.
The other item that should be obvious, but that I see fail often in the real world, is turning off all the features that won’t be used. If it isn’t enabled, it can’t be a vulnerability. Windows is terrible about this: it enables all sorts of processes, background applications, and protocols by default and expects the system administrator to disable them.
Hey Dave,
I agree with your sentiment about disabling functionality and features that are enabled by default. With more and more things becoming IoT devices, it needs to fall on the manufacturer to ensure devices are delivered to the consumer as secure as possible out of the box. The general consensus would likely be that it’s better to ship the product hardened and let consumers make it less secure if they choose, rather than shipping it less secure and putting the onus on the consumer to secure it. This is especially true when a majority of the population isn’t security conscious enough to realize their smart refrigerator or Ring doorbell needs hardening, as the article below highlights.
https://www.wral.com/fbi-seeing-more-swatting-crimes-as-people-hack-into-ring-doorbell-cameras/20663487/
Good news: improving IoT hardening and ensuring companies release more secure software are pillars of the 2023 National Cybersecurity Strategy. You are far from the only person who has noticed this problem. The market pushes manufacturers to put in only the bare minimum to reduce costs, but that is bad for the ecosystem as a whole. Hopefully strong regulation, combined with a push from the cybersecurity community, can ensure that the internet doesn’t get plagued with more cheap, insecure IoT garbage.
https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/02/fact-sheet-biden-harris-administration-announces-national-cybersecurity-strategy/
Hi David,
I fully agree with you! It’s crucial to plan and prepare for the hardening process to guarantee that it’s done appropriately and carefully. Too frequently, people rush through setting up systems and installing software without considering security, which can result in flaws and vulnerabilities that attackers might take advantage of.
Securing an operating system initially would generally include the following steps:
-> Patch and upgrade the operating system
-> Remove or disable unnecessary services, applications, and network protocols
-> Configure operating system user authentication
-> Configure resource controls
-> Install and configure additional security controls
-> Perform security testing of the operating system
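To make the second step concrete, here is a minimal Python sketch of how an administrator might flag services to disable. The service names and the allowlist are hypothetical examples, not a real hardening baseline:

```python
# Step 2 above: flag services that are enabled but not on the
# approved list, so the administrator can disable them.
# These service names are hypothetical examples, not a real baseline.

REQUIRED_SERVICES = {"sshd", "ntpd", "syslogd"}

def services_to_disable(enabled_services):
    """Return enabled services that are not explicitly required."""
    return sorted(set(enabled_services) - REQUIRED_SERVICES)

enabled = ["sshd", "telnetd", "ftpd", "ntpd", "cupsd"]
print(services_to_disable(enabled))  # → ['cupsd', 'ftpd', 'telnetd']
```

The point of keeping an explicit allowlist is that anything not on it is disabled by default, which matches the "if it isn’t enabled, it can’t be a vulnerability" idea above.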
Thank you for these points, Aayush. Initially I did not know we have to go through these stages when securing an operating system, but this documentation has brought to light so many things to do to be safe.
Hi Aayush,
I like that you pointed out patching and upgrading as a very important part of general server security, because often when organizations run vulnerability tests, they ignore patching, which can lead to further vulnerabilities and the loss of critical assets.
NIST 800-123 provides a comprehensive framework for securing and maintaining servers, including web, email, database, and file servers. I noted the content of vulnerability scanning and patch management on servers.
The purpose of the server security architecture includes identifying potential vulnerabilities and implementing solutions to address them before they are exploited by attackers, thereby reducing the risk of data breaches, data loss, and other security incidents. Vulnerability scanning can identify misconfigurations in the host, such as open ports, weak passwords, and outdated software versions, and improve the overall security of the server. However, automated vulnerability scanning is not a complete replacement for a comprehensive security assessment. Vulnerability scans can produce false positives, and they can miss flaws buried in complex and confusing code. Therefore, vulnerability scanning should be conducted in parallel with, and corroborated by, other security controls such as penetration testing and threat intelligence.
As the infrastructure of operating systems and servers is continually updated, new security vulnerabilities may be created. One of the causes of the Equifax data breach was poor patch management and system upgrades. Security teams should keep servers and other IT assets up to date with the latest security patches and updates to avoid significant financial and reputational damage. In addition, before each patch is officially applied to the operating system, the security team should test it in a non-production environment to ensure it does not cause any compatibility issues or other problems.
The server security principles outlined in the reading, such as Separation of Privilege, Least Privilege, Defense in Depth, and Work Factor, are all principles drilled into our heads for those of us with cybersecurity backgrounds. These principles are everywhere you turn in infosec, applying to almost all infrastructure, architecture, networking, and so on. They are foundational in the sense that you can apply them almost anywhere in IT as best practice. It’s genuinely refreshing to have some consistency in an industry that is ever evolving.
The NIST (National Institute of Standards and Technology) Special Publication 800-123 “Guide to General Server Security” provides guidance on securing server systems in enterprise environments to reduce the risk of a security breach and protect sensitive data.
Server Backup:
The server administrator needs to perform backups of the server on a regular basis for several reasons.
Server data should also be backed up regularly for legal and financial reasons.
Server Data Backup Policies:
Three main factors: legal requirements, mission requirements, and organizational guidelines and policies.
Server Backup Types:
Three primary types of backups exist: full, incremental, and differential.
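The three backup types differ only in which files they copy. Here is a minimal Python sketch of the selection rule for each type, using made-up file names and modification timestamps:

```python
# The three backup types, decided by file modification times:
#   full:         copy everything
#   differential: copy files changed since the last FULL backup
#   incremental:  copy files changed since the last backup of ANY type
# File names and timestamps are hypothetical, for illustration only.

def select_files(files, backup_type, last_full, last_backup):
    """files: {name: mtime}; return the sorted names to include."""
    if backup_type == "full":
        return sorted(files)
    if backup_type == "differential":
        return sorted(f for f, m in files.items() if m > last_full)
    if backup_type == "incremental":
        return sorted(f for f, m in files.items() if m > last_backup)
    raise ValueError(backup_type)

files = {"db.sql": 150, "config.ini": 90, "app.log": 210}
# Last full backup at t=100; last backup of any type at t=200:
print(select_files(files, "differential", 100, 200))  # → ['app.log', 'db.sql']
print(select_files(files, "incremental", 100, 200))   # → ['app.log']
```

The trade-off follows directly from the rule: incrementals are smaller but restoring requires the whole chain, while a differential restore needs only the last full backup plus the latest differential.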
Maintain a Test Server:
Organizations should maintain a test server for their most important servers.
A key takeaway I found interesting from the NIST Publication 800-123 was that the greatest challenge and greatest expense in developing and securely maintaining a server is the necessary people and skills required to adequately perform these functions. It’s also noted that technical solutions are not a substitute for skilled and experienced personnel. The organization should consider the following factors for the human resources involved: the required personnel (what types of personnel are required?), required skills (what are the required skills to adequately plan, develop, and maintain the servers in a secure manner?), and the available personnel (who is available in the organization?).
One important lesson I learned from this article is that security must be taken into account at the start of every project, which is typically during installation and deployment (section 3.1 of the document). Your ability to maximize security and reduce cost will increase as you decide what will be stored on the servers, for what purposes, and where they will be placed. I have learned through working with numerous system administrators that it is essential to know what can be deployed first and what can be introduced later. This helps you to design the security requirements for all of the servers and services involved while saving time and money.
From this week’s publication, it is important to note that, according to NIST, “vulnerability scanners are often better at detecting well-known vulnerabilities than more esoteric ones because it is impossible for any one scanning product to incorporate all known vulnerabilities in a timely manner.” Also, they usually detect only surface vulnerabilities and are unable to address the overall risk level of a scanned server.
Vulnerability scanner capabilities:
– Identifying active hosts on a network
– Identifying active services (ports) on hosts and which of these are vulnerable
– Identifying applications and banner grabbing
– Identifying OSs
– Identifying vulnerabilities associated with discovered OSs, server software, and other applications
– Testing compliance with host application usage/security policies.
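The last two capabilities boil down to matching what the scanner discovered against a table of known-vulnerable software. A rough Python sketch; the services, versions, and vulnerability descriptions are hypothetical, not real CVE data:

```python
# Match discovered (service, version) pairs against a table of
# known-vulnerable versions. Entries are hypothetical, not real CVE data.

KNOWN_VULNERABLE = {
    ("httpd", "2.4.49"): "path traversal",
    ("sshd", "7.2"): "user enumeration",
}

def match_vulnerabilities(discovered):
    """discovered: list of (service, version) pairs found by a scanner."""
    return [(svc, ver, KNOWN_VULNERABLE[(svc, ver)])
            for svc, ver in discovered
            if (svc, ver) in KNOWN_VULNERABLE]

found = [("httpd", "2.4.49"), ("ntpd", "4.2.8")]
print(match_vulnerabilities(found))
# → [('httpd', '2.4.49', 'path traversal')]
```

Note that this only catches exact signatures already in the table, which is exactly why the publication cautions that scanners detect well-known, surface vulnerabilities rather than the overall risk level of a server.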
Server Security Principles:
– simplicity — security mechanisms should be as simple as possible.
– fail-safe — if a failure occurs, the system should fail in a secure manner. It is usually better to lose functionality than security.
– complete mediation — mediators that enforce access policy should be employed, e.g., file system permissions, proxies, firewalls, and mail gateways.
– open design — system security should not depend on the secrecy of the implementation or its components.
– separation of privilege — functions should be separate and provide as much granularity as possible.
– least privilege — each task, process, or user is granted the minimum rights required to perform its job.
– psychological acceptability — users should understand the need for security through training and education.
– least common mechanism — when providing a feature, it is best to have a single process or service gain a function without granting that same function to other parts of the system.
– defense in depth — organizations should understand that a single security mechanism is generally insufficient.
– work factor — organizations should understand what it would take to break the system’s or network’s security features.
– compromise recording — records and logs should be maintained so that if a compromise does occur, evidence of the attack is available to the organization.
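The fail-safe principle in particular is easy to show in code. A minimal sketch with a made-up policy table: if the access check itself errors out, the system denies access, losing functionality rather than security:

```python
# Fail-safe sketch: if the access check itself fails, deny.
# The policy table is a hypothetical stand-in for a real access store.

POLICY = {"alice": {"read"}, "bob": {"read", "write"}}

def is_allowed(user, action):
    try:
        return action in POLICY[user]   # raises KeyError for unknown users
    except Exception:
        return False                    # fail closed: lose function, not security

print(is_allowed("bob", "write"))     # → True
print(is_allowed("mallory", "read"))  # → False
```

The same function also happens to illustrate least privilege: each user gets only the actions explicitly granted, and everything else is denied by default.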
I found these principles interesting as well, Asha! If you think about it, they represent defense in depth on their own. Each of these principles is a different layer of security for an individual host, in addition to the ‘defense in depth’ principle that recommends creating additional layers of defense outside the host to keep it secure.
I think the interesting part here is fail-safe. The Guide to General Server Security asks that we not lose security even if functionality is lost in the event of a failure, but this is difficult for many companies to achieve, especially for systems at the core of their production chain.
Patch management is the process of distributing and applying updates to software. Patching improves features and security, and supports adherence to compliance standards. The NIST 800-123 Guide to General Server Security, Section 4.1, recommends server administrators correct any known vulnerabilities in an OS through the activities below:
a) Install permanent fixes.
b) Identify vulnerabilities and applicable patches.
c) Create, document, and implement a patching process and/or
d) Mitigate vulnerabilities temporarily if needed and if feasible.
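Activity (b), identifying applicable patches, amounts to comparing what is installed against what each patch fixes. A rough Python sketch with hypothetical package names, versions, and patch IDs; real tools parse version numbers properly rather than comparing strings:

```python
# Activity (b) above: identify which available patches apply to the
# installed software. Package names, versions, and patch IDs are
# hypothetical; real tools parse versions rather than compare strings.

INSTALLED = {"openssl": "1.1.1a", "bash": "5.0"}

AVAILABLE_PATCHES = [
    {"package": "openssl", "fixes_below": "1.1.1k", "id": "PATCH-001"},
    {"package": "bash", "fixes_below": "4.4", "id": "PATCH-002"},
]

def applicable_patches(installed, patches):
    """A patch applies if the package is installed at a lower version."""
    return [p["id"] for p in patches
            if p["package"] in installed
            and installed[p["package"]] < p["fixes_below"]]

print(applicable_patches(INSTALLED, AVAILABLE_PATCHES))  # → ['PATCH-001']
```

Running this kind of comparison on a schedule, and testing each resulting patch on a non-production server first, is the repeatable process that activity (c) asks administrators to document.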
Appropriate management practices are critical to operating and maintaining a secure server. To ensure the security of a server and the supporting network infrastructure, organizations should implement the following practices:
1. Organizational Information System Security Policy
2. Configuration/Change Control and Management
3. Risk Assessment and Management
4. Standardized Configurations
5. Secure Programming Practices
6. Security Awareness and Training
7. Contingency, Continuity of Operations, and Disaster Recovery Planning
8. Certification and Accreditation
From this document I learned that installation and deployment planning is important and should be done ahead of time. This helps maximize security and minimize costs, which matters because we need to plan ahead. A deployment plan helps the organization maintain secure configurations, which helps identify security vulnerabilities, and it also helps identify the services being used, such as HTTP, FTP, and SMTP. The next steps would be to identify users and clients, determine the privileges for each category of user, and decide how administrators will be authenticated and how that authentication data will be protected.
In the NIST (National Institute of Standards and Technology) Special Publication 800-123, “Guide to General Server Security,” I noted that logging is a cornerstone of a sound security posture. Capturing the correct data in the logs and then monitoring those logs closely is vital. Network and system logs are both important, especially system logs in the case of encrypted communications, where network monitoring is less effective. Enabling logging mechanisms allows the logs to be used to detect failed and successful intrusion attempts and to trigger alerts when further investigation is needed. Procedures and tools need to be in place to process and analyze the log files and to review alerts.
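The detect-and-alert idea can be sketched in a few lines of Python: count failed login attempts per source in the logs and flag any source over a threshold. The log format here is a made-up simplification of what a real log analyzer would parse:

```python
# Count failed logins per source IP and flag sources over a threshold.
# The log format is a hypothetical simplification.

from collections import Counter

def failed_login_sources(log_lines, threshold=3):
    """Return source IPs with at least `threshold` failed login attempts."""
    counts = Counter(line.split()[-1] for line in log_lines
                     if "FAILED LOGIN" in line)
    return {ip for ip, n in counts.items() if n >= threshold}

log = [
    "09:01 FAILED LOGIN from 10.0.0.5",
    "09:02 FAILED LOGIN from 10.0.0.5",
    "09:02 LOGIN OK from 10.0.0.9",
    "09:03 FAILED LOGIN from 10.0.0.5",
]
print(failed_login_sources(log))  # → {'10.0.0.5'}
```

A real deployment would feed these alerts into a centralized monitoring tool rather than printing them, but the thresholding idea is the same.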
The NIST 800-123 Guide to General Server Security provides direction on how to protect servers in an organization. It presents vital security concepts such as access control, network security, and physical security and offers recommendations on implementing security measures to safeguard servers from attacks. The guide emphasizes the importance of regular security assessments, maintenance, and keeping software and hardware configurations up to date. It also addresses critical security concerns related to server administration, including the significance of robust passwords, role-based access control, and monitoring and logging server activity. Overall, the NIST 800-123 guide provides a comprehensive framework for securing servers and mitigating security threats.
NIST 800-123 describes how to plan for security between servers and applications and provides appropriate safeguards to protect the server operating system and server software. It explains that servers provide services through network communication as a primary function. As mentioned in Section 4, after the installation and deployment plan of the operating system is completed, the following steps should be completed to ensure system security:
Patch and update the OS
Harden and configure the OS to address security adequately
Install and configure additional security controls, if needed
Test the security of the OS to ensure that the previous steps adequately addressed all security issues.
Patching and updating an operating system helps prevent attackers from using known vulnerabilities as entry points to gain access.
My thoughtful read was on maintaining the security of the server. Administrators must continuously manage a server’s security after it is deployed. Handling and analyzing log data, performing routine server backups, recovering compromised servers, routinely testing server security, and securely managing servers remotely are all recommendations for securely operating servers. Logging is one of the techniques I’d like to share. Logging entails gathering the right data and then keeping track of the logs. Logs can be used to find unsuccessful and successful intrusion attempts as well as to raise alerts when further investigation is needed. Organizations should use a program that actively monitors logs to spot security risks and notify them. The publication discusses SIEM software as a potential tool for centralized logging.