Of the cloud deployment models listed in NIST SP 800-145 (private, community, public, and hybrid) which do you think is the most secure? What are some security considerations an organization should think about when deciding on a model?
Hi Matthew,
Great question. I briefly touched on this in my discussion post. I think there is no debate that private is the most secure cloud deployment model. However, determining which model works best for your organization is a more critical question and involves a number of considerations. For example, your particular business needs could include balancing short-term and long-term total costs, addressing data governance regulations, ensuring uptime for mission-critical applications, etc. I think the private cloud would be ideal for large organizations that have an underutilized QA infrastructure or small and medium-sized organizations that lack QA infrastructure assets and need them for a longer duration. On the other hand, a public cloud may be more suitable for small and medium-sized organizations that do not own any QA infrastructure and have short-term testing requirements.
Hi Matthew and Elizabeth,
I agree with Elizabeth. It all depends on how entities invest in their infrastructure based on cost-effectiveness.
What is shared security responsibility model in cloud computing ?
Hi Shubham,
Shared security responsibilities vary by service type and provider: cloud service providers take on some, but not all, of them, and your security team will hold the rest. Here is a detailed explanation of the Shared Responsibility Model; I hope it helps you:
https://cloudsecurityalliance.org/blog/2020/08/26/shared-responsibility-model-explained/
I don’t know if anyone has tried shopping on YEEZYSUPLY.COM, but every shopping experience is really bad. During every sneaker launch, I can’t enter the shopping page, and I can’t add the sneakers to the cart. One thing I know is that some people use bots to buy sneakers. Is this a kind of DoS?
I would argue that this is a distributed denial of service. The service is degraded by the volume of requests being sent from various hosts. They are only associated by the shared intent to purchase something from the site.
The distinction in this case is that the DDoS is driven by demand and not malicious intent. Bots are being used to complete transactions faster than humans can, which can overwhelm the capacity of YEEZYSUPLY.COM’s servers. It’s debatable whether the use of bots is malicious in general, i.e., does it go against the intent of a fair purchase? That said, I think the pattern aligns with DDoS and can be classified as such.
I would definitely agree with Matt’s point that the driving factor here is demand and not malicious intent, and I get his reasoning for why it could be argued as DDoS. It is a similar situation to attempting to purchase a PlayStation 5 from any online website. As soon as the selling company lists them, the site is instantly flooded by bot requests, making the purchase nearly impossible, and hence why you now see them on eBay/Mercari for 200% market price. Despite attempts to mitigate the presence of bot floods, such as private emails, captchas, & queues, there still seem to be huge issues with purchasing these high-demand products online.
Is there a good reason why most organizations do not use Honey Pots? It seems like it could be a useful strategy that would not cost an organization much to implement
I agree that more organizations should use honeypots. These can be incorporated as part of an overall intrusion detection strategy when placed inside the network. The assumption is that an attacker would target the honeypot due to its vulnerabilities once the network is breached. The honeypot would then alert on this behavior as there is no other business process that uses the device. This type of implementation addresses compromised accounts acting maliciously within the network; and, it would have been helpful with the Titan cluster incident discussed in the recent case study.
Overall, I think the use of honeypots comes down to costs and the return on investment. Depending on the situation, the value of the protected data may not require the additional investment of deploying a honeypot. I thought the following article from Attivo was interesting: https://www.attivonetworks.com/blogs/30th-anniversary-of-honeypots/ Attivo provides honeypots and the article details a brief history of the concept.
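To make the alert-on-any-connection idea concrete, here is a toy Python sketch (the port number and alert format are my own assumptions, not from the reading; a real deployment would use a purpose-built honeypot product and forward alerts to an IDS/SIEM):

```python
import socket
from datetime import datetime, timezone

def format_alert(ip, port, ts):
    # A real deployment would ship this to a SIEM/IDS, not stdout.
    return f"[HONEYPOT ALERT {ts}] connection from {ip}:{port}"

def run_honeypot(host="0.0.0.0", port=2222):
    # No legitimate business process should ever touch this port,
    # so ANY connection attempt is a signal worth alerting on.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    while True:
        conn, (ip, cport) = srv.accept()
        print(format_alert(ip, cport, datetime.now(timezone.utc).isoformat()))
        conn.close()  # no service is actually offered
```

The key property is exactly what the post describes: because no business process uses the device, the false-positive rate of this alert is essentially zero.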
With reference to the reading “An Introduction to DDos”, which method mentioned do you think is the best for preventing / mitigating / or protecting against distributed denial of service attacks?
Black holing seems the smartest method because it drops all packets from the attacker, but if it can be combined with validating the handshake, these strategies would serve better until a new strategy is in place.
I think that black holing is a convenient way for any entity to protect against DoS attacks. However, an entity needs to combine it with other methods to prevent DoS attacks more effectively.
In my opinion, the most effective method would be for more people to educate themselves. Antivirus, firewalls, & common sense when it comes to not downloading sketchy files & visiting weird websites could all go a long way in preventing these attacks. Realistically, I’d be interested in seeing more organizations utilize honeypots, as it seems like a good idea to get an understanding of how an attacker may go after a similar “dummy” server, but it could potentially be time-consuming & cost-inefficient.
In the “An Introduction to DDoS – Distributed Denial of Service attack” reading the article suggests several preventions/mitigations against DDoS attacks. Can you identify any additional mitigations that could be used to thwart these attacks?
Hi Bryan,
The first thing that I considered regarding additional preventions and/or mitigations against DDoS attacks was creating an incident response plan. Having clear, step-by-step instructions for employees to walk through when an incident like this occurs can ensure that staff members will respond promptly and effectively. In addition, network segmentation is another option when dealing with this type of attack that separates systems into subnets with unique security controls and protocols. Instead of solely relying on firewalls for network security, an organization should also consider having data centers in different locations and networks so that your network is not affected all at once.
Suggested by the question bank in the reading.
Question: How long do you think DDoS attacks last, and what are the possible mitigations in scope?
Based on a quick Google search, DDoS attacks can last as long as 24 hours. For me, the longer a DDoS attack lasts, the more financial and reputational damage the entity suffers. All businesses with websites need to prepare themselves to prevent DDoS attacks, because it is not only a matter of law but also of business continuity.
What are some of the indicators that alert the IT department for DoS attacks? (symptoms of DoS attacks)
Obviously, there would be a change in speed; most likely the traffic will become very slow. Another symptom would be odd patterns, with an unusual increase or decrease in traffic.
I’m in agreement with Mohammed; those are certainly indicators of a DDoS compromise. In addition, I’d like to add that the inability to access a particular website is also a symptom of compromise.
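The “odd traffic patterns” symptom mentioned above can be caught programmatically. Here is a rough Python sketch of the idea (the window size and threshold factor are arbitrary assumptions, not values from the reading); it flags any minute whose request count far exceeds the rolling baseline:

```python
from collections import deque

def make_detector(window=10, factor=3.0):
    """Return a checker that flags request counts far above the
    rolling average -- a crude stand-in for the 'odd traffic
    pattern' DoS indicators discussed above."""
    history = deque(maxlen=window)

    def check(requests_per_minute):
        if len(history) >= window:
            baseline = sum(history) / len(history)
            anomalous = requests_per_minute > factor * baseline
        else:
            anomalous = False  # not enough history to judge yet
        history.append(requests_per_minute)
        return anomalous

    return check
```

Real monitoring tools use far more robust statistics, but the principle is the same: establish a baseline, then alert on large deviations in either direction.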
A network has multiple layers, precisely seven in the OSI model. What are the consequences if one layer is attacked by hackers? Would it damage all the layers at the same time?
Hi Ornella,
I think that it would depend on what mitigations are in place and what layer is being attacked. For example, if a bad actor is attacking the transport layer and an SSL/TLS VPN is being used, it could potentially be problematic. However, if an IPsec VPN is being used, it may not even affect normal operations, as IPsec operates at the Internet layer.
Ryan, I agree with you: the IPsec VPN tunnel will provide data security for otherwise insecure protocols. But if the attack is aimed at the transport layer, the goal is to launch a DDoS attack against that layer itself. Beyond that, attackers may find many ways into the network environment, particularly through the session layer, which is frequently targeted.
We are discussing how great Cloud is, however, what happens if Cloud Fails?
This is a great example of why having a Service Level Agreement (SLA) in place is so important. If a company’s systems go down, it is ultimately their responsibility regardless of whether they are using a cloud or on-premise solution. Having an SLA in place to ensure that the cloud provider has backup measures in place (or to know that they don’t so that you can put your own backup measures in place) could save your organization.
How can cloud computing solutions be utilized to mitigate against certain DDoS attacks?
Hey Ryan,
While cloud computing solutions cannot eliminate DDoS attacks, they can definitely mitigate the threat, considering that the cloud has more bandwidth and its servers are typically spread across different locations. The cloud also offers high levels of cybersecurity, including firewalls and threat-monitoring software that can help protect your assets and network. Not to mention, reputable cloud providers offer network redundancy, duplicating copies of your data, systems, and equipment so that in the event of a DDoS attack, you can switch to secure access on backed-up versions.
Why is ARP used to resolve 32-bit IP addresses? Could ARP be used to resolve longer addresses?
What are the most successful mitigation methods for DDoS attacks?
Some of the most successful mitigation methods for DDoS attacks are:
Looking at statistical patterns to identify attacks early on and address them quickly
Having additional servers or available cloud resources that can handle the extra traffic while the attack is being addressed
Throttling incoming traffic to prevent the server from going down entirely in the case of a DDoS attack
Using honeypots to catch hackers before they attack important/sensitive systems
Aggressive caching so that the system can handle more requests
Using a cloud provider to handle the overflow of requests and to address the attack
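The throttling item in the list above is often implemented as a token bucket. Here is a minimal Python sketch (the rate and capacity values are illustrative assumptions; production systems would enforce this at the load balancer or firewall, usually per client IP):

```python
import time

class TokenBucket:
    """Crude throttling sketch: allow at most `rate` requests per
    second with bursts up to `capacity`; excess requests are rejected
    instead of being allowed to take the server down entirely."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.clock = clock        # injectable for testing
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # serve the request
        return False      # shed load (e.g., return HTTP 429)
```

During a flood, legitimate users still suffer, but the server stays up and recovers the moment the flood subsides, which is the point of throttling as a mitigation.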
You can protect against SYN flood attacks by:
Configuring the server to drop the oldest half-open connection (the one still missing its ACK)
Configuring the firewall (usually with paid software or a service) to handle the handshake and only pass completed handshakes on to the server
Configuring the server to use SYN cookies, dropping the SYN requests after the SYN-ACK is sent, and then proceeding with the connection only when an ACK response is received
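The SYN cookie idea in the last point can be sketched in a few lines of Python. This is a simplified illustration of the concept only (real SYN cookies are computed in the kernel and also encode MSS bits; the secret and field layout here are my own assumptions): the server derives its initial sequence number from the connection tuple instead of storing per-connection state, so a flood of SYNs consumes no memory.

```python
import hashlib
import hmac
import time

SECRET = b"rotate-me-periodically"  # server-side secret (assumption)

def syn_cookie(src_ip, src_port, dst_ip, dst_port, minute=None):
    """Encode the connection identity into a 32-bit initial sequence
    number, so the server keeps no state until the final ACK arrives."""
    minute = int(time.time() // 60) if minute is None else minute
    msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{minute}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def check_ack(ack_seq, src_ip, src_port, dst_ip, dst_port, minute=None):
    # A legitimate client echoes our ISN + 1 in its final ACK;
    # recompute the cookie and compare instead of looking up state.
    return ack_seq - 1 == syn_cookie(src_ip, src_port, dst_ip,
                                     dst_port, minute)
```

Because only a client that actually received the SYN-ACK can produce a valid ACK, spoofed SYN floods never cause the server to allocate a connection.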
Why is it a more feasible option for many companies – and end users – to use cloud computing technology, and what kind of risks would that impose?
Hi Micheal,
Often we see public cloud deployment being suitable for organizations for most uses. However, it is not designed for handling data that introduces high risk. Once the deployment model is selected, different security controls should be in place to deal with the risk.
Data and encryption: Data privacy is always a risk if data is stored in the cloud unencrypted. Also, unauthorized access by malicious employees or intruders becomes a risk for the cloud.
Compliance requirements: Different geographical areas have their own data privacy laws. Since some public cloud providers do not disclose where data is physically stored, organizations should consider legal requirements and manage the resulting compliance risk.
Control and visibility: The provider is responsible for the administration of the infrastructure. However, it creates a lack of transparency.
Security responsibility: vendors and other shared users are always a potential threat to cloud computing and its services, so the organization has to make sure each party's responsibilities are communicated and managed.
Cloud computing solutions are scalable and in most cases have a much lower startup cost than on-premise solutions. Additionally, cloud providers may include solutions for security and/or backup in the case of a disaster. Cloud customers should check SLAs to ensure that the security policies of the provider match or surpass their own security levels. Additionally, some cloud providers will include wording stating that they own any information that the user/customer has stored in their cloud. This can be problematic if the customer/user tries to switch providers. Finally, if the provider goes under and shuts down, it may be difficult to recuperate data and get operations running in a timely manner.
What is the cloud model composed of? What are the essential characteristics, service models, and deployment models?
Hi Joshua,
Deployment models are defined by where the infrastructure for deployment resides and who has control over it. Therefore, each of the models meets different organizational needs. The types of deployment models are public, private, community, and hybrid.
Service models: SaaS, PaaS, IaaS
Architecture models: mainframe systems, client/server computing, internet computing, cloud computing
Essential characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service
Migration phases: planning, contracts, migration, operation, termination
What are the pros and cons of a network that only uses ethernet vs a network that uses WiFi?
Hi Amelia,
The development of IoT apps created a high demand for WiFi so that smart devices can connect to the internet wirelessly. However, other factors still make Ethernet better. Regarding speed, Ethernet definitely has an advantage, as it transfers data faster. Ethernet is also more reliable, because WiFi can suffer from signal interference and other environmental factors. WiFi is also harder to secure, and its data flow needs to be encrypted to be protected in transit. On the other hand, WiFi is easy to install, whereas Ethernet requires you to install cable infrastructure.
According to “An Introduction to DDoS – Distributed Denial of Service attack” many organizations do not use honeypots, and I was wondering what you think the reasoning is behind this? Being able to study attacker’s attack patterns, intentions, & potentially sources seems like a pretty smart way to go about dealing with what is typically a difficult cyber-threat for organizations to manage.
Alexander, most network attacks directly target the transport layer, unless the intent is a DDoS flood meant to bring the session layer to its knees. That said, honeypots are still in practical use, and the lessons learned from them help defenders understand the different tactics attackers use to flood a network. Organizations learn a lot about how attackers get into the network environment, for example through the commonly used SYN flood attack.
Have you ever been the victim of DDOS? What happen?
A DDoS attack involves flooding a website with requests over a short period of time with the goal of overwhelming the site and crashing it. To answer your question: no, I have not been a victim. In contrast to a simple DoS, which emanates from a single place, the “distributed” element means that these attacks come from numerous locations at the same time. If an organization’s network is subjected to a DDoS assault, the network will be bombarded with thousands of requests from many sources over the course of minutes, if not hours. These requests aren’t the consequence of a website experiencing a sudden surge in legitimate traffic; instead, they’re automated, and the number of sources involved depends on the scope of the attack.