-
David Lanter wrote a new post on the site National Center of Academic Excellence in Cybersecurity 7 years, 2 months ago
ITACS Students win 1st and 2nd place in ISACA Philadelphia Chapter’s 2017 Scholarship Competition!
1st Place – $2,500 Scholarship awarded to Ioannis (“Yanni”) Haviaras (read Yanni’s essay here)
2nd […] -
David Lanter wrote a new post on the site National Center of Academic Excellence in Cybersecurity 7 years, 10 months ago
Deval Shah
IT Security – U.S. Customs and Border Protection (Department of Homeland Security)
209 Speakman Hall
Deval.Shah@temple.edu
Course: MIS 5213 Intrusion Detection and Response -
David Lanter wrote a new post on the site National Center of Academic Excellence in Cybersecurity 7 years, 10 months ago
Orlando Barone
President – Barone Associates
209 Speakman Hall
Orlando.Barone@temple.edu
Course: MIS 5287: Business Skills for IT Auditors -
David Lanter wrote a new post on the site National Center of Academic Excellence in Cybersecurity 7 years, 10 months ago
Larry Brandolph
Chief Information Security Officer and Vice President of Computer Service and Infrastructure – Temple University
Suite 705 Conwell Hall
Larry.Brandolph@temple.edu
Course: MIS 5170: S […] -
David Lanter wrote a new post on the site National Center of Academic Excellence in Cybersecurity 7 years, 10 months ago
Richard Flanagan
Assistant Professor – Temple University
209 Speakman Hall
ryflanag@temple.edu
Course: MIS 5202: IT Governance -
David Lanter wrote a new post on the site National Center of Academic Excellence in Cybersecurity 7 years, 10 months ago
Ed Ferrara
Chief Information Security Officer – CSL Behring
209 Speakman Hall
Eferrara@temple.edu
Course: MIS 5208: Data Analytics for IT Auditors -
David Lanter wrote a new post on the site National Center of Academic Excellence in Cybersecurity 7 years, 10 months ago
Patrick Wasson
Assistant Director of Application Development – Lewis Katz School of Medicine
209 Speakman Hall
Patrick.Wasson@temple.edu
Course: MIS 5122: Enterprise Architecture for IT Auditors -
David Lanter wrote a new post on the site National Center of Academic Excellence in Cybersecurity 7 years, 10 months ago
Brian Green
DICOM/HL7 Systems Analyst – UltraRAD
209 Speakman Hall
Brian.Green@temple.edu
Course: MIS 5209: Securing the Digital Infrastructure -
David Lanter's profile was updated 7 years, 11 months ago
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Why is it important for a business to care about the difference between identity management and access management?
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What is the difference between identity management and access management?
Why is it important for a business to care about the difference between identity management and access management?
What is the one […] -
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What is the one interesting point you learned from the readings this week? Why is it interesting?
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Recorded lecture: Video
Lecture presentation: Slides
Additional material (not covered in recorded lecture) students are responsible for learning: Read this
Quiz w/solutions: Quiz w/solutions
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Question 1: How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Question 2: […]
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
-
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Network capacity planning is an effective way to determine whether an organization’s network capacity is adequate. A key feature of network planning is determining how much bandwidth the network actually needs. QoS is a vital feature in network capacity planning. All links have congestion points and periodic spikes in traffic. QoS policies are essential to ensure traffic spikes and congestion points are smoothed out and more bandwidth is allocated to critical network traffic. Without proper QoS policies in place, all traffic has equal priority, and it is impossible to ensure your business-critical applications are getting sufficient bandwidth. For instance, without detailed knowledge of the type of traffic passing through a network, it is not possible to predict whether QoS parameters for services like VoIP are meeting target levels. Network traffic monitoring will give you the visibility you need to properly plan network capacity and ensure QoS. Ipswitch Flow Monitor network traffic monitoring is invaluable for understanding bandwidth requirements and for network capacity planning.
Inadequate capacity planning can lead to the loss of customers and business. Excess capacity can drain the company’s resources and prevent investment in more lucrative ventures. The questions of when capacity should be increased and by how much are critical decisions. Failure to make these decisions correctly can be especially damaging to overall performance when time delays are present in the system.
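A minimal sketch of the kind of utilization monitoring described above, assuming the third-party psutil library; the interface name, link capacity, and threshold are illustrative assumptions, not a real deployment:

```python
# Sample interface throughput and flag utilization above a planning threshold.
import time
import psutil

INTERFACE = "eth0"          # hypothetical interface name
LINK_CAPACITY_BPS = 100e6   # assume a 100 Mbps link
THRESHOLD = 0.80            # flag sustained utilization above 80%

def utilization(interval=1.0):
    """Return link utilization (0..1) over `interval` seconds."""
    before = psutil.net_io_counters(pernic=True)[INTERFACE]
    time.sleep(interval)
    after = psutil.net_io_counters(pernic=True)[INTERFACE]
    bits = (after.bytes_sent - before.bytes_sent +
            after.bytes_recv - before.bytes_recv) * 8
    return bits / (interval * LINK_CAPACITY_BPS)

if __name__ == "__main__":
    u = utilization()
    print(f"{INTERFACE}: {u:.1%} utilized")
    if u > THRESHOLD:
        print("Utilization above planning threshold; "
              "critical traffic may need QoS prioritization.")
```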
-
Question 1: How would you determine if an organization’s network capacity is adequate or inadequate?
What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Capacity is the available resources for the network. To determine whether the capacity of the network is adequate, you would conduct a performance test and compare the results to the capacity of the network. For example: if the machine has 8 GB of RAM installed, the usable capacity of RAM for the system is less than 8 GB (actually well under 8 GB, because the system will shut down if it hits a certain threshold, which is set closer to 7 GB). The impact of a system that reached its maximum capacity would be a system shutdown. To make the system functional again, you would have to increase the capacity of the resource. In my example, you would add more RAM, provided the board/system is able to utilize the added resources (the motherboard may not be able to support the additional capacity).
This is what happens during a DDoS (Distributed Denial of Service) attack. -
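A minimal sketch of the RAM example above, assuming the third-party psutil library; the 7 GB threshold mirrors the figure in the post and is purely illustrative:

```python
# Compare current memory use against installed capacity and a shutdown threshold.
import psutil

GIB = 1024 ** 3
THRESHOLD_GIB = 7  # illustrative threshold from the example above

mem = psutil.virtual_memory()
used_gib = (mem.total - mem.available) / GIB
print(f"Installed: {mem.total / GIB:.1f} GiB, in use: {used_gib:.1f} GiB")

if used_gib > THRESHOLD_GIB:
    print("Memory is near capacity: add RAM (if the board supports it) "
          "or reduce load before the system becomes unstable.")
```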
Question 2: Suppose an organization is only able to filter and selectively block either: a) network traffic coming into its intranet from the internet (incoming) or b) network traffic going out from the intranet to the internet (outbound). With respect to each of the 3 information system security objectives (i.e. confidentiality, integrity, and availability), if you could only filter and selectively block one network traffic direction, which one would you concentrate on and why?
I would block all incoming traffic because it would be better for the 3 information system security objectives and still allow the business to operate; it would just operate the way businesses operated in the 1980s: no email, only phone calls and face-to-face meetings; no website visiting, only getting in your car and visiting the store, or browsing the website from another location, but no incoming traffic.
This would only allow a security breach from inside the organization. -
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Network capacity is the maximum capacity of a link or network path to convey data from one location in the network to another. Network capacity planning is a method to determine whether the organization’s network capacity is adequate. A key feature of network planning is determining how much bandwidth the network actually needs.
The common approaches to network capacity planning are as follows (a short sketch below illustrates how the views differ):
1. Long-range views of average utilization: this shows a long-term trend of utilization, but the long-term view averages out spikes of high utilization, thus hiding the problem.
2. Peak utilization, e.g. showing the busiest minute for each day in a month: this shows which days had a busy minute, but doesn’t give insight into how long a link is congested.
3. Traffic totals: easy to show all links in a single view, showing the links with the most traffic and even periodic trends such as month-by-month usage. However, it gives no indication of congestion except in extreme cases.
Inadequate network capacity could make the organization’s network unstable, unresponsive, or worse, unavailable. Bad network performance results in ineffective service to the business and unsatisfied customers. The organization could lose potential customers through a low level of service, so network connectivity is important to an organization’s effectiveness and efficiency.
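A minimal sketch of how the three views differ on the same data; the hourly utilization samples are invented so that the average looks healthy while the peak exposes congestion:

```python
# Given hourly utilization samples (fractions of capacity), compare the
# long-range average, the peak, and the traffic total.
samples = [0.20] * 22 + [0.95, 0.98]  # two congested hours in a quiet day

average = sum(samples) / len(samples)      # view 1: long-range average
peak = max(samples)                        # view 2: peak utilization
total = sum(samples)                       # view 3: traffic total (relative)

print(f"average {average:.0%}, peak {peak:.0%}, total {total:.1f} link-hours")
# The ~26% average suggests ample capacity; the 98% peak shows the link is
# congested during the busiest hours, which the average hides.
```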
-
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Network capacity must be able to meet service-level agreement (SLA) targets for delay, jitter, loss, and availability. SLA requirements for a traffic class can be translated into bandwidth requirements. The ability to meet SLAs depends on ensuring that core network bandwidth is adequately provisioned, which in turn depends on network capacity planning (a short sketch of steps 1-5 follows this post):
1. Measure the (aggregate) traffic and forecast its use. The bandwidth must be able to handle the traffic easily.
2. Verify that the bandwidth is sufficiently over-provisioned to meet committed SLAs.
3. Perform simulation testing to overlay the forecast demands.
4. Tests should be simulated taking failure cases into consideration.
5. Forecast the usage against the provisioned bandwidth. If the results diverge, the capacity is inadequate.
6. Possibility and investigation of congestion: the distribution of bandwidth must be such that network availability remains good even during high traffic.
7. Costs: it is important to consider overload on the network, and no situation should be under-provisioned, but costs will increase if the network is heavily over-provisioned and under-utilized.
Companies can perform scenario-based customized testing to determine whether capacity is adequate:
– What will the response time be if traffic doubles?
– How will applications perform after a new application is added, and how many users will it have?
– How will service levels be affected if VMs run at their full capacity?
– How will changes to I/O devices, network bandwidth, and the size and number of CPUs affect daily operations?
Companies can also perform automated calculations to determine network capacity adequacy:
– Which of my applications have failed to meet their SLAs in the last 6 months?
– Where will the future bottlenecks be?
– How long will it be before my current configurations fail to meet service levels?
– What response times will applications need during the next month?
If the capacity is inadequate, many things could go wrong, chiefly the unavailability of the network. Network performance will be slow, which will affect daily business tasks, and the time lost, especially during peak hours, leaves systems and resources inefficient and incurs costs.
http://www-07.ibm.com/services/pdf/nametka.pdf
http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cust_contact/contact_center/icm_enterprise/icm_enterprise_10_0_1/Design/design_guide/UCCE_BK_UEA1023D_00_unified-cce-design-guide/UCCE_BK_UEA1023D_00_unified-cce-design-guide_chapter_01101.html -
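As a rough illustration of steps 1-5 above, a minimal sketch that translates per-class forecasts into a bandwidth requirement, overlays a failure case, and compares against provisioned capacity; every number here is invented for illustration:

```python
# Translate SLA traffic-class forecasts into a bandwidth check.
forecast_mbps = {"voip": 40, "business_apps": 120, "bulk": 60}
provisioned_mbps = 300
headroom = 1.25           # over-provisioning factor to meet committed SLAs
failure_multiplier = 2.0  # traffic doubles onto this link if a peer link fails

demand = sum(forecast_mbps.values()) * headroom
failure_demand = demand * failure_multiplier

for label, required in [("normal", demand), ("failure case", failure_demand)]:
    ok = required <= provisioned_mbps
    print(f"{label}: need {required:.0f} Mbps of {provisioned_mbps} Mbps "
          f"-> {'adequate' if ok else 'inadequate'}")
```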
To know whether the organization’s network capacity is adequate, the organization must engage in capacity planning and performance management. Capacity planning is the process of determining the network resources required to prevent a performance or availability impact on business-critical applications. Performance management is the practice of managing and monitoring network response time, consistency, and quality. Some of the tools within this process are what-if analysis, baselining, trending, exception management, and QoS management.
The first part of capacity planning and performance management is to gather configuration and traffic information. This allows the organization to observe statistics, collect capacity data, and analyze traffic to create a baseline for the organization’s network capacity. The baseline consists of inventorying resources (software, applications, network communication, VoIP, etc.), users, and the bandwidth required to enable the organization to run its day-to-day and critical business applications.
Once a baseline is established, a trend analysis can help the organization identify network and capacity issues and understand future upgrade requirements (a short forecasting sketch follows this post). For example, a new internal portal allows the organization’s employees to share videos of community service events. As more users become aware of the new portal, it receives more traffic and the performance of other web applications is reduced.
Once the problem is identified, the organization will plan for the changes and do a what-if analysis to determine the effect of a network change. After it is evaluated, the changes are implemented.
Inadequate capacity can lead to employees not being able to get the resources they need to do their jobs. For example, if a supply chain application were experiencing performance issues due to inadequate capacity and an employee were unable to order production materials, then time and money would be lost through idle production capacity and resources.
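A minimal sketch of the trend analysis described above, fitting a least-squares line to invented monthly peak-utilization baselines to estimate when the link runs out of headroom:

```python
# Fit a line to monthly peak utilization and estimate when it reaches 100%.
months = [1, 2, 3, 4, 5, 6]
peak_util = [0.52, 0.55, 0.60, 0.63, 0.68, 0.71]  # fraction of capacity

n = len(months)
mx = sum(months) / n
my = sum(peak_util) / n
slope = sum((x - mx) * (y - my) for x, y in zip(months, peak_util)) / \
        sum((x - mx) ** 2 for x in months)
intercept = my - slope * mx

months_to_full = (1.0 - intercept) / slope
print(f"Growing ~{slope:.1%} per month; "
      f"capacity exhausted around month {months_to_full:.0f}")
```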
-
Good summary, Jianhui.
I think that the “long-range view of average utilization” is probably not the best indication of required network capacity. As we know, an average is just a derived number, not an absolute one. What I mean by this is: 1) it doesn’t account for capacity utilization during peak hours, and 2) it blends in idle network capacity, distorting the stated capacity requirements. If you build your network based on the average, then you will experience latency or performance issues during peak hours of usage. I guess the hardest part is the tradeoff between how much capacity the organization wants to have available at all times and how much it is willing to spend to maintain it.
-
Fred, good post. I believe the performance test is the most common way to help an organization determine whether or not its network capacity is adequate. The organization can then compare the result with previous results or with the rated capacity to check the network capacity. It is critical for the network to have excess capacity to help withstand a DDoS attack.
-
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Network capacity is the maximum capacity of a link or network path to convey data from one location in the network to another. (From IT Law Wiki)
Network capacity planning is an effective way to identify, within a predictable time frame, the parameters that can affect an organization’s network performance or availability. By conducting performance simulation and capacity analysis based on traffic information, infrastructure capacity, and network utilization, the results provide indications of the expected loading on the network. Planners can then compare this against the provisioned network capacity to determine the maximum capability of current resources and the amount of new resources needed to meet future requirements.
Inadequate network capacity may lead to overload or a network crash.
-
Priya,
This is a very good explanation of this question. You mentioned all the important points that should be considered to determine whether an organization’s network capacity is adequate or inadequate. The point I like most is that tests should be simulated taking failure cases into consideration. It is very important to manage network downtime and failures to prevent loss of business. BCP/DR should always be considered of prime importance when assessing the adequacy of the network.
Let’s take the example of the banking industry. Network downtime for a bank can make online banking unavailable to its customers and can also stop the bank’s daily activities. This can lead to financial loss for its customers as well as for the bank. Hence this should be taken care of while designing the network and testing its adequacy to withstand any kind of unwanted event. -
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Network capacity planning would be an ideal way to identify whether the current infrastructure can support the amount of resources necessary for applications to operate sufficiently during peak business hours. Depending on the network in question and its service provider, most providers offer online tools that enable the client to monitor traffic in real time as well as set utilization reports to run automatically at certain times of the day or month. Also, there are a number of techniques that can be used, such as traffic shaping and quality of service/class of service, to prioritize traffic and ensure that mission-critical applications get the bandwidth they need during peak hours.
If a new application were rolled out that was a resource “hog,” tying up bandwidth, it would have negative performance consequences for other applications sharing that bandwidth. This can be very costly to a business, specifically in the financial sector, where real-time information is needed to make money effectively. Any type of lag or jitter in these instances can be devastating in the financial markets. All service providers will back specific types of lines with SLAs surrounding availability, mean time to repair, jitter, latency, etc. It would be good practice to also compare actual performance against those SLAs.
-
In order to determine whether network capacity is adequate or inadequate, we need network capacity planning, which includes finding out:
1) Traffic characteristics – type and amount of traffic
• Traffic volumes and rates
• Prime versus non-prime traffic rates
• Traffic volumes by technology
2) Present operational capacity
• WAN percent capacity used
• LAN percent capacity used
3) Evidence of congestion
• Packet discards, which can be checked with a ping operation (see the sketch after this post)
• Top error interfaces
4) Network growth over a period of time
• Requires a detailed view into current bandwidth usage, combined with historical accounts of capacity usage
5) QoS
• To check whether business-critical applications are getting sufficient bandwidth
If network capacity is inadequate, it will lead to slow network performance, which will disrupt the company’s critical operations. Delays in deliverables can tarnish the company’s image. During the peak office hours of the morning, a network slowdown will also waste employees’ time if critical applications run slowly.
http://www-07.ibm.com/services/pdf/nametka.pdf
https://www.ipswitch.com/resources/best-practices/network-capacity-planning -
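A minimal sketch of the packet-discard check mentioned under “evidence of congestion,” shelling out to the system ping utility; it assumes a Unix-like ping whose summary line reports “% packet loss,” and the gateway hostname is a placeholder:

```python
# Estimate packet loss toward a host using the system ping utility.
import re
import subprocess

def packet_loss(host: str, count: int = 10) -> float:
    """Return percent packet loss reported by ping, or -1 on parse failure."""
    result = subprocess.run(["ping", "-c", str(count), host],
                            capture_output=True, text=True)
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    return float(match.group(1)) if match else -1.0

loss = packet_loss("gateway.example.com")  # hypothetical router to test
print(f"packet loss: {loss:.0f}%")
if loss > 1:
    print("Sustained discards suggest congestion on the path.")
```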
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Capacity planning is the process of determining whether the production capacity an organization needs to meet changing demands for its products is adequate.
There are three steps for capacity planning:
1. Determine service level requirements
2. Analyze current capacity
3. Plan for the future
To manage capacity effectively and to provide adequate bandwidth for critical services, there are some questions people should keep in mind: How much bandwidth does your business need? How close to maximum utilization are your servers? Which network interfaces will be most utilized 30 days from now?
If a portion of network capacity is not adequate, it will lead to loss of customers and business, because the network may be interrupted and therefore unable to perform the service for clients. In addition, negative impacts can be expected on business operations, and possibly data corruption.
Source: https://www.sevone.com/supported-technologies/capacity-planning-and-bandwidth-management
-
Network capacity is the measurement of the maximum amount of data that may be transferred between network locations over a link or network path. Measuring network capacity is complex, and many different variables (network engineering, subscriber services, the rate at which handsets enter and leave a covered cell site area) and scenarios make actual network capacity measurements rarely accurate. With that said, a key indicator of inadequate network capacity would be too much network traffic causing bottlenecks in your processes and a slow network, which could be caused by an overload of network errors and network congestion.
-
Great answer, Wenlin. You’ve correctly pointed out that “QoS policies are essential to ensure traffic spikes/congestion points are smoothed out, and more bandwidth is allocated to critical network traffic”. I’d like to add that to quantitatively measure quality of service, various aspects of the network service are often considered, such as error rates, bit rate, throughput, transmission delay, availability, jitter, etc. If a portion of a network’s capacity is inadequate, one can expect problems such as dropped packets, latency, out-of-order delivery, and errors due to interference and low throughput.
-
Well put, Vaibhav. You’ve covered the important points for checking network capacity adequacy very well. I especially liked that you mentioned that network growth over time is also an important area to look into. It is easy to overlook slowly but gradually degrading network performance until a major incident occurs. To ensure that such a scenario doesn’t take place, it is important to look at trends for network performance parameters so that any capacity-related issue can be identified at the earliest opportunity and dealt with appropriately.
-
I want to add some approaches for evaluating network capacity to your comments:
1. Long-range views of average utilization: this shows a long-term trend of utilization, but the long-term view averages out spikes of high utilization, thus hiding the problem.
2. Peak utilization, e.g. showing the busiest minute for each day in a month: this shows which days had a busy minute, but doesn’t give insight into how long a link is congested.
3. Traffic totals: easy to show all links in a single view, showing the links with the most traffic and even periodic trends such as month-by-month usage. However, it gives no indication of congestion except in extreme cases.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Suppose an organization is only able to filter and selectively block either: a) network traffic coming into its intranet from the internet (incoming) or b) network traffic going out to the internet (outbound). […]
-
Question 2: Suppose an organization is only able to filter and selectively block either: a) network traffic coming into its intranet from the internet (incoming) or b) network traffic going out from the intranet to the internet (outbound). With respect to each of the 3 information system security objectives (i.e. confidentiality, integrity, and availability), if you could only filter and selectively block one network traffic direction, which one would you concentrate on and why?
I would block all incoming traffic because it would be better for the 3 information system security objectives and still allow the business to operate; it would just operate the way businesses operated in the 1980s: no email, only phone calls and face-to-face meetings; no website visiting, only getting in your car and visiting the store, or browsing the website from another location, but no incoming traffic.
This would only allow a security breach from inside the organization. -
Although this decision would greatly depend on the type of business and the situation which calls for such a choice to be made, I personally would choose to block outbound traffic. My decision is based on the reasons below, keeping in mind the objectives of CIA (confidentiality, integrity, availability) and assuming that this is only for a short duration (perhaps due to an upgrade activity or other infrastructural change taking place):
1) Since most resources and tools that employees use will be on the intranet, employees would be able to perform their duties either way.
2) A considerable portion of incoming traffic would be incoming emails from clients and customers. Receiving these emails is very critical: emails which require immediate action or response need to reach the employees so appropriate action can be taken. For outbound communication, employees can always contact customers via phone and tell them to expect delays; non-urgent emails can be responded to later.
3) Blocking outbound traffic would also mean the confidentiality and integrity of company information is maintained. Availability of company resources to internal employees is not hampered in any case, so from the CIA-objective perspective too, this decision would be the right one.
4) Allowing inbound traffic could open doors to cyber-attacks, phishing, viruses, etc.; however, having the network secured by a firewall and antivirus would greatly reduce the probability of such an attack being successful. -
Suppose an organization is only able to filter and selectively block either: a) network traffic coming into its intranet from the internet (incoming) or b) network traffic going out to the internet (outbound). With respect to each of the 3 information system security objectives (i.e. confidentiality, integrity, and availability), if you could only filter and selectively block one network traffic direction which one you would you concentrate on and why?
I would say “b” because blocking the traffic going out to the internet is like cutting the organization off from the outside world. The network will only work within the company. Blocking the outgoing traffic will definitely reduce the risk of an attack. Plus, employees would be focused on what they are supposed to do, since they can’t access other sites such as Facebook, shopping sites, illegal sites, and so on.
It would be counterproductive to block the incoming network because it would block communication inside the company. How would people use a shared drive on the network? -
If I had to choose whether to allow inbound traffic or outbound traffic, I would go for outbound traffic (intranet to internet) for security reasons alone and block inbound traffic (internet to intranet). In most cases of data breaches that we hear about, we find that attackers come in on inbound connections rather than outbound.
Though there are cases of attacks from within the organization due to human error or negligence, I find inbound traffic to be less secure for the following reasons:
-Confidentiality: In the case of inbound traffic, we cannot be sure of the source the data is coming from. Controls like filters, firewalls, anti-virus, and routers may not be enough to monitor the inflow.
-Integrity: Multiple types of encryption may be used. As we cannot control the environment outside the organization, there are more chances of the data being altered. This opens the system to malicious mail, viruses, worms, social engineering attacks, and DoS attacks.
-Availability: If data is unavailable due to a server issue or a broken route, it is not easy to get it fixed. It would need third-party involvement and would completely depend on the source to rectify the issue.
Outbound traffic is more secure, as it flows out of the organization and is in the control of the network administrator. It becomes easier to predict and provide preventive and corrective controls if needed.
-Confidentiality: Access is given only to authorized users to send the information.
-Integrity: The encryption used to send the data is decided by the network team, so it can be made more secure by design. Alteration of data can be done by a user with malicious intent, but this is rare and can be prevented with proper authorization permissions and segregation of duties.
-Availability: Availability is directly related to the datacenter or the network within the organization, so the defined DRP can help restore the information within the timeline identified by the company. -
Absolutely, I agree.
There is a huge security risk associated with outbound traffic. For instance, DDoS attacks: if you don’t have an open port to move traffic out, the probability of your network being a participant (botnet) in such an attack decreases.
There are other risks as well: uncontrolled email and file transfers from your network to outside networks can compromise the confidentiality aspect.
-
Question 2: Suppose an organization is only able to filter and selectively block either: a) network traffic coming into its intranet from the internet (incoming) or b) network traffic going out from the intranet to the internet (outbound). With respect to each of the 3 information system security objectives (i.e. confidentiality, integrity, and availability), if you could only filter and selectively block one network traffic direction, which one would you concentrate on and why?
The answer to this question comes from another question: for what reason is there a need to block the network? The CIA triad is the highest-level objective an organization wants to achieve with respect to data, and security rules and regulations revolve around how to maintain the CIA.
So if an employee is working suspiciously on some important data and is making efforts to leak the data from the organization to the outside world, we would need to block network traffic going out from the intranet to the internet (outbound). The other case where we would want to block outbound traffic is if there is a malware attack through which outside attackers are able to access data. A similar case occurred recently at Wells Fargo, where employees had unauthorized access to customer data. In such a case, customers’ personal data can easily be leaked to the outside world; therefore blocking outbound traffic would be the needed step in such an incident.
Blocking network traffic coming into the intranet from the internet would be necessary in case of a cyber-attack on the network. The attacks can be a virus attack, a distributed denial of service (DDoS) attack, a man-in-the-middle attack, and so on. It becomes very difficult to control such attacks from outside; hence blocking inbound traffic is the only option to protect against a data breach and defend the internal network.
-
Fred,
I think it might depend on the nature of the business. If the organization doesn’t need to communicate within its industry, then blocking all incoming traffic is good practice. However, in terms of confidentiality, I think we should block, or at least be sensitive about, outbound information.
-
I strongly agree with you, Deepali, that the decision to choose between allowing inbound traffic or outbound traffic is very much dependent on the scenario that calls for such a choice to be made in the first place. You gave excellent examples of both such scenarios: one where the need of the hour is to contain data within the company, and one where the need of the hour is to keep external threat agents out. I think it is safe to say that there cannot be one fixed right decision, purely because the decision needs to be made considering different factors.
-
Mansi,
Good point. I agree with you that it totally depends on the nature of the business. However, outbound traffic is more important in my view. I remember that in the advisory session we had a couple of weeks ago, we analyzed a case where the main problem was that outbound traffic wasn’t protected.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam p […]
-
A Distributed Denial of Service (DDoS) attack is an attempt to make an online service unavailable by bombarding it with traffic from multiple sources.
A spear-phishing attack is carefully crafted and customized to look as if it comes from a trusted sender on a relevant subject. Spear-phishing scams often take advantage of a variety of methods to deliver malware. A spam-phishing attack works by sending mass amounts of junk email and unwanted code to recipients through different methods.
As mentioned, a DDoS is a collective effort: it is launched by a large number of computers or bots together to attack a particular website or server by overwhelming it with huge traffic. In this context, spam phishing is the bigger threat for launching a DDoS.
Every network service has limited bandwidth, and if it is flooded by spam email and code, it effectively suffers a denial of service. I want to give an example of how a large amount of spam email can disable work completely.
In my previous organization, while working on setting up an SMTP server for a client website, I had given my organization email address to test the content of the email being received. I had mistakenly put the sending code inside an unending while loop and tested it. Within minutes my entire mailbox was filled with test mails, and I could not receive any further emails. Every single mail deleted brought in numerous more test emails. My Outlook went completely down. -
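A minimal sketch of the lesson from this incident: bound any mail-sending test loop so a coding mistake cannot flood a mailbox. It uses Python’s standard smtplib; the SMTP host, port, and addresses are placeholders:

```python
# Send a bounded number of test mails, unlike an unending while loop.
import smtplib
from email.message import EmailMessage

MAX_TEST_MAILS = 3  # hard cap: never loop unbounded when sending mail

with smtplib.SMTP("smtp.example.com", 25) as server:
    for i in range(MAX_TEST_MAILS):
        msg = EmailMessage()
        msg["From"] = "test@example.com"
        msg["To"] = "me@example.com"
        msg["Subject"] = f"SMTP content test {i + 1}/{MAX_TEST_MAILS}"
        msg.set_content("Test body for content verification.")
        server.send_message(msg)
```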
Question 3: In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
In the context of being attacked by a DDoS, the bigger threat to an organization is spear phishing, because it is often an email from a familiar source, or so it appears.
Spear phishing is a form of attack via email, instant message, text, etc. The attacker poses as a familiar contact and persuades the user into performing a certain action. Since the message appears to come from a reputable source, the user will unknowingly infect the system by performing the requested action. This would be the bigger threat to me because spam is common these days and everyone deletes things they don’t recognize. Spear phishing is something you will recognize.
-
In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
Fraudsters use phishing emails to steal personal information. Although an email may look harmless, it can convince employees to follow links or download attachments that are dangerous and compromise PII (name, address, SSN, credit card number, etc.).
Spear phishing is fundamentally based on the same idea, except that regular employees are not the target; the attackers want access to the organization’s valuable resources.
Spear-phishers typically gather information from social media sites and other sources to craft highly targeted messages. During an attack, they will send emails to a few employees rather than everyone on the organization’s network (as in a spam phishing attack) to avoid getting filtered out by anti-phishing software. The malware used by spear-phishers is not like typical malware that floods the screen with pop-ups; it is difficult to spot. There have been cases where such malware has even improved computer performance. Critical information like customer information, confidential files, organization proprietary information, and trade secrets can be compromised.
In a scenario where top executives of an organization were spear-phished and their systems compromised, requests sent from their botnets would probably not be ignored, which I believe makes it the bigger threat.
-
Spam phishing: Phishing attacks use spam (electronic equivalent of junk mail) or malicious websites (clicking on a link) to collect personal and financial information or infect your machine with malware and viruses.
Spear phishing: Spear phishing is highly specialized attacks against a specific target or small group of targets to collect information or gain access to systems.
A distributed denial-of-service (DDoS) attack is when a malicious user gets a network of zombie computers to sabotage a specific website or server. The attack happens when the malicious user tells all the zombie computers to contact a specific website or server over and over again. That increase in the volume of traffic overloads the website or server causing it to be slow for legitimate users, sometimes to the point that the website or server shuts down completely.
As for DDoS attacks, spear phishing should be the bigger threat, because spam phishing is easier to control than spear phishing. First, email services have their own ability to identify spam. Second, spam is easier to spot if people have the awareness not to click links or download files in unwanted emails. On the other hand, spear phishing is more targeted at collecting specific information. For example, a cybercriminal may launch a spear phishing attack against a business to gain credentials to access a list of customers. Once they have gained access to the network, the emails they send may look even more authentic, and because the recipient is already a customer of the business, the email may more easily make it through filters and the recipient may be more likely to open it (see the sketch after this post).
https://staysafeonline.org/stay-safe-online/keep-a-clean-machine/spam-and-phishing
https://www.getcybersafe.gc.ca/cnt/rsks/cmmn-thrts-en.aspx -
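To illustrate why simple filters catch spam more readily than spear phishing, a minimal sketch of a naive display-name/domain mismatch rule; the trusted domain and addresses are invented, and a well-crafted spear phishing message that spoofs or closely imitates the real domain would evade a rule this simple:

```python
# Flag mail whose display name claims a trusted organization but whose
# actual sender domain is not on the trusted list.
from email.utils import parseaddr

TRUSTED_DOMAINS = {"temple.edu"}  # illustrative allow-list

def looks_suspicious(from_header: str) -> bool:
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    claims_trusted = any(d.split(".")[0] in name.lower() for d in TRUSTED_DOMAINS)
    return claims_trusted and domain not in TRUSTED_DOMAINS

print(looks_suspicious("Temple Payroll <payroll@temp1e-edu.biz>"))  # True
print(looks_suspicious("Temple Payroll <payroll@temple.edu>"))      # False
```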
Vaibhav,
I think your example makes clear the ways in which spam can take down a network and computer systems. I think the question being asked this week is “double-dipping” in a way.
For spam phishing the example you gave makes it clear that overloading a network or system can bring it down.
However, spear phishing could also bring networks/systems down if the target that was compromised had privileged access, or something to that effect.
-
Question 3: In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
For a DDoS attack, I think spear phishing is the bigger concern: it is a more targeted form of phishing, whereas spam phishing involves malicious emails sent to random email accounts. Spear phishing emails are designed to appear to come from someone the recipient knows and trusts, such as a colleague, business manager, or the human resources department. Attackers may study victims’ social networks, like Facebook, LinkedIn, WhatsApp, etc., to gain intelligence about a victim and choose the names of trusted people in their circle to impersonate, or a topic of interest to lure the victim and gain their trust.
Nowadays, organizations’ computers already have spam-blocking software or tools installed to keep spam phishing emails out of their accounts. So spear phishing has become more popular than spam phishing among attackers, because people already have an awareness of spam but still lack awareness of spear phishing, since the emails appear to be sent by “familiar” people. -
Question 3: In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
In the context of unwittingly becoming a resource for a distributed denial of service (DDoS) attack, I would say that spear phishing is the bigger threat to an organization’s network and computer resources. A Distributed Denial of Service (DDoS) attack is an attempt to make the target unavailable by overwhelming it with traffic from multiple sources. A typical spear phishing email is extremely deceptive, as it attempts to represent an identity that is trusted or related to the business itself or to the user’s interests. It is more likely that users will open the email and download the malicious file. As for spam phishing, I believe those phishing emails would be detected by the email security system, and users are less likely to open emails whose contents they don’t recognize.
One of the most recent examples was Dyn suffering a DDoS attack, resulting in network and system downtime for many hours. Social network users could not access their social media accounts, and commercial transactions could not go through.
Definition of spear phishing:
Spear phishing is a highly specialized attack against a specific target or small group of targets to collect information or gain access to systems.
Definition of spam phishing:
Spam phishing is the abuse of electronic messaging systems to indiscriminately send unsolicited bulk messages, many of which contain hoaxes or other undesirable content such as links to phishing sites. -
Hi Mengxue, great post. I like how you stated that spear phishing should be the bigger threat to an organization because it targets a specific organization to collect specific content. And your outstanding example shows how a spear phishing attacker may ask for the customer list. Spear phishing does not have to carry malicious software or a botnet; it can also ask for your financial information, client data, or personal private information.
-
Question 3: In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
Spam phishing is the bigger threat in terms of an organization becoming a resource for a distributed denial of service (DDoS) attack. Spam phishing typically utilizes mass email to target as many people as possible, similar to a “shotgun approach,” in that more is better. Conversely, spear phishing uses a different strategy, targeting a small number of victims. Spear phishing will often use social engineering to lure employees into clicking a link in an infected email. While both will often use email, the goals are not generally the same. Because spear phishing targets a much smaller number of people than spam, it is usually trying to steal sensitive data, financial information, and other valuable information. Spam phishing operates at a much larger scale because it often recruits the victim’s computer into a “zombie” computer, part of a collection known as a botnet. These computers are then used for DDoS attacks. -
In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
Generally, a spam phishing attack is a form of electronic junk mail sent to users, and it is a very dangerous phishing scam, since attackers may use it to obtain sensitive personal information from victims, like their credit card information or online banking passwords. Unlike a traditional spam phishing attack, spear phishing focuses on specific targets like employees or management of the organization. Besides email phishing attacks, spear phishing attackers also often build fake websites carrying viruses or malware; if the specific targets open the fake websites, the virus may copy highly sensitive information from inside the organization, causing a significant data leak and damaging the information assets of the company.
Comparing the two types of phishing, spear phishing is the bigger threat to an organization’s network. Indeed, spam phishing is more widely spread, but from the perspective of the organization’s network, spear phishing can cause more serious damage, because it targets specific victims, like management, who have the authority to log in to the company’s systems. If attackers use spear phishing to successfully obtain access to the company’s information systems, all confidential information is under the attackers’ monitoring; even worse, they can steal that sensitive information.
-
I think that in the context of being attacked by a DDoS, spear phishing is the bigger threat. A DDoS aiming to take down an organization’s resources would need to know how to access servers via IP or how to get past firewalls. Network administrators may be targeted to reveal this sensitive information. A spear phisher may request permission to audit, and the administrator may reveal these resources’ locations against normal protocol. Neither spam phishing nor spear phishing would change a DDoS on the organization’s public website, as that IP is already known.
Conversely, I think in the context of the organization’s network becoming a botnet for a DDoS, spam phishing is the bigger threat. The botnet’s goal is to become as large as possible, with as many connected computer resources at its disposal. The most effective botnets barely change normal computer operation, making it difficult to know if you’re even infected. Targeting singularly important users in a company is not a strategy aligned with this goal. -
Spear phishing is a highly specialized attack against a specific target or group of targets to collect information or gain access to systems through personalized e-mail messages and social engineering. This is not a random kind of attack: the attacker knows the target’s name and email address, and at least a little about the target. It’s a more in-depth version of phishing that requires special knowledge about the target.
Spear phishing is an effective method for targeting several industries because messages appear to come from a trusted source.
Spam is the electronic equivalent of junk mail. It’s annoying and can potentially be very dangerous if it is part of a larger phishing scam.
I believe that spear phishing is the bigger threat to an organization’s network. Attacks have potential consequences such as identity theft, financial fraud, or theft of intellectual property. Spear phishing is a real threat: it can bypass normal technical anti-threat barriers and exploits users to infiltrate systems.
Here are solutions to mitigate spear phishing attacks:
– Consider an extra level of authorization, such as 2-step verification (see the sketch below).
– Frequently change passwords.
– Train employees.
– Deploy a spam filter that detects viruses and blank senders. -
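A minimal sketch of the 2-step verification suggestion above, assuming the third-party pyotp library:

```python
# Time-based one-time passwords (TOTP) as a second verification step.
import pyotp

secret = pyotp.random_base32()      # enrolled once per user, stored server-side
totp = pyotp.TOTP(secret)

code = totp.now()                   # what the user's authenticator app shows
print("valid:", totp.verify(code))  # True within the current time window

# A phished password alone is not enough: the attacker also needs the
# current code, which expires every 30 seconds by default.
```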
Noah,
It’s interesting, I didn’t think of it the way you explained! True, it depends on the reason for the attack, or what the attacker’s goal is.
If the attacker aims to build a botnet, spam phishing, since it is sent out in mass quantities, can be the bigger threat; and if the attacker aims at specific information, spear phishing, since it targets a specific group in the organization, is the bigger threat.
-
In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
Spear phishing would be a bigger threat to an organization’s network because it is more targeted towards the victim and is more likely to be effective.
Spam phishing is a smaller threat and less effective because spam emails are usually easily detected. Users who receive spam emails can usually tell that it’s spam, because such emails tend to be random and unrelated to anything. Spam emails can also be filtered by spam filtering programs in the email service, so they rarely even appear in employee inboxes.
Spear phishing emails, on the other hand, target a specific employee. The emails are uniquely crafted for that employee, such as appearing to be sent by someone the employee knows or relating to something the employee did recently. The email may also use social engineering to get the employee to divulge information. Humans are susceptible to these kinds of emails because we tend to assume good intent. When we receive an email that appears to be from someone we know, our initial instinct is to accept it and not be suspicious of it.
-
Good example of Dyn suffering from the DDoS attack. In this case, the cyberattack took the network and systems down for a couple of hours, which damaged the company’s information assets and also its reputation. Just out of curiosity: in Dyn’s case, was it spear phishing that allowed the attackers to gain access by stealing PII from an administrator?
-
Hey everyone,
I see your point, and I agree that spear phishing is the more dangerous of the two. However, I interpreted the question as: if you were being attacked by a botnet, or an attacker is attempting to make your computer part of a botnet, which is more of a concern, spam phishing or spear phishing? In that case, I believe it would most likely be a spam phishing attack and not a spear phishing attack. I suppose which phishing attempt one should focus on depends on the exact risk.
If I am worried about a DDoS, then my concern would be a spam phishing attack, or just a spam attack in general. A DDoS is caused by many computer requests coming in from multiple computer locations. If I were to get millions of spam emails in a matter of seconds, my email system would go offline. Therefore, for the DDoS risk, I would be more concerned with spam phishing, or just enough spam to knock out my services.
If I am worried about my computer or computer resources becoming part of a botnet, I would still be more concerned with spam phishing. The reason for this is that those looking to grow their botnet aren’t particularly picky about whose computer resources they recruit. My grandma’s laptop works just as well in a botnet as the laptop of a Fortune 500 CEO, so to increase their numbers it is more about quantity than quality. Therefore, botnet owners will likely use spam phishing as a method for growing their botnet, and to address this risk I would focus on spam phishing. However, Abhay brought up a good point that spear phishing shouldn’t be 100% ruled out as a method for acquiring computer resources for a botnet.
If I am worried about social engineering, then I would be more concerned with spear phishing. Since spear phishing is targeted at only a couple of individuals, the motive behind such attempts isn’t so much to gain access to computer resources for a botnet but to gain access to an organization’s data, such as PII or sensitive business information like patents. Since spear phishing is so particular and targets a small group of individuals, as opposed to spam phishing, which can target millions, the chance of success is greater.
-
Hi Fangzhou,
Great post. I agree with you that spam phishing is widely spread because it targets a massive number of email users. It is inexpensive, quick, and convenient, but the success rate is lower compared to spear phishing. Compared to spam, spear phishing may take a long time, to tailor the email for specific targets! -
-
Hi, Yuming
You made great points. One thing I want to point out is that spam messages often contain images that the sender can track. When you open the email, the images load and the spammer can tell that your email address works, which could result in even more spam. What we can do as email users to avoid this is turn off email images.
With phishing scams, people should use their best judgment:
– Never send someone money just because you’ve received an email request.
– Never download email attachments you weren’t expecting, because they might contain malware that could damage your computer and steal your personal information. -
Hi, Fangzhou
You are absolutely right that spam phishing is widely spread. I found some statistics online that were very interesting. In one campaign, 1,000,000 messages were sent through a spam phishing attack; the open rate was 3% and the click-through rate was 5%. However, of only 1,000 messages sent through spear phishing, the open rate was 70% and the click-through rate was 50%. You can tell there is a huge difference! Far more people open their email when it is sent through spear phishing.
-
Dyn definitely was the victim of a DDoS attack, but was it also the victim of spear or spam phishing? No question that others were certainly victims, because so many connected devices were infected and turned into bots. But Dyn simply suffered from an inundation of traffic from this botnet, which may only be indirectly related to the phishing. It’s definitely very relevant to the attack, but I’m not sure that either directly affected Dyn, from what I’ve seen in the news.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
-
“Major DDoS attack on Dyn DNS knocks Spotify, Twitter, Github, Etsy, and more offline”
Some popular websites, including Twitter, Etsy, and Reddit, experienced disruptions when hackers launched a large cyber-attack. The cause appears to be an outage at a DNS provider called Dyn. On Friday morning, domain host company Dyn confirmed that the attack started at 7:10 am and lasted for more than two hours; major websites and services across the East Coast were shut down for two hours, and services were later restored by 9:30 am.
Dyn further said, “Some customers may experience increased DNS query latency and delayed zone propagation during this time. Updates will be posted as information becomes available.”
Domain Name Systems are like the Internet’s phone directory. When a user enters a certain web address in the URL bar, DNS routes the request to that website and ensures that the user is sent to the right address. Since Dyn suffered an outage, many users trying to access the affected webpages experienced disruptions.
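A minimal sketch of that “phone directory” lookup using Python’s standard socket module; when a DNS provider like Dyn is down, the lookup itself fails even if the destination servers are healthy:

```python
# Resolve a hostname to an IP address via DNS.
import socket

try:
    ip = socket.gethostbyname("twitter.com")
    print(f"twitter.com resolves to {ip}")
except socket.gaierror as err:
    print(f"DNS lookup failed: {err}")  # what users saw during the Dyn outage
```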
-
Hackers attacked Dyn’s DNS services with a DDoS (Distributed Denial of Service) attack and shut down internet access for people along the East Coast. People had trouble accessing Twitter, Spotify, Netflix, Amazon, and/or Reddit.
Dyn confirmed the attack and began monitoring and mitigating the DDoS attack on its DNS infrastructure.
A DDoS attack is when attackers flood a machine or service with a huge volume of requests from many sources to take it offline.
-
Major DDoS Attack Causes U.S. Outages on Twitter, Reddit, Others
This week I read news about a large distributed denial of service (DDoS) attack directed at Dyn, a DNS and internet performance management company, which caused website outages for a number of its customers, including Twitter, Reddit, and Spotify, affecting mostly the eastern US. Dyn took immediate action and resolved the denial of service problem in 2 hours.
A DDoS attack is an attempt to make an online service unavailable by overwhelming it with massive amounts of traffic from multiple sources. Hackers usually target a wide variety of important resources, from banks to news websites, and can do major damage by forcing servers to shut down so that people cannot publish or access important information. In the case of the attack on Dyn, this affected the company’s ability to manage DNS queries and connect traffic to customers’ proper IP addresses at normal speeds.
Due to the weak protection of Internet of Things devices, the number of DDoS attacks has increased rapidly. Such devices include poorly secured Internet-based security cameras, digital video recorders (DVRs), and Internet routers.
Source: http://www.toptechnews.com/article/index.php?story_id=111003TVBR2I
-
Researchers Find Dangerous Intel Chip Flaw
Researchers at the State University of New York and the University of California discovered a flaw in Intel chips that allows them to bypass ASLR (address space layout randomization, which defends against a range of attacks by randomizing the locations of code in computer memory). The researchers were able to launch a so-called ‘side channel’ attack on a Haswell chip’s branch target buffer (BTB), which resides in the branch predictor part of the CPU. Doing so enabled them to work out where certain pieces of code were located, effectively undermining ASLR. However, Alfredo Pironti, a managing consultant at ethical hacking firm IOActive, claimed it is worth noting that these attacks are often more expensive and time-consuming to conduct compared to classical software attacks.
Theoretically, this Intel chip flaw is very dangerous, since it makes a range of cyber-attacks far more effective across Windows, Linux, OS X, Android, and iOS. Practically, since it is expensive and time-consuming, and requires stricter conditions such as running specific software on the victim’s machine and being able to collect CPU metrics, the hack will be difficult to conduct. Still, we hope Intel can fix the problem for security reasons.
Link: http://www.infosecurity-magazine.com/news/researchers-find-dangerous-intel/
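To see what ASLR normally provides, here is a small sketch of my own (assuming a Unix-like system with ASLR enabled; this is not the researchers' code). Run it twice and libc's printf should appear at a different address each run, which is precisely the randomization the BTB side channel defeats:

    import ctypes
    import ctypes.util

    # With ASLR on, shared libraries load at randomized addresses, so
    # the location of a function such as printf changes between runs.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    print(hex(ctypes.cast(libc.printf, ctypes.c_void_p).value))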
-
Large DDoS attacks cause outages at Twitter, Spotify, and other sites
Several waves of major cyberattacks against an internet directory service knocked dozens of popular websites offline today, with outages continuing into the afternoon.
Twitter, SoundCloud, Spotify, Shopify, and other websites have been inaccessible to many users throughout the day. The outages are the result of several distributed denial of service (DDoS) attacks on the DNS provider Dyn, the company confirmed. The outages were first reported on Hacker News.
The DDoS attacks on Dyn began this morning. Service was temporarily restored around 9:30 a.m. ET, but a second attack began around noon, knocking sites offline once again. The DNS provider said engineers were working on "mitigating" the issue, but a third wave began around 4:30 p.m. ET before being resolved roughly two hours later.
"The complexity of the attacks is making it complicated for us. It's so distributed, coming from tens of millions of source IP addresses around the world. What they're doing is moving around the world with each attack," Dyn's York explained. York said that the DDoS attack initially targeted the company's data centers on the East Coast, then moved to international data centers. The attack contained "specific nuance to parts of our infrastructure," he added.
-
Millions of Indian debit cards ‘compromised’ in security breach
On Wednesday, India's largest bank, State Bank of India, said it had blocked close to 600,000 debit cards following a malware-related security breach in a non-SBI ATM network. Several other banks, such as Axis Bank, HDFC Bank and ICICI Bank, have also admitted being hit by similar cyber attacks, forcing Indian banks to either replace as many as 3.2 million debit cards or ask users to change their security codes over the last two months.
On September 5, some banks came across fraudulent transactions in which debit cards were used in China and the US when customers were actually in India. -
“Regulators to Toughen Cybersecurity Standards at Nation’s Biggest Banks”
The article discusses an initial framework that US regulators recently unveiled to address cybersecurity at the nation's biggest banks. The plan was developed by the Federal Reserve and the Federal Deposit Insurance Corp (FDIC) and will target US and foreign banks that operate in the US with $50 billion or more in managed assets. The most stringent requirements are reserved for institutions that pose a systemic risk to the economy and financial system. These banks would be required to demonstrate the ability to bring core operations back online within two hours of a cyber attack or IT failure. The financial industry is heavily dependent on information systems and is increasingly interconnected, which can amplify the impact of a single event.
-
The article I read is about a massive ATM attack in India which hit 3.2 million accounts across multiple banks and financial platforms, probably one of the biggest data breaches to date. The majority of the stolen debit cards run on the Visa or MasterCard platforms.
Hackers used malware to compromise the payment services platform used to power the country's ATMs, point-of-sale (PoS) machines, and other financial transactions. The hackers' identity is still a mystery; however, affected customers have observed unauthorized transactions made with their cards in various locations in China. The Payments Council of India has ordered a forensic audit of the Indian banks' servers to measure the damage and investigate the origin of the cyber attack.
(In case you aren't aware, cards that use magnetic stripes are easier to clone, whereas chip-and-PIN cards store your data in encrypted form and transmit only a unique code (a one-time-use token) for every transaction, making these cards more secure.)
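A highly simplified sketch of that one-time-token idea (my own illustration; real chip cards use EMV cryptograms that are far more involved, and the key and counter here are invented):

    import hashlib
    import hmac

    # A magnetic stripe replays the same static data every swipe, so a
    # copy works forever. A chip instead derives a fresh code for each
    # transaction from a secret it never reveals plus a counter.
    CARD_SECRET = b"key-embedded-in-chip"  # invented for illustration

    def transaction_code(counter: int) -> str:
        msg = counter.to_bytes(4, "big")
        return hmac.new(CARD_SECRET, msg, hashlib.sha256).hexdigest()[:8]

    print(transaction_code(1))  # a different code...
    print(transaction_code(2))  # ...every transaction, so replays fail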
-
3.2 Million Debit Cards Hacked in India
In what has been termed the biggest data breach in the Indian banking industry, 3.2 million debit card details have been stolen. These debit cards are understood to have been used at ATMs suspected of exposing card and PIN details to malware at the back end. The Payments Council of India has ordered a forensic audit of Indian bank servers and systems to detect the origin of the fraud that may have hit customer accounts. Indian banks, stung by the biggest financial data breach to hit the industry, are trying to contain the damage and compensate the affected account holders.
According to the National Payments Corporation of India (NPCI), 90 ATMs were compromised and at least 641 customers across 19 banks were hit; the total amount lost to fraudulent transactions on hacked debit cards is Rs. 1.3 crore. The malware that impacted the computer systems at the ATMs was malicious software in the form of viruses, worms, Trojans, ransomware, and spyware. The Reserve Bank of India has directed the banks to submit a report on this massive data theft.
The ATM service provider Hitachi Payment Services is under fire, as it is believed that the malware was introduced into the systems due to a lack of testing of the ATM machines, point-of-sale terminals, and other services. The company is believed to have installed more than 50,000 ATMs in the country within the last year. Of the total debit cards hit, 2.6 million are said to be on the Visa and MasterCard platforms. As a damage-control exercise, banks have advised customers to change their ATM PINs or get their cards replaced.
Some banks have even issued advisories telling customers to stop using their cards at other banks' ATM machines. One of the worst-hit banks has already blocked 6 lakh (600,000) debit cards and has blocked international transactions that can be conducted without a PIN.
-
Barack Obama Talks AI, Robo-Cars, and the Future of the World
President Obama had an interview in November’s issue of Wired and he made a few interesting points about cyber security.
OBAMA: “Traditionally, when we think about security and protecting ourselves, we think in terms of armor or walls. Increasingly, I find myself looking to medicine and thinking about viruses, antibodies. Part of the reason why cyber security continues to be so hard is because the threat is not a bunch of tanks rolling at you but a whole bunch of systems that may be vulnerable to a worm getting in there. It means that we’ve got to think differently about our security, make different investments that may not be as sexy but may actually end up being as important as anything.”
While the above vastly simplifies what can be a very complicated set of issues, I do appreciate the “armor vs. medicine” metaphor he uses.
-
Yeah, no problem! Just the irony though: I was seriously complaining about how crappy Chrome was acting, and then I discovered this article and I was like, oh man. And yes, I remember that, haha; using hackers for profit sounds like a booming business! Thanks for the link. I wish I could do the bug bounty program, especially after looking at the payouts; I would definitely be putting that towards student loans.
-
The article I read for this week is called "Massive DDoS Attack Knocks Out Twitter, Box, Spotify." It describes how the DDoS attack targeted the New Hampshire-based company Dyn and its managed DNS infrastructure, and began early Friday morning. The company originally said it restored operations around 9:30 a.m. EST, but a second attack followed that knocked Twitter and others offline again for some users, especially those on the East Coast of the US. At the time of the report the attack was ongoing and causing outages and slowness for many of Dyn's customers. The US Department of Homeland Security is investigating, but so far no one knows who is behind the attacks.
http://www.infosecurity-magazine.com/news/massive-ddos-attack-knocks-out/
-
“Martin Gottesfeld, Anonymous hacktivist, charged over hospital DDoS attacks”
This article was not only interesting because it relates to this week's discussion, but also because of hacktivists and their moral compass.
Martin Gottesfeld is being charged with computer hacking crimes related to a DDoS attack on Boston Children's Hospital and the Wayside Youth and Family Support Network. He overloaded their computer systems with illegitimate traffic and kept them down for over a week, causing the hospital to lose upwards of $600,000 in recovery costs and fundraiser income.
Mr. Gottesfeld considers himself a hacktivist, fighting for the human rights of those in the "troubled teen industry." These institutions are involved in the treatment of adolescents with emotional, psychological, and medical problems. He admitted to waging the DDoS attack because of the alleged mistreatment suffered by Justina Pelletier.
Law aside, do you think what he did was right? Should he have taken action into his own hands?
-
http://www.databreachtoday.com/2-million-hipaa-penalty-after-patient-data-exposed-on-web-a-9465
In Feb 2012, St. Joseph Health (SJH) reported that its electronic records containing PHI had been publicly accessible from Feb 1, 2011 to Feb 13, 2012.
These records were stored on a server with default settings, including a default password, that allowed anyone to access the data over the open internet.
After installing this server, SJH never verified its security controls. It had hired an external party to assess vulnerabilities; however, because the assessors worked in a patchwork fashion, this vulnerability was missed. As a result, the risk analysis SJH conducted fell short of HIPAA standards.
In this case the data did not include SSNs, addresses, or financial data, and there is no indication that the information was used by unauthorized persons.
The enforcement agency will now continue its increased enforcement activity, overseeing resolution agreements. The agency is in Phase 2 of the HIPAA audits, which could result in enforcement activity in certain circumstances -
Cybersecurity Expert Saket Modi Will Make You Afraid To Own A Smartphone
Saket Modi, cofounder of Lucideus Tech, asked an audience at the 2016 FORBES Under 30 Summit in Boston: "How many of you think you are smart enough to use your smartphone?" He asked a volunteer to briefly hand over a smartphone and quickly got one, protected with a passcode. He poked a few buttons on the phone and handed it back within half a minute.
On the big screen behind him, he popped up a long list and asked the owner, "Is this the list of all the calls you've made, up here?" And yes, it was. He then did the same with the phone's text messages, contacts, current location, and GPS history. The only thing he skipped was the phone's browsing history.
He then said to the audience, "All of this was possible with this phone in my hands for 25 seconds. And the best part of this entire thing is: what I just did is not even a hack." In fact, he hadn't installed any software on the phone. He had simply run a script to collect permissions–permissions most phone owners have already granted to Facebook, Gmail, and other apps without a second thought.
“Destruction of all personal privacy within 25 seconds is just one facet of the new hacking landscape. Ransomware is increasingly being used to extract important information rather than just cash. Hackers are getting paid to hack specific targets, both public and private. Scripts can crawl a target’s Facebook page–or private messages, or even deleted messages–to identify the issues most important to the target, then use them against the target”, stated Modi. -
I read the article “75% of Orgs Lack Cybersecurity Expertise”. According to this article, a study from Tripwire found that 66% of respondents faced increased security risks due to this workforce shortage; and 69% have attempted to use technology solutions to fill the gap. Moreover, a full 72% said they had challenges hiring skilled cybersecurity experts; half said their organizations do not have an effective program to recruit, train and retain skilled cybersecurity experts.
According to Tripwire’s study, only 25% of the respondents were confident their organizations have the number of skilled cybersecurity experts needed to effectively detect and respond to a serious cybersecurity breach.
Indeed, some managers think that investment in cybersecurity is expensive and that cyber-attacks may never occur. As a result, their organizations lack cybersecurity expertise and adequate protection of the company's information assets. This can lead to significant data leaks and allow cyber attackers to access the company's information systems.
Source: http://www.infosecurity-magazine.com/news/75-of-orgs-lack-cybersecurity/
-
On October 21st I got an email from the Big Interview website, of which I am a member, saying that global Internet outages were affecting their website. I was curious about the breach and searched for more information.
On October 21st, a ton of websites and services, including Spotify and Twitter, were unreachable because of a distributed denial of service (DDoS) attack on Dyn, a major DNS provider. Details of how the attack happened remain vague, but one sure thing is that hacks are getting increasingly sophisticated.
Some of the speculation was political, like the idea of an attempt to take down the internet so that people couldn't read the leaked Clinton emails on WikiLeaks.
According to the article we are getting into a serious level of DDoS attacks, and the internet is becoming more vulnerable.
The list of websites that readers reported having trouble accessing includes CNN, Etsy, Spotify, Starbucks rewards/gift cards, Netflix, and Kayak. It was an important enough attack that even the FBI is investigating it.
http://gizmodo.com/this-is-probably-why-half-the-internet-shut-down-today-1788062835
http://gizmodo.com/the-fbi-and-homeland-security-are-investigating-todays-1788079688
-
Loi,
It's an interesting article; it explains the details of a DDoS attack and also points out the attacker's rationalization.
-
Linux Backdoor Trojan Doesn't Require Root Privileges
A newly observed Linux backdoor Trojan can perform its nefarious activities without root access, by using the privileges of the current user, Doctor Web security researchers have discovered.
Dubbed Linux.BackDoor.FakeFile.1, the malware is being distributed as an archived PDF, Microsoft Office, or Open Office file. As soon as the file is launched, the Trojan saves itself to the user's home directory, in the .gconf/apps/gnome-common/gnome-common folder, searches for a hidden file that matches its name, and replaces that file with itself. If the malware doesn't find the hidden file, it creates it and then opens it using gedit. After checking and confirming that the Linux distribution on the system is not openSUSE, the Trojan retrieves the configuration data from its file and decrypts it. The malicious program then launches two threads: one to share information with the command and control (C&C) server, and one to monitor the duration of the connection, so that if the Trojan doesn't receive instructions within 30 minutes, the connection is terminated. On a compromised system, the backdoor can execute a multitude of commands: send the C&C server the quantity of messages transferred during the session or a list of the contents of a specified folder; send a specified file or a folder with all its contents; delete a directory; delete a file; rename a folder; remove itself; launch a new copy of a process; and close the current session.
Other operations supported by the malware include: establish backconnect and run sh; terminate backconnect; open the executable file of the process for writing; close the process file; create a file or folder; write the transmitted values to a file; obtain the names, permissions, sizes, and creation dates of files in the specified directory; and set 777 privileges on the specified file. The backdoor can also terminate its own operation upon command.
Source: http://www.securityweek.com/linux-backdoor-doesnt-require-root-privileges
-
The article I read, titled "Going easy on cyber security could turn India's technology growth story into a nightmare", was about the debit card data breach that left about 3.2 million Indian customers vulnerable. This breach was the biggest attack on the country's banking system to date, and it raises concern about the "afterthought" cyber strategy India currently employs. It is apparent to India's critics, citizens, and government that companies currently look at cyber security as a "good to have" rather than a need and/or requirement. The sustainable solution suggested in the article is a public-private partnership on cyber crime, with governments and private players locking arms over issues such as data ownership, liability, and audit frequency. Regardless, the Indian banking system data breach is a wake-up call for India and a reminder to the rest of the world to increase their cyber security investments and sharpen their strategies. The bottom line for India is that it needs to take ownership and prepare a comprehensive national strategy for cyber defense.
-
Pretty crazy that these dominating industry-type companies couldn't successfully outsmart the attackers. Many of these companies pride themselves on having little downtime. It just shows that the cyber industry has experts on both sides (good vs. bad). It is crucial for these companies to invest more in securing their systems; incidents like this could cause them to lose customers. I left Spotify, which has great service but a terrible application. Amazon has data that is surely attractive to steal, so I can't say it enough: these companies need to invest in their cyber strategy!
-
It is good that some companies are catching up on their cyber! Many companies turn a blind eye and treat an effective cyber strategy as a "good to have" rather than a "must have"! I think it is interesting that they brought in another company to scan their vulnerabilities; I feel like that may be risky. Hopefully they are learning from the company they brought in and are gaining expertise in this field. I think the scanning process should be in-house, given how much that data needs to be protected.
-
Chinese manufacturer to partially recall IoT components involved in DDoS attack
Chinese manufacturer Xiongmai Technologies has promised to recall or patch some of the components and circuit boards it manufactures, including CCTV cameras, webcam devices, and digital video recorders, which attackers compromised and used to help power a massive internet of things botnet that overwhelmed DNS provider Dyn's systems on Oct. 21 via distributed denial-of-service attacks.
Security intelligence firm Flashpoint said that the massive DDoS attack involved IoT devices infected with Mirai malware, which overwhelmed the DNS service and prevented internet users from reaching many sites. Flashpoint adds that at least some of those devices were built by or used components from Xiongmai, even if they were not labeled as such. Xiongmai has acknowledged this and agreed to replace the devices involved in the attack.
http://www.databreachtoday.com/chinese-manufacturer-promises-partial-iot-component-recall-a-9478
-
That's an interesting post, Yulun. DDoS attacks can now target social media, which may have widespread effects. Since social media platforms like Twitter hold large amounts of users' personal information, huge numbers of users may be affected if the servers are hacked. The cyber security of social media is therefore truly important.
-
Morgan Stanley's Hong Kong division, Morgan Stanley Hong Kong Securities Ltd., has been fined HK$18.5 million ($2.4 million) by Hong Kong's securities regulator, the Securities and Futures Commission (SFC), for internal control failures.
Continued Internal Control Failures
The breaches of Hong Kong's Code of Conduct included Morgan Stanley's failure to avoid conflicts of interest between principal and agency trading, failure to properly disclose its short-selling orders, and maintenance of unsystematic documentation of its electronic trading systems. The breaches are suspected to have occurred between 2013 and 2016.
In June 2013, during an SFC investigation into irregular price movements of two stocks, it was discovered that Morgan Stanley did not have a separation between its discretionary order dealers and principal account dealers, resulting in a potential conflict of interest. Notably, the separation finally took place in October 2014.
Further, the bank failed to disclose its 29,000 short-selling orders from January 2014 to November 2014. Moreover, in February 2015, position limits were breached, which resulted in a stock option contract exceeding the limit by more than 300 contracts on a trading day.
Additionally, between June 2012 and March 2016, Morgan Stanley failed to follow the instructions of an asset manager to report large open positions on a delegated basis.
Read more: https://www.zacks.com/stock/news/229220/morgan-stanley-fined-24m-on-internal-control-failures
-
Magazine Editor Left Red-Faced After "Reply All" Gaffe
The president and editor of a popular American financial magazine made a huge blunder when, intending to forward a confidential email, he clicked "reply all" instead. The content of the email included a discussion about a buyout and staff layoffs. The email was sent to the entire Wall Street Journal newsroom.
http://www.infosecurity-magazine.com/news/magazine-editor-left-red-faced/
-
This article is a little bit older, but I thought it was quite interesting. It concerns a DDoS attack on a managed DNS provider called DNSimple, which serves a number of extremely popular websites such as Pinterest, Canopy, and Exposure. In 2014 a massive distributed denial of service attack was launched on one of the most critical business days of the year for online retailers, Cyber Monday. The article goes into detail about whose responsibility it is to build fault tolerance or redundancy into critical network services, such as DNS, even when the function is outsourced to a managed service provider. Outsourcing an IT function does not mean you transfer all the associated risk to the provider; it is still the client's bottom line that is impacted if their online marketplace is made unavailable for any reason, including an online act of terror via DDoS attack, on critical shopping days. People do not realize that DNSimple is providing the DNS services; they just know they tried to get to Pinterest to find some great online deals for their Christmas shopping and weren't able to access the website. That creates significant damage to the brand's reputation.
The author goes on to explain that outsourcing the DNS function creates a single point of failure, which should never be the case for such a critical IT service. One of the decisions that leads to overwhelmed servers is setting TTLs (time to live), the value that defines how long resolvers keep local cache records for quicker response times for web surfers, too short. With a short TTL (under 60 seconds), every visitor's lookups keep going back to the nameservers, which is exactly what a DDoS attack does: it overloads the server's resources by sending more requests than the server and/or network can handle. The author suggests that with TTLs of a full week instead of the 60-second guideline, resolvers wouldn't need to contact the nameservers for a full week, and in an outage users would only notice after the TTL expired. Obviously, an outage only has an impact when the end user knows of the outage.
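You can check the TTL a zone hands out yourself; a quick sketch of my own using the third-party dnspython package (the domain is a placeholder):

    import dns.resolver  # third-party package: dnspython

    # The TTL on a DNS answer tells resolvers how many seconds they may
    # serve it from cache before querying the nameservers again; longer
    # TTLs mean fewer queries hitting the provider during an attack.
    answer = dns.resolver.resolve("example.com", "A")
    print(answer.rrset.ttl)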
The second recommendation is a disaster recovery technique used in WAN design all the time: carrier redundancy. The same way you would have WAN connections from both Verizon and AT&T so that if one went down you could re-route traffic to the other, he says best practice should be to use nameservers from different DNS providers. As a general rule of thumb, he recommends using 4-6 redundant nameservers when trying to achieve a 100% SLA on availability. The only way to accomplish carrier- or service-redundant nameservers through multiple DNS providers is to have editable NS records.
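Whether a domain follows that advice is easy to check; another dnspython sketch of my own, again with a placeholder domain:

    import dns.resolver  # third-party package: dnspython

    # List a zone's authoritative nameservers. If every entry belongs
    # to the same provider, that provider is a single point of failure.
    ns = [str(r.target) for r in dns.resolver.resolve("example.com", "NS")]
    print(ns)  # he suggests 4-6 entries, spread across providers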
I thought this was a great example of the type of exposure you can easily overlook when planning your network design and services. Not only did the author outline how real the threats are, he also described readily available ways to mitigate the associated risks and achieve acceptable SLAs on availability for mission-critical websites.
-
Massive DDoS Attack Knocks Out Twitter, Box, Spotify
The article I read talked about the DDoS attack that targeted the New Hampshire-based company Dyn and its managed DNS infrastructure. The company originally said it restored operations around 9:30 a.m. Eastern Time; however, a second attack followed that knocked Twitter and others offline again for some users, especially those on the East Coast of the US. At the time of the report, the attack was ongoing and causing outages and slowness for many of Dyn's customers.
The internet has become very vulnerable: an attack on one provider can lead to attacks on many others. An attacker seeking to disrupt services at multiple websites may succeed simply by hitting one service provider, such as a DNS provider, or a provider of other Internet infrastructure mechanisms.
Mark Chaplain, VP EMEA at Ixia, suggested that organizations can mitigate the impact of these attacks by reducing their attack surface: blocking web traffic from the large numbers of IP addresses globally that are known to be bot-infected or are known sources of malware and DoS attacks.
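A minimal sketch of that kind of attack-surface reduction (my own illustration; the blocked range is a documentation prefix, not a real threat-intelligence feed):

    import ipaddress

    # Drop traffic whose source falls in a known-bad range, as the
    # advice above suggests.
    BLOCKLIST = [ipaddress.ip_network("198.51.100.0/24")]  # invented entry

    def allowed(src_ip: str) -> bool:
        addr = ipaddress.ip_address(src_ip)
        return not any(addr in net for net in BLOCKLIST)

    print(allowed("198.51.100.7"))  # False: inside the blocked range
    print(allowed("203.0.113.9"))   # True: not on the list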
Source:
http://www.infosecurity-magazine.com/news/massive-ddos-attack-knocks-out/ -
Loi,
This is an interesting question that we can't easily answer. He has his reasons to justify his actions, but if we encourage that kind of behavior we might end up in a chaotic world.
-
“American vigilante hacker sends Russia a warning”
It was recently announced, and even discussed in the debate, that US intelligence has identified Russia as being behind the attacks on the DNC and other targets. A vigilante known as "The Jester" (or th3j35t3r in leet speak) decided to take it upon himself to retaliate against a Russian target. He vandalized the Russian Foreign Ministry's website with a message that read, "Comrades! We interrupt regular scheduled Russian Foreign Affairs Website programming to bring you the following important message," and continued, "Knock it off. You may be able to push around nations around you, but this is America. Nobody is impressed." The site belongs to Russia's equivalent of the US Department of State, so the message was visible to the international community. The Jester added that Putin's denial is transparent and that he wants Putin to go back to his "room". The Jester spoke willingly with CNN, saying that the recent massive DDoS attack also spurred him to action, although no culprit has been publicly acknowledged yet. The Jester said he used a code injection technique to modify the website. Because the attack started on the weekend, the message stayed up for a good portion of it.
http://money.cnn.com/2016/10/22/technology/russian-foreign-ministry-hacked/index.html
-
When I buy a new piece of equipment I like when it has a randomly generated password on it, usually in the form of a sticker. Xiongmai should have been doing something like this from the start. Since some of these devices are just circuit boards and the brand isn't listed as Xiongmai, a lot won't be recalled. Without the ability to send updates to these IoT devices, the Mirai botnet will persist for a long time, as many users don't know that their devices are infected.
-
For some bigger companies, a long TTL may reduce the ability of round-robin DNS load balancers to distribute web traffic; that responsiveness is part of what these big companies pay for. Medium-sized companies, though, should be setting longer TTLs.
I like that the article covers 2014's DNS attack, as it shows that the internet decided to just absorb/reduce the risk instead of mitigating it entirely. A lot of people won't realize that a big portion of the internet was down, and the reputation loss falls on each individual site. Since 2014 the mitigation services have become massive companies, but the botnets have also grown in size; it's a modern arms race. -
Today what I shared is about data breaches resulting from stolen electronic devices. It's easier to steal a laptop than to hack a database. Here is what a thief can use to hack your electronic device:
1. Physical access to the system. The most secure server in the world is rendered largely insecure when you let a hacker stand in front of it with a keyboard and monitor. A major portion of security is protecting the server from physical access.
2. Time. If they've taken it, they have all the time in the world to try whatever they want, whenever they want.
So the steps you take to protect your data should be designed to make the attempt harder and less worth it. For the average home user's laptop, if there's hard disk encryption and other protections, the likelihood of the thief getting something worth all that time investment is lowered, and they're much more likely to just wipe the drive and hock it, rather than hack it.
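For a sense of what that protection looks like, here is a minimal sketch using the third-party cryptography package (the file contents and key handling are simplified for illustration):

    from cryptography.fernet import Fernet  # third-party package: cryptography

    # If the key lives somewhere other than the laptop, a thief who
    # images the disk sees only ciphertext. The plaintext is invented.
    key = Fernet.generate_key()             # store this off the device
    token = Fernet(key).encrypt(b"contents of tax-records.csv")
    print(Fernet(key).decrypt(token))       # only the key holder recovers it

-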
The very nature of a DDoS attack is to aggregate many innocuous flows into one large and dangerous flow; the essence of the attack is to overload the target's resources. This means we need to master a new skill: managing networks in overload. This is a problem the military has long faced, since their networks are under active attack by an enemy. Part of the solution is to have clear technical "performance contracts" between supply and demand at ingress and traffic exchange points. These not only specify a floor on supply quality, but also impose a ceiling on demand.
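One common way to impose such a ceiling on demand is a token bucket; a toy sketch of my own, with arbitrary illustration rates:

    import time

    # Token-bucket limiter: tokens refill at a fixed rate, each request
    # spends one, and demand above the agreed ceiling is rejected.
    class TokenBucket:
        def __init__(self, rate: float, capacity: float):
            self.rate, self.capacity = rate, capacity
            self.tokens, self.last = capacity, time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            elapsed = now - self.last
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    bucket = TokenBucket(rate=10, capacity=20)  # sustain 10 requests/sec
    print(bucket.allow())  # True until the bucket empties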
source: http://www.circleid.com/posts/20161024_internet_needs_a_security_and_performance_upgrade/
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Presentation: Slides in PDF format
Presentation: Video