The article I read for this week is called “Scareware/Malvertising Campaign Targets iPhones with Privacy-Busting VPN.” It reports that a scareware campaign has been uncovered that pushes a ‘free’ VPN app called MyMobileSecure to iOS users via rogue ads on popular torrent sites. The VPN app itself appears to be real—but researchers say its privacy policies are dubious, at best. The first interesting thing is the malvertising campaign that’s pushing the app. When an iPhone is used to visit certain sites, a pop-up page plays an ear-piercing beeping sound and claims the device is infected with viruses. According to Malwarebytes Labs, the verbiage is almost hysterical: “We have detected that your Mobile Safari is (45.4%) DAMAGED by BROWSER TROJAN VIRUSES picked up while surfing recent corrupted sites.” Clicking the pop-up takes a person to a fake website advertising the MyMobileSecure VPN, which, it says, will remove “infected applications and files”. Tapping the ‘Remove Virus’ button opens the App Store to download the app. In this particular case, “one cannot help but feel that this VPN application comes with some serious baggage and unfortunately the average user will not take the time to review the fine details. If the intent is to use a VPN to anonymize your online activities, this does almost the opposite.”
Week 9 - Big Data and Its Use in Cyber-Security
This article interviews three executives from three different companies and explains what each is most concerned about in terms of cyber security. Each interviewee had a different “biggest concern,” but all three seem to agree that the solution lies in security software packages. Coming off our conversation about budgets and spending this Saturday, I thought it was interesting that Vince Skinner, VP of IS at D.A. Davidson, received plenty of resources to secure his company, but made it a point to stress that even that doesn’t guarantee anything: “An open checkbook doesn’t guarantee success,” said Skinner. “Even with money you need people, processes and technology to adequately protect a company.”
The CSO at Aon, Anthony Belfiore, shared Skinner’s wariness of placing too much confidence in the latest and greatest cyber security tech. “God forbid someone drop a cyber nuke or DDoS from malware — they can take down a whole environment,” said Belfiore. “If we’re down it doesn’t really matter how secure we are — we have a problem.”
Shadow IT and sanctioned cloud apps are gaining ground in the enterprise. At last count, employees at enterprise-class organizations were using 841 different apps on average, according to Blue Coat Elastica Cloud Threat Labs. It would seem that these days the only things growing as quickly as the proliferation of cloud apps are the security and compliance issues accompanying them. For companies that adopt cloud apps faster than they apply effective security, there are dangerous implications, but the risks associated with cloud apps can be mitigated with technologies available today: CASB gateways, CASB cloud app API integration, and secure web gateways.
As we’ve seen, cloud apps are already an essential part of business in our digital and connected age. SaaS adoption is fast and only accelerating, and many in the executive suite view it as the #1 disruptive technology currently at play in the enterprise. The benefits of cloud apps are many. Compared to the older client-server model of actual software licenses and installations, cloud apps are very cost effective, offer far easier remote access, spin up and adapt very quickly, and can improve both productivity and collaboration.
As many of us in the security industry already know, the presence of Shadow IT can wreak havoc on compliance. When data is going through third-party SaaS applications, for instance, it’s important to understand what security risks those applications pose and whether those risks fall within the guidelines accepted by the relevant compliance standards. These, in many cases, include SOX, PCI-DSS, HIPAA and COBIT, among others.
Compliance is but one example where Shadow IT can cause problems. We’re now in an environment with a great deal of cloud app adoption, often with executive sponsorship. But the problem for IT security and risk professionals is that they often have no way of actually knowing which cloud apps are running on their infrastructure and which employees are using them. Security teams frequently just don’t have the tools to monitor and control any of these cloud apps — and that’s a big and at times very costly problem.
So what does an organization need to do to wrangle Shadow IT and get it under control?
Here are four steps you need to take to solve this problem.
1. Visibility. As I’ve stated above, you need to know which cloud apps are being used. You will need an audit solution such as the Blue Coat Elastica Audit. By taking logs from proxies, firewalls and other appliances on the network, an audit solution will generate a report detailing all the different cloud apps running on your infrastructure and the associated users. A good audit solution will also provide you with the characteristics of those apps. Once this is in place, you’ve gone from having no information whatsoever to knowing exactly which cloud apps are being used in your organization — it’s now no longer Shadow IT. These characteristics are very important to know: what are the risks associated with these apps, and how do you evaluate each of them based on a myriad of different attributes? Ultimately, you want to be able to assign an app some sort of rating: the higher the number, for instance, the less risk it carries and the more business-ready it is. You also want a solution that allows you to set varying levels of characteristics and attributes, such as multifactor authentication, compliance and encryption requirements, among others.
2. Analysis. Here you really need to dive in and explore exactly what risks are associated with the apps you identified in step 1. What precisely makes these apps risky? Do they meet varying compliance requirements? Have you solved for issues of data sovereignty? A quality audit solution will be able to provide an extremely detailed report with all the information needed to undertake the next step.
3. Decision making. OK, you’ve gained visibility, analyzed, and now you’re ready to decide which apps can remain in your environment and which must be shut down. You now have the information to decide which apps you’ll monitor, which will be completely green-lighted and which must be banished to protect your organization. Ideally, you’ll also want an audit solution that allows you to perform a comparative analysis, side by side, of alternative apps to find the one(s) with a lower risk profile. An added bonus is that the decision-making step also can enable cost cutting by consolidating multiple accounts used by different departments within the same organization or by eliminating access to non-sanctioned apps.
4. Enforce controls. This is the step where you really dial things in and control cloud app activities as they occur. You’ll want to set your policies based on your audit solution’s feed and, also, to be able to set those cloud app policies with your proxy. To accomplish this, you will also need the detailed characteristics of those apps — business readiness ratings, risk attributes and the like — fed through the network to your proxy. (A minimal sketch of this audit-to-policy flow follows these steps.)
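To make the four steps concrete, here is a minimal Python sketch of the audit-to-policy flow. Everything in it is hypothetical: the proxy log columns, the per-app risk catalog, and the rating threshold are stand-ins for what a real audit product such as the Blue Coat Elastica Audit would supply from its own app intelligence.

```python
# Hypothetical sketch of the audit-to-policy flow; log format, risk catalog,
# and thresholds are all invented for illustration.
import csv
from collections import Counter

# Hypothetical per-app attributes a vendor's catalog might expose (step 1).
RISK_CATALOG = {
    "dropbox.com":  {"mfa": True,  "encrypted": True,  "business_ready": 8},
    "filehost.xyz": {"mfa": False, "encrypted": False, "business_ready": 2},
}

def audit_proxy_log(path):
    """Step 1 (visibility): count which cloud apps appear in proxy logs."""
    usage = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):        # expects columns: user, dest_domain
            usage[row["dest_domain"]] += 1
    return usage

def decide(usage, min_rating=5):
    """Steps 2-3 (analysis + decision): sanction, monitor, or block each app."""
    decisions = {}
    for app, hits in usage.items():
        attrs = RISK_CATALOG.get(app, {"business_ready": 0})
        if attrs["business_ready"] >= min_rating:
            decisions[app] = "sanction"
        elif hits < 10:                      # rarely used, risk still unknown
            decisions[app] = "monitor"
        else:
            decisions[app] = "block"         # step 4: feed this to the proxy
    return decisions
```

In a real deployment the `decisions` output would be pushed to the proxy as policy (step 4), closing the loop the article describes.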
Of course, it’d be great if this were all you needed to do. But there’s one final step I didn’t include above because it will always be ongoing: continuous monitoring. This whole process doesn’t come to an end once you’ve completed the four steps. You’ll need to keep monitoring because cloud apps change all the time: they’re updated, their risk ratings rise and fall, and new functions will need to be properly vetted, among a host of other always-changing variables.
The steps I’ve outlined above are a great way to get your arms around the big issues of Shadow IT today. The cloud is here to stay, and so long as employees use cloud apps from within an organization’s firewall, we’ll always have to wrangle with Shadow IT, Shadow Data and the attendant problems and risks. Because of this, you’ll need an integrated visibility and control solution that provides the integrated CASB and proxy capabilities listed above.
Nowadays, organizations are collecting and processing massive amounts of information. The more data is stored, the more vital it is to ensure its security. A lack of data security can lead to great financial losses and reputational damage for a company. As far as Big Data is concerned, losses due to poor IT security can exceed even the worst expectations.
Almost all data security issues are caused by the lack of effective measures provided by antivirus software and firewalls. These systems were developed to protect the limited scope of information stored on the hard disk, but Big Data goes beyond hard disks and isolated systems.
Nine Big Data Security Challenges
- Most distributed systems’ computations have only a single level of protection, which is not recommended.
- Non-relational databases (NoSQL) are actively evolving, making it difficult for security solutions to keep up with demand.
- Automated data transfer requires additional security measures, which are often not available.
- When a system receives a large amount of information, it should be validated to remain trustworthy and accurate; this practice doesn’t always occur, however.
- Unethical IT specialists practicing information mining can gather personal data without asking users for permission or notifying them.
- Access controls, encryption and connection security can become dated and inaccessible to the IT specialists who rely on them.
- Some organizations cannot – or do not – institute access controls to divide the level of confidentiality within the company.
- Recommended detailed audits are not routinely performed on Big Data due to the huge amount of information involved.
- Due to the size of Big Data, its origins are not consistently monitored and tracked.
How Can Big Data Security Be Improved?
Cloud computing experts believe that the most reasonable way to improve the security of Big Data is through the continual expansion of the antivirus industry. A multitude of antivirus vendors, offering a variety of solutions, provides a better defense against Big Data security threats.
Refreshingly, the antivirus industry is often touted for its openness. Antivirus software providers freely exchange information about current Big Data security threats, and industry leaders often work together to cope with new malicious software attacks, providing maximum gains in Big Data security.
Here are some additional recommendations to strengthen Big Data security:
- Focus on application security, rather than device security.
- Isolate devices and servers containing critical data.
- Introduce real-time security information and event management.
- Provide reactive and proactive protection.
The cybersecurity waters are teeming with threats from criminals, nation states, and hacktivists, and government agencies do not have the personnel, tools, or time to properly handle the massive amounts of data involved, especially with the attack surface constantly expanding. However, with the ability to discover insights in the very data they are drowning in, big data may be the requisite lifeline.
While 90 percent of government data analytics users report they have seen a decline in security breaches, 49 percent of federal agencies say cybersecurity compromises occur at least once a month as a result of an inability to fully analyze data. These are some of the findings in a new report from MeriTalk (a public-private partnership focused on improving the outcomes of government IT), Navigating the Cybersecurity Equation, which examines how agencies are using big data and advanced analytics to better understand cybersecurity trends and mitigate threats.
The top challenges for feds surrounding cybersecurity as reported by participants were:
- The sheer volume of cybersecurity data, which is overwhelming (49 percent)
- The absence of appropriate systems to gather necessary cybersecurity information (33 percent)
- The inability to provide timely information to cybersecurity managers (30 percent)
Data analytics is playing a significant role in looking at the threat landscape—to determine where weaknesses are, and whether the right strategies and tools are in place. Additionally, the military and the intelligence services share a focus on pursuing penetration testing. It is difficult to do penetration testing on live systems, and the challenge is that you’ll never really be able to test the vulnerabilities across the network for fear of bringing down critical applications. Dell EMC Cyber Solutions Group has been developing applications that enable penetration testing at the database level with zero impact on the database and provide real-time assessments across the network in seconds. As soon as threats are identified, assessments can be run to determine how fit the security solution is. The Cyber Solutions Group is part of Dell EMC, and features expertise in the realms of advanced storage technologies, high availability, cyber security, big data, cloud computing, and data science.
SecureWorks provides proven threat detection. Offered as a service, SecureWorks monitors the environment and looks for threats of any kind, whether from the network, internally, or externally. SecureWorks is based on the Cloudera Apache™ Hadoop® software platform because the volume of attacks happening in a given environment is so high. It’s necessary to be able to monitor logs from network devices, computers, notebooks, servers, and so on. Typically, you don’t have anywhere to put all that log data, and you don’t have anything fast enough to process and analyze it. Hadoop is an open source analytics platform, built from the ground up to address today’s big data challenges. It enables government agencies to load and consolidate data from various sources into the highly scalable Hadoop Distributed File System (HDFS). This data can then be processed using highly distributed compute jobs through the MapReduce framework.
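To illustrate the MapReduce pattern described here (a generic sketch, not SecureWorks’ actual pipeline), the following Python script could be run under Hadoop Streaming to count events per host across consolidated logs. The input format, whitespace-separated lines beginning with a timestamp, source host and event type, is an assumption made for the example.

```python
#!/usr/bin/env python3
# Minimal Hadoop Streaming sketch of log consolidation; assumes each input
# line looks like "timestamp source_host event_type ...". Run as:
#   hadoop jar hadoop-streaming.jar -mapper "logcount.py map" \
#     -reducer "logcount.py reduce" -input /logs -output /counts
import sys

def mapper():
    for line in sys.stdin:
        parts = line.split()
        if len(parts) >= 3:
            print(f"{parts[1]}:{parts[2]}\t1")   # key = host:event_type

def reducer():
    # Hadoop delivers mapper output sorted by key, so equal keys are adjacent.
    current, total = None, 0
    for line in sys.stdin:
        key, _, count = line.rstrip("\n").partition("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```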
The main reason behind the rising popularity of data science is the incredible amount of digital data that gets stored and processed daily. Usually, this abundant data is referred to as “big data” and it’s no surprise that data science and big data are often paired in the same discussion and used almost synonymously. While the two are related, the existence of big data prompted the need for a more scientific approach – data science – to the consumption and analysis of this incredible wealth of data.
In order for cybersecurity professionals to see the greatest possibilities offered by big data and data science it would be ideal to go Back to the Future to see how data insights will unfold. Lacking the time-travel expertise of that movie’s Doc Brown, today’s data scientists must imagine the possibilities of how big-data analysis will inform and educate our world.
The application of data science techniques to cybersecurity relies on the prompt availability of massive amounts of data on which models can be built and tested to extract interesting insights.
How much data is enough?
To give you an idea of how much data needs to be processed, a medium-size network with 20,000 devices (laptops, smartphones and servers) will transmit more than 50 TB of data in a 24-hour period. That means that over 5 Gbits must be analyzed every second to detect cyberattacks, potential threats and malware attributed to malicious hackers!
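That figure holds up as a back-of-envelope calculation, which the short snippet below reproduces: 50 decimal terabytes spread over 24 hours comes to roughly 4.6 Gbit/s, and using binary terabytes (TiB) pushes it to about 5.1 Gbit/s, on the order of the 5 Gbits quoted.

```python
# Back-of-envelope check of the figures above.
tb_per_day = 50
bits_decimal = tb_per_day * 10**12 * 8   # 50 TB (decimal) in bits
bits_binary = tb_per_day * 2**40 * 8     # 50 TiB (binary) in bits
for label, bits in [("decimal", bits_decimal), ("binary", bits_binary)]:
    print(f"{label}: {bits / 86_400 / 10**9:.1f} Gbit/s")
# decimal: 4.6 Gbit/s, binary: 5.1 Gbit/s
```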
While dealing with such volumes of data in real time poses difficult challenges, we should also remember that analyzing large volumes of data is necessary to create data-science models that can detect cyberattacks while minimizing both false positives (false alarms) and false negatives (failing to detect real threats).
The three V’s of context
When discussing big data, the three big “V’s” are often mentioned: Volume, Variety and Velocity.
- Volume: large quantities of data are necessary to build robust models and properly test them. If a data scientist is relying on machine learning to build a model, large data samples are necessary to understand and extract new features, and properly estimate the performance of the model before deploying it in production environments. Also, when a given model is based on simple rules or heuristic findings, it is of paramount importance to test it out on large data samples to assess performance and the possible rate of false positives. When the data sample is “large” enough and has enough “variability”, the data scientist can try to identify different ways of categorizing the data and unexpected properties of the data may become evident.
- Variety: This term usually refers to the number of types of data available. From the point of view of data organization, this refers to structured data (e.g., data that follows a precise schema) versus unstructured data (e.g., log records or data that involves a lot of text). For cybersecurity data science models, “Variability” really matters more than “Variety.” Variability refers to the range of values that a given feature could take in a data set.
The importance of having data with enough variability in building cyber security models cannot be stressed enough, and it’s often underestimated. Network deployments in organizations – businesses, government agencies and private institutions – vary greatly. Commercial network applications are used differently across organizations, and custom applications are developed for specific purposes. If the data sample on which a given model is tested lacks variability, the risk of an incorrect assessment of the model’s performance is high. If a given machine learning model has been built properly (e.g., without “overtraining”, which happens when the model picks up very specific properties of the data on which it has been trained), it should be able to generalize to “unseen” data. However, if the original data set lacks variability, the chance of improper modeling (for example, misclassification of a given data sample) is higher. (A small sketch of how this is checked in practice follows this list.)
- Velocity: the amount of digital information increases more than tenfold every five years, according to The Economist article “Data, data everywhere”. If a data scientist has to analyze hundreds of millions of records and every single query to the data set requires hours, building and testing models would be a cumbersome and tedious process. Being able to quickly iterate through the data, modify some parameters in a particular model and quickly assess its performance are all crucial aspects of the successful application of data science techniques to cybersecurity.
Volume, Variety, and Velocity (as well as Variability) are all essential characteristics of big data that have high relevance for applying data science to cyber security.
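As a small illustration of the generalization point made under Variety (and of the false positive/false negative trade-off mentioned earlier), here is a minimal sketch using scikit-learn. The features and labels are synthetic placeholders, not a real network data set; the point is only that performance must be estimated on held-out data, so an overtrained model cannot hide.

```python
# Sketch: estimate generalization on held-out data; features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                  # stand-in traffic features
y = (X[:, 0] + rng.normal(scale=0.5, size=5000) > 1).astype(int)  # "malicious"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Confusion matrix on unseen data: the off-diagonal cells are exactly the
# false positives and false negatives the text says must both be minimized.
print(confusion_matrix(y_te, model.predict(X_te)))
```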
The challenge of detecting and preventing advanced persistent threats may be answered by using Big Data style analysis. These techniques could play a key role in helping detect threats at an early stage, using more sophisticated pattern analysis, and combining and analyzing multiple data sources. There is also the potential for anomaly identification using feature extraction.
Today, logs are often ignored unless an incident occurs. Big Data provides the opportunity to consolidate and analyze logs automatically from multiple sources rather than in isolation. This could provide insight that individual logs cannot, and potentially enhance IDS and IPS through continual adjustment, effectively learning “good” and “bad” behaviors.
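As a toy illustration of what cross-source log analysis could look like, the sketch below consolidates a few per-host features from different log sources and flags the host whose combined behavior is anomalous. The feature choices and the use of an isolation forest are my own assumptions for the example, not a method named in the article.

```python
# Sketch: flag hosts whose combined cross-log behavior deviates from the norm.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per host: [firewall denies/hr, auth failures/hr, bytes out (MB)/hr]
features = np.array([
    [ 2,  1,   40],
    [ 3,  0,   55],
    [ 1,  2,   38],
    [90, 45, 9000],   # a host no single log source might flag on its own
])

model = IsolationForest(contamination=0.25, random_state=0).fit(features)
print(model.predict(features))   # -1 marks the anomalous host, 1 the rest
```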
Integrating information from physical security systems, such as building access controls and even CCTV, could also significantly enhance IDS and IPS to a point where insider attacks and social engineering are factored in to the detection process. This presents the possibility of significantly more advanced detection of fraud and criminal activities.
At the very least, Big Data could result in far more practical and successful SIEM, IDS and IPS implementations.
The article I read for this week is called “How Big Data is Improving Cyber Security.” It reports that big data and analytics are showing promise in improving cyber security. 90% of respondents from MeriTalk’s new US government survey said they have seen a decline in security breaches, and 84% said they have used big data to help block these attacks. Companies that are already heavy analytics users have greater confidence when it comes to using analytics to detect threats. On the other hand, there are still many challenges, as new cyber security threats pop up daily. 53% said they are using analytics for their overall strategy and 28% are using it in a limited capacity. Even so, 59% said their agency has been compromised at least once per month because they were not able to keep up and fully analyze the data.
Cyber security needs the risk management and actionable intelligence that big data analysis commonly provides. While it is great to have tools that can analyze data, the key is to automate tasks so that the data is available more quickly and the analysis reaches the right people on time. This allows analysis to classify and categorize cyber threats without the long delays that could make the data irrelevant to the attack at hand. Some might believe that big data will quickly solve the problems of the cyber security industry. The reality is that data and analytics will allow companies to identify anomalies and advanced attack vectors.
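A trivial sketch of that kind of automation, with severity rules and routing targets invented purely for illustration:

```python
# Sketch: classify incoming alerts and route them without manual triage.
def classify(alert: dict) -> str:
    if alert.get("source") == "ids" and alert.get("confidence", 0) > 0.9:
        return "critical"
    if alert.get("repeat_count", 0) > 100:
        return "high"
    return "low"

ROUTES = {"critical": "soc-oncall@example.com",   # hypothetical addresses
          "high":     "soc-queue@example.com",
          "low":      "weekly-report@example.com"}

alert = {"source": "ids", "confidence": 0.95, "repeat_count": 3}
severity = classify(alert)
print(severity, "->", ROUTES[severity])           # critical -> soc-oncall@...
```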
Most information regarding data center analytics platforms focuses on how and what to implement, as opposed to why.
With the growing need for data, data centers are becoming increasingly complex. The added layers of virtualization and distributed services blur a once-clear data flow map. Additionally, the continued expansion of hybrid and multi-cloud environments creates borderless networks that are a challenge to manage and suffer from a loss of end-to-end visibility. Yet the added complexity being designed into modern data centers is important.
Today’s business world requires a data center that allows for application flexibility and scalability. So, while complexities do indeed create new challenges in the data center, IT operations managers must learn to adapt to those challenges. And one way to solve these types of challenges is through the combined use of data center analytics and automation.
Solving the problems of increasing layers of virtualization, distributed workflows, and a need to easily move data and applications around at will largely revolves around two pieces of information. First, there is the need to understand application dependencies. These are the resources that a single application requires to function. They include virtual machines, containers, and microservices, as well as storage, networking and any other physical or virtualized infrastructure components that are necessary for it to work.
The second component is to understand the data flows between these application-specific dependencies and how end users interact with the application. With the information that can be mined using IT ops collection tools, one can automate the process of creating a real-time application dependency map of the entire data center landscape, both private and public.
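For a feel for what such a map is, here is a minimal Python sketch that builds one from flow records and walks it to list everything an application depends on. The flow records and tier names are made up; real platforms mine this from NetFlow or agent telemetry.

```python
# Sketch: build an application dependency map from (src, dst, port) flows.
from collections import defaultdict

flows = [
    ("web-1", "app-1",   443),    # invented flow records
    ("app-1", "db-1",   5432),
    ("app-1", "cache-1", 6379),
]

dependency_map = defaultdict(set)
for src, dst, port in flows:
    dependency_map[src].add((dst, port))

def dependencies(node, seen=None):
    """Walk the map to find everything `node` transitively depends on."""
    seen = seen if seen is not None else set()
    for dst, _ in dependency_map.get(node, ()):
        if dst not in seen:
            seen.add(dst)
            dependencies(dst, seen)
    return seen

print(dependencies("web-1"))   # the three tiers web-1 relies on
```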
With the power of an application dependency map, the layers of virtualization, lack of visibility and distribution of application resources simply melt away. What we’re left with is an easy-to-grasp layout of how an application truly functions on your network. Following are examples of how day-to-day IT operations management of network security, application mobility and disaster recovery can all benefit from this analysis.
Using the information gained at the application level regarding specific dependencies and communication flows, data center administrators can simply allow access for those communications and feel confident in blocking everything else. So instead of attempting to manually determine application dependencies using tools such as protocol analyzers and NetFlow collectors, a data center analytics platform automates this entire process. Most platforms also maintain a data flow history. This creates historical data-flow behavior baselines. Ultimately, algorithms can be configured to alert on deviations from the baselines that could indicate a security breach.
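As a rough sketch of that baselining-and-alerting idea, the snippet below keeps a rolling history of bytes per flow and raises an alert when a new observation deviates sharply. The window size and the 3-sigma rule are my own illustrative choices, not any platform’s actual algorithm.

```python
# Sketch: rolling data-flow baseline with a simple 3-sigma deviation alert.
import statistics
from collections import deque

class FlowBaseline:
    def __init__(self, window=100):
        self.history = deque(maxlen=window)   # rolling data-flow history

    def observe(self, bytes_sent: float) -> bool:
        """Record a sample; return True if it deviates from the baseline."""
        alert = False
        if len(self.history) >= 10:           # wait for a minimal baseline
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            alert = abs(bytes_sent - mean) > 3 * stdev
        self.history.append(bytes_sent)
        return alert

baseline = FlowBaseline()
for b in [500, 520, 480, 510, 495, 505, 490, 515, 500, 508, 9_000]:
    if baseline.observe(b):
        print(f"possible breach: {b} bytes deviates from baseline")
```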
While virtually every enterprise data center has a disaster recovery plan in place, the static nature of these plans doesn’t fare well in modern data centers that are constantly changing. As network and security policies are updated, they can have negative effects on DR plans to the point where recovery procedures no longer work. The same application dependency and data flow information we collect and use to solve security issues can also be used to automatically update disaster recovery processes at your private or cloud-operated DR site. This is a tremendous benefit that will significantly cut down on the time it takes to maintain DR plans while also eliminating the potential for human error.
Turning to the use of big data in cybersecurity in the government setting, a number of high-profile use case examples show how the Internet of Things (IoT) is taking a firm hold in helping government agencies collect and find insights from data sources, and how cybersecurity and data analytics are helping to secure government applications.
Big data is helping federal agencies to properly handle the massive amounts of data involved, especially with the attack surface constantly expanding; as noted above, data analytics plays a significant role in examining the threat landscape and in the military and intelligence services’ penetration testing efforts.
81 percent of federal agencies say they’re using data analytics for cybersecurity in some capacity—53 percent are using it as a part of their overall cybersecurity strategy, and 28 percent are using it in a limited capacity.