Interviews | June 5, 2019

Phishing Programs Need to Focus on Attack Types, Tactics & Themes


Rohyt Belani
CEO & Co-Founder

Cofense

Q1. Why are organizations having such a hard time dealing with the phishing threat considering all the awareness around the issue? What are they doing wrong or insufficiently?

For many organizations, current approaches to phishing defense focus on (a) perimeter technology and (b) a mistaken approach to educating users to catch phish that get past the perimeter—the belief that simply driving down user susceptibility, or click rate, keeps the organization secure.

Regarding perimeter controls: the phishing threat landscape is continuously evolving. Just as technology catches up, threat actors innovate their tactics and techniques to maximize the chances of delivery and successful payload execution. It's an uncomfortable truth of phishing defense: despite increasing investments in next-generation perimeter and inline controls, significant quantities of malicious emails are still reaching the inbox.

Regarding user education: substantial investments are made to increase phishing awareness. The effectiveness of these programs in reducing risk is assessed through periodic phishing 'tests', or phishing simulations. In these tests, too much emphasis is placed on the number of recipients who click, or 'fail,' the test. Organizations need to understand that phish 'testing' is the enemy of true phishing defense. Real phish are the real problem, even though phishing simulation has a vital role to play in conditioning users to be aware of evolving threats.

Programs should focus on the attack types, tactics, themes and motivators the organization faces, with the goal of ensuring that users recognize these shifting threats and are enabled and empowered to report them. By concentrating on the phish reporting rate, the true metric of phishing resilience and overall program success, organizations will build a mature phishing defense. They will also be able to operationalize the capability of their users to identify the real attacks that technical controls have failed to prevent.
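To make the difference between click rate and reporting rate concrete, here is a minimal Python sketch that computes both for a single simulation campaign, along with a reports-to-clicks ratio. The function, field names, and that ratio are illustrative assumptions, not a metric Cofense prescribes.

```python
# Illustrative only: simulation metrics from hypothetical per-campaign counts.
def simulation_metrics(recipients: int, clicks: int, reports: int) -> dict:
    """Compute click rate, reporting rate, and a reports-to-clicks ratio."""
    click_rate = clicks / recipients if recipients else 0.0
    reporting_rate = reports / recipients if recipients else 0.0
    # Ratio of users who reported vs. users who clicked; higher suggests a
    # more resilient user base for this scenario (assumed ratio, for illustration).
    resiliency_ratio = reports / clicks if clicks else None
    return {
        "click_rate": round(click_rate, 3),
        "reporting_rate": round(reporting_rate, 3),
        "resiliency_ratio": round(resiliency_ratio, 2) if resiliency_ratio else None,
    }

# Example: 1,000 recipients, 80 clicked, 240 reported
print(simulation_metrics(1000, 80, 240))
# {'click_rate': 0.08, 'reporting_rate': 0.24, 'resiliency_ratio': 3.0}
```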

Q2. Enterprises seem to be putting a lot of effort into detecting and stopping phishing attacks. But considering that more than nine in 10 breaches are the result of a phish, should organizations be focusing more on response and damage containment?

First, it needs to be understood that there is no third-party lab or certification to test the claims of anti-phishing technologies. Vendors can claim they are trying their hardest, but not a single one of them publishes a metric like 'Time of Phishing Tactic Discovery to Prevention.' In a Cofense study, 90% of IT executives surveyed said they worry most about email-related threats. To address this, over 80% make significant and increasing investments in secure email gateway and anti-malware solutions to stop phishing threats.

Whilst these technologies provide some protection, they are not the silver bullet that many vendors claim. The Cofense Phishing Defense Center (PDC) receives and analyses suspicious emails reported by over 2 million global users. In 2018, 1 in 7 of the emails the PDC received was found to be malicious. Every one of those emails had passed through one or more layers of controls before being deemed safe to deliver to the recipient's inbox. It's clear, therefore, that the 'nets' have holes. Technical controls simply cannot stop all phishing threats. It's essential that all organizations accept that, every day, malicious emails will be delivered to their intended recipients. They must also accept that when technology fails, security teams often lack the visibility they need.

To build a more effective phishing defense, the primary focus should be on improving this visibility, including visibility into the scope and scale of an attack, and on the capabilities needed to rapidly reduce exposure and mitigate risk.

This begins with creating a network of human sensors — the collective group of users who are conditioned to recognize phishing threats — and enabling and encouraging them to report the danger. Security Operations teams then need to consume, prioritize and analyze user reports faster and better, so they can understand the risk and drive decisive actions. Activities such as hunting for the threat within the email environment need to be optimized. Typically, processes to support this activity are hampered by the mail environment, namely the inability to hunt for the whole threat vs. individual emails, and by a reliance on other teams with conflicting priorities.
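As a simple illustration of hunting the whole threat rather than triaging individual emails, the sketch below groups user-reported messages into candidate campaigns by shared indicators. The report schema is a hypothetical assumption and does not represent Cofense Triage or any specific product.

```python
# Illustrative sketch: cluster user-reported emails into candidate campaigns
# by shared indicators, so responders can hunt the whole threat rather than
# handling one email at a time. The report schema is hypothetical.
from collections import defaultdict

reports = [
    {"reporter": "alice", "sender_domain": "invoices-acme.example", "url": "http://pay.example/x"},
    {"reporter": "bob",   "sender_domain": "invoices-acme.example", "url": "http://pay.example/x"},
    {"reporter": "carol", "sender_domain": "hr-update.example",     "url": "http://forms.example/y"},
]

campaigns = defaultdict(list)
for r in reports:
    key = (r["sender_domain"], r["url"])  # shared indicators define a campaign
    campaigns[key].append(r["reporter"])

# Prioritize the most widely reported campaigns first
for key, reporters in sorted(campaigns.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"campaign {key}: {len(reporters)} reports -> hunt mailboxes for these indicators")
```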

The phishing threat landscape remains in a state of constant mutation. The goals and objectives of phishing attacks have remained largely the same: financial gain, data theft, espionage and plain disruption. However, the tools and tactics developed by our adversaries are continuously evolving. This is what makes it so difficult for technology and resource-constrained Security Operations teams to keep up, let alone get ahead.

Increasingly prevalent tactics include geo-location- and IP-aware threats, the abuse of cloud file-sharing services, and the creative use of previously unseen file types to deliver threats that defeat and deceive existing controls. All of this means that organizations cannot simply rely on technology to provide a protective shield around their users, assets and data. And this is not going to change. As vendors race to mitigate exploited weaknesses, they're already several steps behind.

Cofense enables organizations to think differently, to truly understand the real phishing threat. Since Black Hat 2018, Cofense has worked tirelessly to apply our unique perspective on the phishing problem to the capabilities that we deliver. We enable our customers to stop phishing attacks in their tracks. Recent innovations have included:

Responsive Delivery — a unique capability that is changing the game in phishing simulation. Responsive Delivery works by looking for 'proof of life' in a user's inbox. Only when the user is proven to be active will the platform deliver the simulation email — maximizing chances of user interaction and the desired educational outcome.

Cofense Vision — building on our announcement from Black Hat 2018, we now enable Security Operations teams to hunt and quarantine phishing threats that have bypassed perimeter controls. Look for enhancements to integration with Cofense Triage, a new stand-alone threat hunting interface, and enriched API endpoints to support further integration with tools in the existing security stack, such as SIEM and SOAR.


Justin Fier
Director of Cyber Intelligence & Analytics

Darktrace

Q1. How is the growing adoption of Software as a Service (SaaS) models complicating the cybersecurity challenge for enterprise organizations? What new requirements does SaaS impose from a threat detection and mitigation standpoint?

In the face of novel threats, machine-speed attacks, and the security skills shortage, many security teams are struggling to effectively secure their on-prem infrastructure even before adopting SaaS applications, which compounds an already difficult task. Add to this the lack of visibility and control afforded by SaaS models, and the new mindset required by the agility and speed of digital business, and cloud and SaaS become some of the most vulnerable elements of the modern enterprise.

The most common pitfall I witness is a misunderstanding of who is responsible for securing the data that traverses SaaS applications. While vendors are responsible for securing infrastructure and applications and have to conform to high security standards, enterprises must ensure that user and network activity is properly managed and secured. This leads to a specific set of threat vectors that a company must be able to defend against but which most security offerings are ill equipped to detect at an early stage. These include insider threat, compromised credentials, misconfigurations, and unsecured APIs.

In order to detect these threats, it's critical that a business has visibility into and across SaaS applications. Darktrace detected one instance of compromised credentials where a CFO was logged in to Salesforce from Cincinnati, but was attempting to log in to Office365 from Lagos at the same time. This threat would have been impossible to detect without the ability to correlate activity across applications and unified visibility across the entire enterprise. A siloed approach to security creates blind spots and vulnerabilities.
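A minimal sketch of that kind of cross-application correlation appears below: it flags logins by the same user from locations that are impossible to travel between in the elapsed time. The event schema, coordinates, and speed threshold are illustrative assumptions, not Darktrace's implementation.

```python
# Illustrative "impossible travel" check across SaaS login events.
# Event schema, coordinates, and the speed threshold are assumptions.
from math import radians, sin, cos, asin, sqrt

def km_between(a, b):
    """Great-circle distance between (lat, lon) pairs, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(h))

MAX_KMH = 900  # roughly airliner speed; anything faster is suspicious

def impossible_travel(events):
    """events: list of (user, app, (lat, lon), unix_timestamp)."""
    alerts, last = [], {}
    for user, app, loc, ts in sorted(events, key=lambda e: (e[0], e[3])):
        if user in last:
            prev_app, prev_loc, prev_ts = last[user]
            hours = max((ts - prev_ts) / 3600, 1e-6)
            if km_between(prev_loc, loc) / hours > MAX_KMH:
                alerts.append((user, prev_app, app))
        last[user] = (app, loc, ts)
    return alerts

# Example: Salesforce login from Cincinnati, Office 365 login from Lagos minutes later
events = [
    ("cfo", "salesforce", (39.10, -84.51), 1_559_700_000),
    ("cfo", "office365",  (6.52, 3.38),    1_559_700_600),
]
print(impossible_travel(events))  # [('cfo', 'salesforce', 'office365')]
```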

Due to how easily and quickly SaaS services can be spun up, security or risk officers are often not consulted before massive amounts of sensitive data are sent out through non-vetted applications or protocols. A security team's role now goes well beyond the perimeter to the fast-moving edge. This speed means it's more important than ever that companies can detect threats early, enabling security professionals to take action before damage is done.

Q2. What are some key contributors to success when it comes to leveraging AI and ML in security operations?

When it comes to thinking about the effectiveness of AI, it's critical to consider what types of machine learning are underpinning the AI application. Most frequently, organizations rely on supervised machine learning, which must be trained on vast quantities of data. Typically, AI cyber security applications are trained on historical attacks and past threats. While AI driven by supervised machine learning might create some efficiency gains for the business, it suffers from the same flaws and shortcomings as rules and signature-based approaches: an inability to detect novel attacks. Alternatively, unsupervised ML that learns on the job in business environments can detect anomalies indicative of the earliest stages of cyber-threat, including never-seen-before threats, malicious and non-malicious insider threat, policy violations, and other threats that might be unique to a business.
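As a rough illustration of the unsupervised approach (a general sketch, not a representation of Darktrace's models), the snippet below trains scikit-learn's IsolationForest on "normal" activity features and scores new observations for anomalies. The features and data are hypothetical.

```python
# Illustrative unsupervised anomaly detection over per-user activity features.
# Features and data are hypothetical; this shows the general technique only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per observation: [logins_per_hour, MB_downloaded, distinct_apps]
normal_activity = np.column_stack([
    rng.poisson(3, 500),        # typical login frequency
    rng.normal(40, 10, 500),    # typical download volume
    rng.integers(1, 5, 500),    # typical number of SaaS apps touched
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

new_observations = np.array([
    [3, 42, 2],     # looks like learned behavior
    [40, 900, 12],  # burst of logins and mass download
])
print(model.predict(new_observations))  # typically [ 1 -1]; -1 flags the anomaly
```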

Another key contributor to success is ensuring that the security team trusts the AI. Often, AI can be a black box that makes decisions without explaining the logic or reasoning behind them. When leveraging AI in security operations, technologies that explain what activity or action led to their recommendations help to build a team's confidence in the results and suggestions.

Building trust is especially important when AI begins to take action on behalf of security teams. Darktrace built a mode into our cyber AI response technology called 'human confirmation mode.' When in this mode, the AI doesn't take action to stop threats, but details what steps it would take and why. Over time, a company becomes confident that the AI will take the right action and can switch the technology into fully active mode.

Finally, it's important to set your expectations and understand AI's limitations. AI and autonomous response won't make your security team obsolete; they will simply free the team up to focus on more strategic tasks. Analysts will be able to move beyond dashboards and parsing log files, since the AI will explain and visualize data in a way that makes sense. Analysis will become a more interactive process.

Q3. What are Darktrace's plans at Black Hat USA 2019? What do you want attendees to know about your company and its strategy for the next few years?

2019 marked the three-year anniversary of the launch of Darktrace's autonomous response technology, Antigena. Employed by 500 companies around the world and stopping an attack every three seconds, Antigena is the first proven application of autonomous response technology in the enterprise.

Most response technologies work by automating responses in a predefined playbook. In the face of increasingly advanced threats and with the promise of AI attacks on the horizon, it's not enough to merely automate encoded human knowledge. Informed by an evolving and comprehensive understanding of a business, AI-powered autonomous response can respond to stop never-before-seen threats in real time. Instead of stopping an attack by taking entire offices offline or locking up individual devices, Antigena takes precise, proportionate action to stop a threat in its tracks without causing other business disruption.

This February, we launched our Antigena Response Module for Email, bringing Darktrace's intelligent autonomous response to what remains one of the most common ways attackers gain access to enterprises. This update also included new response modules for Cloud. Our presentation at Black Hat this year will offer a deep dive into autonomous response, discussing how it works, barriers to adoption, real world success stories, and even use cases we've seen outside of security.

We view autonomous response as the future of cyber defense, transforming vulnerable organizations into self-defending digital businesses. We're continuing to expand our autonomous response capabilities, ensuring we can detect and respond to attacks across every aspect of the modern digital business. We're also exploring where else AI can save time for security teams, such as streamlining investigations or automating the SOC.


Josh Douglas
Vice President, Product Management for Threat Intel

Mimecast

Q1. How has the email security challenge evolved in recent years? What's different about protecting against e-mail borne threats these days compared to a few years ago?

Over the years, organizations have gone from direct malware-based attacks, to link-based attacks, to impersonation attacks that rely entirely on social engineering to gain credentials to cloud applications. Those credentials hold the keys to the kingdom for many customers. This means that as a security partner, we have to know about our customers' third-party suppliers, advanced attack techniques, and the open source intelligence available about our customers' environments. At the end of it all, organizations still face the challenge of human error. Without it, these attacks would not be successful.

Q2. Why has threat intelligence become a priority for enterprise organizations? How can threat intelligence better protect you against email threats?

Threat intelligence is an extremely important capability for enterprises today, but only when it delivers value. Value comes in the form of proactive actions an organization can take before it is compromised; IoCs merely indicate that a compromise has already happened. By using counter-intelligence efforts to understand the risk exposed to an attacker, organizations can cut off vulnerabilities and better train the employees most prone to human error.

Q3. What is Mimecast's messaging going to be at Black Hat USA 2019? What are you hoping enterprises will learn about your company at the event?

Post-breach threat intelligence (aka IoCs) is not proactive threat intelligence. Understanding your risk is more important than understanding an attacker's arsenal or their attribution. An enterprise will never be able to prosecute an attacker, nor will it be able to take offensive measures against their infrastructure. That is a zero-sum game. Controlling what you can will elevate your game and divert attacks to softer targets.


Stu Solomon
Chief Strategy Officer

Recorded Future

Q1. How is threat intelligence making a difference for enterprise organizations? What's the business case for using it?

Enterprises that use threat intelligence have a better understanding of which external threats may impact their overall enterprise risk, and how. This empowers leaders to make more informed operational security and architectural investment decisions by equipping them with a broader view of the organization in both global and digital contexts. Threat intelligence brings external threats to light and enables those threats to be contextualized so they can be acted on with confidence.

The most immediate business case for threat intelligence is to optimize security operations by aligning teams and facilitating better communications within a common operating picture. Functionally, that means teams with threat intelligence will spend less time generating their own collection requirements, analyzing data in a silo, and building ad hoc methods of communicating findings amongst themselves. Threat intelligence also reduces the likelihood of analytical errors by removing resource intensive phases like information processing and analysis from already overworked SOC, IR, and vulnerability management teams.

Another business case for threat intelligence is that it presents options for long-term, strategic planning. Here again, emphasis is placed on the connection between threat intelligence and organizational risk. Not only can threat intelligence be configured to work with automated systems to maintain short-term security posture; it can (and should) also be used by senior executives and boards when determining investments. In this way, threat intelligence can be as tailored and versatile as an enterprise needs it to be, capable of blocking tactical threats while empowering decision makers with information about the world as it pertains to the organization.

Q2. What are some of the biggest challenges organizations face in gathering, integrating and using threat intelligence these days? Where in the process do the breakdowns typically happen?

Since its inception, the business of intelligence has had high barriers to entry and has generally been limited by human capacity and activity. There was a time not so long ago when the only organizations capable of producing intelligence were governments, owing to the difficulty of entering far-off countries and accessing intelligence sources without raising suspicion, and of doing so at any meaningful scale. Hollywood produces a constant supply of images of lone-wolf operatives traveling the world, sneaking around locked offices, speaking multiple languages, and hanging around cocktail parties and casinos to get the information they need. In truth, those flashy portrayals are only a fraction of what it takes to produce intelligence.

It can be helpful to be aware of what is called the "intelligence life cycle." This is an operational model that shows what it takes to put together what is called finished intelligence. The cycle first calls for the development of objective-based requirements, or the intelligence questions that need to be answered. From there, those go to teams specializing in creating and executing a sourcing plan to gather the raw and (hopefully) relevant data. And after only two of the five steps, this is where Hollywood tends to stop. Raw collection then goes to processing. After processing, information is handed to trained analysts who review the contextual significance of the information and make probability-based assessments that align with the original requirements. Finally, the analysis is polished and disseminated, again based on the requirements expressed at the beginning of the cycle. Those in charge of planning review the finished intelligence, reassess their requirements, and the cycle starts again.

The sheer complexity of establishing, at scale, the kind of team necessary to produce finished intelligence that is timely, accurate, and actionable makes this a challenge for enterprises seeking to institute the function. The majority of breakdowns tend to occur as individuals are forced to multitask, reaching over and through the phases of the cycle to improve throughput and reduce costs. Worse is when biases creep in because the organization has not established the discipline necessary to accept and socialize controversial (but objective) findings. Of course, this process is ultimately constrained by both the capacity and the availability of analysts to perform the work.

Enterprises that seek third-party threat intelligence face challenges as well. For one, as well defined as intelligence is within a certain community, the word has been co-opted by organizations that produce what is more accurately referred to as threat data or threat feeds. These can be helpful for some blocking, but are replete with false positives and information that is irrelevant to a specific enterprise. The challenge then is finding finished intelligence that is tailored to the context of an enterprise and provides some basis for action.
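To make the gap between raw feeds and tailored intelligence concrete, here is a minimal sketch that filters a generic indicator feed against an organization's own context (technology stack and indicators already handled). The feed format and context fields are hypothetical assumptions, not Recorded Future's data model.

```python
# Illustrative sketch: reduce a generic threat feed to indicators relevant to
# one enterprise. Feed format and context fields are hypothetical.
feed = [
    {"indicator": "198.51.100.7", "type": "ip",     "targets": ["wordpress"]},
    {"indicator": "evil.example", "type": "domain", "targets": ["citrix"]},
    {"indicator": "203.0.113.9",  "type": "ip",     "targets": ["exchange"]},
]

org_context = {
    "tech_stack": {"exchange", "citrix"},   # what we actually run
    "already_blocked": {"198.51.100.7"},    # avoid duplicate work
}

relevant = [
    item for item in feed
    if set(item["targets"]) & org_context["tech_stack"]
    and item["indicator"] not in org_context["already_blocked"]
]

for item in relevant:
    print(f"actionable: {item['type']} {item['indicator']} (targets {item['targets']})")
```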

Q3. What exactly is machine learning powered threat intelligence? How is it helping improve threat detection and response?

Before diving into how it supports threat intelligence, it is important to define machine learning. Machine learning is a subspecialty of artificial intelligence. It is a practical science that brings together computer programming, statistics, and subject matter expertise to develop trainable models that can quickly make sense of large, unorganized datasets. Machine learning is an incredibly powerful tool for threat intelligence teams given all the sources and sensors rapidly gathering and publicly transmitting information around the world today.

Recorded Future uses machine learning to collect and process the vast amount of data that is openly available on the Internet, including the un-indexed deep web and the dark web. To date, with the help of machine learning we have crafted more than one billion Intelligence Cards. This would be a monumental task for humans to accomplish on their own and completely infeasible to accomplish within any realistic operational timeframe. Therein lies the challenge. In cyberspace, a threat actor can attack an organization from anywhere in the world and accomplish a set of objectives in a remarkably short amount of time.

If threat intelligence relied solely on manual collection and processing, by the time actionable intelligence was available, the damage would have already been done. We see examples of this today where advanced persistent threats can go unnoticed in systems for months at a time, allowing attackers to move around the network at their leisure, establishing clear paths for data exfiltration.

With machine learning, threat intelligence approaches what looks more like a continuous function. That means security teams can achieve more complete situational awareness, take actions to proactively protect against imminent threats, and respond to incidents faster. And although complete intelligence analysis still requires human creativity, intuition, and expertise, the intelligence cycle is made more efficient and more effective with machine learning.

Q4. What can attendees at Black Hat USA 2019 expect from Recorded Future? What do you anticipate they'd be most interested in hearing about?

Black Hat attendees can expect us to talk about our latest innovation in threat intelligence: Certified Datasets. Over the past six months, we have built a new structured data analytics pipeline to be able to run analytics on raw technical data to produce insights that will deliver prevention- and detection-grade original intelligence.

This has been a massive and collaborative effort by Recorded Future's Insikt, Data Science, and R&D teams and demonstrates a huge step forward for our company to be a producer of original technical intelligence at this scale. Simply put, this kind of threat intelligence can't be found anywhere else.

We will also focus on the onward consumption and usage of intelligence in human-enabled and machine-enabled workflow integrations. We work hard with our clients and partners to ensure that our intelligence allows the end user to take action with confidence and thus derive true decision advantage.

More generally, we think the Black Hat crowd will be interested to learn how Recorded Future produces and delivers highly accessible threat intelligence. We understand the challenges teams face across the security organization (and even throughout the enterprise) and have adapted our products and services to make intelligence second nature and indispensable.

One example of how we are democratizing intelligence is through the recent release of our Recorded Future Express browser extension, which seamlessly layers real-time intelligence over any SaaS application directly in the browser. Between Express and our core applications, which include mobile accessibility, security professionals can look to Recorded Future for the best in threat intelligence whenever and wherever they need it.

We look forward to discussing threat intelligence and how Recorded Future is simplifying the process of integrating actionable intelligence into existing security workflows.


Andreas Kuehlmann
Co-General Manager of the Synopsys Software Integrity Group

Steve McDonald
Co-General Manager of the Synopsys Software Integrity Group

Synopsys

Q1. Andreas, how has the growing adoption of serverless computing, hybrid cloud, microservices, containerization, and other trends complicated the application security testing challenge for enterprise organizations?

There are three key trends that are impacting application development and changing the way we test applications for security:

First, applications used to (and many still do) run behind a corporate firewall, which provides basic security protection. As they move to cloud or mobile platforms, this protection goes away, and the attack surface shifts directly to the application. Similarly, embedded software used to run on stand-alone, unconnected devices, precluding remote intrusions by an attacker. Now devices are increasingly networked, making them and their embedded software applications vulnerable to cyberattacks. As a result, the applications themselves have become the focal point of cyber-protection and need to be hardened for security, including their functional code, interfaces, and interactions with other software components (and possibly also hardware).

Second, the speed of application development has dramatically increased, driven by business needs and facilitated by agile development processes, CI/CD flow automation, and DevOps support. In response, AppSec testing tools must support fast turnaround times, fit into an automated, staged CI/CD flow, and provide comprehensive reporting to support quick deployment decisions. Moreover, higher release agility shifts the responsibility to address security issues to the developers. To minimally impact productivity, AppSec testing tools must be highly accurate, with a low false-positive rate, and offer actionable remediation advice.

Third, application architectures are shifting away from monolithic executables and are now built as a set of microservices, allowing a LEGO-style composition of higher-level functions from lower-level building blocks that communicate over web APIs. Most importantly, they support a flexible and scalable operational architecture based on containers and automated deployment orchestration to public/private clouds or a hybrid combination of them. As a result, the cyberattack surface has significantly broadened due to the increased numbers of APIs and the expanded scope involving many services, potentially running across multiple compute platforms. For this reason, AppSec tools, including SAST, DAST, SCA, and IAST, must be container, microservices, cloud infrastructure, and network aware. Furthermore, comprehensive API security testing has become a critical element of an overall AppSec testing strategy.
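As a sketch of how an AppSec scan can be wired into an automated, staged CI/CD flow with a quick pass/fail decision (per the second trend above), the snippet below runs a hypothetical scanner, parses its report, and fails the build on high-severity findings. The scanner command, report schema, and threshold are illustrative assumptions, not a specific Synopsys tool.

```python
# Illustrative CI gate: run a (hypothetical) AppSec scanner, parse its JSON
# report, and fail the pipeline if high-severity findings exceed a budget.
import json
import subprocess
import sys

MAX_HIGH_SEVERITY = 0  # block the release on any new high-severity finding

def run_scan(report_path: str = "scan-report.json") -> list:
    # Placeholder command; substitute your organization's actual scanner CLI.
    subprocess.run(["appsec-scanner", "--output", report_path], check=True)
    with open(report_path) as f:
        return json.load(f)["findings"]

def gate(findings: list) -> int:
    high = [f for f in findings if f.get("severity") == "HIGH"]
    for f in high:
        print(f"HIGH: {f.get('rule')} at {f.get('file')}:{f.get('line')}")
    return 1 if len(high) > MAX_HIGH_SEVERITY else 0

if __name__ == "__main__":
    sys.exit(gate(run_scan()))
```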

Q2. Steve, from a capabilities standpoint, what does it take to manage the risk posed by the use of open source software in modern application development?

First and foremost, you need to know what's in your code. You obviously can't patch something you don't know you have, but open source is hard to keep track of. Whether a developer pulls in a piece of code they used in a prior-generation product or downloads something from a website, it's up to the development organization to maintain an accurate and up-to-date inventory of all open source components in use within the applications their company deploys. In short, you first need to know what is in your product.

Armed with this visibility, the development organization then needs to be able to enforce open source policies during development so that problematic open source can be flagged and addressed early in the SDLC. The challenge here is matching this governance with the increasing speed of development. So it's critical that any enforcement methodology can be automated and integrated with the CI/CD and DevOps tools and processes the development teams are using.

Finally, and perhaps most importantly, once software is in production, it's imperative to be able to quickly and reliably determine when newly reported vulnerabilities affect the software, so that patches can be deployed before hackers have the opportunity to exploit them. This is why building an accurate inventory ahead of time is key. As an example, the Apache Struts issue was a result of this visibility gap. Had the affected organization known what open source was in their code, they could have easily found and fixed the problem before hackers compromised their customer data.
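A minimal sketch of that inventory-to-advisory matching is below: it checks a recorded list of open source components against newly published vulnerability advisories. The inventory and advisory entries are hypothetical examples, and a real implementation would rely on an SCA tool and full CVE data rather than this simplified version comparison.

```python
# Illustrative sketch: match an open source inventory (a simple SBOM) against
# newly reported vulnerability advisories. The data shown is hypothetical.
inventory = [
    {"component": "org.apache.struts:struts2-core", "version": "2.3.31"},
    {"component": "com.fasterxml.jackson.core:jackson-databind", "version": "2.9.9"},
]

advisories = [
    {"id": "CVE-2017-5638", "component": "org.apache.struts:struts2-core",
     "affected_below": "2.3.32", "severity": "CRITICAL"},
]

def version_tuple(v: str):
    return tuple(int(part) for part in v.split("."))

def affected(inventory, advisories):
    hits = []
    for item in inventory:
        for adv in advisories:
            if (item["component"] == adv["component"]
                    and version_tuple(item["version"]) < version_tuple(adv["affected_below"])):
                hits.append((adv["id"], item["component"], item["version"], adv["severity"]))
    return hits

for cve, component, version, severity in affected(inventory, advisories):
    print(f"{severity}: {component} {version} is affected by {cve}; schedule a patch")
```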

Q3. Andreas, what are some of the key attributes that organizations should be looking for when shopping for an IAST product, and why?

Many are hailing interactive application security testing (IAST) as the next step in the evolution of application security testing. Gartner expects IAST adoption to have exceeded 30% by 2019 as it provides significant advantages over other testing methodologies.

No matter which IAST solution an organization chooses, there are a few key features you should look for:

  1. Fast, accurate, and comprehensive results out of the box, with a low false-positive rate: Development and QA teams need to focus on quickly finding critical security vulnerabilities and must avoid wasting time on debugging false positives or tuning tools to reduce them.
  2. Automated identification and verification of vulnerabilities: Advanced IAST tools automatically verify the validity of complex vulnerabilities, further reducing or even completely eliminating false positives.
  3. Detailed security guidance and remediation advice: To maintain their productivity, developers need detailed and contextual information about vulnerabilities, where they are located in the source code, and how to remediate them.
  4. Comprehensive support for microservices: For complete coverage of modern application architectures, an IAST solution must be able to easily cross multiple microservices from a single application for assessment.
  5. Ease of deployment in automated CI/CD workflows: Seamless integration into multistage CI/CD pipelines is critical for quickly establishing development environments, especially for organizations that have to support a large number of projects.
  6. Updated security dashboards for standards compliance: An IAST solution must support reporting on security standards such as PCI DSS, OWASP Top 10, and CWE/SANS Top 25 to provide organizations insight into security risks, trends, and coverage, as well as security compliance for operating web applications.
  7. Sensitive data tracking: To achieve compliance with key industry security standards such as PCI DSS and GDPR, advanced IAST products support tracking of sensitive data in applications.
  8. SCA binary analysis capability: Teams need visibility into security vulnerabilities and license types in open source and third-party components used in an application. Advanced IAST tools support binary analysis to identify these components and alert teams of known security vulnerabilities or license violations.

Q4. Steve, which of Synopsys' announcements and developments over the past year would you highlight at Black Hat USA 2019? What can attendees at the event expect to hear about the company's plans for the next few years?

In February 2019, Synopsys announced the first release of the new Polaris Software Integrity Platform, which is aimed at integrating the components of our product portfolio into an easy-to-use solution that enables security and development teams to build secure, high-quality software faster.

The vision of the Polaris platform was inspired by our customers. They asked us to integrate our various point solutions into a single platform that simplifies installation, usage, and maintenance; supports cloud, on-premises, and hybrid deployments; and is in tune with modern development workflows. When architecting Polaris, we had three key stakeholders in mind:

  1. The developer and the security engineer want a single UI and a unified workflow to understand vulnerability findings and remediation advice from different tools, and a unified method to triage the results.
  2. The security manager, CISO, or other executive wants a single-pane-of-glass view to assess the security posture of their application portfolio, with consolidated results from their various testing tools.
  3. The DevOps engineer wants a unified method to install, upgrade, and maintain the platform and deploy it in a scalable manner.

The Polaris platform is composed of two main components. The Polaris Code Sight IDE plugin serves as a developer workbench, running the tools locally in an incremental mode. This helps developers address many security issues as they code, essentially preventing issues from getting checked into the code repository in the first place. The Polaris Central Server is integrated into the central CI/CD workflow and ensures that any remaining defects are caught before the application goes to production.

Going forward, we will continuously expand the capabilities of Polaris using this strategy by adding more technology and services components, broadening support of workflow integrations, and improving individual tool capabilities. In particular, we are exploring how our different core technologies can supplement one another, making their combination significantly more powerful than the sum of the individual pieces.
