- Feb
- Risk Analysis: The problem of probability
Walt Williams
An overview of the traditional use of probability in quantitative models of risk analysis, and a proposal or two for a better approach. Traditionally, risk is treated as the product of impact and probability. While we learned in elementary school not to combine apples and oranges, somehow NIST expects us to believe that multiplying probability by impact gives meaningful results. This presentation will look at ways to calculate probability meaningfully, the value of doing so in risk analysis, and what the relationship between probability and impact actually is and why it's important to understand it.
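As a rough illustration of the point (my sketch, not material from the talk), the snippet below contrasts the naive "probability times impact" score with a simple Monte Carlo estimate of annual loss; the event rate and loss figures are made-up examples.

```python
import random

# Naive "risk = probability x impact" score: multiplying a likelihood
# by a dollar impact collapses a distribution into one point estimate.
likelihood = 0.3          # assumed annual probability of the event
impact = 250_000          # assumed dollar loss if the event occurs
print(f"naive score: {likelihood * impact:,.0f}")

# A more meaningful alternative: simulate many years and look at the
# distribution of losses, not just a single product.
def simulate_year(p=0.3, loss_low=50_000, loss_high=500_000):
    """One simulated year: the event either happens or it doesn't, and
    when it happens the loss is drawn from a range, not a constant."""
    if random.random() < p:
        return random.uniform(loss_low, loss_high)
    return 0.0

trials = [simulate_year() for _ in range(100_000)]
mean_loss = sum(trials) / len(trials)
worst_decile = sorted(trials)[int(0.9 * len(trials))]
print(f"expected annual loss: {mean_loss:,.0f}")
print(f"90th percentile loss: {worst_decile:,.0f}")
```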
Walter Williams served as an infrastructure and security architect at firms as diverse as GTE Internetworking, State Street Corp, Teradyne, The Commerce Group, and EMC. He has since moved into security management, serving at IdentityTruth and Passkey; he now manages security at Lattice Engines. He is an outspoken proponent of design before build, an advocate of frameworks and standards, and has spoken at Security B-Sides on risk management as the cornerstone of a security architecture. His articles on Security and Service Oriented Architecture have appeared in the Information Security Management Handbook. He sits on the board of directors for the New England ISSA chapter and was a member of the program committee for Metricon 8. He has a master's degree in Anthropology from Hunter College.
Slides on Realistic and Affordable Quantitative Information Security Risk Management (PDF, 1.9MB)
Risk analysis matrix used during the interactive part of the talk: ISO 27005 Risk Measurement Matrix against the BITS threat catalog (Excel xlsx, 800K)
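For readers without the spreadsheet, a minimal sketch of an ISO 27005-style matrix lookup is below: risk is read from a grid indexed by likelihood and impact ratings rather than computed by multiplication. The grid values here are illustrative placeholders, not the standard's own table.

```python
# Ordinal rating scales (assumed five-level scales for illustration).
LIKELIHOOD = ["very low", "low", "medium", "high", "very high"]
IMPACT = ["very low", "low", "medium", "high", "very high"]

# Risk levels indexed as MATRIX[impact][likelihood]; example values only.
MATRIX = [
    [0, 1, 2, 3, 4],
    [1, 2, 3, 4, 5],
    [2, 3, 4, 5, 6],
    [3, 4, 5, 6, 7],
    [4, 5, 6, 7, 8],
]

def risk_level(likelihood: str, impact: str) -> int:
    """Look up risk from the grid instead of multiplying ratings."""
    return MATRIX[IMPACT.index(impact)][LIKELIHOOD.index(likelihood)]

print(risk_level("medium", "high"))  # -> 5
```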
- Apr
- Evaluating Distributed File System Performance
Jeff Darcy
The first part of this talk will cover general issues such as the effect of different workloads, measurement pitfalls, and common cheats used by storage vendors. The second part will introduce common tools such as fio and IOzone to measure storage performance.
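As a hedged example of the kind of measurement the talk covers (my sketch, not the talk's own material), the snippet below drives fio from Python and reads its JSON report; the job parameters are illustrative, and fio must be installed with /tmp/fiotest writable.

```python
import json
import os
import subprocess

os.makedirs("/tmp/fiotest", exist_ok=True)

# A small random-read job; parameters are assumptions for illustration.
cmd = [
    "fio",
    "--name=randread",            # job name
    "--directory=/tmp/fiotest",   # where the test file lives
    "--rw=randread",              # random read workload
    "--bs=4k",                    # 4 KiB block size
    "--size=256M",                # test file size
    "--runtime=30",
    "--time_based",
    "--output-format=json",       # machine-readable report
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
report = json.loads(result.stdout)

for job in report["jobs"]:
    read = job["read"]
    # fio's JSON reports bandwidth ("bw") in KiB/s
    print(f'{job["jobname"]}: {read["iops"]:.0f} IOPS, {read["bw"]} KiB/s')
```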
In the third part, Jeff will demonstrate how to set up and test a popular distributed file system using these tools, and how to analyze the results. Most importantly, attendees will learn to recognize anomalies in their own tests, and misleading results in others', so that they can get an accurate picture of each system's capabilities and limitations.
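One simple sanity check in this spirit (my sketch, not from the talk): repeated runs of the same benchmark should broadly agree, so large run-to-run spread is itself an anomaly worth investigating before trusting any single number. The IOPS figures below are made up.

```python
from statistics import mean, stdev

# Illustrative IOPS results from five repeated runs of one benchmark.
# A single outlier like the last run often points to caching effects,
# background activity, or a misconfigured test.
runs = [41200, 40850, 41500, 40990, 58700]

cv = stdev(runs) / mean(runs)  # coefficient of variation
print(f"mean={mean(runs):.0f} IOPS, cv={cv:.1%}")
if cv > 0.10:  # threshold is an arbitrary rule of thumb
    print("warning: results vary too much between runs to be trusted")
```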
Jeff Darcy has been working on distributed storage since 1989, when that meant DECnet and NFSv2. Since then he has played a significant role in the development of clustered file systems, continuous data protection, and other areas. He is currently a developer at Red Hat, with the rare opportunity to work on two open-source distributed file systems - GlusterFS and Ceph - at once.
Slides on Evaluating Distributed File System Performance (from Google Docs)