8 takeaways from NIST’s application container security guide

By Tim Mackey, Senior Technical Evangelist for Black Duck Software by Synopsys

 

Link to original article on Synopsys blog, published May 1, 2018

 

Companies are leveraging containers on a massive scale to rapidly package and deliver software applications. But because it is difficult for organizations to see the components and dependencies in all their container images, the security risks associated with containerized software delivery have become a hot topic in DevOps. This puts the spotlight on operations teams to find security vulnerabilities in the production environment.

 

Closely tracking the explosive growth of containers over the last couple of years, Black Duck by Synopsys created OpsSight, our first product that gives IT operations teams visibility into their container images to help prevent applications with open source vulnerabilities from being deployed.

 

Synopsys isn’t the only organization to identify this trend. The National Institute of Standards and Technology (NIST) published the “Application Container Security Guide” in September 2017 to address the security risks associated with container adoption.

 

Chances are, hackers are aware of the growing popularity of containers as well, which is why we compiled eight takeaways from NIST’s report on container security so you can be proactive about vulnerabilities in your production environment.

 

 

1. As the use of containers becomes best practice in DevOps, existing software development and security methodologies could be disrupted.

 

Organizations are adopting containers to accelerate software delivery, embrace flexibility in the production environment, and move to the cloud. NIST recommends that organizations tailor their

 

“…operational culture and technical processes to support the new way of developing, running, and supporting applications made possible by containerization.”1

 

As an example, due to the immutable nature of containers, vulnerabilities found within those containers are not simply fixed or patched with the latest software update. Instead, the base images themselves should be updated and redeployed as new containers entirely. This is an important operational difference, which is why processes and tools might have to be adjusted.

 

“Unlike traditional operational patterns in which deployed software is updated ‘in the field’ on the hosts it runs on, with containers these updates must be made upstream in the images themselves, which are then redeployed.”2
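
In practice, remediation means rebuilding the image from a patched base and rolling the result out, rather than patching live containers. Below is a minimal sketch of that flow in Python; the registry path (registry.example.com/team) and the deployment name (myapp) are hypothetical, while the docker and kubectl commands are standard:

    import subprocess

    REGISTRY = "registry.example.com/team"  # hypothetical registry path
    IMAGE = f"{REGISTRY}/myapp"
    NEW_TAG = "1.4.2"  # version tag for the rebuilt image

    def run(cmd):
        """Run a shell command, echoing it and failing loudly on error."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Rebuild from the updated base image named in the Dockerfile.
    #    --pull forces Docker to fetch the latest base layers.
    run(["docker", "build", "--pull", "-t", f"{IMAGE}:{NEW_TAG}", "."])

    # 2. Push the rebuilt image to the registry.
    run(["docker", "push", f"{IMAGE}:{NEW_TAG}"])

    # 3. Redeploy: point the running deployment at the new image so the
    #    orchestrator replaces old containers instead of patching them.
    run(["kubectl", "set", "image", "deployment/myapp", f"myapp={IMAGE}:{NEW_TAG}"])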

 

 

2. While containers help speed software delivery, they pose new risks to application security.

 

NIST acknowledges the benefits of containers, but cautions:

 

“…when a container is compromised, it can be misused in many ways, such as granting unauthorized access to sensitive information or enabling attacks against other containers or the host OS.”3

 

Just as traditional applications are vulnerable to hackers, containers can be breached. The security risk associated with vulnerabilities in containers can and should be controlled. The most effective and proactive way of doing that is by finding and removing vulnerabilities in base images.

 

 

3. Define a container security strategy and utilize a tool that can help you enforce it throughout the DevOps life cycle.

 

Organizations should adopt tools that “validate and enforce compliance” with container security policies. The most advanced tools enable this enforcement by preventing containers with security vulnerabilities from being deployed.

 

“Organizations should use tools that take the declarative, step-by-step build approach and immutable nature of containers and images into their design to provide more actionable and reliable results… This should include having centralized reporting and monitoring of the compliance state of each image, and preventing noncompliant images from being run.”4

 

Container orchestrators are a good place to start. NIST claims, “Orchestrators should ensure that nodes are securely introduced to the cluster [and] have a persistent identity throughout their life cycle.”5
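
One concrete enforcement point in a Kubernetes-style orchestrator is a validating admission webhook that rejects pods whose images fail policy. The sketch below shows the shape of such a check in Python with Flask; the registry allow-list and the image_has_critical_vulns() lookup are hypothetical stand-ins for your own policy and scanner:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    APPROVED_REGISTRIES = ("registry.example.com/",)  # hypothetical allow-list

    def image_has_critical_vulns(image):
        """Hypothetical hook into a vulnerability scanner's results database."""
        return False  # replace with a real lookup

    @app.route("/validate", methods=["POST"])
    def validate():
        review = request.get_json()
        pod = review["request"]["object"]
        images = [c["image"] for c in pod["spec"]["containers"]]

        allowed, message = True, "ok"
        for image in images:
            if not image.startswith(APPROVED_REGISTRIES):
                allowed, message = False, f"{image} is not from an approved registry"
            elif image_has_critical_vulns(image):
                allowed, message = False, f"{image} has known critical vulnerabilities"

        # AdmissionReview response format expected by the Kubernetes API server
        return jsonify({
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": {
                "uid": review["request"]["uid"],
                "allowed": allowed,
                "status": {"message": message},
            },
        })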

 

 

4. The large-scale use of containers is new—so are the tools to manage them.

 

Don’t rely on traditional security tools that aren’t designed to manage the security risks associated with hundreds or thousands of containers. NIST reports:

 

“…traditional tools are often unable to detect vulnerabilities within containers, leading to a false sense of safety” (p. v; Executive Summary). Rather, “adopt container-specific vulnerability management tools and processes for images to prevent compromises.”6

 

The institute warns, “traditional developmental practices, patching techniques, and system upgrade processes might not directly apply to a containerized environment.”7

 

 

5. Containers should be monitored continuously because new security vulnerabilities are being discovered every day.

 

With hundreds or thousands of containers running at the same time, finding and remediating every newly discovered vulnerability in each container can be a challenge.

 

“…an image created with fully up-to-date components may be free of known vulnerabilities for days or weeks after its creation, but at some time vulnerabilities will be discovered in one or more image components, and thus the image will no longer be up-to-date.”8

 

To ensure containers are secure from newly reported vulnerabilities, NIST suggests organizations “utilize a container-native security solution that can monitor the container environment and provide precise detection of anomalous and malicious activity within it.”9
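
Since an image that was clean at build time can match newly published CVEs later, scanning must be continuous rather than a one-time build gate. A toy sketch of the loop, where scan_image() stands in for whichever scanner API you actually run (hypothetical, as is the image inventory):

    import time

    # Hypothetical inventory of images currently running in production
    DEPLOYED_IMAGES = ["registry.example.com/team/myapp:1.4.2"]

    def scan_image(image):
        """Hypothetical scanner call; returns newly published CVE IDs for the image."""
        return []

    def alert(image, cves):
        print(f"ALERT: {image} now matches {', '.join(cves)} - rebuild and redeploy")

    while True:
        for image in DEPLOYED_IMAGES:
            new_cves = scan_image(image)
            if new_cves:
                alert(image, new_cves)
        time.sleep(6 * 60 * 60)  # re-check every six hours; tune to your risk appetite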

 

 

6. Organizations should ensure their approach to container security scales to their containerized environment.

 

Tools scale; people don't. It takes only one vulnerable container out of thousands to cause a breach, which is why organizations need visibility into every container image simultaneously.

 

Traditional security solutions “may not be able to operate at the scale of containers, manage the rate of change in a container environment, and have visibility into container activity.”10

 

 

7. Gain visibility into each container, and group the containers based on similar security risks.

 

NIST suggests organizations “group containers with the same purpose, sensitivity, and threat posture on a single host OS kernel to allow for additional defense in depth.”11

 

If containers are grouped by purpose and security profile, attackers will have a harder time expanding a compromise of one group into other container groups. Smart grouping makes a breach easier to detect and contain, and it starts with understanding the security risks in each container.
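
At scheduling time, that grouping can be as simple as mapping each workload's sensitivity label to a dedicated node pool, so workloads with different threat postures never share a host kernel. A toy Python sketch (the labels and pool names are invented for illustration):

    # Hypothetical node pools, one per sensitivity tier
    NODE_POOLS = {
        "public": "pool-dmz",
        "internal": "pool-internal",
        "pci": "pool-restricted",
    }

    workloads = [
        {"name": "web-frontend", "sensitivity": "public"},
        {"name": "order-service", "sensitivity": "internal"},
        {"name": "payment-service", "sensitivity": "pci"},
    ]

    for w in workloads:
        pool = NODE_POOLS[w["sensitivity"]]
        # In Kubernetes terms this would become a nodeSelector or affinity rule.
        print(f"schedule {w['name']} onto {pool}")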

 

 

8. “An ounce of prevention is worth a pound of cure.”

Be proactive about container security to prevent breaches before they happen.

 

“Deploy and use a dedicated container security solution capable of preventing, detecting, and responding to threats aimed at containers during runtime.”12

 

To learn more about how Black Duck OpsSight can help IT operations and infrastructure teams tackle some of the most pressing challenges associated with container deployments, check out www.blackducksoftware.com/products/opssight.

 

 

1 National Institute of Standards and Technology, Application Container Security Guide, p. iv; Executive Summary

2 Ibid., p. 13; 3.1.1 Image Vulnerabilities

3 Ibid., p. 17; 3.4.4 App Vulnerabilities

4 Ibid., p. v; Executive Summary

5 Ibid., p. 24; 4.3.5 Orchestrator Node Trust

6 Ibid., p. v; Executive Summary

7 Ibid., p. iv; Executive Summary

8 Ibid., p. 13; 3.1.1 Image Vulnerabilities

9 Ibid., p. vi; Executive Summary

10 Ibid., p. vi; Executive Summary

11 Ibid., p. v; Executive Summary

12 Ibid., p. vi; Executive Summary

 

So you want to implement Quality Assurance… or should it be Quality Control?

By Bill Ferrarini, Senior Quality Assurance Analyst at SunGard Public Sector, and CISQ Member

 

Most companies use these terms interchangeably, but the truth is that Quality Assurance is a preventive method, while Quality Control is a detection method.

 

Don’t shoot the messenger on this one; I know that each of us has a different point of view when it comes to quality. The truth of the matter is that we all have the same goal, but defining how we get there is the difficult part.

 

Let’s take a look at the different definitions taken from ASQ.org.

 

Quality Assurance: The planned and systematic activities implemented in a quality system so that quality requirements for a product or service will be fulfilled.

Quality Control: The observation techniques and activities used to fulfill requirements for quality.

Quality Assurance is a failure prevention system that predicts almost everything about product safety, quality standards, and legality that could possibly go wrong, and then takes steps to control and prevent flawed products or services from reaching the advanced stages of the supply chain.

Quality Control is a failure detection system that uses testing techniques to identify errors or flaws in products, and tests the end products at specified intervals to ensure that the products or services meet the requirements defined during the earlier QA process.

 

 

Just as the definitions differ, so does their scope.

 

To define a company’s Quality Assurance strategy is to specify the process, artifacts, and reporting structure that will assure the quality of the product. To define a company’s Quality Control is to specify the business and technical specifications, release criteria, test plan, use and test cases, and configuration management of the product under development.

 

It is important for a company to agree on the differences between Quality Assurance (QA) and Quality Control (QC). Both of these processes will become an integral part of the company's quality management plan. Without this delineation, a company's quality system could suffer from late deliveries, budget overruns, and a product that does not meet the customer's criteria.

 

Quality Assurance

The ISO 9000 standard for best practices states that Quality Assurance is “A part of quality management focused on providing confidence that quality requirements will be fulfilled.”

 

Quality Assurance focuses on processes and their continuous improvement. The goal is to reduce variance in processes in order to predict the quality of an output.

 

To measure a company’s success in a Quality Assurance implementation, you would do well to monitor the following areas:

  • Best Practices
  • Code
  • Time to Market

Quality Control

The ISO 9000 standard for best practices states that Quality Control is “A part of quality management focused on fulfilling quality requirements.”

 

While QA is built around known best practices and processes, QC is a bit more complicated. To control quality, you need to know, at a minimum, two pieces of information:

  • The Customer’s view of Quality
  • Your company’s view of Quality

There are certain to be gaps between these two views. How well you close those gaps will determine the quality of your product.

 

Other metrics that come into play within a Quality Control environment include:

  • Number of defects found vs. fixed in an iteration
  • Number of defects found vs. fixed in a release
  • Defects by severity level

These are just some of the metrics you would use to measure the success of your Quality Control implementation.
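
As a concrete illustration, the found-versus-fixed metrics above reduce to simple arithmetic once each defect record carries an iteration, a severity, and a fixed flag. The field names in this Python sketch are invented for the example:

    from collections import Counter

    # Hypothetical defect records exported from a bug tracker
    defects = [
        {"id": 1, "severity": "high",   "iteration": 12, "fixed": True},
        {"id": 2, "severity": "low",    "iteration": 12, "fixed": False},
        {"id": 3, "severity": "medium", "iteration": 12, "fixed": True},
    ]

    iteration = [d for d in defects if d["iteration"] == 12]
    found = len(iteration)
    fixed = sum(d["fixed"] for d in iteration)
    print(f"Iteration 12: {found} found, {fixed} fixed ({fixed / found:.0%})")

    # Defects by severity level
    print(Counter(d["severity"] for d in iteration))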

 

Summary

Neither QA nor QC focuses on the “whose fault is it?” question. The goal of a good QA and QC implementation should be to make things better by continuously improving your quality from start to finish. This requires good communication between the QA/QC groups.

 

Key attributes for success are:

  • Participation: Both process owners and users need to provide their expert input on how things “should” work, and define that in a fashion that allows your Quality Control to monitor the function.
  • Transparency: Open communication and the ability to look at all aspects of the process are critical to fully understand and identify both what works and what doesn’t.
  • Clear Goals: The entire team should know the intended results.

So if your company is implementing a Quality Management System, your first priority will be to understand the differences between QA and QC and, once both are established, to measure and improve at every opportunity.

 

About the Author

Bill Ferrarini is a Senior Quality Assurance Analyst at SunGard Public Sector. Bill has over 25 years of experience testing software, hardware, and web browser-based systems. After beginning his career as a software developer, he has devoted himself to furthering the Quality Management movement. He has a diploma in Quality Management and a degree in Video and Audio Production, is a former certified ISO internal auditor, and is an accomplished musician.

Gartner Application Architecture, Development & Integration Summit 2014

Gartner Application Architecture, Development & Integration Summit 2014 will be held December 8 – 10, in Las Vegas, NV. Mark your calendar now and stay up to date on the must-attend event for AADI professionals.

 

Don’t miss out on a robust agenda of the hottest topics in AADI, industry-defining keynotes, top solution providers and the opportunity to network with industry experts and peers. CISQ representatives will be there to speak about the importance of software quality.

 

For more information click here.

CISQ Executive Lunch – Software Quality and Size Measurement in Government Sourcing

Where: Marriott Grand Hotel Flora, Via Veneto, 191, Rome, Italy

When: July 11, 2014

 

Government and industry have been plagued by expensive and inconsistent measures of software size and quality. The Consortium for IT Software Quality has responded by creating an industry-standard measurement specification for Automated Function Points that adheres as closely as possible to the IFPUG counting guidelines, in addition to automated quality measures for Reliability, Performance Efficiency, Security, and Maintainability. Dr. Bill Curtis will describe these specifications and how they can be used to manage the risk and cost of software developed for government and industry use.

What Software Developers Can Learn From the Latest Car Recalls

By Sam Malek, CTO / Co-Founder of Transvive Inc., and CISQ Member

 

If you have been following the news these days, you have probably heard about the recall of some General Motors cars because of an ignition switch issue. The recall is estimated to cover 2.6 million cars (1) and to cost around $400 million (2), or roughly $154 per vehicle. That is a steep price for a 57-cent part that could easily have been replaced on the assembly line.

 

As we enter the third wave of the industrial revolution (Toffler), where information technology is starting to dominate major parts of everyday life, software is becoming a critical component of day-to-day activities: from the coffee machine that might be running a small piece of code to the control unit that governs a vehicle, and everything in between.

 

However, with today's flood of news about applications that have made millions, even billions, of dollars for their developers, the stories we hear about the development life cycle do not necessarily highlight quality. Even in some recent documentaries and blog articles, the software development cycle is portrayed as a mad rush to get software out the door without proper attention to its quality.

 

The case for higher software quality becomes even more important when an application touches a critical aspect of our daily lives. For example, drivers do not expect the onboard instrumentation to shut down or crash due to a software malfunction, especially while driving on a highway.

 

While software defects have many origins (including design defects, requirements defects, and coding defects), coding defects account for the highest percentage of defects when measured as defects per function point. Dr. Capers Jones estimates that about 35% of any software application's defects are attributable to code defects (3). These code defects can easily be detected and fixed while the software is being manufactured, at a cost that is very small compared with fixing them later, as GM found out with its ignition switch issue.

 

The process of developing software is maturing. In the past, the focus was on extensive testing in multiple areas such as user acceptance testing, regression testing, and scalability testing. Today there are quality tools that enable early detection of coding, structural, security, and reliability defects during the manufacturing process. These tools can highlight potential issues within the software, thereby reducing the risk of fixing defects at later stages.

 

Late inspection of software can cause rework and expose technical debt, potentially making the cost of fixing defects or the cost of a change anywhere from 40 to 100 times greater than the cost of fixing those very same defects when they were first created (Boehm, 2004). This alone can make the case for implementing early quality monitoring tools.

 

Early inspection is not new. In fact, it was an integral part of the quality revolution of the 1980s through the work of the late W. Edwards Deming, who was well known at the time as the father of quality.

 

While software practices such as waterfall methodologies have focused on detecting defects in later cycles, we can learn from the quality revolution and harness the “mistake proofing” technique, called “poka-yoke” in Japanese, to prevent defects automatically. The purpose of mistake proofing is to prevent defects, and the rework they cause, before the final product is in the hands of its users.

 

Over the past few years, we have seen many IT shops implement proactive diagnosis only on the operational side of IT, such as proactive network and security monitoring. A smaller number of development shops have also integrated proactive defect tracking and fixing within software application development. As a result, these shops deliver the highest possible quality work and have the highest customer satisfaction.

 

If you look at the history of auto manufacturing, which began in the United States in the 1890s, it took the industry almost 90 years to start learning the true meaning of total quality (even though it was already practiced elsewhere in the world, especially in Japan) after imported automobiles gained market share. The key question is: how many years will the software industry take to reach the same conclusion?

 

About the Author

Sam Malek is CTO and Co-Founder of Transvive Inc., an application modernization consulting firm. Sam has a track record of aligning business and IT strategies and a passion for helping organizations transform, improve service delivery and achieve operational excellence. Sam has been working with enterprises to design and implement strategies to deliver innovative solutions to complex problems in the Enterprise Architecture and Application Portfolio Management areas, specifically the field of application modernization.

 

References

(1) GM ignition switch probe finds misjudgment but no conspiracy – http://www.cbc.ca/news/business/gm-ignition-switch-probe-finds-misjudgment-but-no-conspiracy-1.2664803

(2) Chevy Aveo Recall Brings GM Total To 13.8 Million – http://washington.cbslocal.com/2014/05/21/chevy-aveo-recall-brings-gm-recall-total-to-13-8-million/

(3) Software Defect Origins and Removal Methods – http://namcookanalytics.com/software-defect-origins-and-removal-methods/

Automating Function Points – ICTscope.ch (SwiSMA/SEE)

Speaker: Massimo Crubellati, CISQ Outreach Liaison, Italy

Location: swissICT Vulkanstrasse, Zurich, Switzerland

 

Abstract:

IT executives have complained about the cost and inconsistency of counting Function Points manually.  The Consortium for IT Software Quality was formed as a special interest group of the Object Management Group (OMG) co-sponsored by the Software Engineering Institute at Carnegie Mellon University for the purpose of automating the measurement of software attributes from source code.

 

One of the measures the founding members of CISQ requested was Automated Function Points, specified as closely as possible to the IFPUG counting guidelines. David Herron, a noted FP expert, led the effort, which has now resulted in Automated Function Points becoming an Approved Specification of the OMG. This talk will discuss the specification and report on experience with its use, including comparisons with manual counts. It will also present methods for using AFPs to calibrate FP estimating methods early in a project, as well as how to integrate automated counts into development and maintenance processes.

 

For more information, click here.

Productivity Challenges in Outsourcing Contracts

By Sridevi Devathi, HCL Estimation Center of Excellence, and CISQ Member

 

In an ever more competitive market, year-on-year productivity gains and output-based pricing models are standard asks in most outsourcing engagements. Mature and accurate sizing is the key to meeting them.

 

It is essential that the challenges stated below are clearly understood and addressed in outsourcing contracts for successful implementation.

 

Challenge 1 – NATURE OF WORK

Not all IT services provided by vendors can be measured using ISO-certified functional sizing measures such as IFPUG FP, NESMA FP, or COSMIC FP (referred to hereafter as Function Points). While pure application development and large application enhancement projects are well served by Function Points, there are no industry-standard sizing methods for projects or work units that are purely technology driven, like the following:

  • Pure technical projects like data migration, technical upgrades (e.g. VB version x.1 to VB version x.2)
  • Performance fine tuning and other non-functional projects
  • Small fixes in business logic, configuration to enable a business functionality
  • Pure cosmetic changes
  • Pure testing projects
  • Pure agile projects

 

Challenge 2 – NEWER TECHNOLOGIES

  • The applicability of Function Points to certain technologies, such as Data Warehousing, Business Intelligence, and Mobility, is not established.
  • While COSMIC is considered the most suitable measure for such technologies, there is not enough awareness of it, nor are there enough data points.

 

Challenge 3 – TIME CONSUMING AND COMPETENCY ISSUES

  • It is of utmost importance to ensure that IFPUG- or COSMIC-certified professionals are involved in sizing; hence there is a dependency on subject matter experts.
  • Appropriate additional effort also needs to be budgeted upfront for sizing applications, releases, and projects.

 

Conclusions and Recommendations

Challenges 1 and 2 can lead to situations where more than 50% of the work done in a given engagement is not sizable. Most clients do not foresee this gap and often expect the size delivered by a vendor to be proportional to the effort they pay for. It is critical to document these challenges and agree on them with the client upfront.

 

Challenge 3 can be addressed through tooling. For example, CAST provides automated FP counts based on code analysis, so it would be worthwhile for IT vendors to validate and ratify CAST's automated FP counts across technologies, architectures, and types of work. While there will be exception scenarios that CAST does not address, the dependency on FP subject matter experts can be significantly reduced. CAST supports the Automated FP standard – http://www.castsoftware.com/news-events/press-release/press-releases/cast-announces-support-for-the-omg-automated-function-point-standard
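
For readers who have never seen a count, the arithmetic at the heart of an unadjusted IFPUG-style count is simple; the expert-dependent part is classifying each function correctly. A rough Python sketch using the standard average-complexity weights, with invented counts (real counting also distinguishes low/average/high complexity per function and applies adjustment factors):

    # Standard IFPUG average-complexity weights per function type
    WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

    # Hypothetical classified counts for a small application
    counts = {"EI": 12, "EO": 8, "EQ": 5, "ILF": 6, "EIF": 2}

    ufp = sum(WEIGHTS[t] * n for t, n in counts.items())
    print(f"Unadjusted function points: {ufp}")  # 48 + 40 + 20 + 60 + 14 = 182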

 

Other IFPUG FP tools, such as Total Metrics, can also be used if manual FP counting is required. While these tools do not remove the dependency on FP subject matter experts, they significantly reduce the overall sizing effort and also speed up impact analysis of changes to existing applications.

 

 

About the Author

Sridevi Devathi has 19 years of IT experience in the areas of estimation centers of excellence, quality management and consulting, IT project management, and presales. She has been with HCL for the past 16 years and currently leads the HCL Estimation Center of Excellence. She holds certifications including CFPS®, PMP®, IQA, CMM ATM, and Six Sigma Yellow Belt. She has taken part in external industry forums, including the CISQ Size technical work group in 2010 (http://it-cisq.org), the IFPUG CPM Version 4.3 review in 2008 (http://www.ifpug.org), and the BSPIN SPI SIG during 2006-2007 (http://www.bspin.org).

Software Quality Challenges in Healthcare Systems – OMG (Boston, MA USA)

Model Based Systems Engineering (MBSE) in Healthcare Summit. Wednesday, June 18, 2014, Boston, MA

 

The OMG Technical Meeting provides IT architects, business analysts, government experts, vendors and end-users a neutral forum to discuss, develop and adopt standards that enable software interoperability for a wide range of industries.

 

On Wednesday, June 18, 2014, Dr. Bill Curtis will be hosting a session on Software Quality Challenges in Healthcare Systems. Here is the abstract:

 

The recent Healthcare.gov debacle highlighted the challenges of software quality in healthcare systems. However, these challenges extend far beyond badly managed government projects. Healthcare has lagged other industry segments in adopting recent advances for improving software quality, such as continuous improvement of both process and product. Generally, organizations building embedded software for medical devices have been ahead of those building business software for administering medical operations and billing. This talk will review how continuous process improvement coupled with lean principles has dramatically improved software in other industry segments and will include a short case study from a medical device manufacturer. It will then discuss the more recent focus on the structural quality of software, which cannot be ensured through traditional testing methods. Structural issues related to Reliability, Performance, Security, and Maintainability will be discussed along with the costs and risks they affect.

 

More information can be found here.