Cyber-attack causes Rouen hospital to turn to pen and paper

A cyber-attack on a hospital in Rouen last week caused “very long delays in care”, reports the AFP news agency. Medical staff at the French city’s University Hospital Centre (CHU) were forced to abandon their PCs after ransomware made them unusable, a spokesman said. Instead, staff returned to the “old-fashioned method of paper and pencil”, said head of communications Remi Heym.

Abusing Web Filters Misconfiguration for Reconnaissance, (Fri, Nov 22nd)

Yesterday, an interesting incident was detected while working at a customer SOC. They use a “next-generation” firewall that implements a web filter based on categories. This is common in many organizations today: users’ web traffic is allowed or denied based on a URL categorization database (with categories like “adult content”, “hacking”, “gambling”, and so on). How was it detected?

We received notifications about suspicious traffic based on the detection of “bad” websites (read: “not allowed” by the firewall policy). The alert read something like:

The IP x.x.x.x tried to access the URL xxxxxxxxx (matching category: xxxxxxxxx)

Asterisk Project Security Advisory – AST-2019-008

Product: Asterisk
Summary: Re-invite with T.38 and malformed SDP causes crash.
Nature of Advisory: Remote Crash
Susceptibility: Remote Authenticated Sessions
Severity: Minor
Exploits Known: No
Reported On: November 07, 2019
Reported By: Salah Ahmed
Posted On: November 21, 2019
Last Updated On: November 21, 2019
Advisory Contact: bford AT sangoma DOT com
CVE Name: CVE-2019-18976

Will Your WAF Know When You Are Compromised?

In my last blog post, “The Existential Crisis of a WAF,” I talked through the consequences of an attack getting through because of a rule not matching, a device misconfiguration, or traffic obfuscation. The latter includes the inability to decrypt and parse the traffic, which was the case with the Equifax breach. I also discussed Trusted Execution™, a new technology that provides huge value over rule-based network security devices, including:

How to use CI/CD to deploy and configure AWS security services with Terraform

Like the infrastructure your applications are built on, security infrastructure can be handled using infrastructure as code (IAC) and continuous integration/continuous deployment (CI/CD). In this post, I’ll show you how to build a CI/CD pipeline using AWS Developer Tools and HashiCorp’s Terraform platform as an IAC tool for AWS WAF deployments. AWS WAF is a web application firewall that helps protect your applications from common web exploits that could affect availability, compromise security, or consume excessive resources.

Terraform is an open-source tool for building, changing, and versioning infrastructure safely and efficiently. With Terraform, you can manage AWS services and custom-defined provisioning logic. You create a configuration file that describes to Terraform the components needed to run a single application or your entire AWS footprint. When Terraform consumes the configuration file, it generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure.
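
To make that workflow concrete, here is a minimal, hypothetical configuration file; the provider Region and topic name are placeholders rather than values from this post, and the comments show the commands Terraform runs through its lifecycle:

# main.tf - a minimal illustrative configuration (names are placeholders)

provider "aws" {
  region = "us-east-1"  # placeholder Region
}

# A single managed resource; Terraform records it in the state file.
resource "aws_sns_topic" "example" {
  name = "example-topic"  # hypothetical resource, used only to show the workflow
}

# Typical lifecycle:
#   terraform init   - downloads the AWS provider plugin
#   terraform plan   - generates an execution plan (the changes to make)
#   terraform apply  - executes the plan and builds the resources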

In this solution, you’ll use Terraform configuration files to build your WAF, deploy it automatically through a CI/CD pipeline, and retain the WAF state files in a durable backend so they can later be referenced, changed, or destroyed through subsequent deployments. The CI/CD solution is flexible enough to deploy many other AWS services, security-related or otherwise, using Terraform. For a full list of supported services, see HashiCorp’s documentation.

Note: This post assumes you’re comfortable with Terraform and its core concepts, such as state management, syntax, and command terms. You can learn about Terraform here.

Solution Overview

Figure 1: Architecture diagram

For this solution, you’ll use AWS CodePipeline, an automated continuous delivery service, to form the foundation of the CI/CD pipeline. CodePipeline helps you automate your release pipeline through build, test, and deployment stages. For the purpose of this post, I will not demonstrate how to configure any test or deployment stages.

The source stage uses AWS CodeCommit, AWS’s fully managed, Git-based source code management service, which you can interact with via the console and CLI. CodeCommit encrypts the source at rest and in transit, and is integrated with AWS Identity and Access Management (IAM) so you can customize fine-grained access controls to the source.

Note: CodePipeline supports different sources, such as S3 or GitHub – if you’re comfortable with those services, feel free to substitute them as you walk through the solution.

For the build stage, you’ll use AWS CodeBuild, a fully managed CI service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild runs builds according to a build specification: a YAML file containing the build commands, variables, and related settings for the build.

Finally, you’ll create a new Amazon Simple Storage Service (S3) bucket and Amazon DynamoDB table to durably store the Terraform state files outside of the CI/CD pipeline. These files are used by Terraform to map real world resources to your configuration, keep track of metadata, and to improve performance for large infrastructures.
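
As a sketch of how Terraform ties these two resources together, a backend configuration along the following lines tells Terraform to store its state in S3 and to lock it via DynamoDB; the bucket name and state file key here are hypothetical, while the table name matches the one suggested in Step 1:

# backend.tf - example remote-state configuration (names are placeholders)
terraform {
  backend "s3" {
    bucket         = "example-terraform-state-bucket"  # S3 bucket created in Step 1
    key            = "waf/terraform.tfstate"           # object key for the state file
    region         = "us-east-1"                       # Region the bucket lives in
    dynamodb_table = "terraform-state-lock-dynamo"     # lock table created in Step 1
    encrypt        = true                              # encrypt the state file at rest
  }
}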

For the purpose of this post, the security infrastructure resource deployed through the pipeline will be AWS WAF, specifically a global Web ACL that can attach to an Amazon CloudFront distribution, with sample SQL injection and blacklist filtering rules.
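
To give a flavor of what those configuration files contain, here is a trimmed, hypothetical sketch of a global Web ACL with a blacklist rule; a SQL injection rule follows the same pattern using an aws_waf_sql_injection_match_set. The resource names and sample CIDR are illustrative, not the exact configuration deployed by this pipeline:

# waf.tf - illustrative sketch of a global (CloudFront-compatible) Web ACL

# An IP set holding the addresses to blacklist (documentation CIDR only).
resource "aws_waf_ipset" "blacklist" {
  name = "exampleBlacklistSet"

  ip_set_descriptors {
    type  = "IPV4"
    value = "192.0.2.0/24"  # replace with real addresses
  }
}

# A rule that matches requests originating from the IP set above.
resource "aws_waf_rule" "blacklist" {
  name        = "exampleBlacklistRule"
  metric_name = "exampleBlacklistRule"

  predicates {
    data_id = aws_waf_ipset.blacklist.id
    negated = false
    type    = "IPMatch"
  }
}

# The Web ACL itself: allow by default, block whatever the rule matches.
resource "aws_waf_web_acl" "example" {
  name        = "exampleWebACL"
  metric_name = "exampleWebACL"

  default_action {
    type = "ALLOW"
  }

  rules {
    priority = 1
    rule_id  = aws_waf_rule.blacklist.id
    type     = "REGULAR"

    action {
      type = "BLOCK"
    }
  }
}

Because these are the global (classic) waf resources rather than the regional wafregional ones, the resulting Web ACL is the kind that can be attached to a CloudFront distribution.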

The deployment steps will be as shown in Figure 1:

  1. Push the artifacts (Terraform configuration files and a build specification) to a CodePipeline source.
  2. CodePipeline automatically invokes CodeBuild and downloads the source files.
  3. CodeBuild installs and executes Terraform according to your build specification.
  4. Terraform stores the state files in S3 and a record of the deployment in DynamoDB.
  5. The WAF Web ACL is deployed and ready for use by your application teams.

Step 1: Set-up

In this step, you’ll create a new CodeCommit repository, S3 bucket, and DynamoDB table.

Create a CodeCommit repository

  1. Navigate to the AWS CodeCommit console, and then choose Create repository.
  2. Enter a name and description, and then choose Create. You will be taken to your repository after creation.
  3. Scroll down, and then choose Create file, as shown in Figure 2:
     

    Figure 2: CodeCommit create file

  4. You will be taken to a new screen where you can create a sample file. Write readme into the text body, name the file readme.md, and then choose Commit changes, as shown in Figure 3:
     

    Figure 3: CodeCommit editing files

Note: You need to create a sample file to initialize your master branch; the file will not interfere with the build process, and you can safely delete it later.

Create a DynamoDB table

  1. Navigate to the Amazon DynamoDB console, and then choose Create table.
  2. Give your table a name like terraform-state-lock-dynamo.
  3. Enter LockID as your Primary key, keep the box checked for Use default settings, and then choose Create, as shown in Figure 4.

Note: Copy the name and ARN of the DynamoDB table because you will need it later when configuring your Terraform backend and CodeBuild service role.

 

Figure 4: Create DynamoDB table
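
If you prefer to codify this table rather than click through the console, a hypothetical Terraform equivalent follows; note the chicken-and-egg caveat that backend resources are usually created once, manually or from separate local state, before the pipeline exists. The capacity values mirror the console’s default settings:

# An equivalent of the console steps above (run with local state).
resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "terraform-state-lock-dynamo"  # must match the backend's dynamodb_table
  hash_key       = "LockID"                       # Terraform writes its lock entries under this key
  read_capacity  = 5                              # console default settings
  write_capacity = 5

  attribute {
    name = "LockID"
    type = "S"  # string attribute
  }
}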

Create an S3 bucket

  1. Navigate to the Amazon S3 console, and then choose Create bucket.
  2. Enter a unique name, choose the Region where you built the rest of your resources, and then choose Next.
  3. Enable Versioning and Default encryption, and then choose Next.
  4. Select Block all public access, choose Next, and then choose Create bucket.

Note: Copy the name and ARN of the S3 bucket because you will need it later when configuring your Terraform backend and CodeBuild service role.
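
Likewise, a hypothetical Terraform sketch of the bucket settings above (versioning, default encryption, and blocked public access; the bucket name is a placeholder and must be globally unique):

# An equivalent of the console steps above (run with local state).
resource "aws_s3_bucket" "terraform_state" {
  bucket = "example-terraform-state-bucket"  # placeholder name
  acl    = "private"

  versioning {
    enabled = true  # retain prior versions of the state file
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"  # SSE-S3 default encryption
      }
    }
  }
}

# Block all public access to the state bucket.
resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket                  = aws_s3_bucket.terraform_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}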

Step 2: Create the CI/CD pipeline

In this step, you will create the rest of your pipeline using CodePipeline and CodeBuild. If you have decided to not use CodeCommit, read CodePipeline’s documentation here about other sources.

  1. Navigate to the AWS CodePipeline console, and then choose Create pipeline.
  2. Enter a Pipeline name, select New service role, and then choose Next, as shown in Figure 5:
     

    Figure 5: CodePipeline settings

  3. Select AWS CodeCommit as the Source provider, select the name of the repository you created, and then choose master as your Branch name.
  4. Choose Amazon CloudWatch Events (recommended) as your detection option, and then choose Next, as shown in Figure 6:
     

    Figure 6: CodePipeline source stage

  5. For Build provider, choose AWS CodeBuild and change your region as needed, and then choose Create project.

    Important: Selecting Create Project will open a new screen in your browser with the AWS CodeBuild console; do not close the browser because you will need it!

  6. Enter a Project name and description, and then scroll to the Environment section.
  7. For Environment image, choose Managed image, and then configure the following sub-selections, as shown in Figure 7:
    1. Operating system: Ubuntu
    2. Runtime(s): Standard
    3. Image: aws/codebuild/standard:1.0
    4. Image version: Always use the latest image for this runtime version
       

      Figure 7: CodeBuild environment image

    Select the checkbox under Privileged, select New service role, and take note of this Role name because you will be modifying it later. 

    Figure 8: CodeBuild service role

    Choose the dropdown menu named Additional configuration (shown in Figure 8), scroll down to Environment variables, and then enter the following values, as shown in Figure 9:
    1. Name: TF_COMMAND
    2. Value: apply (this is case sensitive)
    3. Type: Plaintext
       

      Figure 9: CodeBuild variables

      Note: These values are used by the build specification to inject Terraform commands at runtime.

    In the Buildspec section, choose Use a buildspec file. You don’t need to provide a name because buildspec.yaml in your ZIP package is the default value CodeBuild will look for. In the Logs section, choose the checkbox next to CloudWatch logs – optional, and then choose Continue to CodePipeline (see Figure 10). 

    Figure 10: CodeBuild logging

    Note: The separate window will close at this point and you will be back in the CodePipeline console.

    Now, back in the CodePipeline console, choose Next, choose Skip deploy stage, and then choose Skip when prompted, as shown in Figure 11. 

    Figure 11: CodePipeline skip deploy stage

    Confirm your details are correct in the Review screen, and then choose Create pipeline.

    After creation, you will be taken to the Pipeline Status view for the pipeline you just created. This interface allows you to monitor the status of CodePipeline in near real time. You can pivot to your Source repository and Build project by selecting the Details link, as shown in Figure 12.
     

    Figure 12: CodePipeline status

    You can also see previous CodePipeline runs by choosing the History view on the navigation pane on the left, as shown in Figure 13. This view is also useful for viewing multiple concurrent CodePipeline runs.
     

    Figure 13: CodePipeline History

Step 3: Modify the CodeBuild service role

In this section, you will add an additional policy to your CodeBuild service role to allow Terraform to deploy your WAF and write state information to DynamoDB and S3.

Navigate to the IAM console, and then choose Roles from the navigation pane. Search for the CodeBuild service role, select it, and then choose Add inline policy.

Note: An inline policy is used to avoid accidental deletions or modifications and to provide a one-to-one relationship between the permissions and the service role.

Choose the JSON tab and paste in the following policy. Ensure you populate the Resource element of each statement with the ARNs of the S3 bucket and DynamoDB table you created in Step 1, as shown in Figure 14.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WafSID",
      "Action": [
        "waf:CreateIPSet", "waf:CreateRule", "waf:CreateRuleGroup",
        "waf:CreateSqlInjectionMatchSet", "waf:CreateWebACL",
        "waf:DeleteIPSet", "waf:DeleteLoggingConfiguration",
        "waf:DeletePermissionPolicy", "waf:DeleteRule", "waf:DeleteRuleGroup",
        "waf:DeleteSqlInjectionMatchSet", "waf:DeleteWebACL",
        "waf:GetChangeToken", "waf:GetChangeTokenStatus", "waf:GetGeoMatchSet",
        "waf:GetIPSet", "waf:GetLoggingConfiguration", "waf:GetPermissionPolicy",
        "waf:GetRule", "waf:GetRuleGroup", "waf:GetSampledRequests",
        "waf:GetSqlInjectionMatchSet", "waf:GetWebACL",
        "waf:ListActivatedRulesInRuleGroup", "waf:ListGeoMatchSets",
        "waf:ListIPSets", "waf:ListLoggingConfigurations", "waf:ListRuleGroups",
        "waf:ListRules", "waf:ListSqlInjectionMatchSets",
        "waf:ListSubscribedRuleGroups", "waf:ListTagsForResource",
        "waf:ListWebACLs", "waf:PutLoggingConfiguration",
        "waf:PutPermissionPolicy", "waf:TagResource", "waf:UntagResource",
        "waf:UpdateIPSet", "waf:UpdateRule", "waf:UpdateRuleGroup",
        "waf:UpdateSqlInjectionMatchSet", "waf:UpdateWebACL"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "S3SID",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": ""
    },
    {
      "Sid": "DDBSID",
      "Action": [
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:PutItem"
      ],
      "Effect": "Allow",
      "Resource": ""
    }
  ]
}

Gathering information to determine unusual network traffic, (Thu, Nov 21st)

When working with threat intelligence, it’s vital to collect indicators of compromise so you can determine possible attack patterns. What can be catalogued as unusual network traffic? Any traffic that is not normally seen in the network: for example, after building a frequency table, all IP addresses that appear in less than 1% of the observed traffic are suspicious and should be investigated.

How to recognize AI snake oil

Princeton computer scientist Arvind Narayanan (previously) has posted slides and notes from a recent MIT talk on “How to recognize AI snake oil” in which he divides AI applications into three (nonexhaustive) categories and rates how difficult they are, and thus whether you should believe vendors who claim that their machine learning models can perform as advertised.

One Reason the US Military Can’t Fix Its Own Equipment

Manufacturers can prevent the Department of Defense from repairing certain equipment, which puts members of the military at risk. Elle Ekman, a logistics officer in the United States Marine Corps, writes: In the United States, conversations about right-to-repair issues are increasing, especially at federal agencies and within certain industries. In July, the Federal Trade Commission hosted a workshop to address “the issues that arise when a manufacturer restricts or makes it impossible for a consumer or an independent repair shop to make product repairs.” It has long been considered a problem with the automotive industry, electronics and farming equipment. Senators Elizabeth Warren and Bernie Sanders have even brought it up during their presidential campaigns, siding with farmers who want to repair their own equipment; while the senators are advocating national laws, at least 20 states have considered their own right-to-repair legislation this year.

I first heard about the term from a fellow Marine interested in problems with monopoly power and technology. A few past experiences then snapped into focus. Besides the broken generator in South Korea, I remembered working at a maintenance unit in Okinawa, Japan, watching as engines were packed up and shipped back to contractors in the United States for repairs because “that’s what the contract says.” The process took months. With every engine sent back, Marines lost the opportunity to practice the skills they might need one day on the battlefield, where contractor support is inordinately expensive, unreliable or nonexistent. I also recalled how Marines have the ability to manufacture parts using water-jets, lathes and milling machines (as well as newer 3-D printers), but that these tools often sit idle in maintenance bays alongside broken-down military equipment. Although parts from the manufacturer aren’t available to repair the equipment, we aren’t allowed to make the parts ourselves “due to specifications.”

Case Study: MixMode AI Detects Attack not Found on Threat Intel

In October 2019, a MixMode customer experienced an incident in which an external entity attacked a web server located in their DMZ, compromised it, and then pivoted internally through the DMZ to attempt to access a customer database. While the attacker succeeded in penetrating the customer’s network, MixMode detected the event before the customer database was breached.

How to neutralize the rising threat of ransomware


Earlier this month it was reported that the average pay-out collected by cybercriminals targeting individuals and businesses increased to over $41,000 in Q3 of 2019, a growth of 13.1% over the previous quarter. The increase suggests that ransomware remains big business for cybercriminals, particularly as successful execution means easy money without the need for malicious actors to exfiltrate the data they hold to ransom. On top of this, the tools for a ransomware attack are becoming increasingly sophisticated and commercially available on the dark web, making it more likely that attacks will grow in number and succeed.