One-liner to find (vulnerable) NPM package versions in all your repos

Recently, quite a few NPM packages with millions of downloads were hacked and had malware injected into them [1, 2].

It turned out there is no easy way to determine whether an affected version is present in the node_modules of all components of my apps, including dependencies of dependencies (I tried npm audit, npm list, and folder search; all were cumbersome and hard to apply to a containerized multi-repo app). The problem with such packages is also that merely having them on your computer or a build system may be enough to get fully compromised.

I came up with a simple bash one-liner that anyone can run in their dev folders – simply add the list of sub-folders to scan as TARGETS and the name of the culprit package (e.g. “rc”) as CULPRIT.

CULPRIT="rc"; TARGETS=( "awesome-repo-1" "awesome-repo-2" ); for element in "${TARGETS[@]}"; do echo "Checking $element"; find $element/node_modules -path "*/$CULPRIT/**" -prune -name "package.json" -exec cat {} + | grep -e \"version\": -e _location ; done

This one-liner prints out the versions of the package and their locations. Then it is up to you to read the CVE details and find out whether you are hacked or not… yet.
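
For reference, the output looks roughly like this (values are illustrative):

Checking awesome-repo-1
  "_location": "/rc",
  "version": "1.2.8",
Checking awesome-repo-2
  "_location": "/rc",
  "version": "1.2.8",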

Also, here is a link to the GitHub Gist.

Alternatively, to scan all child folders and their node_modules without picking individual repos:

CULPRIT="rc"; for element in *; do; if [ -d "$element" ]; then echo "Checking $element"; find $element/node_modules -path "*/$CULPRIT/**" -prune -name "package.json" -exec cat {} + | grep -e \"version\": -e _location ; fi; done

Happy hunting!

Complete GDPR loophole in Sweden for $233!

I’ve been researching the privacy issues of Swedish websites such as Hitta, Eniro, MrKoll, Merinfo and many others that automatically collect personal information on individuals from open and semi-open resources and often use it to make money.

It turns out that in Sweden, all these resources have applied for an exception from the GDPR under the Freedom of Expression constitutional law (YGL) and received a formal letter that grants them immunity to… well, to anything in the GDPR. So, legally, they do not have to delete any personal data, nor are they obliged to secure its storage. And IMY (the Swedish Data Protection Agency) has accepted its defeat and says it can’t do anything about these websites:

https://www.imy.se/privatperson/dataskydd/vi-guidar-dig/utgivningsbevis/

To me, this looks like a classic legal loophole where commercial websites use the utgivningsbevis (certificate of publication) to collect, process and get rich on the private and personal data of Swedish citizens and residents.

And all of it under the flag of freedom of speech – meaning they can collect all possible data on a person and run around the internet with it, risking spilling it, leaking it and doing harm – all because they obtained an exception from the privacy rules.

Now, does obtaining an utgivningsbevis from the media authority (MPRT) require the website to be a media outlet? Nope.

Is it given only to websites that actively exercise their freedom of speech – i.e. publish original material and voice opinions? Nope.

A voluntary utgivningsbevis can be requested by and granted to… basically anyone who agrees to call themselves a “responsible publisher”, and it costs SEK 2,000 (about $233 at today’s exchange rate).

Here’s an automatic translation of the full criteria list:

So, in essence, you can collect personal data and do whatever you want with it – as long as it is connected to Sweden. And it takes precedence over the GDPR because it is an exercise of the constitutional act.

As of today, there are 1,561 granted utgivningsbevis: https://www.mprt.se/tillstandsregister/?q=&search-type=14

And many of them are just poorly designed commercial websites that found a loophole and used it – in my opinion, expressed under the same freedom of speech right as their utgivningsbevis.

Whistle Willow – whistleblowing solution in Jira or Confluence Cloud!

From December 17th, 2021, companies with more than 250 employees need to provide internal reporting channels for whistleblowing tips and suggestions, as per the EU directive on whistleblower protection.

First of all, what is whistleblowing and why does the EU protect it?

Whistleblowing is what Edward Snowden did to the NSA – he exposed nation-wide illegal surveillance and its tools, and in turn was declared an outlaw and had to flee the country.

Whistleblowing in general is about bringing threats or harm to the public interest to the attention of internal stakeholders or external entities. Protecting whistleblowers and their identities, and ensuring there is no prosecution for reporting wrongdoing, even if it goes against the company’s business interest, is extremely important – for both whistleblowers and companies. It creates a safe haven for reporters and lets them come forward with knowledge that would otherwise stay suppressed.

Establishing internal reporting channels and enabling a whistleblowing program needs to be simple, quick and affordable. And that is why I created Whistle Willow – a Jira and Confluence Cloud application that can be up and running in less than 5 minutes. Getting compliant with the EU directive comes as a nice bonus.

Whistle Willow provides whistleblowers with a secure channel to submit their reports in Jira or Confluence, while the compliance team receives the submissions, acts upon them and keeps each report updated with the latest changes and mitigations – all without revealing the whistleblower’s identity.

The entire stack of Whistle Willow operations, from A to Z, runs on the Atlassian platform. This means no data leaves it, and no external integrations are required. The application is built on top of Forge, Atlassian’s next-gen serverless platform, and takes full advantage of the cloud while keeping the highest security standards. It can be installed from the Atlassian Marketplace and is ready to be used with Atlassian accounts right away.

The security of reports is guaranteed by tenant isolation, unique encryption keys per tenant and randomized submission times for reports. The app allows establishing a two-way communication channel between the whistleblower and the report reviewer without revealing the reporter’s personal details.

Whistle Willow is made for whistleblowers and records no personal information in logs or submissions – and it offers a 30-day free trial and one-click installation. It also costs less than $1 per user and has no hidden charges; all transactions are handled via Atlassian. Check the website for more details, or install it directly from the Marketplace.

Simplicity is really important for establishing a trusted and efficient whistleblowing program, and I believe Whistle Willow can help more truths come out and let companies act on them to improve.

How to pass SSH key to Docker build in Teamcity or elsewhere

When building in Docker, we often need to access private repos using an authorized SSH key. However, since Docker builds are isolated from the build agent, the keys remain outside the container being built. Historically, people came up with many workarounds, including passing the key to the container via ARG, forwarding SSH_AUTH_SOCK, and other risky tricks.

To solve this long-standing problem, Docker 18.09 got an experimental feature that forwards a key loaded into ssh-agent to the docker build. The key can then be used in any RUN step of the Dockerfile.

To use it in TeamCity, another build system, or even locally:

  1. Add the “SSH Agent” build feature and choose the key you want to load into the local ssh-agent running on the build agent.
    To use it locally, run ssh-agent yourself and supply it with the private key used for authentication.
  2. Set the environment variable DOCKER_BUILDKIT=1. This can be done either via env.DOCKER_BUILDKIT as a TeamCity build parameter, or by running export DOCKER_BUILDKIT=1 as the first build step.
  3. Update the docker build command in your build step to: docker build --ssh default .
    --ssh default makes the ssh key available within the Docker build.
  4. Update the very first Dockerfile line with
    # syntax=docker/dockerfile:1.0.0-experimental
  5. (Optional) Ensure that the private repo (e.g. hosted on GitHub) is accessed via SSH. Something along these lines in your Dockerfile:
    RUN mkdir -p ~/.ssh && chmod 700 ~/.ssh && git config --global url."ssh://git@github.com/".insteadOf "https://github.com/" \
    && ssh-keyscan github.com >> ~/.ssh/known_hosts && chmod 644 ~/.ssh/known_hosts
  6. Finally, pass the key to the RUN command in your Dockerfile:
    RUN --mount=type=ssh git pull git@github.com:awesomeprivaterepo.git
    Here, --mount=type=ssh will use the default key from ssh-agent for authentication with the private repo.
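
Putting the local workflow together, here is a minimal sketch (the key path and image tag are placeholders):

# Load the key that has access to the private repo into a local ssh-agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

# Enable BuildKit and forward the agent's default key into the build
export DOCKER_BUILDKIT=1
docker build --ssh default -t my-image .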

It is also possible to provide multiple keys for use at different steps of the Docker build. More information can be found in these awesome blogs: 1, 2

So what is SecretKeeper?

One day I got a password sent to me over email at work. Then some time later – by Slack, Teams, Skype for Business, Skype, you name it. And yes, I totally get it – there are various password managers, tools and solutions that let you share a secret securely – why don’t we all use them? But there is no need to perform an epic eye-roll (yes, it’s a signature move of many security people) and blame incompetent users for doing something not completely secure…

The better question to ask is: if I wanted to share a secret securely, would it actually be that simple? So simple that it doesn’t create an entry barrier, doesn’t drain your colleagues’ energy and doesn’t require extra steps, manuals and precautions. As simple as sending it via a chat. It turned out that sharing sensitive information wasn’t that simple at all. Security always comes with strings attached, in the form of additional complexity, MFA, captchas, 16-letter-at-least-one-digit passwords and so forth.

Therefore, I decided to create SecretKeeper. I wanted it to be:

  • Used for sharing “secrets” – sensitive bits of information or files – between two users.
  • Dead simple to use – one page, one button, one click.
  • As secure as it can be – goes without saying.
  • Easily deployable to your hosting provider (so you can control your own instance).

The idea is incredibly simple – it is a web application that stores your secret (text or file) encrypted for a short amount of time. It generates a one-time link that you can share. Once the secret is read, it is deleted forever. There are no additional passwords or controls to protect the link – but the link is a SHA-256 hash of a random number, a long string that is hard to guess.
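
To illustrate the kind of token involved (just a sketch of the concept, not SecretKeeper’s actual code, and the link format is made up):

# Illustration only: derive a hard-to-guess token from cryptographically random bytes
token=$(head -c 32 /dev/urandom | sha256sum | awk '{print $1}')
echo "https://your-secretkeeper-host/secret/$token"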

So, you want to share “hello” with your buddy. You open SecretKeeper:

[Screenshot: SecretKeeper – entering a secret]

You write “hello” in the textbox, choose the time to live for the secret – and get a link for sharing.

[Screenshot: SecretKeeper – the generated one-time link]

SecretKeeper is meant to be hosted by you, your organization or a hosting provider you trust. Therefore, I spent some nights making deployment as simple as one click. There are two options:

  • Docker container
  • Azure AppService

The entire code is open source, so you can make sure there are no backdoors. It runs on Kestrel and .NET Core 2.2, generates random links using a secure algorithm (no System.Random!) and gets along with certificates really well. By the way, there is no way to run it other than with HTTPS. I even went full pro mode and audited it with Burp Suite Professional, fixing some caching and HSTS configuration issues.

You can try out a working App Service deployment of SecretKeeper here:

https://skeeper.azurewebsites.net/

I have finally made the last adjustments to call it a 1.0 release, but there are many exciting features I would like to add to the tool – for example, additional protections such as a password, or login with your SSO account to make it enterprise-ready (chuckles). Anyway, help is welcome – just check the list of open issues.

Running npm audit when using private registry

As I wrote previously, npm got a great tool for checking the security of dependencies – npm audit.

However, if you run npm audit while using a private package registry (ProGet, Artifactory, etc.), it may fail with “npm ERR! 400 Bad Request – POST” when trying to send the audit details collected about your dependencies to https://<YOUR FEED URI>/-/npm/v1/security/audits – the assumed security audit endpoint of the private registry. Most likely, your registry doesn’t replicate the official npm security API.

To fix the issue, simply add the public registry endpoint to your npm audit command line:

npm audit --registry="https://registry.npmjs.org/"

New tool for making sense of npm-audit output

Managing Node.js dependencies and their security has never been a fun task. My heart stops for a few moments whenever I open the node_modules folder and see how much stuff my minimalistic one-page app is pulling from the depths of the web.

In an attempt to fix this, npm this year acquired a great project – NSP, the Node Security Platform, which consisted of a vulnerability data feed and a CLI. The NSP security advisory feed was merged into the npm tool, but the CLI was discontinued. Instead, we got a new command – npm audit. However, the original NSP produced much nicer output compared to npm audit, which seems to be disliked even by npm developers. There were a few open issues on GitHub about prettifying its output, but they have all been abandoned.

My main problem with npm audit is that it’s actually a bit dumb – it can’t exclude devDependencies or fail on a severity threshold, and the resulting JSON is just a mess. My main need is simply to integrate a security check into the build system and automatically parse the results. With the current state of npm audit, that was not possible.
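
The closest you can get with stock tooling is post-processing the raw JSON yourself, for example with jq – a rough sketch, assuming npm 6’s --json output format where each advisory carries a severity field:

# Fail the build when npm audit reports any high or critical advisories
count=$(npm audit --json | jq '[.advisories[] | select(.severity == "high" or .severity == "critical")] | length')
if [ "$count" -gt 0 ]; then echo "Found $count high/critical advisories"; exit 1; fi

Workable as a quick gate, but not something you want to copy-paste across projects.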

So it was time to act – I created the “npm-audit-ps-wrapper” tool, a very simple PowerShell wrapper around npm audit which fixes all the problems I just described. Most importantly, it is ready for automation and use in CI/CD.

https://github.com/doshyt/npm-audit-ps-wrapper

Example of npm-audit output:

npm audit --json

{
  "actions": [
    {
      "action": "install",
      "module": "aurelia-cli",
      "target": "0.35.1",
      "isMajor": false,
      "resolves": [
        {
          "id": 338,
          "path": "aurelia-cli>npm>fs-vacuum>rimraf>glob>minimatch>brace-expansion",
          "dev": false,
          "optional": false,
          "bundled": true
        },
        {
          "id": 338,
          "path": "aurelia-cli>npm>fstream>rimraf>glob>minimatch>brace-expansion",
          "dev": false,
          "optional": false,
          "bundled": true

Example of npm-audit-ps-wrapper output:

[
  {
    "VulnerabilitySource": "sshpk",
    "VulnerabilityTitle": "Regular Expression Denial of Service",
    "VulnerableVersions": "<1.13.2 || >=1.14.0 <1.14.1",
    "PatchedVersions": ">=1.13.2 < 1.14.0 || >=1.14.1",
    "VulnerabilityChains": [
      "aurelia-cli>npm>node-gyp>request>http-signature>sshpk",
      "aurelia-cli>npm>npm-registry-client>request>http-signature>sshpk",
      "aurelia-cli>npm>request>http-signature>sshpk"
    ],
    "VulnerabilitySeverity": "high",
    "AdvisoryUrl": "https://npmjs.com/advisories/606"
  },
  {
    ...
  }
]

Benefits of the wrapper tool:

  • Switch to ignore devDependencies.
  • Resulting JSON contains a list of vulnerabilities with minimal viable information about them.
  • Switch to fail on a set severity threshold level.
  • Write output to a JSON file.
  • Switch for silent execution.

Hope it helps to streamline security of your JS libs and make it a bit better!

TechDays Sweden 2018 slides and demo

I had the great pleasure of giving a talk on Secure Infrastructure with Terraform, Azure DSC and Ansible at Microsoft TechDays 2018 in Stockholm. A blog post based on the content is in the works.

As I promised to publish my slides and demos, here they are – in a GitHub repo.

The demos are grouped into three folders: ansible, dsc and tf. The dsc and tf folders have a subfolder called “hardened” – this is where the more secure version of each template lives.

The “tf -> hardened -> general” subfolder has various resources I used to support the hardened demo, such as Key Vault and Azure Policy.

You can start using the templates right away – just look for the edited IDs and passwords replaced with xxx-yyy and the like.

Or drop me a question if in doubt.

Update management in Hybrid cloud with Azure Automation and Log Analytics

It’s no news that Azure has a neat OMS integration and can be used to monitor the update status of enrolled machines. What strikes me most is the simplicity it brings to patching in a hybrid cloud infrastructure and the ability to get it under control in minimal time.

Azure Update Management is part of an Automation Account and is tied to the subscription it is created in. This means you can directly add Azure VMs from the same subscription. For other VMs – Azure VMs in a different subscription, or on-prem servers – you need to install the OMS Agent. You can get it from the Log Analytics workspace associated with your Automation Account; the link is under point 1 of the Getting started screen:

[Screenshot: Update Management – Getting started screen]

Proceed and grab the agent installer. You can unpack the agent MSI to a folder with the /c /t:<folder> flags and install it unattended by providing the installer with two Log Analytics parameters – the Workspace ID and the Workspace Key (you can find them in the Log Analytics workspace settings). To simplify distribution, I zipped the folder with the unpacked MSI content and wrote a small script for unattended installation that can run in memory (except for extracting the zip archive). It requires the mentioned workspace details and a link to the agent zip archive.
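
For reference, the unattended flow looks roughly like this (a sketch using the documented Microsoft Monitoring Agent setup switches; the workspace values are placeholders – double-check the switches against the agent version you download):

# Extract the downloaded agent installer into a folder (run on the target Windows machine)
MMASetup-AMD64.exe /c /t:C:\mma
# Silent install, pointing the agent at your Log Analytics workspace
C:\mma\setup.exe /qn ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID="<workspace id>" OPINSIGHTS_WORKSPACE_KEY="<workspace key>" AcceptEndUserLicenseAgreement=1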

Once you install the agent, it starts reporting to Log Analytics and provides visibility into the update status of each machine.

[Screenshot: update status per machine]

The information is available in both the Update Management blade and the Log Analytics workspace. The latter provides better stats, but the data is non-actionable – you would need to go back to Update Management and trigger updates from there.

[Screenshots: Update Management blade and Log Analytics workspace views]

From the Update Management blade, one can:

  • schedule update deployment
  • include specific machines or groups
  • include or exclude particular patches by KB
  • select update categories to apply
  • schedule continuous update that will check and install required categories regularly

And one last detail – Log Analytics can be switched to the free tier and will still serve the needs of Update Management, bringing the cost of patch management for hundreds of servers close to zero.

Together with Azure DSC (configuration management) and Machine Inventory, which I reviewed earlier, Azure Automation provides a wide range of tools that can replace expensive and complex products for managing hybrid infrastructure.

Automating alert response with Azure Security Center and Azure Logic Apps

Responding to a security event is a core practice in modern security frameworks. After a potential threat is detected, it is time to act. The shorter the response time, the less damage an attacker can do to your cloud.

Detection in Azure

Azure Security Center in the Standard pricing tier ($15 per VM node per month) comes with automated detection mechanisms. The core detection capability is built around parsing real-time traffic and system logs and applying machine learning algorithms to them:

[Figure: Security Center detection capabilities]

A single dashboard can be found under the Security Center -> Security Alerts blade and also on the main page of Security Center:

[Screenshot: Security Alerts dashboard]

Alerts represent single or multiple security events of the same nature and time span. Incidents are created from multiple alerts which are classified as related to each other – for example, an attacker runs a malicious script, extracts local password hashes and cleans the event log. This sequence of actions will generate one incident.

Incident forensics

Incidents can be investigated with a forensics tool, the Investigation Dashboard (in preview as of May 2018). This tool draws the relationships between alerts, the events that caused them, affected resources and users. It can also help in reconstructing lateral movements of attackers within the network.

[Screenshot: Investigation Dashboard]

Automated response

Incident forensics represents a post-mortem investigation. An adverse event has already happened, and the attackers have already done some damage to the enterprise. But we don’t have to wait until malicious actors finish their job – we can start acting right after getting the first signals of the intrusion. Alerts are generated by Azure in real time, and recently Security Center got a powerful integration with Azure Logic Apps.

Logic Apps in Azure are workflows built from pre-built triggers, conditions and actions, which include a wide range of both native and third-party components. For example, your logic app can listen to an RSS feed and automatically tweet once new pages are published to it. Or run a custom PowerShell script through Azure Automation.

One of the recent additions to Logic Apps is Security Center triggers. This feature turns Azure security alerts into a powerful tool for fighting attackers once they trip a wire.

You can find the security-related Azure Logic Apps under Security Center -> Playbooks (Preview).

Building the logic

After adding a new playbook, you are presented with the Logic App Designer. The trigger is pre-populated – “When a response to an Azure Security Center alert is triggered”. Once we get an alert, the playbook is executed. Then we add a condition – the alert arrives with multiple parameters. Let’s take “Alert Severity” and set the condition to High:

[Screenshot: Logic App trigger and condition]

Other alert parameters include Confidence Level, Alert Body, Name, Start or End Time and many more. The range is quite broad, which makes it possible to generate very specific responses to almost any imaginable event.

Now, if the condition is TRUE – Alert Severity is High – we want to contain the threat. One way to do so is to isolate the VM under attack – say, assign it to a different Network Security Group which has no connection to the internal company network or some of its segments. To do that, we need to get the VM name from the alert and run some Azure PowerShell that performs the NSG re-assignment.

Creating the Automation Job

Now we can go to Azure Automation and create an Automation runbook for our needs. This can be done through the blades Automation Accounts -> Runbooks -> Add a runbook. As the runbook type, choose “PowerShell”.

Then, we insert the following code:

Param(
  [string]$VMName
)

$connectionName = "AzureRunAsConnection"

try
{
  # Get the connection "AzureRunAsConnection"
  $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

  Add-AzureRmAccount `
    -ServicePrincipal `
    -TenantId $servicePrincipalConnection.TenantId `
    -ApplicationId $servicePrincipalConnection.ApplicationId `
    -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch {
  if (!$servicePrincipalConnection)
  {
    $ErrorMessage = "Connection $connectionName not found."
    throw $ErrorMessage
  } else {
    Write-Error -Message $_.Exception
    throw $_.Exception
  }
}

# Get VM object
$vm = Get-AzureRmVM -Name $VMName -ResourceGroupName AzureBootcamp
# Get NIC
$Nic = Get-AzureRmNetworkInterface -ResourceGroupName AzureBootcamp | Where-Object {$_.VirtualMachine.Id -eq $vm.Id}
# Change Network Security group to IsolatedNetworkNSG
$Nic.NetworkSecurityGroup = Get-AzureRmNetworkSecurityGroup -ResourceGroupName AzureBootcamp -Name "IsolatedNetwork-NSG"
# Apply changes
Set-AzureRmNetworkInterface -NetworkInterface $Nic

This code takes the VM name as a parameter and authenticates to your Azure account with the Azure Run As connection (which requires preliminary configuration). Then it gets the VM’s NIC and assigns it to the network security group “IsolatedNetwork-NSG”. Save the automation runbook under a name such as IsolateVM, and don’t forget to publish the changes after editing the PowerShell.

Putting it all together

The last step is adding the action to the Azure Logic App we’ve been building. Select “Azure Automation – Create job” and point it to the IsolateVM runbook.

[Screenshot: Logic App action – Azure Automation Create job]

Here, we specified “Host Name” as the runbook parameter (notice that it automatically picked up the parameter name VMName that we created in the runbook).

Save the logic app – and that is it. Once an alert is generated, the VM is moved to the isolated network security group with limited access.

Testing and tuning the playbook

To test this integration before an actual event happens, go to any of the previous events in Security Center -> Security Alerts (you can generate one, for example, by trying to download Mimikatz from GitHub), click on the event, then click the “View playbooks” button. In the new window, find your logic app workflow and press “Run” under “Run playbook”:

[Screenshot: Run playbook]

This will send exactly the same trigger as the alert would have done. From the playbook run window or the run history, you will be presented with a static view similar to the Logic App Designer, with the only difference that it shows the logic path taken in this run:

[Screenshot: Logic App run history]

The actual inputs submitted with the trigger can be viewed by expanding the “When a response to an Azure Security Center alert is triggered” section.

[Screenshot: alert trigger inputs]

The integration of Azure Security Center alerts with Logic Apps provides almost limitless capabilities – not only for notifying about detections (via email, Slack, Skype) but also for automated response to potential attacks, adjusting the cloud infrastructure and isolating the threat, as shown in the example.

Have fun building your own playbooks and fighting the threats before they become incidents.

Stay secure!