Running npm audit when using private registry

As I wrote previously, npm got a great tool for checking the security of dependencies – npm audit.

However, if you run npm audit against a private package registry (ProGet, Artifactory, etc.), it may fail with "npm ERR! 400 Bad Request – POST" when trying to send the audit details collected about your dependencies to https://<YOUR FEED URI>/-/npm/v1/security/audits – the assumed security audit endpoint of the private registry. Most likely, your registry doesn't replicate the official npm security API.

To fix the issue, simply add the public registry endpoint to your npm audit command line:

npm audit --registry="https://registry.npmjs.org/"
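
If you run the audit as a build step, a small wrapper keeps the override in one place. Here is a minimal PowerShell sketch (npm audit exits with a non-zero code when vulnerabilities are found, so the step fails the build):

# Audit against the public registry; installs keep using the private feed from .npmrc
npm audit --registry="https://registry.npmjs.org/"
if ($LASTEXITCODE -ne 0) {
    Write-Warning "npm audit reported vulnerabilities or failed to run."
    exit $LASTEXITCODE
}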

New tool for making sense of npm-audit output

Managing Node.js dependencies and their security has never been a fun task. My heart stops for a few moments whenever I open the node_modules folder and see how much stuff my minimalistic one-page app is pulling from the depths of the web.

In an attempt to fix this, NPM this year acquired a great project – NSP (Node Security Platform), which consisted of a vulnerability data feed and a CLI. The NSP security advisory feed was merged into the npm tool, but the CLI was discontinued. Instead, we got a new command – npm audit. However, the original NSP produced much nicer output than npm-audit, which seems to be hated even by NPM developers. There were a few open issues on GitHub about prettifying its output, but they have all been abandoned.

My main problem with npm-audit is that it's actually a bit dumb – it can't exclude devDependencies or fail on a severity threshold, and the resulting JSON is just a mess. My main need is simple: integrate a security check into a build system and automatically parse the results. With the current state of npm-audit, that was not possible.
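
The core of what I needed fits in a few lines – a minimal sketch, assuming npm 6's audit JSON format, where advisories is a map keyed by advisory id:

# Parse `npm audit --json` output and fail on high/critical advisories
$audit = npm audit --json | Out-String | ConvertFrom-Json
$severe = $audit.advisories.PSObject.Properties.Value |
    Where-Object { $_.severity -in @('high', 'critical') }
if ($severe) {
    $severe | ForEach-Object {
        Write-Host "$($_.module_name): $($_.title) [$($_.severity)]"
    }
    exit 1
}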

So it was time to act – I created “npm-audit-ps-wrapper” tool – a very simple Powershell wrapper around npm-audit which fixes all the problems I just described. And most important, it is ready for automation and use with CI/CD.

https://github.com/doshyt/npm-audit-ps-wrapper

Example of npm-audit output:

npm audit --json

{
  "actions": [
    {
      "action": "install",
      "module": "aurelia-cli",
      "target": "0.35.1",
      "isMajor": false,
      "resolves": [
        {
          "id": 338,
          "path": "aurelia-cli>npm>fs-vacuum>rimraf>glob>minimatch>brace-expansion",
          "dev": false,
          "optional": false,
          "bundled": true
        },
        {
          "id": 338,
          "path": "aurelia-cli>npm>fstream>rimraf>glob>minimatch>brace-expansion",
          "dev": false,
          "optional": false,
          "bundled": true

Example of npm-audit-ps-wrapper output:

[
  {
    "VulnerabilitySource": "sshpk",
    "VulnerabilityTitle": "Regular Expression Denial of Service",
    "VulnerableVersions": "<1.13.2 || >=1.14.0 <1.14.1",
    "PatchedVersions": ">=1.13.2 < 1.14.0 || >=1.14.1",
    "VulnerabilityChains": [
      "aurelia-cli>npm>node-gyp>request>http-signature>sshpk",
      "aurelia-cli>npm>npm-registry-client>request>http-signature>sshpk",
      "aurelia-cli>npm>request>http-signature>sshpk"
    ],
    "VulnerabilitySeverity": "high",
    "AdvisoryUrl": "https://npmjs.com/advisories/606"
  },
  {
   ...
  }
]

Benefits of the wrapper tool:

  • Switch to ignore devDependencies.
  • Resulting JSON contains a list of vulnerabilities with minimal viable information about them.
  • Switch to fail on a set severity threshold level.
  • Write output to a JSON file.
  • Switch for silent execution.

Hope it helps streamline the security of your JS libs and make them a bit safer!

TechDays Sweden 2018 slides and demo

I had the great pleasure of giving a talk on Secure infrastructure with Terraform, Azure DSC and Ansible at Microsoft TechDays 2018 in Stockholm. A blog post based on the content is in the works.

As I promised to publish my slides and demos, here they are – in a Github repo.

The demos are grouped into three folders: ansible, dsc and tf. The dsc and tf folders have a subfolder called "hardened", which contains a more secure version of the template.

The tf -> hardened -> general subfolder has various resources I used to support the hardened demo, such as Key Vault and Azure Policy.

You can start using the templates right away – just look for the edited IDs and passwords replaced with xxx-yyy and the like.

Or drop me a question if in doubt.

Update management in Hybrid cloud with Azure Automation and Log Analytics

It's no news that Azure has a neat OMS integration and can be used to monitor the update status of enrolled machines. What strikes me the most is the simplicity it brings to patching in a hybrid cloud infrastructure and how little time it takes to get patching under control.

Azure Update Management is part of an Automation Account and is tied to the subscription it is created in. This means you can directly add Azure VMs from the same subscription. For other VMs – Azure VMs in a different subscription, or on-prem servers – you need to install the OMS Agent. You can get it from the Log Analytics workspace associated with your Automation Account; the link is under point 1 of the Getting started screen:

[Image: oms1]

Proceed and grab the agent installer. You can unpack the agent MSI to a folder with the /c /t:<folder> flags and install it unattended by providing the installer with the Log Analytics parameters – Workspace ID and Workspace Key (you can find them in the workspace settings). To simplify distribution, I zipped the folder with the unwrapped MSI content and wrote a small script for unattended installation that can run in memory (except for extracting the zip archive). It requires the mentioned workspace details and a link to the agent zip archive.
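
The script itself is not included here, but a minimal sketch of the approach could look like this (the package URL and paths are placeholders, and I'm attaching the workspace through the agent's AgentConfigManager COM interface after a silent install – verify the setup.exe switches against the agent documentation):

Param(
    [string]$WorkspaceId,
    [string]$WorkspaceKey,
    [string]$AgentZipUrl   # placeholder link to the zipped, pre-extracted MSI content
)

# Download and extract the zipped agent package (the only step that touches disk)
$zip = Join-Path $env:TEMP 'MMAAgent.zip'
Invoke-WebRequest -Uri $AgentZipUrl -OutFile $zip
Expand-Archive -Path $zip -DestinationPath "$env:TEMP\MMAAgent" -Force

# Silent install of the unpacked MSI content (switch names are an assumption, check the docs)
Start-Process -FilePath "$env:TEMP\MMAAgent\setup.exe" `
    -ArgumentList '/qn', 'AcceptEndUserLicenseAgreement=1' -Wait

# Attach the agent to the Log Analytics workspace and reload its configuration
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
$mma.AddCloudWorkspace($WorkspaceId, $WorkspaceKey)
$mma.ReloadConfiguration()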

Once you install the agent, it starts reporting to Log Analytics and provides visibility into the update status of each machine.

[Image: oms2]

The information is available in both the Update Management blade and the Log Analytics workspace. The latter provides better stats, but the data is non-actionable; you would need to go back to Update Management and trigger updates from there.

[Image: oms3]

[Image: oms4]

[Image: oms5]

From the Update Management blade, one can:

  • schedule update deployment
  • include specific machines or groups
  • include or exclude particular patches by KB
  • select update categories to apply
  • schedule continuous update that will check and install required categories regularly

And one last detail – Log Analytics can be switched to the free tier and will still serve the needs of Update Management, bringing the cost of patch management on hundreds of servers close to zero.

Together with Azure DSC (configuration management) and Machine Inventory, which I reviewed earlier, Azure Automation provides a wide range of tools that can replace expensive and complex products for managing hybrid infrastructure.

Automating alert response with Azure Security Center and Azure Logic Apps

Responding to a security event is a core practice in modern security frameworks. Once a potential threat is detected, it is time to act. The shorter the response time, the less damage an attacker can do to your cloud.

Detection in Azure

Azure Security Center in the Standard pricing tier ($15 per VM node per month) comes with automated detection mechanisms. The core detection capability is built around parsing real-time traffic and system logs and applying machine learning algorithms to them:

[Image: security-center-detection-capabilities-fig1]

A single dashboard can be found under the Security Center -> Security Alerts blade and also on the main page of Security Center:

[Image: alertsdetection]

Alerts represent single or multiple security events of the same nature and time span. Incidents are created from multiple alerts that are classified as related to each other – for example, an attacker runs a malicious script, extracts local password hashes and cleans the event log. This sequence of actions will generate one incident.

Incident forensics

Incidents can be investigated with a forensics tool, the Investigation Dashboard (in preview as of May 2018). This tool maps the relationships between alerts, the events that caused them, affected resources, and users. It can also help reconstruct attackers' lateral movements within the network.

[Image: investigation.PNG]

Automated response

Incident forensics is a post-mortem investigation: the adversarial event did happen, and the attackers have already done some damage to the enterprise. But we don't have to wait until malicious actors finish their job – we can start acting right after the first signals of an intrusion. Alerts are generated by Azure in real time, and recently Security Center got a powerful integration with Azure Logic Apps.

Logic Apps in Azure are workflows built from pre-defined triggers, conditions, and actions, including a wide range of both native and third-party components. For example, your Logic App can listen to an RSS feed and automatically tweet once new pages are published to the feed. Or run a custom PowerShell script through Azure Automation.

One of the recent additions to Logic Apps is Security Center triggers. This feature turns Azure security alerts into a powerful tool for fighting attackers once they trip a wire.

You can find security-related Azure Logic Apps under Security Center -> Playbooks (Preview).

Building the logic

After adding a new playbook, you are presented with the Logic App Designer. The trigger is pre-populated – When a response to an Azure Security Center alert is triggered. Once we get an alert, the playbook is executed. Then we add a condition – the alert arrives with multiple parameters. Let's take "Alert Severity" and set the condition to High:

[Image: trigger]

Other alert parameters include Confidence Level, Alert Body, Name, Start or End Time and many more. The range is quite broad, which makes it possible to craft very specific responses to almost any imaginable event.

Now, if the condition is TRUE – Alert Severity is High – we want to contain the threat. One way to do so is to isolate the VM under attack: say, assign it to a different Network Security Group that has no connection to the internal company network or some of its segments. To do that, we need to get the VM name from the alert and run some Azure PowerShell that performs the NSG re-assignment.

Creating the Automation Job

Now we can go to Azure Automation and create a runbook for our needs. This can be done through the blades Automation Accounts -> Runbooks -> Add a runbook. As the runbook type, choose "PowerShell".

Then, we insert the following code:

Param(
    [string]$VMName
)

$connectionName = "AzureRunAsConnection"

try
{
    # Get the Run-As connection "AzureRunAsConnection"
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

    Add-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch
{
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    }
    else
    {
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}

# Get the VM object
$vm = Get-AzureRmVM -Name $VMName -ResourceGroupName AzureBootcamp

# Get the NIC attached to the VM
$Nic = Get-AzureRmNetworkInterface -ResourceGroupName AzureBootcamp |
    Where-Object { $_.VirtualMachine.Id -eq $vm.Id }

# Change the Network Security Group to IsolatedNetwork-NSG
$Nic.NetworkSecurityGroup = Get-AzureRmNetworkSecurityGroup -ResourceGroupName AzureBootcamp -Name "IsolatedNetwork-NSG"

# Apply the changes to the network interface
Set-AzureRmNetworkInterface -NetworkInterface $Nic

This code takes VMName as a parameter and authenticates to your Azure account with the Azure Run-As connection (requires preliminary configuration). Then it gets the VM's NIC and assigns it to the network security group "IsolatedNetwork-NSG". Save the runbook under a name like IsolateVM, and don't forget to publish the changes after editing the PowerShell.
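
Before wiring it into the playbook, you can smoke-test the published runbook on its own. A quick sketch (the Automation account and VM names here are placeholders):

# Start the published runbook manually with a test VM name
Start-AzureRmAutomationRunbook `
    -ResourceGroupName "AzureBootcamp" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "IsolateVM" `
    -Parameters @{ VMName = "TestVM01" }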

Putting it all together

The last step is adding the action to the Azure Logic App we've been building. Select "Azure Automation – Create job" and point it at the IsolateVM runbook.

[Image: logicapptrue.PNG]

Here, we specified "Host Name" as the runbook parameter (notice that it automatically picked up the VMName parameter we created in the runbook).

Save the logic – and that is it. Once an alert is generated, the VM is expelled to the isolated network security group with limited access.

Testing and tuning the playbook

To test this integration before an actual event happens, go to any of the previous events in Security Center -> Security Alerts (you can generate them, for example, by trying to download Mimikatz from GitHub), click on the event, then click the "View playbooks" button. In the new window, find your Logic App workflow and press "Run" under "Run playbook":

[Image: runpalybook]

This will send exactly the same trigger as the alert would have done. From the playbook run window or Run history, you will see a static view similar to the Logic App Designer, with the only difference that it shows the logic path taken in this run:

[Image: logicexecution]

The actual inputs that were submitted with the trigger can be viewed by expanding the "When a response to an Azure Security Center alert is triggered" section.

[Image: alertdescription]

The Azure Security Center alerts integration with Logic Apps provides limitless capabilities not only for reporting detections (via email, Slack, Skype) but also for automated response to potential attacks – auto-tuning the cloud infrastructure and isolating the threat, as shown in the example.

Have fun building your own playbooks and fighting the threats before they become incidents.

Stay secure!

Developers and IT-Pros: who will be left in the past? How to work together efficiently

[Image: devvsitpro]

Last weekend, I was invited to the Global Azure Bootcamp in Linköping where, among other activities, I also participated in a panel discussion about the future of Developer and IT-Pro collaboration and ways to make it more efficient. It was a brilliant and sharp discussion, the main points of which (or rather, my view on them :) I would like to share in this post.

IT-Pros

To start with – who are these mysterious creatures? I like the term "IT-Pro" since it covers more than the more common dev counterpart – Operations, or Ops. It is important to understand that Ops refers to the infrastructure people, while from my perspective, IT-Pros include everyone who is NOT a developer: brave folks like Security, DBAs, Cloud Architects, Consultants, UX designers (why not?!) and many more. Why define a special group for them? It helps to define the boundaries of the conflict – Devs vs !Devs – without actually negating developers, since there are more similarities between these two groups than it seems at first.

The key misconception

"Devs create. IT-Pros don't create." This is true… in wrongly built organizations – and in reality, there are way too many of them. The problem comes from the fact that IT-Pros are usually understaffed. From a dumb manager's perspective, IT-Pros need just enough headcount to put out fires. This is a grave mistake – the REAL work starts only after the fires are out: upgrading and improving everything that was on fire and replacing it with something that won't catch fire at all next time. So IT-Pros are creative, and they do create – when the company is smart enough to let them.

The world we live in

The world of modern computing adopted (and spoiled) the famous *aaS – as-a-Service – abbreviation. We are quickly moving to a state of technology where everything can be offered as a service: DBs, compute resources, APIs, security tools, you name it. This makes Devs happy and threatens IT-Pros, whose yesterday's tasks have just been replaced by a new Azure/AWS/Google Cloud service. So how do Devs and IT-Pros respond to these changes?

The shift

The DevOps movement and its rapid adoption caused the famous shift to the left, which basically empowered Developers with the tools that were previously managed by Operations. Simultaneously, the cloud has awakened and brought all the power of quick deployments, testing in production and advanced telemetry to developers. Devs became almighty and can now (almost) do their thing – write the code and never be bothered.

What happened to IT-Pros?

The shift hasn't avoided the IT-Pro zone – I observe a similar change. Pros are starting to write their own automation and deployment scripts, learning more efficient ways of doing yesterday's tasks by borrowing the best methods of development and adapting them to IT-Pro work. They read code and write their own code. UX designers create mindblowing frameworks of design atoms and molecules, put them into source control and use Continuous Integration. There is no longer a distinction between Devs and IT-Pros based on who writes actual code.

Does this mean there will be no IT-Pros eventually? Will Devs replace them?

No, not at all. If we look at the root of what Devs do and love doing for a living – it is not about spawning VMs in Azure. They consider that a necessary evil, or something that enables them. What they do is create. It is like a painter who loves painting but also has to go to IKEA to buy and assemble her easel. The core knowledge of Devs is development itself – building complex distributed systems, efficient workflows, secure APIs – for the needs of the modern world.

On the other hand, we have IT-Pros who in fact love deploying machines in Azure, and also know a thousand ways of doing it for hundreds of use cases. And now they are also starting to code and automate. What we get is a powerful combo that can build the virtual world for the products that Devs are writing.

Together, forever

It is obvious they can't survive without each other. The world will require more and more complex products – more secure, more resilient, more flexible. And while someone has to build them, others have to create the architectures where these products can work at their best. It is not about deploying just a bunch of VMs to Azure and installing SQL Server on them. It is about building an identity-controlled cloud with fully automated threat detection, where the product runs in a couple of dozen containers with replication across a handful of regions and a backup and data-retention strategy.

And with the ever-changing, fluffy cloud landscape (it's a cloud, after all), new features become available weekly and sometimes completely change the game overnight. IT-Pros need to be aware of them before they go GA and have an adoption plan ready.

Continuous Integration

The cloud offers so many possibilities; all the cutting-edge tech is up for grabs. But does your product architecture support it? It should. But it doesn't. A very common answer, which causes months of refactoring and releasing. And – BAM – newer and cooler tech is out there, and we're back to square one. Who could help the devs? IT-Pros! If a dev team integrates a Cloud Architect into their architecture meetings, they will be able to plan the future functionality of the product, target a specific cloud, align with its roadmap and get the best picture of limited-preview features that will be GA by the time the product is released.

Next steps

To adapt, IT-Pros need to become more efficient. Developers previously solved this issue for themselves by taking on part of the Ops work and learning the basics of what Ops did. It is time for IT-Pros to do the same with Dev work. With automation and coding skills, IT-Pros will be able to level up the complexity of cloud deployments and at the same time cut the time they require.

To adapt, Devs need to integrate with IT-Pros when it comes to Cloud, Security and Design, and make this integration continuous – starting from the design stages, going through development and testing and … actually lasting forever.

To adapt, organization management needs to staff IT-Pro teams properly and focus them on creating value instead of putting out fires.

Identifying threats: Software inventory of Azure VMs

Azure VMs recently got a bunch of new features – Inventory, Change tracking and Update management (they became GA on March 8, 2018). These features fill a gap in identifying the software deployed to IaaS clouds – information necessary for securing these resources. The features are based on Azure Automation capabilities and require an Automation account to run the workloads.

The Inventory feature provides visibility into the software installed on a VM (it can be accessed from the individual VM blade), services, Linux daemons and also a timeline of events as part of the Change tracking view.

[Image: in2]

VM view:

[Image: Inv1]

In this example, we can see Adobe Flash Player and the Steam client installed on the machine – both increase the attack surface of this infrastructure.

If you want a more detailed overview, proceed to the Log Search tab. For example, this query will return all non-Microsoft applications inventoried in the past day:

ConfigurationData
| where ConfigDataType == "Software"
| where SoftwareType == "Application"
| where Publisher != "Microsoft Corporation"
| order by TimeGenerated desc

[Image: logsearch1]

The Change tracking view provides visibility into software changes and allows you to track them efficiently. You can also add particular files or registry entries to watch. Watching entire folders is not yet supported for Windows VMs.

[Image: ch1.GIF]

To get an overview of multiple machines, either:

  • Click "Manage multiple machines" on the Inventory or Change tracking blade.
    [Image: ch2]
    This view can also be used to add non-Azure VMs through the Hybrid worker functionality.
  • Or go to the Automation account that was associated with Inventory and Change tracking when they were first enabled. It also provides a convenient view of the associated machines.

Update management can find machines with missing OS updates and fix them by scheduling an update deployment.

[Image: upd1.PNG]

Update management is integrated into Security Center, so critical updates are never left unattended on any machine in the cloud. But it is important to remember to turn these features on first.

At the moment, only Update management allows you to take action based on the gathered data. Inventory has no built-in functionality to remove unwanted applications or perform an on-demand scan. It also lacks reporting capabilities and global overviews of the applications in the cloud. Change tracking data can be used only for setting up custom alerts (which requires Log Analytics knowledge).

At the same time, given the direct connection of these features to Azure Automation, I expect more functions will be added to make it possible to fix discovered issues and secure a resource based on the Inventory data. It is clear that Microsoft is taking important action in securing IaaS – and providing the data is only the first step.

P.S. For more advanced Inventory and Software Asset Management solutions, one may look into third-party providers such as Snow Software (where I work at the moment).