Complete GDPR loophole in Sweden for $233!

I’ve been researching the privacy issues of Swedish websites such as Hitta, Eniro, MrKoll, Merinfo and many others that automatically collect personal information on individuals from open and semi-open resources and often use it to make money.

Turns out that in Sweden, all these resources have applied for an exemption from the GDPR under the Freedom of Expression right (YGL) and received a formal certificate that grants them immunity to.. well, to anything in the GDPR. So, legally, they do not have to delete any personal data, nor are they obliged to secure its storage. And IMY (the Swedish Data Protection Agency) has accepted its defeat and says it can't do anything about these websites:

https://www.imy.se/privatperson/dataskydd/vi-guidar-dig/utgivningsbevis/

To me, this looks like a classic legal loophole where the commercial websites use the utgivningsbevis to collect, process and get rich using the private and personal data of Swedish citizens and residents.

And all of it under the flag of Freedom of Speech – meaning they can collect all possible data on a person and run around the internet with it, risking spills, leaks and harm – all because they obtained an exception from the privacy rules.

Now, does obtaining an utgivningsbevis from the media authority require the website to be a media outlet? Nope.

Is it given only to websites that actively exercise their Freedom of Speech – i.e. publish original materials, voice opinions? Nope.

The voluntary utgivningsbevis can be requested by, and given to.. basically anyone who agrees to call themselves a "responsible publisher", and it costs SEK 2,000 (about $233 at today's exchange rate).

Here's an automatic translation of the full criteria list:

So, in essence, you can collect personal data and do whatever you want with it – as long as it is connected to Sweden. And it takes precedence over the GDPR because it is an exercise of a constitutional act.

As of today, there are 1,561 granted utgivningsbevis: https://www.mprt.se/tillstandsregister/?q=&search-type=14

And many of them are just poorly designed commercial websites that found a loophole and used it – that's my opinion, exercised under the same Freedom of Speech right as their utgivningsbevis.

Whistle Willow – whistleblowing solution in Jira or Confluence Cloud!

From December 17th, 2021, companies with more than 250 employees need to provide internal reporting channels for whistleblowing tips and suggestions – as per the EU directive on whistleblower protection.

First of all, what is whistleblowing and why does the EU protect it?

Whistleblowing is what Edward Snowden did to the NSA – he exposed nation-wide illegal surveillance and its tools, and in turn was declared an outlaw and had to flee the country.

Whistleblowing in general is about bringing threats or harm to the public interest to the attention of internal stakeholders or external entities. Protecting whistleblowers and their identities, and ensuring there is no prosecution for reporting wrongdoing, even if it goes against the company's business interest, is extremely important – for both whistleblowers and companies. It creates a safe haven for reporters and lets them come forward with knowledge that would otherwise stay suppressed.

Establishing internal reporting channels and enabling a whistleblowing program needs to be simple, quick and affordable. And that's why I created Whistle Willow – a Jira and Confluence Cloud application that can be up and running in less than 5 minutes. And you get compliance with the EU Directive as a nice bonus.

Whistle Willow provides whistleblowers with a secure channel to submit their reports in Jira or Confluence, while the Compliance team receives submissions, acts upon them and keeps the report updated with the latest changes and mitigations – all without revealing the whistleblower's identity.

The entire stack of Whistle Willow operations, from A to Z, runs on the Atlassian platform. This means no data leaves it, and no external integrations are required. The application is built on top of Forge, Atlassian's next-gen serverless platform, and uses 100% of the cloud benefits while keeping the highest security standards. It can be installed from the Atlassian Marketplace and is ready to be used with Atlassian accounts right after.

The security of reports is guaranteed by tenant isolation, unique encryption keys per tenant and randomized submission times for reports. The app allows establishing a two-way communication channel between the whistleblower and the report reviewer without revealing the reporter's personal details.

Whistle Willow is made for whistleblowers and records no personal information in logs or submissions – and it offers a 30-day free trial and one-click installation. Also, it costs less than $1 per user and has no hidden charges; all transactions are done via Atlassian. Check the website for more details, or install it directly from the Marketplace.

Simplicity is really important for establishing a trusted and efficient whistleblowing program, and I believe that Whistle Willow can help more truths come out and let companies act upon them to improve.

Automating alert response with Azure Security Center and Azure Logic Apps

Responding to a security event is a core practice in modern security frameworks. After a potential threat is detected, it is time to act. The shorter the response time, the less damage an attacker can do to your cloud.

Detection in Azure

Azure Security Center in the Standard pricing tier ($15 per VM node per month) comes with automated detection mechanisms. The core detection capability is built around parsing real-time traffic and system logs and applying machine learning algorithms to them:

[Figure: Security Center detection capabilities]

A single dashboard can be found under the Security Center -> Security Alerts blade and also on the main page of Security Center:

[Figure: Security alerts and detections dashboard]

Alerts represent single or multiple security events of the same nature and time span. Incidents are created from multiple alerts which are classified as related to each other – for example, an attacker runs a malicious script, extracts local password hashes and cleans the event log. This sequence of actions will generate one incident.

Incident forensics

Incidents can be investigated with a forensics tool, the Investigation Dashboard (in preview as of May 2018). This tool draws the relationships between alerts, the events that caused them, affected resources and users. It can also help with reconstructing attackers' lateral movements within the network.

[Figure: Investigation Dashboard]

Automated response

Incident forensics represents a post-mortem investigation. An adversary event did happen, and the attackers have already done some damage to the enterprise. We don’t have to wait until malicious actors finish their job – we can start acting right after getting the first signals about the intrusion. Alerts are generated by Azure in real-time, and recently Security Center got a powerful integration with Azure Logic Apps.

Logic Apps in Azure represent workflows with pre-built triggers, conditions and actions, which include a wide range of both native and third-party components. For example, your logic app can listen to an RSS feed and automatically tweet once new pages are published to the feed. Or run a custom PowerShell script through Azure Automation.

One of the recent additions to Logic Apps is Security Center triggers. This feature turns Azure security alerts into a powerful tool for fighting attackers once they trip a wire.

You can find security-related Azure Logic Apps under Security Center -> Playbooks (Preview).

Building the logic

After adding a new playbook, a user is presented with the Logic App Designer. The trigger is pre-populated – "When a response to Azure Security Center alert is triggered". Once we get an alert, the playbook is executed. Then we add a condition – there are multiple parameters that the alert arrives with. Let's take "Alert Severity" and set the condition to High:

[Figure: Playbook trigger with the Alert Severity condition]

Other alert parameters include Confidence Level, Alert Body, Name, Start or End Time and many more. The range is quite broad, which makes it possible to generate very specific responses to almost any imaginable event.

Now, if the condition is TRUE – Alert Severity is High – we want to contain the threat. One of the ways to do so is to isolate the VM under attack. Let's say, assign it to a different Network Security Group which has no connection to the internal company network or some of its segments. To do that, we need to get the VM name from the alert and run some Azure PowerShell to perform the NSG re-assignment.

Creating the Automation Job

Now, we can go to Azure Automation and create an Automation Job for our needs. This can be done through the blades Automation Accounts -> Runbooks -> Add a runbook. As the Runbook type, choose "PowerShell".

Then, we insert the following code:

Param(
    [string]$VMName
)

$connectionName = "AzureRunAsConnection"

try
{
    # Get the connection "AzureRunAsConnection"
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

    Add-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch {
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    }
    else {
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}

# Get VM object
$vm = Get-AzureRmVM -Name $VMName -ResourceGroupName AzureBootcamp
# Get NIC
$Nic = Get-AzureRmNetworkInterface -ResourceGroupName AzureBootcamp | Where-Object {$_.VirtualMachine.Id -eq $vm.Id}
# Change Network Security group to IsolatedNetwork-NSG
$Nic.NetworkSecurityGroup = Get-AzureRmNetworkSecurityGroup -ResourceGroupName AzureBootcamp -Name "IsolatedNetwork-NSG"
# Apply changes
Set-AzureRmNetworkInterface -NetworkInterface $Nic

This code takes the VMName as a parameter and authenticates to your Azure account with the Azure Run As connection (which requires preliminary configuration). Then it gets the VM's NIC and assigns it to the network security group "IsolatedNetwork-NSG". Save the automation runbook with a name like IsolateVM, and don't forget to publish the changes after editing the PowerShell.
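
Before wiring the runbook into a playbook, you can optionally kick it off by hand to verify it behaves as expected – something along these lines (the automation account and VM names below are placeholders):

# Manually start the IsolateVM runbook with a test VM name
Start-AzureRmAutomationRunbook `
    -AutomationAccountName "SecurityAutomation" `
    -ResourceGroupName "AzureBootcamp" `
    -Name "IsolateVM" `
    -Parameters @{ VMName = "VulnerableVM01" }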

Putting it all together

The last step is adding the action to the Azure Logic App we've been building. Select "Azure Automation – Create job" and point it to the IsolateVM runbook.

[Figure: Logic App condition with the "Create job" action]

Here, we specified "Host Name" as the Runbook parameter (notice that it automatically picked up the parameter name VMName that we created in the runbook).

Save the logic – and this is it. Once an alert is generated, the VM is expelled to the isolated network security group with limited access.

Testing and tuning the playbook

To test this integration before an actual event happens, go to any of the previous events in Security Center -> Security Alerts (you can generate one, for example, by trying to download Mimikatz from GitHub), click on the event, then click the "View playbooks" button. In the new window, find your Logic App workflow and press "Run" under "Run playbook":

[Figure: Run playbook]

This will send exactly the same trigger as the alert itself would have done. From the playbook run window or Run history, you will be presented with a static view similar to the Logic App Designer, with the only difference that it shows the logic path taken in this run:

[Figure: Logic App run view with the executed path]

The actual inputs submitted with the trigger can be viewed by expanding the "When a response to an Azure Security Center alert is triggered" section.

[Figure: Alert trigger inputs]

The Azure Security Center alerts integration with Logic Apps provides nearly limitless capabilities, not only for informing about detections (via email, Slack, Skype) but also for automated response to potential attacks by auto-tuning the cloud infrastructure and isolating the threat, as shown in the example.

Have fun building your own playbooks and fighting the threats before they become incidents.

Stay secure!

Integrating security into DevOps practices

DevOps, as a cultural and technological shift in software development, has generated a huge space for improvements in neighboring areas. To name one – application security.

Since DevOps is embedded into every step of an idea on its way to the customer, it can also be used as a framework for driving security enhancements at a reduced cost – the automation and continuous delivery are already built for the CI/CD needs. With a gentle security seasoning, the existing infrastructure will bring value to securing the product.

Where to start

As I said, we want security to affect all or most of the steps where the DevOps transformation is already bringing value. Let's take a look at an abstract DevOps CD pipeline:

[Figure: An abstract CD pipeline without security steps]

It is a pretty straightforward deployment pipeline. It starts with requirements that are implemented into the code, which is covered with unit tests and built. The resulting artifact is deployed to staging, where it is tested with automation, and a code review takes place. When it is all done and has succeeded, the change is merged to master, integration tests run on the merge commit and artifacts are deployed to Production.

Secure CI/CD

Now, making no changes to the CD flow, we want to make the application more secure. The boxes in red are the security features proposed to be added to the pipeline:

[Figure: The same CD pipeline with security steps added in red]

Security requirements

At the requirements planning stage (it can be a backlog grooming or sprint meeting), we instruct POs and engineers to analyze the security impact of the proposed feature and put mitigations/considerations into the task description. This step requires the team to understand the security profile of the application and the attacker profile, and to have in place a classification of threats based on different factors (data exposure, endpoint exposure, etc.). It requires some preliminary work and is often ignored in Agile environments. However, with security embedded into the requirements, it becomes much simpler for an engineer to fix possible issues before they get exploited by an attacker. According to the famous calculation of the cost of fixing a failure, adding security at the design stage costs the least and brings the most value.

In my experience, a separate field in the PBI or a dedicated section in the PBI template needs to be added to make sure the security requirements are not ignored.

Secure coding best practices

For an engineer implementing the feature, it is essential to have a reference for making a particular security-related decision, based on a best-practices document or guidance. It can be a best-practices standard maintained by the company or the industry – but the team must agree on which particular practice/standard to follow. It should answer simple but important questions – for example, how to secure an API? How to store passwords? When to use TLS?

Implementing this step brings consistency to the secure side of the team's coding. It also educates engineers and integrates security best practices into their routines, forming a security-aware mindset.

Security-related Unit testing

This step assumes that we cover the highest-risk functions and features of the code with unit tests first. It is important to keep the tests fresh and increase coverage alongside the ongoing development. One option is that for some risky features, adding security unit tests is required for passing code review.

Security-related Automated testing

In this step, the tests cover different scenarios of using/misusing the product. The goal is to make sure the security issues are addressed and verified with automation. Authorization, authentication, sensitive data exposure – to name a few areas to start with.

This set of tests needs to exist separately from the general test set, providing visibility into the security testing coverage. The need for new automated security tests can be specified at the requirements design stage and verified during code review.
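
As an illustration, here is a minimal sketch of such a test written in Pester (the staging URL is a made-up placeholder; adapt it to whatever your product exposes). It simply verifies that an API endpoint refuses unauthenticated requests:

Describe "API authorization" {
    It "rejects requests without an auth token" {
        $status = 0
        try {
            # A successful call would mean the endpoint is open to anonymous users
            $response = Invoke-WebRequest -Uri "https://staging.example.com/api/users" -UseBasicParsing
            $status = [int]$response.StatusCode
        }
        catch {
            # Invoke-WebRequest throws on non-2xx responses; read the code from the exception
            $status = [int]$_.Exception.Response.StatusCode
        }
        $status | Should -Be 401
    }
}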

Static code analysis

This item doesn't exist on the diagram but is worth mentioning. Security-related rules need to be enabled in the static code analysis tool and be part of the quality gate which determines whether a change is ready for production. There is a vast number of plugins and tools that perform automated analysis and catch what the human eye may miss.
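
One possible wiring, sketched for a .NET codebase with a SonarQube server (the project key, server URL and token variable below are placeholders; security rules and the quality gate themselves live in the SonarQube configuration):

# Install the scanner once per build agent
dotnet tool install --global dotnet-sonarscanner
# Begin the analysis, build, then push the results to the server
dotnet sonarscanner begin /k:"my-service" /d:sonar.host.url="https://sonarqube.example.com" /d:sonar.login="$env:SONAR_TOKEN"
dotnet build
dotnet sonarscanner end /d:sonar.login="$env:SONAR_TOKEN"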

Security Code review

This code review needs to be done by a security-minded person or a security champion from a dedicated AppSec team (if there is one). It is important to distinguish it from an ordinary CR and focus on the security impact and possible code flaws. The person performing the review also makes sure the security requirements are addressed, the required unit/system tests are in place and the feature is good to go into the wild.

Security-related Automated testing

Similar to the automated tests in the previous step, with the only difference that here we test the system as a whole, after merging the change to master.

Results

In the end, we managed to reuse the existing process, adding a few key security-related points with clear rules and visible outcomes. DevOps is an amazing way to help us build a better product, and adding more improvements along the way has never been easier.

Scripts to find WannaCry vulnerable VMs in VMWare vCenter

WannaCry ransomware hit the news by infecting high-profile targets via a security hole that existed in Windows until 13.03.2017, when it was patched.

I created a script that connects to vCenter and checks whether the latest hotfix on each system was installed before or after Microsoft released the patch. This doesn't give 100% certainty, since some fixes might have been installed manually while the required one was omitted.

However, in centralized IT environments that rely on the Windows Update service being turned on and applying all important updates, it is a quick way to check for the vulnerability.
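
The core idea is roughly the following (a simplified sketch, not the actual repo script; the computer name is a placeholder and the cut-off is the patch date mentioned above):

# Find the newest hotfix on a machine and compare it with the patch release date
$patchDate = Get-Date '2017-03-13'
$latest = Get-HotFix -ComputerName 'SRV-APP-01' |
    Where-Object { $_.InstalledOn } |
    Sort-Object InstalledOn |
    Select-Object -Last 1
if ($latest.InstalledOn -lt $patchDate) {
    Write-Output "SRV-APP-01: newest hotfix is from $($latest.InstalledOn) - likely unpatched"
}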

Link to my repo:

https://github.com/doshyt/Wannacry-UpdatesScan

I'm working on bringing similar functionality to PS Remoting and am also looking for ways to figure out whether a newer patch was installed while the one fixing the problem was skipped.

UPDATE 19.05:

I added a useful script that performs a remote check for SMBv1 being enabled on Windows 8 / Server 2012+ machines. It can be run against a list of computer names / FQDNs.

https://github.com/doshyt/Wannacry-UpdatesScan/blob/master/checkSmbOn.ps1

It returns $true if SMBv1 is enabled at the system level.
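
For reference, a rough equivalent of that check using plain PS Remoting looks like this (computer names are placeholders):

Invoke-Command -ComputerName 'SRV-APP-01','SRV-DB-02' -ScriptBlock {
    # EnableSMB1Protocol reflects the system-level SMBv1 setting
    [pscustomobject]@{
        Computer    = $env:COMPUTERNAME
        SMB1Enabled = (Get-SmbServerConfiguration).EnableSMB1Protocol
    }
}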

To turn off SMBv1, execute the following command on the remote machine:

Set-SmbServerConfiguration -EnableSMB1Protocol $false
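
If you prefer to push the change over PS Remoting as well, something like this should do (the computer name is a placeholder):

Invoke-Command -ComputerName 'SRV-APP-01' -ScriptBlock {
    # -Force suppresses the confirmation prompt
    Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force
}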


Benefiting from TeamCity Reverse dependencies

Reverse Dependencies is a TeamCity feature that allows building more complex build workflows by setting parameters from a "parent" build down to its snapshot dependencies. I had been looking for this functionality for a while and recently accidentally discovered how to make it work. Back then, I felt like I had found a treasure 🙂

Example: one of the builds can be run with code analysis turned on – so the results can be used in the "parent" code analysis build (e.g. with SonarQube). But you only need it when manually triggering a SonarQube build; in any other case (e.g. code checked in to the repo) you don't want TC to spend time running code analysis. The code analysis is turned on with a system property system.RunCodeAnalysis=TRUE in the "child" build.

And here is the trick – the "parent" wants to set a property of the "child" build but can't access anything outside the scope of its own parameters. How would you do it in a common way? Maybe create two different builds – where one has system.RunCodeAnalysis always set to TRUE – and trigger it from the SonarQube build.

In this case, you end up having two almost duplicated builds that only exist because the “parent” can’t set properties of a “child”.

Reverse dependencies are here to help

With Reverse dependencies, it can!

This feature is not intuitive, and it doesn't support TeamCity auto-substitution (as with using %%), so you need to be careful with naming. This is how it works:

In the “parent” build (let’s call it SonarQube), you set to TRUE a parameter named

reverse.dep.PROJECT_NAME.PARAMETER_NAME

Here, PARAMETER_NAME is the exact name of the parameter that you want to rewrite in the "child" build, e.g. "system.RunCodeAnalysis".

For this to work, you need to have a snapshot dependency on the "child" build enabled. In the "child" build, you simply set system.RunCodeAnalysis=FALSE.
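
To make it concrete, using the same placeholder name as above: the "parent" SonarQube build defines

reverse.dep.PROJECT_NAME.system.RunCodeAnalysis=TRUE

while the "child" build keeps its default system.RunCodeAnalysis=FALSE and only gets TRUE when the "parent" triggers it.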

When someone triggers the "parent" build, it first rewrites the default (existing) value of the specified parameter (system.RunCodeAnalysis) in the "child" build with TRUE and then starts that build.

This way, you can use one build definition for different tasks that previously couldn't be done with a single "core" build. A great use case is setting up a build with an automated TC trigger in which you set parameters of a "child" build. Let's say you want to deploy a Nightly to a specific environment using the same build definition that all developers use for building their projects. To do so, you can set up a build with a trigger that builds a specific branch and also sets a target environment parameter through the reverse dependency.

When using Reverse Dependencies, a handy way to check the actual values submitted to a build is the "Parameters" tab of the executed build – it shows which values were assigned to parameters, so you can make sure that reverse dependencies work as expected.

Caveats

  1. Reverse Dependencies are not substituted with actual parameter names from “child” builds – it is easy to make a mistake in the definition.
  2. When a project name is changed, you also need to change it manually in all Reverse dependencies.
  3. Try not to modify the build flow with Reverse Dependencies; touch only features that don't affect build results in any way – otherwise you will get a non-deterministic build configuration, in which the same build produces totally different artifacts. The best way to use them is to specify parameters which will be used by external parties, like setting environments for deployment or publishing services, getting code analysis results, etc.

Setting up LCM with DSC PullServer – cmdlets you need to know

PowerShell DSC has a somewhat steep learning curve – at the beginning, it is not that straightforward to figure out how to read logs, how to trigger a consistency check or how to get updated configurations from the pull server.

After building the LCM .meta.mof file, you need to apply it to the machine so that it enrolls itself with the DSC pull server and determines which configuration states to pull and apply. In fact, LCM is the heart of DSC on each node – and it requires some special treatment in order to deliver predictable results. So, to start with LCM, you need to point the DSC engine to the folder where the LCM .meta.mof is stored and register it in the system. This works as follows:

Set-DscLocalConfigurationManager -Path PATH_TO_FOLDER
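
For reference, the kind of meta-configuration that produces such a .meta.mof might look roughly like this (a sketch – the pull server URL, registration key and configuration name are made up):

[DSCLocalConfigurationManager()]
configuration LCMSettings
{
    Node 'localhost'
    {
        Settings
        {
            RefreshMode          = 'Pull'
            ConfigurationMode    = 'ApplyAndAutoCorrect'
            RefreshFrequencyMins = 30
        }
        ConfigurationRepositoryWeb PullServer
        {
            ServerURL          = 'https://pull.example.com:8080/PSDSCPullServer.svc'
            RegistrationKey    = '11111111-2222-3333-4444-555555555555'
            ConfigurationNames = @('WebServerState')
        }
    }
}
# Compile to a folder, then point Set-DscLocalConfigurationManager at it
LCMSettings -OutputPath 'C:\DscLcm'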

As far as I've noticed, starting from the PowerShell 5.1 update, Set-DscLocalConfigurationManager automatically pulls the resources from the pull server. Previously, it was necessary to trigger the resource update manually to avoid waiting for the standard DSC consistency check interval of 15-30 minutes:

Update-DscConfiguration -Verbose -Wait

This command also comes in handy when the resources on the server have been updated (a new configuration state was uploaded or a PS module version got updated).

When we want to start a DSC consistency check (don't confuse it with Update-DscConfiguration – the latter only updates resources, it doesn't run the consistency check) without waiting for the DSC scheduler to do it for us:

Start-DscConfiguration -UseExisting -Verbose -Wait

After updating resources, registering LCM and starting the DSC check, we need to check the status. Here comes the trick – first we need to make sure that there is no consistency check in progress – otherwise, we can't get the LCM status. So, we run:

Get-DscLocalConfigurationManager

This command returns a bunch of parameters, of which we are mainly interested in LCMState.

Get-DscLocalConfigurationManager | Select-Object -ExpandProperty LCMState

It can be "Idle", "Busy", or report an inconsistent configuration which leaves the LCM in a blocked state. When it is "Idle", we are good to go and can check the actual result of applying a configuration state pulled from the pull server.

Get-DscConfigurationStatus

The output is either "Failed" or "Success" – and this answers the question of whether the machine is in the desired state or something went terribly wrong.
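
If you script this, a simple polling loop keeps things in order – wait for the LCM to go idle, then query the status:

# Wait until the LCM is idle, then read the result of the last configuration run
while ((Get-DscLocalConfigurationManager).LCMState -ne 'Idle') {
    Start-Sleep -Seconds 10
}
Get-DscConfigurationStatus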

And the last command – how to get rid of the DSC configuration:

Remove-DscConfigurationDocument -Stage Current,Previous,Pending

DSC stores two configurations for the LCM – the current one (last applied) and the previous one. When a configuration ends up in the "pending" state, you most likely have a problem with your LCM or state. After using this command for cleanup, you can go and set an updated LCM.

Making VMWare Integration Plugin 5.5 work in a modern browser

When deploying a new OVF template from an exported one to a vCenter 5.5 host, you may find yourself in trouble getting it done.

The desktop client may complain about an invalid configuration for device 7, which is an extremely misleading message.

[Figure: "Invalid configuration for device 7" error]

According to support forums, to import OVF templates you must use the web interface. However, the web interface complains that the VMWare Integration Plugin is not installed. It even offers to download it for you.

But here is the trick – whenever you try to install version 5.5 or 5.6 of the plugin (since you need ESXi 5.5 compatibility), it simply never works with modern browsers – the plugin is not added to the browser extension list and is not detected as installed. I assume this is caused by modern browsers' stricter security requirements for installed extensions. The Integration Plugin from VMWare requires disk and system access and is silently blocked in newer browsers.

The plugin's original requirements specification states compatibility with IE 7 and 8.

The solution is simple – run your modern IE in compatibility mode as IE 8! It works like a charm.

Nevertheless, I would recommend quitting the session after you finish using the plugin and switching back to IE 11 so as not to expose your system to risk.


The Security Development Lifecycle book is available for downloading

Very recently, Microsoft published online the foundational book describing the SDL (Security Development Lifecycle).

[Figure: The Security Development Lifecycle book cover]

The principles behind the SDL were born as a response to the Windows Longhorn project reset in the early 2000s. Back then, the entire project was wiped out and restarted from scratch due to critical vulnerabilities in various components – according to MS insiders. At the time, Microsoft had a questionable reputation with regard to the security of its products, so the company made a huge investment in improving it. The SDL was created as a common approach to developing products, from bottom to top – from design to release.

The book was published in good old 2006, which can be seen as the Stone Age compared to the threats and attack vectors present nowadays. Nevertheless, it remains a valuable source of knowledge and actions for teams and companies that struggle with improving the security of their products. In my opinion, it is impossible to deliver a secure solution without integrating SDL principles into every chunk of the development process.

The most recent overview of SDL can be found at the dedicated Microsoft page.

The best part is the set of tools and instruments designed and used by MS at each step of the SDL – with download links. It can be seen as a great reference for the spectrum of problems the SDL solves – you don't have to replicate it in your organization exactly the way it works at MS, but at least it helps you understand the challenges and possible solutions.

Install Powershell 5.0 on Windows 8.1 with a non en-US locale

Windows Management Framework 5 (aka PowerShell 5) fails to install if your Windows was installed with a locale other than en-US. In my case, it was en-GB, so it shouldn't be a big deal, right?

Well, not exactly. After downloading the WMF 5.0 update package, it fails to apply – saying "Update is not applicable to your computer". Do not expect anything more verbose, nor any useful information in the system logs.

After desperate surfing through MS support tickets and trying different fixes, it turned out that the last suspect was the locale configuration. Ironically, the MS support engineers from Seattle couldn't reproduce the problem since they all have en-US Windows installed.

So far, the only working way to install WMF 5 (which means PS 5.0) if the update fails to apply is to change the locale setting – which is a non-trivial task. It requires running the system DISM utility in offline mode (when the current Windows installation is not loaded). It also requires obtaining the en-US language pack .cab archive. And finally, you may even brick your boot configuration if you don't run it properly. Sounds exciting, so let's start!

  1. Set the default language and interface language to en-US (Control Panel – Language – Advanced Options)
  2. Prepare a bootable Win 8.1 USB installation drive – I used the same image as for the initial installation. Just write it to USB (Win32DiskImager is a great tool for this).
  3. Download the en-US language pack. It can't be found as a separate package from the official resources. What I did was use the MSDN subscription downloads page and grab the installation media of the Windows 8.1 Language Pack – a DVD with a bunch of language packs on it. Then mount the ISO, navigate to "langpacks/en-US" and save the .cab file to a convenient location on your drive, e.g. C:\lp.cab.
  4. Boot into troubleshooting mode with a command prompt – from a running Windows session, press Restart while holding the "Shift" key. The system will log out and the troubleshooting options menu will be loaded from the USB. Navigate to Troubleshoot -> Advanced options -> Command Prompt.
  5. In the command prompt, run the DISM utility: dism /Image:C:\ /Add-Package /PackagePath:C:\lp.cab.
  6. Do not change the locale here. It is possible and sometimes described as one of the steps to apply, e.g. using dism ... /Set-Syslocale, but better don't – it made my machine fail to boot until I reverted this change.
  7. Boot normally – the language pack is now installed but not yet applied. Open "Control Panel\All Control Panel Items\Language", select "English – United States" and click "Options" on the right side. Under "Windows display language" there will be a link to set the current locale to it. In other setups, I've seen the same thing done from the "Control Panel -> Language -> Advanced Settings -> Change Locale" menu.
  8. After signing in and out, you can check that the locale has been changed from an elevated command prompt: dism /Online /Get-Intl.
  9. Now, the WMF 5 update can be applied – it might first install some prerequisite fix and ask for a reboot. Afterward, run the installer again – and you will get your precious PS 5.0.
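
Once the installer has finished and the machine has rebooted, a quick sanity check from a new console:

# Should now report major version 5
$PSVersionTable.PSVersion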

[Figure: PowerShell 5.0 after the update]

This is it! I hope MS folks will fix this issue soon – so the update can be applied to a system with any locale.