Automating alert response with Azure Security Center and Azure Logic Apps

Responding to a security event is a core practice in modern security frameworks. After a potential threat is detected, it is time to act. The shorter the response time, the less damage an attacker can deal to your cloud.

Detection in Azure

Azure Security Center in the Standard pricing tier ($15 per VM node per month) comes with automated detection mechanisms. The core detection capability is built around parsing real-time traffic and system logs and applying machine learning algorithms to them:

[Figure: Security Center detection capabilities]

A unified dashboard can be found under the Security Center -> Security Alerts blade and also on the main page of Security Center:

[Figure: Security alerts dashboard]

Alerts represent single or multiple security events of the same nature and time span. Incidents are created from multiple alerts that are classified as related to each other – for example, an attacker runs a malicious script, extracts local password hashes, and cleans the event log. This sequence of actions will generate one incident.

Incident forensics

Incidents can be investigated in a forensics tool, the Investigation Dashboard (in preview as of May 2018). This tool draws the relationships between alerts, the events that caused them, affected resources, and users. It can also help reconstruct the lateral movement of attackers within the network.

[Figure: Investigation Dashboard]

Automated response

Incident forensics represents a post-mortem investigation. An adversary event has already happened, and the attackers have already done some damage to the enterprise. But we don’t have to wait until malicious actors finish their job – we can start acting right after getting the first signals of an intrusion. Alerts are generated by Azure in real time, and recently Security Center got a powerful integration with Azure Logic Apps.

Logic Apps in Azure represent workflows built from pre-built triggers, conditions, and actions, which include a wide range of both native and third-party components. For example, your logic app can listen to an RSS feed and automatically tweet once new pages are published to the feed. Or run a custom PowerShell script through Azure Automation.

One of the recent additions to Logic Apps is Security Center triggers. This feature turns Azure security alerts into a powerful tool for fighting attackers once they trip a wire.

You can find security-related Azure Logic Apps under Security Center -> Playbooks (Preview).

Building the logic

After adding a new playbook, a user is presented with the Logic App Designer. The trigger is pre-populated – “When a response to an Azure Security Center alert is triggered”. Once we get an alert, the playbook is executed. Then we add a condition – there are multiple parameters that the alert arrives with. Let’s take “Alert Severity” and set the condition to High:

[Figure: Playbook trigger and severity condition]

Other alert parameters include Confidence Level, Alert Body, Name, Start or End Time, and many more. The range is quite broad, which makes it possible to generate very specific responses to almost any imaginable event.

Now, if the condition is TRUE – Alert Severity is High – we want to contain the threat. One way to do so is to isolate the VM under attack: say, assign it to a different Network Security Group that has no connection to the internal company network or some of its segments. To do that, we need to get the VM name from the alert and run some Azure PowerShell performing the NSG re-assignment.

Creating the Automation Job

Now, we can go to Azure Automation and create an automation runbook for our needs. This can be done through the blades Automation Accounts -> Runbooks -> Add a runbook. As the runbook type, choose “PowerShell”.
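If you prefer scripting this step, the runbook can also be created with the AzureRM.Automation cmdlets – a minimal sketch, where the Automation account name SecurityAutomation and its resource group AzureBootcamp are placeholders for your own:

# Create an empty PowerShell runbook in the Automation account
New-AzureRmAutomationRunbook `
    -AutomationAccountName "SecurityAutomation" `
    -ResourceGroupName "AzureBootcamp" `
    -Name "IsolateVM" `
    -Type PowerShell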

Then, we insert the following code:

Param(
    [string]$VMName
)

$connectionName = "AzureRunAsConnection"

try
{
    # Get the "AzureRunAsConnection" Run As connection
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

    Add-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch
{
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    }
    else
    {
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}

# Get the VM object
$vm = Get-AzureRmVM -Name $VMName -ResourceGroupName AzureBootcamp
# Get the NIC attached to this VM
$Nic = Get-AzureRmNetworkInterface -ResourceGroupName AzureBootcamp | Where-Object {$_.VirtualMachine.Id -eq $vm.Id}
# Change the Network Security Group to IsolatedNetwork-NSG
$Nic.NetworkSecurityGroup = Get-AzureRmNetworkSecurityGroup -ResourceGroupName AzureBootcamp -Name "IsolatedNetwork-NSG"
# Apply the changes to the network interface
Set-AzureRmNetworkInterface -NetworkInterface $Nic

This code takes the VMName as a parameter and authenticates to your Azure account with an Azure Run As connection (this requires preliminary configuration). Then it gets the VM’s NIC and assigns it to the network security group “IsolatedNetwork-NSG”. Save the automation runbook with the name IsolateVM, for instance, and don’t forget to publish the changes after editing the PowerShell.
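Before wiring the runbook into the playbook, it is worth starting it manually once to make sure it behaves – a quick sketch, again with the hypothetical SecurityAutomation account and a test VM name:

# Start the published runbook by hand, passing the VM name parameter
Start-AzureRmAutomationRunbook `
    -AutomationAccountName "SecurityAutomation" `
    -ResourceGroupName "AzureBootcamp" `
    -Name "IsolateVM" `
    -Parameters @{ VMName = "TestVM01" }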

Putting it all together

The last step is adding the action to the Azure Logic App we’ve been building. Select “Azure Automation – Create job” and point it to the IsolateVM automation runbook.

[Figure: Logic App action for the TRUE branch]

Here, we specified “Host Name” as the runbook parameter (notice that it automatically picked up the parameter name VMName that we created in the runbook).

Save the logic – and that is it. Once an alert is generated, the VM is expelled to the isolated network security group with limited access.

Testing and tuning the playbook

To test this integration before an actual event happens, go to any of the previous events under Security Center – Security Alerts (you can generate them, for example, by trying to download Mimikatz from GitHub), click on the event, then click the “View playbooks” button. In the new window, find your Logic App workflow and press “Run” under “Run playbook”:

[Figure: Run playbook window]

This will send exactly the same trigger as the alert would have done. From the playbook run window or the Run history, you will be presented with a static view similar to the Logic App Designer, with the only difference that it shows the logic path that was taken in this run:

[Figure: Logic App run history]

The actual inputs that were submitted with the trigger can be viewed by expanding the “When a response to an Azure Security Center alert is triggered” section.

[Figure: Alert trigger inputs]

The integration of Azure Security Center alerts with Logic Apps provides nearly limitless capabilities, not only for informing about detections (via email, Slack, Skype) but also for automated response to potential attacks by auto-tuning the cloud infrastructure and isolating the threat, as shown in the example.

Have fun building your own playbooks and fighting the threats before they become incidents.

Stay secure!

Integrating security into DevOps practices

DevOps, as the cultural and technological shift in software development, has generated huge space for improvements in neighboring areas. To name one: application security.

Since DevOps is embedded into every step of an idea’s journey to the customer, it can also be used as a framework for driving security enhancements at reduced cost – the automation and continuous delivery are already built for CI/CD needs. With a gentle security seasoning, the existing infrastructure will also bring value to securing the product.

Where to start

As I said, we want security to affect all or most of the steps where the DevOps transformation is already bringing value. Let’s take a look at what we have in an abstract DevOps CD pipeline:

[Figure: Abstract CD pipeline]

It is a pretty straightforward deployment pipeline. It starts with requirements that are implemented in code, which is covered with unit tests and built. The resulting artifact is deployed to staging, where it is tested with automation, and a code review takes place. When all of that has succeeded, the change is merged to master, integration tests run on the merge commit, and the artifacts are deployed to production.

Secure CI/CD

Now, without changing the CD flow, we want to make the application more secure. The boxes in red are the security features proposed to be added to the pipeline:

[Figure: CD pipeline with security steps highlighted]

Security requirements

At the requirements planning stage (it can be a backlog grooming or sprint planning meeting), we instruct POs and engineers to analyze the security impact of the proposed feature and put mitigations/considerations into the task description. This step requires the team to understand the security profile of the application and the attacker profile, and to have in place a classification of threats based on different factors (data exposure, endpoint exposure, etc.). It requires some preliminary work and is often ignored in Agile environments. However, with security embedded into the requirements, it becomes much simpler for an engineer to fix possible issues before they are exploited by an attacker. According to the well-known calculation of the cost of fixing a possible failure, adding security at the design stage costs the least and brings the most value.

In my experience, a separate field in the PBI or a dedicated section in the PBI template needs to be added to make sure the security requirements are not ignored.

Secure coding best practices

For an engineer who implements the feature, it is essential to have a reference for how to make a particular security-related decision, based on a best-practices document or guidance. It can be a standard maintained by the company or by the industry – but the team must agree on which particular practice/standard to follow. It should answer simple but essential questions – for example, how to secure an API? How to store passwords? When to use TLS?

Implementing this step brings consistency to the secure side of the team’s coding. It also educates engineers and integrates security best practices into their routines, forming a security-aware mindset.

Security-related Unit testing

This step assumes that we cover the high-risk functions and features of the code with unit tests first. It is important to keep the tests fresh and increase coverage alongside ongoing development. One option is to require security unit tests for risky features as a condition for passing code review.
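As an illustration only – using Pester and a hypothetical New-ApiToken helper under test – such a security unit test could look like this:

# Pester unit tests guarding a security-sensitive helper (New-ApiToken is hypothetical)
Describe "New-ApiToken" {
    It "rejects token lifetimes longer than 24 hours" {
        { New-ApiToken -LifetimeHours 48 } | Should -Throw
    }
    It "generates tokens of sufficient length" {
        (New-ApiToken -LifetimeHours 1).Length | Should -BeGreaterThan 31
    }
}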

Security-related Automated testing

In this step, the tests cover different scenarios of using/misusing the product. The goal is to make sure the security issues are addressed and verified with automation. Authorization, authentication, sensitive data exposure – to name a few areas to start with.

This set of tests needs to exist separately from the general test set, providing visibility into the security testing coverage. The need for new automated security tests can be specified at the requirements design stage and verified during code review.
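For example, a small Pester-based check against a staging deployment (the URL below is a placeholder) can assert that a sensitive endpoint is not reachable without authentication:

# System-level security test: the admin endpoint must require authentication
Describe "Staging API security" {
    It "returns 401 for unauthenticated requests to the admin endpoint" {
        try {
            $response = Invoke-WebRequest -Uri "https://staging.example.com/api/admin" -UseBasicParsing
            $statusCode = [int]$response.StatusCode
        }
        catch {
            $statusCode = [int]$_.Exception.Response.StatusCode
        }
        $statusCode | Should -Be 401
    }
}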

Static code analysis

This item doesn’t exist on the diagram but deserves a mention. Security-related rules need to be enabled in the static code analysis tool and be part of the quality gate that determines whether a change is ready for production. There is a vast number of plugins and tools that perform automated analysis and catch what the human eye may miss.

Security Code review

This code review needs to be done by a security-minded person or a security champion from a dedicated AppSec team (if there is one). It is important to distinguish it from an ordinary CR and to focus on the security impact and possible code flaws. The person performing the review also makes sure the security requirements are addressed, the required unit/system tests are in place, and the feature is good to go into the wild.

Security-related Automated testing

Similar to the automated tests in the previous step, with the only difference that here we test the system as a whole, after merging the change to master.

Results

In the end, we managed to reuse the existing process, adding a few key security-related points with clear rules and visible outcomes. DevOps is an amazing way of helping us build a better product, and adding further improvements along the way has never been easier.

Scripts to find WannaCry vulnerable VMs in VMWare vCenter

WannaCry ransomware hit the news by infecting high-profile targets via a security hole that existed in Windows until 13.03.2017, when it was patched.

I created a script that connects to vCenter and checks whether the latest hotfix on each system was installed before or after Microsoft released the patch. This doesn’t give a 100% guarantee, since some fixes might have been installed manually while the required one was omitted.

However, in centralized IT environments that rely on the Windows Update service being turned on and applying all important updates, it can be a good way to check for the vulnerability.
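The core of the idea, independent of vCenter, can be sketched in plain PowerShell (this is an illustration of the approach, not the script from the repo; the server name is a placeholder):

# Compare the newest installed hotfix date with the cut-off date used by the check
$cutoffDate = [datetime]'2017-03-13'

$latestHotfix = Get-HotFix -ComputerName "SERVER01" |
    Where-Object { $_.InstalledOn } |
    Sort-Object InstalledOn |
    Select-Object -Last 1

if ($latestHotfix.InstalledOn -lt $cutoffDate) {
    Write-Warning "SERVER01: newest hotfix is from $($latestHotfix.InstalledOn) - potentially missing the fix"
}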

Link to my repo:

https://github.com/doshyt/Wannacry-UpdatesScan

I am working on bringing similar functionality to PS Remoting and am also looking for ways to figure out whether a newer patch was installed but not the one fixing the problem.

UPDATE 19.05:

I added a useful script that performs a remote check for SMBv1 being enabled on Windows 8 / Server 2012+ machines. It can be run against a list of computer names / FQDNs.

https://github.com/doshyt/Wannacry-UpdatesScan/blob/master/checkSmbOn.ps1

It returns $true if SMBv1 is enabled at the system level.
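If you want to roll your own check, the idea boils down to something like this (a sketch of the approach, not the exact script from the repo):

# Query a list of machines remotely and keep only those where SMBv1 is still enabled
$computers = Get-Content .\computers.txt

Invoke-Command -ComputerName $computers -ScriptBlock {
    [pscustomobject]@{
        Computer    = $env:COMPUTERNAME
        Smb1Enabled = (Get-SmbServerConfiguration).EnableSMB1Protocol
    }
} | Where-Object { $_.Smb1Enabled }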

To turn off SMBv1, execute the following command on the remote machine:

Set-SmbServerConfiguration -EnableSMB1Protocol $false


Benefiting from TeamCity Reverse dependencies

Reverse dependencies are a TeamCity feature that allows building more complex build workflows by setting parameters from a “parent” build down to its own snapshot dependencies. I had been looking for this functionality for a while and recently, by accident, discovered how to make it work. Back then, I felt like I had found a treasure 🙂

Example: one of the builds can be run with code analysis turned on – so that the results can be used in the “parent” code analysis build (e.g. with SonarQube). But you only need it when manually triggering a SonarQube build; in any other case (e.g. code checked in to the repo) you don’t want TC to spend time running code analysis. The code analysis is turned on with a system property system.RunCodeAnalysis=TRUE in the “child” build.

And here is the trick – the “parent” wants to set a property of the “child” build but can’t access it outside the scope of its own parameters.

How would you do it in a common way? Maybe create two different builds – where one always has system.RunCodeAnalysis set to TRUE – and trigger it from the SonarQube build.

In this case, you end up having two almost duplicated builds that only exist because the “parent” can’t set properties of a “child”.

Reverse dependencies are here to help

With Reverse dependencies, it can!

This feature is not intuitive, and it doesn’t support TeamCity auto-substitution (as with using %%), so you need to be careful with naming. This is how it works:

In the “parent” build (let’s call it SonarQube), you set to TRUE a parameter named

reverse.dep.PROJECT_NAME.PARAMETER_NAME

Here, PARAMETER_NAME is the exact name of the parameter that you want to override in the “child” build, e.g. “system.RunCodeAnalysis”.

For this to work, you need to have a snapshot dependency on the “child” build enabled. In the “child” build you simply set system.RunCodeAnalysis=FALSE.
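For example, if the “child” build configuration has the (hypothetical) ID MyProduct_Build, the parameters on both sides look like this:

In the “parent” (SonarQube) build: reverse.dep.MyProduct_Build.system.RunCodeAnalysis = TRUE
In the “child” (MyProduct_Build) build: system.RunCodeAnalysis = FALSE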

When someone triggers the “parent” build, it first overwrites the default (existing) value of the specified parameter (system.RunCodeAnalysis) in the “child” build with TRUE and then starts that build.

This way, you can use one build definition for different tasks that previously couldn’t be handled by a single “core” build. A great use case is setting up a build with an automated TC trigger in which you set parameters of a “child” build. Let’s say you want to deploy a nightly build to a specific environment using the same build definition that all developers use for building their projects.
To do so, you can set up a build with a trigger that builds a specific branch and also sets a target environment parameter through the reverse dependency.

When using reverse dependencies, a handy way to check the actual values submitted to a build is the “Parameters” tab of the executed build – it shows which values were assigned to parameters, so you can make sure the reverse dependencies work as expected.

Caveats

  1. Reverse Dependencies are not substituted with actual parameter names from “child” builds – it is easy to make a mistake in the definition.
  2. When a project name is changed, you also need to change it manually in all Reverse dependencies.
  3. Try not to modify the build flow with Reverse Dependencies, touching only features that don’t affect build results in any way – otherwise you will get a non-deterministic build configuration, in which the same build produces totally different artifacts. The best way to use them is to specify parameters that will be used by external parties, like setting environments for deployment or publishing services, getting code analysis results, etc.

Setting up LCM with DSC PullServer – cmdlets you need to know

PowerShell DSC has a somewhat steep learning curve – at the beginning, it is not that straightforward to figure out how to read logs, how to trigger a consistency check, or how to get updated configurations from the pull server.

After building the LCM .meta.mof file, you need to apply it to the machine so that it enrolls itself with the DSC pull server and determines which configuration states to pull and apply. In fact, the LCM is the heart of DSC on each node – and it requires some special treatment in order to deliver predictable results. So, to start with the LCM, you need to point the DSC engine to the folder where the LCM .meta.mof is stored and register it in the system. This works as follows:

Set-DscLocalConfigurationManager -Path PATH_TO_FOLDER

As far as I have noticed, starting from the PowerShell 5.1 update, this command automatically pulls the resources from the pull server. Previously, it was necessary to trigger the resource update manually in order not to wait for the standard DSC consistency check interval of 15–30 minutes:

Update-DscConfiguration -Verbose -Wait

This command also comes in handy when there has been an update to resources on the server (a new configuration state was uploaded or a PS module version got updated).

When we want to start a DSC consistency check (don’t confuse it with Update-DscConfiguration – the latter only updates resources; it doesn’t run the consistency check) without waiting for the DSC scheduler to do it for us:

Start-DscConfiguration -UseExisting -Verbose -Wait

After updating resources, registering the LCM and starting the DSC check, we need to check the status. Here comes the trick – first we need to make sure that there is no consistency check in progress – otherwise, we can’t get the status of the LCM. So, we run:

Get-DscLocalConfigurationManager

This command returns a bunch of parameters, of which we are mainly interested in LCMState:

Get-DscLocalConfigurationManager | Select-Object -ExpandProperty LCMState

It can be “Idle”, “Busy”, or report an inconsistent configuration that leaves the LCM in a blocked state. When it is “Idle”, we are good to go – and can check the actual result of applying the configuration state pulled from the pull server:

Get-DscConfigurationStatus

The output is either “Failed” or “Success” – and this answers the question of whether the machine is in the desired state or something went terribly wrong.
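Putting the status checks together, a small helper like this (just a convenience sketch) waits until the LCM is idle and then shows the result of the last run:

# Wait for any running consistency check to finish, then report the last configuration run
while ((Get-DscLocalConfigurationManager).LCMState -ne 'Idle') {
    Start-Sleep -Seconds 10
}
Get-DscConfigurationStatus | Select-Object Status, StartDate, Type, RebootRequested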

And the last command – how to get rid of the DSC configuration:

Remove-DscConfigurationDocument -Stage Current,Previous,Pending

DSC stores two configurations for the LCM – current (the last applied) and previous. When it ends up in the “Pending” state, most likely you have a problem with your LCM or state. After using this command for cleanup, you can go and set an updated LCM.

Making VMWare Integration Plugin 5.5 work in modern browser

When deploying a new OVF template to a vCenter 5.5 host from an exported one, you may find yourself in trouble getting it done.

The desktop client may complain about an invalid configuration for device 7, which is an extremely misleading message.

[Figure: “Invalid configuration for device 7” error]

According to support forums, to import OVF templates you must use the web interface. However, the web interface complains about the VMware Integration Plugin not being installed. It even offers you to download it.

But here is the trick – whenever you try to install its version 5.5 or 5.6 (since you need ESXi 5.5 compatibility), it simply never works with modern browsers – the plugin is not added to the browser extension list and is not detected as installed. I assume this is caused by their enhanced security requirements for installed extensions. The Integration Plugin from VMware requires disk and system access and is silently blocked in newer browsers.

The original requirements specification of the plugin states compatibility with IE 7 and 8.

The solution is simple – run your modern IE in compatibility mode as IE 8! It works like a charm.

Nevertheless, I would recommend quitting the session after you have finished using the plugin and coming back to IE 11, so as not to expose your system to risk.


The Security Development Lifecycle book is available for downloading

Very recently, Microsoft published online the foundational book describing the SDL (Security Development Lifecycle).

[Figure: The Security Development Lifecycle book cover]

The principles behind the SDL were born as a response to the Windows Longhorn project reset in the early 2000s. Back then, the entire project was wiped out and restarted from scratch due to the presence of critical vulnerabilities in various components – according to MS insiders. At the time, Microsoft had a questionable reputation with regard to the security of its products. Therefore, the company made a huge investment in security improvement. The SDL was created as the common approach to developing products, from the bottom to the top – from design to release.

The book was published in good old 2006, which can be seen as the Stone Age compared to the threats and attack vectors present nowadays. Nevertheless, it remains a valuable source of knowledge and actions for teams and companies that struggle with improving the security of their products. In my opinion, it is impossible to deliver a secure solution without integrating SDL principles into every chunk of the development process.

The most recent overview of SDL can be found at the dedicated Microsoft page.

The best part of it is the set of tools and instruments designed and used by MS at each step of the SDL – with links for download. It can be seen as a great reference to the spectrum of problems that the SDL solves – you don’t have to replicate it in your organization exactly the way it works at MS, but at least it helps you understand the challenges and possible solutions.

Install Powershell 5.0 on Windows 8.1 with a non en-US locale

Windows Management Framework 5 (aka PowerShell 5) fails to install if your Windows was installed with a locale other than en-US. In my case, it was en-GB, so it is not a big deal, right?

Well, not exactly. After downloading the WMF 5.0 update package, it fails to apply – saying “The update is not applicable to your computer”. Do not expect to get anything more verbose, nor to find any useful information in the system logs.

After desperate surfing through MS support tickets and trying different fixes, it turned out that the last suspect was the locale configuration. Ironically, MS support engineers from Seattle couldn’t reproduce the problem since they all have en-US Windows installed.

So far, the only working way to install WMF 5 (which means PS 5.0) if the update fails to apply is to change the locale setting – which is a non-trivial task. It requires running the system DISM utility in offline mode (when the current Windows installation is not loaded). It also requires obtaining the en-US language pack .cab archive. And finally, you may even brick your boot configuration if you don’t run it properly. Sounds exciting – let’s start then!

  1. Set the default language and interface language to en-US (Control Panel – Language – Advanced Options)
  2. Prepare a bootable Win 8.1 USB installation drive – I used the same image as for the initial installation. Just write it to USB (Win32DiskImager is a great tool for this).
  3. Download the en-US language pack. It can’t be found as a separate package from official resources. What I did was use the MSDN subscription downloads page and grab the installation media of the Windows 8.1 Language Pack – it is a DVD with a bunch of language packs on it. Then mount the ISO, navigate to “langpacks/en-US”, and save the .cab file to a convenient location on your drive, e.g. C:\lp.cab
  4. Boot into troubleshooting mode with a command prompt – from the running Windows session, press Restart while holding the “Shift” key. The system will log out and the troubleshooting options menu will be loaded from the USB. Navigate to Troubleshoot -> Advanced options -> Command Prompt.
  5. In the command prompt, run the DISM utility: dism /Image:C:\ /Add-Package /PackagePath:C:\lp.cab.
  6. Do not change the locale here. It is possible, and sometimes described as one of the steps to apply, e.g. using dism ... /Set-SysLocale, but better don’t – it made my machine fail to boot until I reverted this change.
  7. Boot normally – the language pack was installed but not yet applied. Open “Control Panel\All Control Panel Items\Language”, select “English – United States” and click “Options” on the right side. Under “Windows display language” there will be a link to set the current locale to it. In a different situation, I’ve seen it being done from the “Control Panel -> Language -> Advanced Settings -> Change Locale” menu.
  8. After signing out and back in, you may check that the locale has been changed from an elevated command prompt: dism /Online /Get-Intl
  9. Now the WMF 5 update can be applied – it might first install some prerequisite fix and ask for a reboot. Afterward, run the installer again – and you will get your precious PS 5.0.
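To confirm that the update took effect, a quick check from a new PowerShell session is enough:

# Should now report major version 5
$PSVersionTable.PSVersion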

[Figure: PowerShell 5.0 version output]

This is it! I hope MS folks will fix this issue soon – so that the update can be applied to a system with any locale.

DSC Pull Server for managing… Linux nodes

Let’s face it, PowerShell DSC and Linux are definitely not the most popular duo when speaking about configuration management. In this article, I will explain how to set up a Linux node with an existing DSC pull server and why this is a win-win for a Windows-dominated tech stack.

PowerShell DSC is well documented for Windows; you may find some information about using it in Push mode (when the configuration is applied TO the node FROM the server via a CIM session), but the Linux pull scenario rarely gets more than a paragraph of attention. So, I’m here to fix that.

The main prerequisite of this exercise is a configured DSC pull server – you may want to check out the excellent MSDN tutorial. We run it on Windows Server 2012 R2, with WMF (Windows Management Framework) 5.0. The hostname for the purposes of this exercise is set to DSCPULLSERVER. You also have a Linux node – in this case, CentOS 7, with the hostname DSCPULLAGENT.

The first discovery about DSC on Linux is that the client doesn’t run in PowerShell for Linux. It is actually a set of Python and shell scripts that provide an interface to the OMI (Open Management Infrastructure) CIM server and control the execution of pulled configurations. Therefore, it works in a slightly different way compared to “native” DSC.

To start with, we need to obtain both the OMI and DSC RPMs in accordance with the installed OpenSSL version (check it with $ openssl version) – it might be either 0.9.8 (then use the RPMs with ssl_98 in the name) or >= 1.0.0 (use the ssl_100 RPMs).

rpm -Uvh https://github.com/Microsoft/PowerShell-DSC-for-Linux/releases/download/v1.1.1-294/dsc-1.1.1-294.ssl_100.x64.rpm
rpm -Uvh omi-1.1.0.ssl_100.x64.rpm

After installing the packages, DSC files can be found under /opt/microsoft/dsc/.

The next step is to prepare a DSC Local Configuration Manager (LCM) meta-configuration file. It tells the DSC client where to pull the configuration from and which configuration to pull. To generate it (and all kinds of other .mof files), I use a separate Windows machine.

Here is an example PowerShell DSC script that creates the local configuration manager .mof:

[DSCLocalConfigurationManager()]
configuration LinuxPullAgent
{
    Node localhost
    {
        Settings
        {
            RefreshMode       = 'Pull'
            ConfigurationMode = 'ApplyAndAutocorrect'
        }
        ConfigurationRepositoryWeb main
        {
            ServerURL          = 'https://DSCPULLSERVER:8080/PSDSCPullServer.svc'
            RegistrationKey    = "xxx-xxx-xxxxxx-xxxxxx"
            ConfigurationNames = @('DSCLINUXSTATE')
        }
    }
}
LinuxPullAgent

Here, DSCLINUXSTATE is the name of the configuration placed in the Configurations folder of the DSC pull server, and RegistrationKey is the key you obtain from the pull server. You can also specify here how often to pull the configuration, along with a bunch of other parameters.
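For completeness – on the pull server side, the compiled configuration .mof (renamed to match the configuration name) has to sit in that Configurations folder together with its checksum. Assuming the default DscService path, publishing it looks roughly like this:

# Publish the compiled configuration and generate its checksum on the pull server
Copy-Item .\DSCLINUXSTATE.mof "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration\"
New-DscChecksum -Path "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration\DSCLINUXSTATE.mof" -Force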

Running the LCM configuration script above on a machine with PS 4.0 or higher produces a local configuration manager file, localhost.mof, that you need to transfer to the Linux node. Basically, it should be the same file for all your Linux nodes using the DSCLINUXSTATE configuration.

Assume the transferred file is located at ~/localhost.mof. Now, you need to install this pull server configuration on the node. We navigate to the Scripts folder, where you can find various Python scripts for different tasks – for example, do not run Register.py, as it is intended for use with the Azure DSC service; and GetDscConfiguration.py and StartDscConfiguration.py are the scripts for running DSC in Push mode.

So here we are interested only in SetDscLocalConfigurationManager.py. This script configures the pull agent based on the .mof file generated for this agent. The second command in the next listing displays all the parameters of the active LCM configuration and helps to troubleshoot issues related to it.

SetDscLocalConfigurationManager.py -configurationmof ~/localhost.mof
GetDscLocalConfigurationManager.py

After installing the configuration, pulls are done periodically, with a 30-minute minimum interval. To trigger the pull of the configuration manually, we need to execute:

/opt/microsoft/dsc/bin/ConsistencyInvoker

This command pulls the configuration state from the pull server we just registered and checks whether the current state of the node corresponds to it.

And this is it! If everything went well, your Linux node is now managed by DSC. Using the nx, nxNetworking and nxComputerManagement modules from the PowerShell Gallery, you can cover basic configuration scenarios for Linux machines.
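Just to give a taste – a minimal state using the nxFile resource might look like the following (compile it, rename the resulting .mof to DSCLINUXSTATE.mof, and publish it to the pull server as described above):

Configuration DSCLINUXSTATE
{
    Import-DscResource -ModuleName nx

    Node localhost
    {
        # Ensure a marker file exists on the managed CentOS node
        nxFile DscMarker
        {
            DestinationPath = "/etc/dsc-managed"
            Contents        = "This node is managed by DSC`n"
            Ensure          = "Present"
            Type            = "File"
        }
    }
}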

You can troubleshoot the issues related to DSC states or server connectivity by checking the error messages in the log file /var/opt/omi/log/dsc.log. OMI server instance log is located under /var/opt/omi/log/omiserver.log.

Pros and cons of DSC on Linux:

  • Pros:
    • Powershell and DSC ecosystem – very convenient when the company’s tech stack mainly includes Windows machines. Managing Linux with the same set of technologies sounds like a dream for decreasing complexity and adding transparency to the CM solution.
    • Simplicity of configuration in the Pull model – the server only provides the state; the node does all the work.
    • Extensibility of the DSC framework – it allows you to create custom scripts and utilize existing modules, but for Linux!
    • Microsoft’s open-source direction – it looks very determined, and after the PowerShell release for Linux, I believe the integration between these two worlds will only improve, adding more functionality and features to both. Therefore, it might be that tomorrow DSC for Linux will be the next big thing.
    • Free of charge, compared to $120+/node for Puppet or Chef.
  • Cons:
    • The nx family of modules is still limited compared to what native CM tools (Puppet, Chef, Ansible, Salt) can do.
    • Poor logging and transparency of a node’s state – it is hard to trace the results of applying states, especially from the DSC pull server. Logs need to be examined on the agent. It also requires setting up a DSC reporting server, which is not the most user-friendly interface either.
    • Requires an orchestration tool for machine deployment (in the case of using VMs). DSC cannot provision a new node by itself.

In general, it worked quite well for us – we were aware of the general DSC limitations and didn’t find anything unexpected in its Linux-specific implementation. Given our tech stack, it turned out to be a perfect solution.

In the next post, I will cover examples of using the nx modules for configuring a CentOS node.

4 ways of developing DevOps competences

I’ve been in DevOps for about 5 years so far and recently summed up the main ways of learning in this business.

The main challenge of learning DevOps is the extremely wide coverage of technologies and skills that are presumed to be useful in this occupation.

While the DevOps toolset highly depends on the technology stack, there are common areas of knowledge that (based on my trials and failures) turned out to be crucial for a systemic vision and approach.

First of all, DevOps is not only about development, as we can see from the name – there is also an Ops part. Therefore, any reading related to operations management is beneficial. You need to know how the company works from the inside out in order to be efficient in DevOps when improving delivery processes. Some of my favorites are:

They will tell you how to run your company’s operations like a factory – and you may argue that you work in a lean/agile/cool startup and have nothing to do with blue-collar industrial work – and you would be mistaken. It is important to start looking at the work being done by your company as something that you can measure and improve – not ephemerally, but in terms of time, throughput and quality.

The second thing I learned about DevOps is to always keep asking yourself: am I doing the right thing? Am I delivering value and improving the processes? Do it all the time, even outside of work – for example, when attending tech conferences and reading literature. Try to make sure you are on track with the latest trends, and here I mean being on track conceptually. Do not try to chase each and every new tool or release – it rarely makes a really big difference compared to using the old ones. Concentrate on following the concepts – whenever someone introduces a new way of doing DevOps, it is time to start digging in. When you introduce new concepts and take the best ideas to work, you change the rules of the game and may get much better results than by simply replacing the tools.

The third point is to constantly and continuously improve your OWN operations. Think about how you handle the work that lands on your desk; think about yourself as a factory and about the output that you produce. What do you need to improve? You may start with time management, memory skills, and communication.

And the last, fourth way of learning is to always communicate your ideas – in blogs, comments, and discussions with colleagues – and see how people react to them. This will help you understand the main pain points and weak spots of your concepts and improve them, with the assistance of others and from their perspective.