Developers and IT-Pros: who will be left in the past? How to work together efficiently


Last weekend, I was invited to the Global Azure Bootcamp in Linköping where, among other activities, I took part in a panel discussion about the future of Developer and IT-Pro collaboration and ways to make it more efficient. It was a brilliant and sharp discussion, and I would like to share its main points (or rather my view on them) in this post.

IT-Pros

To start with – who are these mysterious creatures? I like the term “IT-Pro” since it covers more than the more common dev counterpart – Operations, or Ops. Ops usually refers to the infrastructure people, while from my perspective, IT-Pros include everyone who is NOT a developer: brave folks like Security, DBAs, Cloud Architects, Consultants, UX designers (why not?!) and many more. Why define a special group for them? It helps to define the boundaries of the conflict – Devs vs !Devs – without actually negating developers, since there are more similarities between these two groups than it seems at first.

The key misconception

“Devs create. IT-Pros don’t create”. This is true… in badly built organizations – and in reality, there are way too many of them. The problem comes from the fact that IT-Pros are usually understaffed. From a dumb manager’s perspective, IT-Pros need just enough headcount to put out fires. This is a grave mistake – the REAL work starts only after the fires are out: upgrading and improving everything that was on fire and replacing it with something that won’t catch fire at all next time. So, IT-Pros are creative, and they do create – when the company is smart enough to let them do so.

The world we live in

The world of modern computing has adopted (and spoiled) the famous *aaS – as-a-Service – abbreviation. We are quickly moving to a state of technology where everything can be offered as a service: databases, compute resources, APIs, security tools, you name it. This makes Devs happy and threatens IT-Pros whose yesterday’s tasks were just replaced by a new Azure/AWS/Google Cloud service. So how do Devs and IT-Pros respond to these changes?

The shift

The DevOps movement and its rapid adoption caused the famous shift to the left, which empowered Developers with tools that were previously managed by Operations. Simultaneously, the Cloud awakened and brought the full power of quick deployments, testing in production and advanced telemetry to developers. Devs became almighty and now can (almost) do their thing – write the code and never be bothered.

What happened to IT-Pros?

The shift hasn’t bypassed the IT-Pro zone – I observe a similar change there. Pros are writing their own automation and deployment scripts, learning more efficient ways of doing yesterday’s tasks by borrowing the best methods of development and adapting them to IT-Pro work. They read code and write their own code. UX designers create mind-blowing frameworks of design atoms and molecules, put them into source control and use Continuous Integration. Writing actual code no longer distinguishes Devs from IT-Pros.

Does it mean there will be no IT-Pros eventually? Will Devs replace them?

No, not at all. If we look at the root of what Devs do and love doing for a living, it is not about spawning VMs in Azure. They consider that a necessary evil, or something that merely enables them. What they do is create. It is like a painter who loves painting but also has to go to IKEA to buy and assemble her easel. The core knowledge of Devs is development itself – building complex distributed systems, efficient workflows, secure APIs – for the needs of the modern world.

On the other hand, we have IT-Pros who actually love deploying machines in Azure and know thousands of ways of doing it for hundreds of use cases. And now they are also starting to code and automate. What we get is a powerful combo that can build the virtual world for the products that Devs are writing.

Together, forever

It is obvious they can’t survive without each other. The world will require more and more complex products – more secure, more resilient, more flexible. And while some have to build them, others have to create architectures where these products can work at their best. It is not about deploying just a bunch of VMs to Azure and installing SQL Server on them. It is about building an identity-controlled cloud with fully automated threat detection, where the product runs in a couple of dozen containers with replication across a bunch of regions, and a backup and data retention strategy.

And with the ever-changing, fluffy cloud landscape (it’s the cloud, after all), new features become available weekly and sometimes completely change the game overnight. IT-Pros need to be aware of them before they become GA and have an adoption plan ready.

Continuous Integration

The cloud offers so many possibilities; all the cutting-edge tech is up for grabs. But does your product architecture support it? It should. But it doesn’t. That is a very common answer, and it causes months of refactoring and releasing. And – BAM – newer and cooler tech is out, and we’re back to square one. Who could help the devs? IT-Pros! If a dev team integrates a Cloud Architect into their architecture meetings, they will be able to plan the future functionality of the product, target a specific cloud, align with its roadmap and get the best description of limited-preview features that will be GA by the time of the product release.

Next steps

To adapt, IT-Pros need to become more efficient. Previously, Developers solved this issue for themselves by taking over part of the Ops work and learning the basics of what Ops did. It is time for IT-Pros to do the same with Devs. With automation and coding skills, IT-Pros will be able to level up the complexity of cloud deployments and at the same time cut the time required for them.

To adapt, Devs need to integrate with IT-Pros when it comes to Cloud, Security and Design, and make this integration continuous – one that starts at the design stage, goes through development and testing and … actually lasts forever.

To adapt, organization management needs to staff IT-Pro teams properly and focus them on creating value instead of just putting out fires.

Identifying threats: Software inventory of Azure VMs

Azure VMs recently got a bunch of new features – Inventory, Change tracking and Update management (they became GA on the 8th of March, 2018). These features fill a gap in identifying the software that is deployed to IaaS clouds – information necessary for securing these resources. The features are built on Azure Automation capabilities and require an Automation account to run the workloads.
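If an Automation account doesn’t exist yet, it can be created in the portal or, for example, with the AzureRM PowerShell module. A minimal sketch – the resource group name, account name and location below are placeholders:

# Assumes the AzureRM modules are installed; all names are placeholders.
Login-AzureRmAccount
New-AzureRmAutomationAccount -ResourceGroupName "rg-automation" `
    -Name "infra-automation" -Location "West Europe"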

The Inventory feature provides visibility into software installed on a VM (it can be accessed from the individual VM blade), services, Linux daemons and also a timeline of events as part of the Change tracking view.

[Screenshot: Inventory view]

VM view:

[Screenshot: software inventory of a single VM]

In this example, we can see Adobe Flash Player and the Steam client installed on the machine – both increase the attack surface of this infrastructure.

If you want a more detailed overview, proceed to the Log Search tab. For example, this query will yield all non-Microsoft applications inventoried in the past day:

ConfigurationData
| where ConfigDataType == "Software"
| where SoftwareType == "Application"
| where Publisher != "Microsoft Corporation"
| order by TimeGenerated desc

[Screenshot: Log Search query results]

The Change tracking view provides visibility into software changes and allows tracking them efficiently. You can also add particular files or registry entries to watch. Watching entire folders is not yet supported for Windows VMs.

[Screenshot: Change tracking view]

To get an overview of multiple machines either:

  • Click “Manage multiple machines” on the Inventory or Change tracking blade.
    [Screenshot: the multi-machine management view]
    This view can also be used to add non-Azure VMs through the Hybrid worker functionality.
  • Or go to the Automation account that was associated with Inventory and Change tracking when they were first enabled. It also provides a convenient view of the associated machines.

Update management can find machines with missing OS updates and fix them by scheduling an update deployment.

[Screenshot: Update management view]

Update management is integrated with Security Center so that critical updates are never missed on any machine in the cloud. But it is important to remember to turn these features on first.

At the moment, only Update management allows taking action based on the gathered data. Inventory doesn’t have built-in functionality to remove unwanted applications or perform an on-demand scan. It also lacks reporting capabilities and a global overview of applications in the cloud. Change tracking data can only be used for setting up custom alerts (which requires Log Analytics knowledge).

At the same time, given the direct connection of these features to Azure Automation, I expect more functions will be added to make it possible to fix discovered issues and secure a resource based on the Inventory data. It is clear that Microsoft is taking important steps towards securing IaaS – and providing the data is only the first one.

P.S. For more advanced Inventory and Software Asset Management solutions, one may look into third-party providers such as Snow Software (where I work at the moment).

 

Using ELK stack for vulnerability management

Have you ever tried to keep all data about discovered vulnerabilities in the same place? When you have about 10 different tools, plus manual records in a task tracker, it becomes a bit of a nightmare.

My main needs were:

  1. Have all types of data in the same place – vulnerable computers, pentest findings, automated scan discoveries, software composition analysis etc.
  2. Have visibility over time in each category for each product – I need to talk to people with graphs and good visualizations to make my points.
  3. Have a bird’s-eye view of the state of security in products – to be able to act on a trend before it materializes into a flaw or a breach.
  4. Easily add new metrics.
  5. The tool that meets these needs has to be free or inexpensive.

After weeks of thinking and looking for a tool that would save me, I figured out that the solution had been right next to me all along. I simply spun up an instance of ELK (Elasticsearch, Logstash, Kibana) and set up a few simple rules with Logstash filters. They work as data hoovers – simply sucking in everything that is sent in as JSON payloads. The transformed data ends up in Elasticsearch, and Kibana visualizes the vulnerabilities as trends and shows them in real-time dashboards. It also has a powerful search engine where I can find specific data in a matter of seconds.

I wanted to create a configuration that can process a generic JSON payload assembled by a script that parses any vulnerability detection tool’s report.

Here is an example of the Logstash pipeline configuration:

input {
  # Listens for payloads posted over HTTP (port 8080 by default)
  http { }
}

filter {
  # Parse the request body as JSON
  json {
    source => "message"
  }
  # Use the "date" field of the payload (ISO8601) as @timestamp
  date {
    match => [ "date", "ISO8601" ]
  }
}

output {
  elasticsearch {
    hosts => ["localhost"]
    index => "vuln-%{+YYYY.MM.dd}"
  }
}

Here, we ingest HTTP requests posted to the Logstash endpoint as JSON, set @timestamp to the “date” field from the payload (it has to be submitted in ISO8601 format), and send the data to the Elasticsearch instance, into an index named “vuln-<today’s date>”.
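For example, a finding can be pushed to the pipeline from PowerShell. This is a minimal sketch – the endpoint address and all field names besides “date” are just examples that your parsing script would define:

# All field names below are examples; only "date" is required by the filter above.
$finding = @{
    date      = (Get-Date).ToString("o")          # ISO8601 timestamp used for @timestamp
    product   = "MyProduct"
    branch    = "master"
    source    = "oss-scan"
    component = "example-library 1.2.3"
    severity  = "High"
} | ConvertTo-Json

Invoke-RestMethod -Uri "http://localhost:8080" -Method Post `
    -ContentType "application/json" -Body $finding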

Let’s say we have 3 branches that submit open-source component analysis results on every build. Here is how the resulting report may look (the Y-axis represents discovered vulnerable components, the X-axis represents scan dates, split into 3 sub-plots):

[Chart: discovered vulnerable components per scan date, split into 3 sub-plots]

Here, each of the 3 sub-plots represents the analysis dynamics of a separate branch.

Or, for separate branches:

[Chart: per-branch trend view]

And finally, we can build some nice data tables:

[Screenshot: Kibana data table]

This example only includes one case – OSS vulnerability reports. But various vulnerability data can be submitted to ELK, and then aggregated into powerful views for efficient management.

New way of managing on-prem Windows servers securely – Project Honolulu

Project Honolulu is a new tool that attaches a UI to PowerShell and WMI capabilities for managing your servers securely.

I don’t have to explain why connecting to a remote server with RDP is a really, really bad security practice. By default, Windows has no timeout for disconnected RDP sessions. In fact, after you close your RDP session, your user (some kind of admin, right?) stays logged in to the server, and God knows what happens while you aren’t watching! For example, anyone who gains access to the same server as a non-privileged user can dump in-memory credentials and steal your remote session (e.g. with the help of the infamous Mimikatz tool).

How to mitigate this problem? Don’t connect to servers with RDP. Ever! Microsoft believes that the solution is using WMI (Windows Management Instrumentation) via PowerShell. At least, it protects you from the guys waiting on the server for you to log in so they can steal your creds. Sounds great, but I want my GUI back, right?
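Without a GUI, that plain-PowerShell approach looks roughly like this – a minimal sketch using CIM/WMI and PowerShell remoting, where “SRV01” is a placeholder server name:

# "SRV01" is a placeholder; WinRM must be enabled on the target.
Get-CimInstance -ComputerName SRV01 -ClassName Win32_OperatingSystem |
    Select-Object CSName, LastBootUpTime

Invoke-Command -ComputerName SRV01 -ScriptBlock {
    Restart-Service -Name Spooler    # no interactive logon session is left behind
}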

Luckily, we’ve just got a tool for it – Project Honolulu. It executes PowerShell and WMI commands in the backend and surfaces the framework’s capabilities through a lean (and flat, of course!) UI. It allows you to perform most of the operations you would typically do over RDP. Hyper-V and Failover clusters are also supported.

At the moment, the tool is in Technical Preview, but “it works perfectly in my environment” (c). Download it here.

Some screenshots from my Honolulu:

[Screenshots: Project Honolulu management views]

It covers the most common operations such as modifying firewall rules and local groups, checking logs, the registry and resource utilization, installing new roles and features, and much more!

Have fun!

Pandora FMS server with docker-compose

Docker-compose is amazing – this tool lets you deploy complex clusters of containers with literally one command. I previously had a seamless experience running ELK (Elasticsearch, Logstash, Kibana) with docker-compose, and now I decided to give Pandora a try with it.

Pandora FMS is a great tool for monitoring and securing infrastructure since it provides insights into anomalies that may happen on your servers. And it is open source and free!

To start with, I found the 3 containers required to run the Pandora Server on Pandora FMS’s Docker Hub: a MySQL DB instance initialized with the Pandora DB, the Pandora Server, and the Pandora Console.

All the deployment magic happens within each container, and my task was only to create some infrastructure and orchestration for them with the help of docker-compose.

I put the result on Github – pandora-docker-compose.

Here is a quick overview of what it does:

  • Creates a dedicated network and assigns IPs to the containers
  • Configures Postfix for sending emails to admins (in the default container it was not working)
  • Synchronizes time with the Docker host
  • Maps the Pandora DB files to a local host folder so that you can back them up and restore them
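Bringing the whole stack up is then literally one command – a quick sketch, assuming the pandora-docker-compose repo has been cloned locally:

cd .\pandora-docker-compose
docker-compose up -d    # creates the network and starts the DB, Server and Console containers
docker-compose ps       # verify that all three containers are running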

 

Integrating security into DevOps practices

DevOps, as a cultural and technological shift in software development, has generated huge space for improvements in neighboring areas. To name one – Application Security.

Since DevOps is embedded into every step of an idea on its way to the customer, it can also be used as a framework for driving security enhancements at reduced cost – the automation and continuous delivery are already built for CI/CD needs. With a gentle security seasoning, the existing infrastructure will bring value to securing the product.

Where to start

As I said, we want security to affect all or most of the steps where the DevOps transformation is already bringing value. Let’s take a look at an abstract DevOps CD pipeline:

[Diagram: an abstract CD pipeline without security steps]

It is a pretty straightforward deployment pipeline. It starts with requirements that are implemented in code, which is covered with unit tests and built. The resulting artifact is deployed to staging, where it is tested with automation, and a code review also takes place. When all of that is done and successful, the change is merged to master, integration tests run on the merge commit, and the artifacts are deployed to Production.

Secure CI/CD

Now, without changing the CD flow, we want to make the application more secure. The boxes in red are security steps proposed to be added to the pipeline:

[Diagram: the same CD pipeline with the proposed security steps marked in red]

Security requirements

At the requirements planning stage (it can be a backlog grooming or a sprint meeting), we instruct POs and engineers to analyze the security impact of the proposed feature and put mitigations/considerations into the task description. This step requires the team to understand the security profile of the application and the attacker profile, and to have in place a classification of threats based on different factors (data exposure, endpoint exposure etc.). It requires some preliminary work and is often ignored in Agile environments. However, with security embedded into the requirements, it becomes much simpler for an engineer to actually fix possible issues before they get exploited by an attacker. According to the famous calculation of the cost of fixing a possible failure, adding security at the design stage costs the least and brings the most value.

In my experience, a separate field in the PBI or a dedicated section in the PBI template needs to be added to make sure the security requirements are not ignored.

Secure coding best practices

For an engineer implementing a feature, it is essential to have a reference on how to make a particular security-related decision, based on a best-practices document or guidance. It can be a best-practices standard maintained by the company or by the industry – but the team must agree on which particular practice/standard to follow. It should answer simple but important questions – for example, how to secure an API? How to store passwords? When to use TLS?

Implementing this step brings consistency to the secure side of the team’s coding. It also educates engineers and integrates security best practices into their routines, forming a security-aware mindset.

Security-related Unit testing

This step assumes that we cover the high-risk functions and features of the code with unit tests first. It is important to keep the tests fresh and increase coverage alongside the ongoing development. One option is to require security unit tests for certain risky features as a condition for passing Code review.
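As an illustration, here is a minimal sketch of what such a test could look like with Pester; Get-UserReport is a hypothetical function standing in for a risky feature of your own product:

# Hypothetical function standing in for a risky feature, included to keep the sketch runnable.
function Get-UserReport {
    param([string]$Name)
    if ($Name -match '\.\.') { throw "Invalid report name" }    # reject path traversal
    [pscustomobject]@{ Name = $Name; Rows = 42 }                # no sensitive fields returned
}

Describe "Get-UserReport security checks" {
    It "rejects path traversal in the report name" {
        { Get-UserReport -Name "..\..\secrets.txt" } | Should -Throw
    }
    It "does not expose password fields" {
        (Get-UserReport -Name "weekly").PSObject.Properties.Name | Should -Not -Contain "Password"
    }
}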

Security-related Automated testing

In this step, the tests cover different scenarios of using/misusing the product. The goal is to make sure the security issues are addressed and verified with automation. Authorization, authentication, sensitive data exposure – to name a few areas to start with.

This set of tests needs to exist separately from the general test set, providing visibility into the security testing coverage. The need for new automated security tests can be specified at the Requirements design stage and verified during Code review.

Static code analysis

This item isn’t on the diagram but deserves a mention. Security-related rules need to be enabled in the static code analysis tool and be part of the quality gate that determines whether a change is ready for production. There is a vast number of plugins and tools that perform automated analysis and catch what the human eye may miss.
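As one possible example (assuming a PowerShell codebase and PSScriptAnalyzer as the tool; the rule filter below is just an illustration), such a gate could look like this:

# A sketch of a security-focused quality gate using PSScriptAnalyzer.
Install-Module PSScriptAnalyzer -Scope CurrentUser -Force
$findings = Invoke-ScriptAnalyzer -Path .\src -Recurse -Severity Warning, Error |
    Where-Object RuleName -like "*PlainText*"    # e.g. rules flagging plain-text passwords
if ($findings) {
    $findings | Format-Table RuleName, ScriptName, Line
    throw "Security-related static analysis findings - failing the quality gate."
}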

Security Code review

This code review needs to be done by a security-minded person or a security champion from a dedicated AppSec team (if there is one). It is important to distinguish it from an ordinary CR and focus on the security impact and possible code flaws. The reviewer also makes sure the security requirements are addressed, the required unit/system tests are in place and the feature is good to go into the wild.

Security-related Automated testing

Similar to the automated tests in the previous step, with the only difference that here we test the system as a whole, after the change has been merged to master.

Results

In the end, we managed to reuse the existing process, adding a few key security-related points with clear rules and visible outcomes. DevOps is an amazing way to help us build a better product, and adding more improvements along the way has never been easier.

Building secure CI/CD pipeline with Powershell DSC. Part 2: CI for IaaC

In the previous post, I described how to build CI-as-code with a DSC Pull server, with a focus on security, using partial configurations and updating them on demand.

Here comes the next challenge. We want the DSC States that define the infrastructure configuration to be easily modifiable and deployable – as if we could patch against WannaCry simply by adding the corresponding Windows update to the DSC Security state: “git push” – and in a short while all nodes in the CI pipeline would be secured. Below, I show how it can be done, with script examples that can help you kick-start your CI for IaaC.

Before we start…

First, let me remind you of some terminology.

A DSC Configuration (State) is a file compiled by PowerShell that needs to be placed on the Pull server. It defines the system configuration and is expected to be updated frequently.

A Local Configuration Manager (LCM) configuration is a file compiled by PowerShell that has to be deployed to a target node. It tells the node about the Pull server and which Configurations to get from it, and it is used once to register the node. Updates to the LCM are rare and happen only when the Pull server needs to be changed or new states are added to the node configuration. However, we previously split the states into three groups – CI configuration state, Security state, and General system configuration. With this structure, you only ever update existing states without creating new types of them, therefore no LCM changes are required.

Also, keep in mind that LCMs are totally different for targets with PowerShell v4 (WMF4) and v5, and you need different machines to build them.

From the delivery perspective, the difference between LCMs and States is that LCMs need to be applied with administrative permissions and require running some node-side PowerShell. In one of my previous posts, you can find more info on the most useful cmdlets for LCM.

States, on the contrary, are easy to update and put to work – you only need to compile them and drop them into the Configurations folder of the Pull server. No heavyweight operations required. So, we design our CI for IaaC with frequent deployment of States in mind.

Building CI for IaaC

For CI, the first thing is always source control. One of my former colleagues loved to ask this question at interviews: “For which type of software project would you use source control?” And the one and only correct answer was: “For absolutely any”.

So, we don’t want to screw up that interview, or our CI pipeline, therefore we put the DSC States and LCMs under source control. Next, we want States and LCMs to be built at code check-in time. The States are published to the Pull server immediately, while the LCMs can be stored on a secure file share without directly applying them to the nodes.

[Diagram: CI flow for DSC – States and LCMs are built from source control; States go to the Pull server, LCMs to a secure file share]

Building the artifacts is not a big deal – I’ll leave the commands out of this blog post. What is still missing is how the nodes get their LCMs. My way of doing it is to have a script that iterates over the nodes and applies the corresponding LCMs from the file share to them. I call it “Enroll to DSC”, which is pretty fair since it runs when we need either to enroll a new node to the Pull server or to get some new states onto it.

Here is an example of such a script, using Windows Remoting, in my Github repo. You can find the details in its README.md.
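The idea boils down to something like this minimal sketch (the node names and share path are placeholders; the full script in the repo handles more cases):

# Node names and the share path are placeholders.
$nodes    = @("BUILD01", "BUILD02")
$lcmShare = "\\fileshare\dsc-lcm"

foreach ($node in $nodes) {
    # Each node's compiled .meta.mof is stored in its own folder on the share
    $lcmPath = Join-Path $lcmShare $node
    # Apply the LCM settings on the remote node (requires admin rights and WinRM)
    Set-DscLocalConfigurationManager -Path $lcmPath -ComputerName $node -Verbose
}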

Summary

By creating CI for IaaC, we bring the best of DevOps practices to the way we handle the build infrastructure. Having a CI in place already simplifies our job, and once we are done, the CI itself becomes a more reliable and controllable structure. You can deliver any update to the CI with the CI itself within seconds – isn’t that what CI is supposed to be for? Quite a nice example of recursion, I think 🙂

Building secure CI/CD pipeline with Powershell DSC. Part 1: overview

In my experience, CI/CD pipelines way too often suffer from a lack of security and general configuration consistency. There might be an IaaC solution in place, but it usually focuses on delivering the minimal functionality required for building the product and/or recreating the infrastructure as fast as possible if needed. Only a few CI pipelines are built with security in mind.

I like PowerShell DSC for being native to the Windows stack and for its intensively developed feature modules that save you from gloomy scripting and hacking into the system’s guts. This makes it a good choice for delivering IaaC with Windows-specific security in mind.

DSC crash course

First, a short introduction to PowerShell DSC in Pull mode. In this setup, DSC States or Configurations are deployed to and served by the Pull server. States define what our node’s system configuration needs to look like: which features to have, which users should be admins, which apps should be installed etc. – pretty much anything.

Configuration FirewallConfig
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration -ModuleVersion 1.1
    Import-DscResource -ModuleName xNetworking -ModuleVersion 3.1.0.0

    xFirewall TCPInbound
    {
        Action    = 'Allow'
        Direction = 'Inbound'
        Enabled   = $true
        Name      = 'TCP Inbound'
        LocalPort = '443'
        Protocol  = 'TCP'
    }
}

Each State needs to be compiled into a configuration document in the “.mof” format. Then, the state .mof needs to be placed into the “Configurations” folder on the Pull server together with its checksum file. Once the files are there, they can be used by nodes.
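A minimal sketch of that flow – the output path and the Pull server’s Configurations folder below are examples, and the exact folder depends on how your Pull server is set up:

# Compile the configuration above into a .mof (produces localhost.mof by default)
FirewallConfig -OutputPath C:\DSC\Build

# Generate the checksum file next to the .mof
New-DscChecksum -Path C:\DSC\Build -OutPath C:\DSC\Build

# Copy both files to the Pull server's Configurations folder (example path)
Copy-Item C:\DSC\Build\* "C:\Program Files\WindowsPowerShell\DscService\Configuration"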

The second piece of config is the Local Configuration Manager (LCM) file. This is the basic configuration for a node that tells it where to find the Pull server, how to get authorized with it and which states to use. More information is available in the official documentation.

To start using the DSC, you need to:

  • set up a Pull server (once)
  • build a state .mof and its checksum file and place them on the Pull server (many times)
  • build a LocalConfigurationManager .mof and place it on the node (once or more)
  • instruct the node to use the LCM file (once or more)

After this, the node contacts the Pull server and receives one or more configurations according to its LCM. Then, the node starts consistency checks against the states and corrects any differences it finds.

Splitting states from the Security perspective

It is the states that will change when we want to modify the configuration of enrolled nodes. I think it is good practice to split a node’s state into pieces, to better control the security and system settings of machines in the CI/CD cluster. For example, we can have the same set of security rules and patches that we want to apply to all our build machines, but keep their tools and environment configurations specific to their actual build roles.

Let’s say we split the Configuration of Build node 1 into the following pieces:

  • Build type 1 state – all tools and environment settings required for performing a build (unique per build role)
  • Security and patches – updates state, particular patches we want to apply, firewall settings (same for all machines in the CI/CD)
  • General settings – system settings (same for all machines in the CI/CD)

[Diagram: a node configuration split into Build, Security and General states]

In this case, we make sure that security is consistent across the CI/CD cluster, no matter what role a build machine has. We can easily add more patches or rules, rebuild the configuration .mof file and place it on the Pull server. The next time a node checks for its state, it will fetch the latest configuration and perform the update.
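For illustration, a node’s LCM meta-configuration pulling these three states as partial configurations could look roughly like this sketch (WMF5 syntax; the server URL, registration key and state names are placeholders):

[DSCLocalConfigurationManager()]
Configuration BuildNodeLCM
{
    Node 'BUILD01'
    {
        Settings
        {
            RefreshMode       = 'Pull'
            ConfigurationMode = 'ApplyAndAutoCorrect'
        }
        ConfigurationRepositoryWeb PullServer
        {
            ServerURL          = 'https://pullserver:8080/PSDSCPullServer.svc'
            RegistrationKey    = '00000000-0000-0000-0000-000000000000'
            ConfigurationNames = @('BuildType1', 'SecurityAndPatches', 'GeneralSettings')
        }
        PartialConfiguration BuildType1
        {
            ConfigurationSource = @('[ConfigurationRepositoryWeb]PullServer')
        }
        PartialConfiguration SecurityAndPatches
        {
            ConfigurationSource = @('[ConfigurationRepositoryWeb]PullServer')
        }
        PartialConfiguration GeneralSettings
        {
            ConfigurationSource = @('[ConfigurationRepositoryWeb]PullServer')
        }
    }
}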

In the next post, I will explain how to build a simple CI pipeline for the CI itself – that is, how to deliver LCMs to nodes and configuration .mofs to the Pull server using a centralized CI server.

 

Scripts to find WannaCry vulnerable VMs in VMWare vCenter

The WannaCry ransomware hit the news by infecting high-profile targets via a security hole that existed in Windows prior to 13.03.2017, when it was patched.

I created a script that connects to vCenter and checks whether the latest hotfix on each system was installed before or after Microsoft released the patch. This doesn’t give a 100% guarantee, since some fixes might have been installed manually while the required one was omitted.

However, in centralized IT environments that rely on an enabled Windows Update service applying all important updates, it can be a good way to check for the vulnerability.

Link to my repo:

https://github.com/doshyt/Wannacry-UpdatesScan

I am working on bringing similar functionality to PS Remoting and also looking for ways to figure out whether a newer patch was installed but not the one fixing the problem.

UPDATE 19.05:

I added a useful script that performs a remote check for SMBv1 being enabled on Windows 8 / Server 2012+ machines. It can be run against a list of computer names / FQDNs.

https://github.com/doshyt/Wannacry-UpdatesScan/blob/master/checkSmbOn.ps1

It returns $true if SMBv1 is enabled at the system level.

To turn off SMBv1, execute the following command on the remote machine:

Set-SmbServerConfiguration -EnableSMB1Protocol $false
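To check and fix a machine remotely without logging in, here is a quick sketch using PowerShell remoting (“SRV01” is a placeholder computer name):

# "SRV01" is a placeholder; WinRM must be enabled on the target.
Invoke-Command -ComputerName SRV01 -ScriptBlock {
    (Get-SmbServerConfiguration).EnableSMB1Protocol    # $true means SMBv1 is still enabled
}
Invoke-Command -ComputerName SRV01 -ScriptBlock {
    Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force
}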

 

Benefiting from TeamCity Reverse dependencies

Reverse Dependencies is a TeamCity feature that allows building more complex build workflows by pushing parameters from a “parent” build down to its snapshot dependencies. I had been looking for this functionality for a while and recently accidentally discovered how to make it work. Back then, I felt like I had found a treasure 🙂

Example: one of the builds can be run with code analysis turned on, so that the results can be used in the “parent” code analysis build (e.g. with SonarQube). But you only need it when manually triggering the SonarQube build; in any other case (e.g. code checked in to the repo) you don’t want TC to spend time running code analysis. The code analysis is turned on with a system property system.RunCodeAnalysis=TRUE in the “child” build.

And here is the trick – the “parent” wants to set a property of the “child” build but can’t access anything outside the scope of its own parameters. How would you do it the common way? Maybe create two different builds – where one always has system.RunCodeAnalysis set to TRUE – and trigger that one from the SonarQube build.

In this case, you end up with two almost duplicate builds that only exist because the “parent” can’t set the properties of a “child”.

Reverse dependencies are here to help

With Reverse dependencies, it can!

This feature is not intuitive, and it doesn’t support TeamCity auto-substitution (as when using %%). So, you need to be careful with naming. This is how it works:

In the “parent” build (let’s call it SonarQube), you set the following parameter to TRUE:

reverse.dep.PROJECT_NAME.PARAMETER_NAME

Here, PARAMETER_NAME is the exact name of the parameter that you want to override in the “child” build, e.g. “system.RunCodeAnalysis”.
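For example, for a hypothetical child build with the ID MyProject_ChildBuild, the parameter in the parent would be:

reverse.dep.MyProject_ChildBuild.system.RunCodeAnalysis = TRUE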

For this to work, you need to have a snapshot dependency on the “child” build. In the “child” build, you simply set system.RunCodeAnalysis=FALSE.

When someone triggers the “parent” build, it first overrides the default (existing) value of the specified parameter (system.RunCodeAnalysis) in the “child” build with TRUE and then starts that build.

This way, you can use one build definition for different tasks that previously couldn’t be handled by one “core” build. A great use case is setting up a build with an automated TC trigger in which you set parameters of a “child” build. Let’s say you want to deploy a Nightly to a specific environment using the same build definition that all developers use for building their projects. To do so, you can set up a build with a trigger that builds a specific branch and also sets the target environment parameter through a Reverse dependency.

When using Reverse Dependencies, a handy way to check the actual values submitted to a build is the “Parameters” tab of the executed build – it shows which values were assigned to parameters, so you can make sure that Reverse Dependencies work as expected.

Caveats

  1. Reverse Dependencies are not substituted with the actual parameter names from “child” builds – it is easy to make a mistake in the definition.
  2. When a project name is changed, you also need to change it manually in all Reverse dependencies.
  3. Try not to modify the build flow with Reverse Dependencies; touch only features that don’t affect the build results in any way – otherwise you will get a non-deterministic build configuration, in which the same build produces totally different artifacts. The best way to use them is to specify parameters that will be used by external parties, like setting environments for Deployment or Publishing services, getting code analysis results etc.
  3. Try not to modify the build flow with Reverse Dependencies, touching only features that don’t affect build results in any way – otherwise you will get non-deterministic build configuration, in which the same build produces totally different artifacts. The best way to use it – is to specify some parameters which will be used by external parties, like setting einvrionemnts for Deployment or Publishing services, getting code analysis results etc.