"Reduce duplication, bugs and shotgun surgery, all while increasing testability by pushing behavior into the domain!"
"How many times a day do you find yourself trying to figure out what variables represent because they are named ambiguously?"
It's difficult in software development to shift the focus to problems and goals instead of features, features, features. As I've written about before, it's ideal to focus on the value first, and match the features to the value.
Sometimes we encounter feature requests that seem reasonable enough that we don't feel the need to backtrack to discover the value. This happens for many reasons. However, I've been surprised how many times pausing to ask pays off.
Just this week I encountered a request that, upon further inspection, was prompted by only a subset of the problems from a past project. Subconsciously, I almost implemented the feature with the full set of past problems in mind. By inquiring about the situations that prompted the request, we were able to create a solution that was a subset of the past solution! This reduced the time and money necessary to deliver value to the customer.
TLDR: Even if a feature seems reasonable, ask about the situations that prompted it. I've often been pleasantly surprised :)
Null checks aren't fun, but even worse are the ever ambiguous run-time NullReferenceExceptions we might otherwise receive.
Take the following code:
public class Family
{
    public List<string> Names;
}
// consumer creating a family
var family = new Family();
family.Names = new[] {"John", "Jane"}.ToList();
// consumer adding a name
family.Names = family.Names ?? new List<string>();
family.Names.Add("Baby");
// consumer searching names
var searchName = "Bob";
var hasSomeoneNamed = family.Names != null && family.Names.Contains(searchName);
Each // consumer comment delineates a separate example of using the Family type; note the null guards (?? and &&) every consumer needs.
Invariant - "never changing"
A few simple changes can make things much simpler for consumers. If we require a list of names upon creation of a family, we can do the following:
public class Family
{
    public readonly List<string> Names;

    public Family(IEnumerable<string> names)
    {
        Names = names.ToList();
    }
}
Note that Names is now readonly, which means the field can't be reassigned after creation (the list it points to is still mutable). readonly alone is a great start. Look at the impact on consumers:
// consumer creating a family
var names = new[] { "John", "Jane" };
var family = new Family(names);
// consumer adding a name
family.Names.Add("Baby");
// consumer searching names
var searchName = "Bob";
var hasSomeoneNamed = family.Names.Contains(searchName);
I'd much rather maintain this code!
We could also encapsulate the Names list:
public class Family
{
    protected readonly List<string> Names;

    public Family(IEnumerable<string> names)
    {
        Names = names.ToList();
    }

    public void AddName(string name)
    {
        Names.Add(name);
    }

    public bool HasSomeoneNamed(string searchName)
    {
        return Names.Contains(searchName);
    }
}
Now our consumers don't even have to be aware that Names exists, let alone that it might be null:
// consumer creating a family
var names = new[] { "John", "Jane" };
var family = new Family(names);
// consumer adding a name
family.AddName("Baby");
// consumer searching names
var searchName = "Bob";
var hasSomeoneNamed = family.HasSomeoneNamed(searchName);
However, I usually don't go this far: I prefer the invariant-only approach, giving consumers the guarantee that Names won't be null and letting them take it from there.
readonly can cause a lot of friction with serialization. If it does, try an auto property with a public getter and a protected/private setter, but be aware that deserializers may leave it null.
Null check insanity is often a sign of a design smell; invariants are a great first step toward creating a solid contract between producers and consumers of a type.
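As a minimal sketch of that compromise (whether your serializer needs the parameterless constructor, and whether it will use a non-public setter, depends entirely on the serializer you pick):
using System.Collections.Generic;
using System.Linq;

public class Family
{
    // Public getter for consumers; the private setter prevents reassignment
    // from outside while still allowing many reflection-based serializers
    // to populate it.
    public List<string> Names { get; private set; }

    public Family(IEnumerable<string> names)
    {
        Names = names.ToList();
    }

    // Concession for serializers that require a parameterless constructor;
    // note that Names can be left null if the deserializer never sets it.
    protected Family()
    {
    }
}
I'd still treat the parameterless constructor as a serializer-only back door and keep the public constructor as the front door that enforces the invariant.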
It may seem like work to enforce invariants, but the dividends in maintainability and readability are worth it. Think how often you stumble upon null checks, or the lack thereof. Understanding these patterns will make them second nature.
One of the drawbacks of Feature Branching is the likelihood of merge conflicts as time moves forward. Even with the best of intentions, we all get busy and forget to merge our integration branch into our feature branch, or vice versa. A mechanism to let us know when conflict occurs would be helpful.
Continuous Integration can help mitigate this risk by automatically merging changes in a feature branch with the integration branch and testing the result. Conflict may arise from the merge itself, or from compilation, testing and other stages of a deployment pipeline.
Add a new Build Configuration named "Automatically Merge and Test Feature Branches", then set it up as follows:
- Attach a Git VCS root with master as the default branch and a branch specification of +:refs/heads/(feature_*)
  - The () denotes what to show in the TeamCity UI, ie: refs/heads/feature_one will show as feature_one
- Set the VCS checkout mode to "Automatically on agent (if supported by VCS roots)"
- Click "Add Build Step" and add a Command Line step named "Merge", set to run a Custom script
- Custom script:
"%env.TEAMCITY_GIT_PATH%" fetch origin
"%env.TEAMCITY_GIT_PATH%" checkout -b master origin/master
"%env.TEAMCITY_GIT_PATH%" config --local user.email "automerge@merge.com"
"%env.TEAMCITY_GIT_PATH%" config --local user.name "Auto Merge"
"%env.TEAMCITY_GIT_PATH%" merge --no-commit %teamcity.build.branch%
The script above is crude, just enough to demonstrate the process; feel free to modify it to fit your needs.
"VCS Trigger"
-:refs/heads/master
Some teams like the idea of automatically pushing the result if the merge succeeds and the tests pass. Whether that's worthwhile depends on your particular team and how you work together. If auto pushing is of value, here's how you could modify the above to accomplish it:
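One rough way to do that (a sketch, assuming origin is the remote and master is the integration branch; adjust to your setup): drop the --no-commit flag so the merge is committed, then add a final Command Line step that pushes the result after the tests have run:
"%env.TEAMCITY_GIT_PATH%" merge %teamcity.build.branch%
"%env.TEAMCITY_GIT_PATH%" push origin master
Keeping the push in its own final step means a failed merge or failing tests never make it back to the integration branch, provided that step only executes when the previous steps succeed.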
Obviously this can't mitigate all forms of conflict. As Martin Fowler points out, semantic conflict will be very difficult to detect, especially if there isn't much test coverage.
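For a contrived illustration of a semantic conflict, reusing the Family example from earlier (the method names here are hypothetical): one branch changes what a method means while another branch adds a caller that assumes the old meaning. The merge applies cleanly and compiles, and only a test would catch the problem.
using System.Collections.Generic;
using System.Linq;

public class Family
{
    protected readonly List<string> Names;

    public Family(IEnumerable<string> names)
    {
        Names = names.ToList();
    }

    // Branch A changed the meaning of GetNames to exclude "Baby".
    public IEnumerable<string> GetNames()
    {
        return Names.Where(n => n != "Baby");
    }

    // Branch B added this caller while GetNames still returned everyone.
    // After the merge it compiles and runs, but the count is silently wrong.
    public int FamilySize()
    {
        return GetNames().Count();
    }
}
No textual merge can flag this; a test asserting FamilySize against a known family would.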
Nonetheless, checking the basics can help us catch issues faster and remind us to keep the flow of information between integration and feature branches open.