Categories: Event | Posted by bsstahl on 8/11/2016 1:26 PM

The organizing team of AZGiveCamp recently announced that we would be hosting a one-day Hackathon for Humanitarian Toolbox on Saturday, August 27th, from 8:30 am to 5pm at Ticketmaster in Scottsdale, AZ.  This event is a bit of a departure for us.  We have been looking for ways to evolve the organization to host more and different coding-for-charity events while continuing our mission to help charitable and non-profit organizations in our community meet their technology needs.  We hope you’ll join us for this first experiment with other event types at AZGiveCamp.

AZGiveCamp’s flagship event is our Hackathon of Help. We have had the privilege of hosting 7 such events in the Valley of the Sun so far, with our 8th scheduled for March of 2017.  These events take up an entire weekend and are designed to put multiple charity and non-profit organizations together with multiple development teams.  The teams are tasked with taking a project from idea to completion in the course of one weekend.  During these events, participants may choose to camp out at the event facility, stay up and work on their projects, or go home at night, returning to continue the project in the morning until the final turnover on Sunday afternoon.  These events are technology agnostic, with the specific technologies to be used determined by the teams themselves.

By contrast, the AZGiveCamp Humanitarian Toolbox Hackathon will be only a 1-day event.  Participants will work on a single project, the Humanitarian Toolbox (htBox) allReady project, for which the technologies, design, and many of the features have already been chosen and implemented.  We will be lending our support to this worthy organization by adding features, upgrading tooling, and writing tests against the existing code base.  This event will not be judged by how many projects we complete, but by how much better-off the project is when we are done.

For those not familiar with Humanitarian Toolbox, they are an organization that sets up projects to assist humanitarian organizations.  Their current project, dubbed allReady, is designed to organize the preparedness campaigns of the Red Cross and other disaster response groups.  The project is implemented in ASP.NET Core MVC with a Cordova client. Participants need to have at least a basic comfort level with one or both of these technologies, along with the appropriate development tools, to be an effective contributor to this project.  Specifics of the required tools can be found on the event page on Meetup.

We hope you’ll join us at this and future AZGiveCamp events.

Categories: Development | Posted by bsstahl on 7/1/2016 10:49 PM

Dynamic Programming (DP) is a mathematical tool that can be used to efficiently solve certain types of problems and is a must-have in any software developer's toolbox. A lot has been written about this process from a mathematician's perspective but there are very few resources out there to help software developers who want to implement this technique in code. In this article and the companion conference talk "Dynamic Optimization - One Algorithm All Programmers Should Know", I attempt to demystify this simple tool so that developers can implement it for their customers.

What is Combinatorial Optimization?

Mathematical or Combinatorial Optimization is the process of finding the best available solution to a problem by minimizing or eliminating undesirable factors and maximizing desirable ones.  For example, we might want to find the best path through a graph that represents the roads and intersections of our city.  In this case, we might want to minimize the distance travelled, or the estimated amount of time it will take to travel that distance.  Other examples of optimization problems include determining the best utilization of a machine or device, optimal assignment of scarce resources, and a spell-checker determining the word most likely intended by a misspelling.

We want to make sure that we do not conflate combinatorial optimization with code optimization.  It is certainly important to have efficient code when running an optimization algorithm; however, there are very different techniques for optimizing code than for optimizing the solution to a problem. Code optimization has to do with the efficiency of the implementation whereas combinatorial optimization deals with the efficiency of the algorithm itself.  Efficiency in both areas will be critical for solving problems in large domains.

What is Dynamic Programming?

Ultimately, DP is just a process, a methodology for solving optimization problems that can be defined recursively1.  It is really about a way of attacking a problem that, if it were addressed naïvely, might not produce the best possible answer, or might not even converge to a solution in an acceptable amount of time.  Dynamic Programming provides a logical approach to these types of problems through a 2-step process that has the effect of breaking the problem into smaller sub-problems and solving each sub-problem only once, caching the results for later use2.

The steps in the process are as follows:

  1. Fill out the cache by determining the value of each sub-problem, building each answer based on the value of the previous answers
  2. Use the values in the cache to answer questions about the problem

Since we fill-out the entire cache for each problem3, we can be 100% certain that we know what the best possible answers to the questions are because we have explored all possibilities.

Dynamic Programming in Action

Let's look at one of the canonical types of problems that can be solved using Dynamic Programming, the knapsack problem.  A knapsack problem occurs in any situation where you have a limited capacity that can be consumed by a number of different possible options.  We need to look for the best fit and optimize for the maximum based on the definition of value in our problem.  This class of problem gets its name from the story of the archeologist in the collapsing ruin.  She has a knapsack that can hold a known weight without tearing and she needs to use it to rescue artifacts from the ruin before it collapses entirely.   She wants to maximize the value of artifacts she can save, without exceeding the capacity of her knapsack, because it would then tear and she wouldn't be able to carry anything.

We can solve this type of problem using Dynamic Programming by filling-out a table that holds possible capacities, from 0 to the capacity of our known knapsack, and each of the possible items to use to fill that space, as shown below.

In this example, there are 3 items with weights of 4, 5 and 2.  These items have values of 5, 6 and 3 respectively and can be placed in a knapsack with a capacity of 9. The leftmost column of the table represents the capacities of knapsacks from 0, up to and including the capacity of our knapsack.  The next column represents the best value we would get in the knapsack if we had the option of putting 0 items in our knapsack. The next, the best value if we had the option of taking the 1st item, the next column, the option to take the 2nd item on top of any previous items, and so forth until we complete the table.  As you can see, the most value we can get in our knapsack with the option of picking from these 3 items is 11, as found in the last row of the last column. That is, the cell that represents a knapsack with our known capacity, with the option to choose from all of the items.

To calculate each of these cells, we build on the values calculated earlier in the process.  For the 1st column, it is easy. If we can choose no items, the value of the items in our knapsack is always 0. The rest of the cells are calculated by determining the greater of the following 2 values:

  • The value if we didn't take the current item, which is always the value of the same capacity knapsack from the previous column
  • The value if we took the current item, which is the value of the current item, added to the value of the knapsack from the previous column if the weight of the current item were removed

So, for the cell in the column labeled "1" with a knapsack capacity of 6, we take the greater of:

  • 0, since we wouldn't have any items in the knapsack if we chose not to take the item
  • 5, the value of the current item, added to the value of the other items in the knapsack, which was previously empty

For the cell in column "2" with a knapsack capacity of 9, we take the greater of:

  • 5, which is the value of the knapsack with capacity 9 from column "1" indicating that we didn't take the 2nd item
  • 11, which is the value of the current item added to the best value of the knapsack with capacity 4 (subtract the weight of our current item from the capacity of the current knapsack) with the option of taking only the previous items.

Each cell in the table can be filled out by doing these simple calculations, 1 addition and 1 comparison, using the values previously calculated as shown in the annotated table below.
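
In code, the fill-out step might look something like the following sketch. This is my own illustration using the weights, values and capacity from the example above; the variable names are mine and are not taken from the talk's sample code.

    // Fill out the knapsack table described above
    int[] weights = { 4, 5, 2 };
    int[] values = { 5, 6, 3 };
    int capacity = 9;

    // table[c, i] = best value for a knapsack of capacity c
    // with the option of taking only the first i items
    var table = new int[capacity + 1, weights.Length + 1];

    for (int c = 0; c <= capacity; c++)
    {
        table[c, 0] = 0;                                 // no items to choose from
        for (int i = 1; i <= weights.Length; i++)
        {
            int valueWithout = table[c, i - 1];          // didn't take the current item
            int valueWith = (weights[i - 1] <= c)        // took the current item, if it fits
                ? values[i - 1] + table[c - weights[i - 1], i - 1]
                : 0;
            table[c, i] = Math.Max(valueWithout, valueWith);  // 1 comparison, 1 addition per cell
        }
    }

    // table[9, 3] now holds the maximum value, 11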


So we've filled out the table and know, from the cell in the bottom right that the maximum value we can get from this knapsack with these items is 11. Great, but that only answers the question of maximum value, it doesn't tell us which items are chosen to achieve this value.  To determine that, we need to work backward from the known best value.

Starting at the known best value in the bottom-right cell, we can look one cell to the left to see that the value there is the same.  Since we know that taking an item would increase the value of the knapsack, we can know that we must not have chosen to take the item in the last column.  We can then repeat the process from there.  From the bottom cell in the column labeled "2", we can look left and see that the value in the previous column did change, so we know we need to take the item in column "2" to get our maximum value.  Since we know that item 2 had a weight of 5, we can subtract that from the capacity of our knapsack, and continue the process from that point, knowing that we now only have 4 more units of capacity to work with.  Comparing the item in the column labeled "1" and a knapsack capacity of 4 with the value of the equivalent knapsack in column "0", we can see that we need to include item 1 in our knapsack to get the optimum result.
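
In code, that backward walk might look like the following sketch, which continues from the table variable in the fill-out sketch above (again, my own illustration rather than the talk's sample code).

    // Walk backward from the bottom-right cell to recover which items were taken
    var taken = new List<int>();
    int remaining = capacity;
    for (int i = weights.Length; i >= 1; i--)
    {
        // If the value differs from the cell one column to the left, item i was taken
        if (table[remaining, i] != table[remaining, i - 1])
        {
            taken.Add(i);                    // record the item (1-based)
            remaining -= weights[i - 1];     // give up the capacity that item consumed
        }
    }

    // taken now contains items 2 and 1, whose values total the maximum of 11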

 

What did we actually do here?

There is no magic here. All we did was take a problem that we could describe in a recursive way, and implement a process that used easy calculations that built upon the results of previous calculations, to fill-out a data cache that allowed us to answer the two primary questions of this problem:

  1. What is the maximum value of the knapsack with capacity 9 and the option to take the 3 previously described items up to the capacity of the knapsack?
  2. Which items of the 3 do we need to take to achieve the maximum value described in question 1?

You can probably see that if both axes of this table, the capacity of the knapsack, and the number of items we can choose from, are extremely large, we may run into memory or processing-time constraints when implementing this solution.  As a result, this may not be the best methodology for solving problems where both the capacity of the knapsack and the number of items is extremely high.  However, if either is a reasonable number, Dynamic Programming can produce a result that is guaranteed to be the optimum solution, in a reasonable amount of time.

Continue the Conversation

I am happy to answer questions or discuss this further. Ping me on Twitter with your comments or questions. I'd love to hear from you.  I am also available to deliver a talk to your conference or user group on this or other topics. You can contact me via my blog, Cognitive Inheritance.

 

1 In mathematical terms, DP is useful for solving problems that exhibit the characteristics of Overlapping Subproblems and Optimal Substructure.  If a problem is able to be described recursively, it will usually exhibit these traits, but the use of the recursion concept here is a generalization to put the problem in software developers' terms.

2 The process of storing a value for later use is known in mathematics as memoization, an operation which, for all intents and purposes, is equivalent to caching.

3 Variants of certain DP algorithms exist where the process can be cut-off under certain conditions prior to fully populating the cache.  These variants are not discussed here.

Categories: Development | Posted by bsstahl on 3/5/2016 9:55 PM

One of the reasons to use TDD over test-later approaches is that you get a better validation of your tests.

When the first thing you do with a test or series of tests is to run them against code that does nothing but throw a NotImplementedException, you know exactly what to expect. That is, all tests should fail because the code under test threw a NotImplementedException. After that, you can take iterative steps to implement the code. Along the way, you should always see your tests fail in appropriate ways.  Eventually, all of your tests should pass when the code is complete.
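
As a minimal sketch of that starting point (the class, test and expected value here are hypothetical, invented only for illustration), the first test run might look like this:

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // The production code begins as a stub that does nothing but throw
    public class DiscountCalculator
    {
        public decimal GetDiscount(decimal orderTotal)
            => throw new NotImplementedException();
    }

    [TestClass]
    public class DiscountCalculatorTests
    {
        [TestMethod]
        public void GetDiscount_ReturnsZero_ForSmallOrders()
        {
            var target = new DiscountCalculator();

            var actual = target.GetDiscount(10.0m);

            // Fails with a NotImplementedException, exactly as expected,
            // until the method is actually implemented
            Assert.AreEqual(0.0m, actual);
        }
    }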

If tests start passing before they should, continue to fail when they shouldn’t, or fail for reasons that are different than what you’d expect at that point in the development process, you have a good indication that the test may not be doing what you want it to be doing.  This additional information about the tests can be very helpful in making sure your unit tests are properly testing your code.

Think about what happens when you add tests after the code has already been written.  Suppose you write a test for existing code, and it passes.  What do you really know about the test?  Is it working because it is adequately exercising your code? Did you forget to do an assert? Is it even testing the proper bit of code? Code coverage tools can help with some of this but they can only help if the code under test is not already touched by other tests.  Stepping through the code in debug mode is another possibility. A third option is to comment out the code as if you were starting from scratch, effectively doing a TDD process without any of the other benefits of TDD.

What about when you write a test for previously written code, and the test fails?  At this point, there are 2 possibilities:

  1. The code-under-test is broken
  2. The test is broken

You now have 2 variables in the equation, the code and the test, when you could have had only 1.  To eliminate 1 of the variables, you have to again perform the TDD process without most of its benefits by commenting out the code and starting from ground zero.

Following a good TDD process is the best way to be confident that any test failures indicate problems in the code being tested, instead of the tests themselves.

Categories: Development | Posted by bsstahl on 2/27/2016 7:44 PM

I had an experience this past week that reminded me of both the importance of continuing the Test Driven Development process beyond the initial development phases of an application's life-cycle, and that not all developers have yet fully grasped the concepts behind Test Driven Development.

One of the development teams I work with had a bug come up in a bit of complex logic that I designed. I was asked to pair up with one of the developers to help figure out the bug since he had already spent several hours looking at it. When I asked him to show me the tests that were failing, there weren't any. The bug was for a situation that we hadn't anticipated during initial development (a common occurrence) and he had not yet set up any tests that exposed the bug.

We immediately set out to rectify the situation by creating tests that failed as a result of the bug. Once these tests were created, it was a fairly simple process to use those tests as a debug platform to step through the code, find the problem and correct the bug. As is sometimes the case, fixing that bug caused another test to fail, a situation that was easily remedied since we knew about it due to the failing test.

After the code was complete and checked-in for build, the developer I was working with remarked on how he now "got it". He had heard the words before, "…write a test to expose the bug, then fix the bug." but they were empty words until he actually experienced using a test to do the debugging, and then saw existing tests prevent a regression failure in other code due to our bug fix. It is an experience all TDD practitioners have at some point and it is easy to forget that others may not yet have grokked the concepts behind the process.

Coincidentally, that very night, I got a ping from my friend Jeremy Clark (blog, twitter) asking for comments on his latest YouTube video on TDD. After watching it, I really couldn't offer any constructive criticism for him because there was absolutely nothing to criticize. As an introduction to the basics of TDD, I don't think it could have been done any better. If you are just getting started with TDD, or want to get started with TDD, or want a refresher on the basics of TDD, you need to watch this video.

Jeremy has indicated he will be doing more in this series in the future, delving deeper into the topic of TDD. Perhaps he will include an example of fixing a bug in existing code in a future video.

Categories: Development | Posted by bsstahl on 1/26/2016 2:07 AM

Good API design requires the developer to return responses that provide useful and understandable information to the consumers of the API.  To effectively communicate with the consumers, these responses must utilize standards that are known to the developers who will be using them.  For .NET APIs, these standards include:

  • Implementing IDisposable on all objects that need disposal.
  • Throwing a NotImplementedException if a method is on the interface and is expected to be available in the future, but is not yet available for any reason.
  • Throwing an ArgumentException or ArgumentNullException as appropriate to indicate that bad input has been supplied to a method.
  • Throwing an InvalidOperationException if the use of a method is inappropriate or otherwise unavailable in the current context.

One thing that should absolutely not be done is returning a NULL from a method call unless the NULL is a valid result of the method, based on the provided input.
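
A brief, hypothetical sketch of a class that follows those guidelines might look like this; the WidgetClient and Widget types are invented for illustration only.

    using System;

    public class Widget { public int Id { get; set; } }

    public class WidgetClient : IDisposable
    {
        private readonly string _endpoint;
        private bool _disposed;

        public WidgetClient(string endpoint)
        {
            // Bad input gets an ArgumentNullException or ArgumentException
            _endpoint = endpoint ?? throw new ArgumentNullException(nameof(endpoint));
        }

        public Widget GetWidget(int id)
        {
            // Inappropriate use in the current context gets an InvalidOperationException
            if (_disposed)
                throw new InvalidOperationException("The client has been disposed.");
            if (id <= 0)
                throw new ArgumentException("Id must be positive.", nameof(id));
            return new Widget { Id = id };
        }

        // Declared now, expected to be available in a future release: NotImplementedException
        public void DeleteWidget(int id) => throw new NotImplementedException();

        // Objects that need disposal implement IDisposable
        public void Dispose() => _disposed = true;
    }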

I have spent the last few weeks working with a new vendor API.  In general, the implementation of their API has been good, but it is clear that .NET is not their primary framework.  This API does 2 things that have made it more difficult than necessary for me to work with the product:

  1. Disposable objects don’t implement IDisposable. As a result, I cannot simply wrap these objects in a Using statement to handle disposal when they go out of scope.
  2. Several mathematical operators were overloaded, but some of them were implemented simply by returning a NULL. As a result:
    1. I had to decompile their API assembly to determine if I was doing something wrong.
    2. I am still unable to tell if this is a permanent thing or if the feature will be implemented in a future release.

Please follow all API guidelines for the language or framework you are targeting whenever it is reasonable and possible to do so.

Categories: Development | Posted by bsstahl on 10/27/2015 3:43 PM

I don't create collection objects anymore.

I know, I know. I was the guy always preaching that every entity that was being collected had to have its own collection object. It was the right thing at the time; if you needed to take an action on an enumeration or list of objects, those actions needed to be done within a strongly-typed collection object to maintain encapsulation. Even if all that was happening was that an inherited List<T> function was being called, that functionality needed to be called on the TCollection object because, if it wasn't, it was likely that the next time logic needed to be performed on the collection, there wouldn't be a place to put it. Collection logic would end up being spread-out around your code rather than encapsulated in the collection. It was also possible that the implementation might change and need to be updated everywhere, instead of in one place.

Today however, that has all changed. Extension methods now allow us, at any time, to add functionality to ICollection<T>, IList<T>, IEnumerable<T> or any other interface or class. We can attach our list or enumeration based actions directly to the list or enumeration class, and do so at any time, since the methods appear the same to the developer as methods directly on the collection type. Thus, the "no place to put it" fear no longer exists. I've even started using this technique for my factory methods to make it clear that what I am creating is, in fact, an IEnumerable<T>, as shown below.

    var stations = (null as IEnumerable<Station>).Create();
    var localStations = stations.GetNearby(currentLocation);

In this example, both the Create and GetNearby methods are extension methods found in a static class called StationExtensions.
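
A sketch of what such a class might contain is shown below. The Station and Location types and the distance logic are hypothetical placeholders; they are not the actual code from my projects.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Location { public double Latitude { get; set; } public double Longitude { get; set; } }
    public class Station { public Location Location { get; set; } }

    public static class StationExtensions
    {
        // Factory method: the (null as IEnumerable<Station>) cast in the calling code
        // exists only to give us something of the correct type to call this on
        public static IEnumerable<Station> Create(this IEnumerable<Station> ignored)
        {
            var results = new List<Station>();
            // ... load the stations from the real data source here ...
            return results;
        }

        // Collection logic that would previously have lived in a StationCollection object
        public static IEnumerable<Station> GetNearby(this IEnumerable<Station> stations, Location currentLocation)
            => stations.Where(s => Distance(s.Location, currentLocation) < 25.0);

        // Placeholder distance calculation
        private static double Distance(Location a, Location b)
            => Math.Sqrt(Math.Pow(a.Latitude - b.Latitude, 2) + Math.Pow(a.Longitude - b.Longitude, 2));
    }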

So, the big advantage here is that these methods can be added anytime, meaning we don't need to create an object that we MAY need in the future. This is better adherence to the YAGNI principle so it is a better pattern to follow. But what about disadvantages? Does it hurt us in any way to perform our collection actions this way? I'm not comfortable answering that question with an absolute "no" yet because I don't think I've been using this technique long enough to have covered enough ground with it, but I can certainly say that I haven't found any disadvantages yet. It seems like these extension methods are basically perfect for this type of activity. These methods do everything that the methods of a collection object do, can (and should) be put in a separate module to keep the code together, can be navigated to by Visual Studio in the same way as other methods, and have the same access (private, internal, public) restrictions that collection objects have. About the only thing I can say that is not 100% positive about using these techniques is that the (null as IEnumerable<T>) syntax to create a local variable instance to call the class factory from is not quite as elegant as I'd like it to be.

So you tell me, do you still create collection objects? Have you found any reason why using extension methods in this way is not as good as putting those methods into a strongly-typed collection? Sound off on Twitter and let's talk about it.

Categories: Event | Posted by bsstahl on 10/22/2015 2:19 AM

I hope you’ve had an opportunity to see my presentation, “Dynamic Optimization – One Technique all Programmers Should Know” at a Code Camp or User Group near you.  If so, and you want to have a copy of the slide deck for your very own, you can see it embedded below, or use the direct link to the PowerPoint here.

The subject of this presentation is using a technique called Dynamic Programming to solve problems that have more than one possible solution.  This technique works very well when used to solve problems that are recursive in nature.  One of the best things about this technique is that it guarantees that the solution it produces is the best possible solution.

We look at three examples during the presentation, the first is done only “on paper” and is an example of using this technique to solve a knapsack problem.  The second example is done in pseudo-code and solves a linear best-path problem in the game of Chutes & Ladders.  Finally, we drop into Visual Studio to solve a 2-dimensional best-path problem.  Sample code for both of the last 2 examples can be found in GitHub.

Keep an eye on my Speaking Engagements Page for opportunities to see this presentation live. If you are a user group or conference organizer, you can contact me to schedule an in-person presentation.  This presentation is a lot of fun to deliver and has been received extremely well at Code Camps and User Groups across the country.

Categories: Development | Posted by bsstahl on 10/12/2015 10:15 PM

If you are building an API for other Developers to use, you will find out two things very quickly:

  1. Developers don't read documentation (you probably already know this).
  2. If your API depends on its documentation to get developers to understand and discover its features, it is likely that it will not be used.

Fortunately, there are some simple mechanisms for wrapping complex APIs and making their functionality both easy to use, and highly discoverable. An API that uses tools like IntelliSense in Visual Studio to make its features discoverable by the downstream developer is far more likely to be adopted than one that doesn't. In recent years, additions to the C# language have made creating a Domain Specific Language that uses a fluent syntax for nearly any API into a simple process.

Create the Context

The 1st step in simplifying any API is to provide a single starting point for the downstream developer to interact with. In most cases, the best practice is to use the façade pattern to define a context that holds our entity collections. Each collection of entities becomes a property on the context object. These properties all return an IQueryable<Entity>. For example, in the EnumerableStack demo solution on GitHub (https://github.com/bsstahl/SimpleAPI), I created an object Bss.EnumerableStack.Data.EnumerableStack to provide this functionality. It has two properties, Posts and Questions, each of which returns an IQueryable<Post>. It is these properties that will be used to access the data from our API.

The context object, on top of becoming the single point of entry for downstream developers, also hides any complexities in the construction logic of the underlying data source. That is, if there is any configuration or other setup required to access the upstream data provider (such as web service access or database connections), much of the complexity of that construction can be hidden from the API user. A good example of this can be seen in the FluentStack demo solution from the same GitHub repository. There, the Bss.FluentStack.Data.OData.FluentStack context object wraps the functionality of constructing the connection to the StackOverflow OData web service.
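
A stripped-down sketch of a context object like the EnumerableStack described above might look like the following. It is based on the description here rather than the actual code in the repository, and the Post type shown is a hypothetical simplification.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Post
    {
        public Post Parent { get; set; }
        public int? AcceptedAnswerId { get; set; }
        public string Tags { get; set; }
        public DateTime CreationDate { get; set; }
    }

    // The façade: a single point of entry for downstream developers
    public class EnumerableStack
    {
        private readonly List<Post> _posts = LoadPosts();

        // Each entity collection becomes an IQueryable<T> property on the context
        public IQueryable<Post> Posts => _posts.AsQueryable();
        public IQueryable<Post> Questions => _posts.Where(p => p.Parent == null).AsQueryable();

        private static List<Post> LoadPosts()
        {
            // Any configuration or connection complexity is hidden in here
            return new List<Post>();
        }
    }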

Extend Our Language

Now that we have data to access, it's time for us to extend our domain specific language to provide tools to make accessing this data simpler for the API caller. We can use Extension methods on IQueryable<Entity> to create custom filters for our data. By creating extension methods that accept IQueryable<Entity> as a parameter and return the same, we can create methods that can be chained together to form a fluent syntax that will perform complex filtering. For example, in the EnumerableStack solution, the Questions, WithAcceptedAnswer and TaggedWith methods found in the Bss.EnumerableStack.Data.Extensions module can all be used to execute queries on the data exposed by the properties of our context object, as shown below:

var results = new EnumerableStack().Posts.WithAcceptedAnswer().TaggedWith("odata");

In this case, both the WithAcceptedAnswer and TaggedWith filters are applied to the data. The best part about these methods is that they are visible in Intellisense (once the namespace has been brought into scope with a Using statement) making the functionality easy to discover and use.

Another big advantage of creating these extension methods is that they can hide the complexity of the lower level API. Here, the WithAcceptedAnswer method is wrapping a where clause that filters for those posts that have an AcceptedAnswerId property that is non-null. It may not be obvious to a downstream API consumer that the definition of a post with an "accepted answer" is one where the AcceptedAnswerId has a value. Our API hides that implementation detail and allows the consumer to simply request what is needed. Similarly, the TaggedWith method hides the fact that the StackOverflow API stores tags in lower-case, within angle-brackets, and with all tags on a post joined into a single string. To search for tags, the consumer would need to know this, and take all appropriate actions when searching for a tag if we didn't hide that complexity in the TaggedWith method.
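
A sketch of what those two extension methods might look like, reusing the simplified Post type from the context sketch above, is shown here; the real implementations in the GitHub repository may differ.

    using System.Linq;

    public static class PostExtensions
    {
        // Hides the detail that an "accepted answer" means a non-null AcceptedAnswerId
        public static IQueryable<Post> WithAcceptedAnswer(this IQueryable<Post> posts)
            => posts.Where(p => p.AcceptedAnswerId != null);

        // Hides the detail that tags are stored lower-case, in angle-brackets,
        // and joined into a single string on each post
        public static IQueryable<Post> TaggedWith(this IQueryable<Post> posts, string tag)
            => posts.Where(p => p.Tags.Contains("<" + tag.ToLower() + ">"));
    }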

Simplify Query Predicates

A predicate is a function that accepts an entity as a parameter, and returns a boolean value. These functions are often used in the Where clause of a query to indicate which objects should be included in the result set. For example, in the query below

var results = new EnumerableStack().Posts.Where(p => p.Parent == null);

the function expression p => p.Parent == null is a predicate that returns true if the Parent property of the entity is null. For each entity passed to the function, the value of that property is tested, and if null, the entity is included in the results of the query. Here we are using a Lambda Expression to provide a delegate to our function. One of the coolest things about Linq is that we can now represent this expression in a variable of type Expression<Func<Entity, bool>>, that is, a Lambda expression of a function that takes an Entity and returns a boolean. This is pretty awesome because if we can store it in a variable, we can pass it around and enable extension methods like this one, as found in the Asked class of the Bss.EnumerableStack.Data library:

public static Expression<Func<Post, bool>> InLast(TimeSpan span)
   {
   return p => p.CreationDate > DateTime.UtcNow.Subtract(span);
   }

This method accepts a TimeSpan object and returns the Lambda Expression type useable as a predicate. The input TimeSpan is subtracted from the current DateTime UTC value, and compared to the CreationDate property of a Post entity. If the creation date of the Post falls within the supplied TimeSpan of the current date, the function returns true. Since this InLast method is static on a class called Asked, we can use it like this:

var results = new EnumerableStack().Questions.Where(Asked.InLast(TimeSpan.FromDays(30)));

Which will return questions that were asked in the last 30 days. This becomes even simpler to understand if we add a method extending int called Days that returns a TimeSpan, like this:

public static TimeSpan Days(this int value)
   {
   return TimeSpan.FromDays(value);
   }

allowing our expression to become:

var results = new EnumerableStack().Questions.Where(Asked.InLast(30.Days()));

Walking through the Process

In my conference sessions, Simplify Your API: Creating Maintainable and Discoverable Code, I walk through this process on the FluentStack demo code. We take a query created against the StackOverflow OData API that starts off looking like this:

var questions = new StackOverflowService.Entities(new Uri(_serviceRoot))
   .Posts.Where(p => p.Parent == null && p.AcceptedAnswerId != null
   && p.CreationDate > DateTime.UtcNow.Subtract(TimeSpan.FromDays(30))
   && p.Tags.Contains("<odata>"));

and convert it, one step at a time, to this:

var questions = new FluentStack().Questions.WithAcceptedAnswer().
   Where(Asked.InLast(30.Days())).TaggedWith("odata");

a query that is much simpler, easier to understand, easier to create and easier to maintain. The sample code on GitHub, referenced above, and available at http://github.com/bsstahl/SimpleAPI, contains the FluentStack.sln example which shows how to simplify an API created with an OData source. It also contains the EnumerableStack.sln project which walks through the same process on a purely enumerable data source, that is, an implementation that will work with any collection.

Sound Off

Have you used these tools to simplify an API for downstream programmers? Do you have other techniques that you use to do the same, similar, or additional things to make your APIs better? If so, Tweet it to me @bsstahl and let's keep the conversation going.

Categories: Tools | Posted by bsstahl on 10/6/2015 6:24 AM

One of the things I do to take better control of my online presence is to use a different email address for every online service that I use. I do this for 3 main reasons:

1. To Reduce Spam

If a single alias starts receiving spam, I have a number of options:

  • Since the alias is only used with 1 service, I can create a new alias for that service, update my profile on their website, and delete the old alias
  • If I no-longer feel like I need to receive email from that service, I can simply delete the alias.

I can also determine if a company is selling my email address to spammers. If I have to recreate the alias for a single service more than once or twice due to spam, I can probably assume they are selling my address and take the appropriate steps. Finally, since I am using non-standard email addresses (not my name@, or info@, etc) they are harder for spammers to guess and therefore less susceptible to spam.

2. To Help Prevent Companies from Tracking Me Across Sites

One of the ways companies can line up data about me across multiple services or websites is by my email address. Since many people use the same email address across all services, it can become an easy way to be confident that a user of one site is the same person as the user of another site. It is common today for a single company to have many different brands and properties, and to combine data from all of them (or sell that data to others) in order to learn more about us. As a result, it can be a benefit to our privacy if we use a different email address for each.

That being said, it should be noted that there are a number of other ways companies can track us across sites. To truly do what you can to protect your privacy, there are several other steps you should take to prevent your data from being tracked across sites. Using different email aliases is just one step. Perhaps I will make this the subject of a future post.

3. To Help Protect Me in Case of Data Breach

Perhaps the most important reason for using a different email alias for every service is that eventually, my data at one or more of these services will be compromised. Like companies who legally have access to my data, hackers can use their illegally obtained data to try to match up my accounts across multiple breaches, or across multiple sites. For example, a single data set can provide thousands of email/password combinations that can be tried at common sites like Twitter, or at banking, government and other key service sites. It makes sense that we do everything we reasonably can to protect our own information since we can't assume that the companies holding it will be able to protect it forever.

Pick a Method and Use It

I recommend using Outlook.com to create email aliases since that service allows you to create truly distinct aliases and tie them to the same account. Gmail can also create many aliases per account, but they all start with the same alias and just end with a plus sign and then the unique portion of the alias (i.e. myaccount+Guid1@gmail.com and myaccount+otherstuff@gmail.com both work as aliases for myaccount@gmail.com). This is better than nothing, but this pattern is easily identifiable and can be filtered out using software.

A good pattern is to use GUIDs as the email addresses. That is, an address like B99C3900-157A-45F7-AD20-67EF83ED6776@outlook.com or B99C3900157A45F7AD2067EF83ED6776@outlook.com will almost always be available and is practically impossible to guess. If you create a number of such aliases and keep them with you, perhaps in a OneNote notebook, you will have functional email addresses to give whenever you are asked for a new one. Then you just need to associate that alias with the service in your notebook so you know not to use it again, and so you know where each alias was used.
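
If you want to generate a batch of these aliases programmatically, a short C# sketch (purely illustrative) is all it takes:

    // Produces an alias like b99c3900157a45f7ad2067ef83ed6776@outlook.com
    var alias = $"{Guid.NewGuid():N}@outlook.com";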

Do you have a recommendation of an email service or alias pattern that has worked well for you? Sound off on Twitter using the hashtag #OneAliasPerAccount.

Categories: Event | Posted by bsstahl on 10/1/2015 2:23 AM

I am really looking forward to October because I have 3 awesome events that I’ll be speaking, and learning, at:

  • The first event for the month is Code Camp NYC in Manhattan on October 10th.  I have attended this event once before and loved it. I’m really looking forward to being there again.
  • Next up is Atlanta Code Camp on October 24th.  This will be my 1st time at this event, and my 1st time in Atlanta in many years.  Hopefully, people will have some helpful suggestions for what to see and do when I am not at the Code Camp.
  • Finally, I’ll be speaking at .NET Group – Southern Nevada’s .NET User Group in Las Vegas on October 29th.  I’ve spoken in Las Vegas at the Code Camp there before, but have never had the privilege of attending their user group.

I have several other event possibilities in the works for November and beyond. I’ll announce them here periodically, but you can always see my schedule, as well as past events and the talks I am currently giving, using the “Speaking Engagements” link above.