Categories: Development | Posted by bsstahl on 6/30/2015 4:45 AM

In the last episode of “Refactoring my App Development Mojo”, I explained how I had discovered my passion for building Windows Store applications by using a hybrid solution of HTML5 with very minimal JavaScript, bound to a view-model written in C# running as a Windows Runtime Component, communicating with services written in C# using WCF.  The goal was to do as much of the coding as possible in the technologies I was very comfortable with, C# and HTML, and minimize the use of those technologies which I had never gotten comfortable with, namely JavaScript and XAML.

While this was an interesting and somewhat novel approach, it turned out to have a few fairly significant drawbacks:

    1. Using this hybrid approach meant there were two runtimes that had to be initialized and operating during execution, a costly drain on system resources, especially for mobile devices.

2. Applications built using this methodology would run well on Windows 8 and 8.1 machines, as well as Windows Phone devices, but not on the web, or on Android or iDevices.

3. The more complex the applications became, the more I had to rely on JavaScript anyway, despite putting as much logic as possible into the C# layers.

On top of these drawbacks, I now feel it is time for me to get over my fear of moving to JavaScript. Yes, it is weakly typed (at least for now). Yes, its implementation of many object-oriented concepts leaves a lot to be desired (at least for now). Yes, it can sometimes make you question your own logical thinking, or even your sanity, with how it handles certain edge cases. All that being said, however, JavaScript, in some form, is the clear winner when it comes to web applications. There is no question that, if you are building standard front-ends for your applications, you need JavaScript.

      So, it seems that it is time for me to move to a more standard front-end development stack.  I need one that is cross-platform, ideally providing a good deployment story for web, PC, tablet & phone, and supporting all major platforms including Android, iDevices & Windows phones and tablets.  It also needs to be standards-based, and work using popular frameworks so that my apps can be kept up-to-date with the latest technology.

I believe I have found this front-end platform in Apache Cordova. Cordova takes HTML5/JavaScript/CSS3 apps that can already work on the web and builds them into hybrid apps that can run on virtually any platform, including iPhones and iPads, Android phones and tablets, and Windows PCs, phones and tablets. Cordova support is built into Visual Studio 2015, which I have been playing with for a little while, and it seems to have real promise.  There is also the popular Ionic Framework for building Cordova apps, which I plan to learn more about over the next few weeks.

      I’ll keep you informed of my progress and let you know if this does indeed turn out to be the best way for me to build apps. Stay tuned.

Categories: General | Posted by bsstahl on 4/8/2015 10:58 PM

      File this post under saving you some time that I spent worrying.

Recently, I started having problems with my OneNote notebooks not syncing on my primary laptop. Or at least, that’s what seemed to be happening.  I depend a lot on OneNote since I use it for all of my notes, on all of my devices, so this was a very big deal for me. The notebooks all had the little red “not-synced” icon on them, so I would request a manual sync.  OneNote went through the process and looked like it was syncing, but at the end, all of the notebooks still had the little red icon on them, saying they weren’t current.

      The problem turned out to be that I had accidentally changed the radio-button at the top of the sync dialog from “Sync automatically whenever there are changes” to “Work offline - sync only when I click ‘Sync All’”.  As a result, the notebooks always looked as if they were not up-to-date (because they might not have been) and were listed as “Not connected”.  Of course, if I had looked at the last sync time, I would have seen that all notebooks had been synced as of the last manual sync.  Everything was working just fine, I had just changed the setting, making it so my notebooks only synced when I forced it.  By changing the radio button setting back to “Sync automatically…”, everything worked as I expected.

[Image: OneNote Sync]

Categories: Development | Posted by bsstahl on 9/23/2014 3:04 AM

To allow ourselves to create the best possible services for our clients, it is important to make those services as flexible and maintainable as possible.  Building services in an agile way helps us to create better services; however, it makes it more likely that our service interface will, at some point, have to change.  Changing a service interface after publication is, and should be, a well-gated, well-thought-out process. By changing the interface, you are changing the contract your service has with all of your clients, and you are probably requiring every one of the service consumers to change.  This should not be done lightly. However, there are a few things that can be done to minimize the impact of these changes. Several of these things require agreements with the clients up front.  As a result, these items should be included in the Service Level Agreement (SLA) between the service providers and the consumers.

Caveat: I am a solution architect, not an expert in creating service level agreements.  Typically, my only involvement with SLAs is to object when I can’t get what I need in one from a service provider. My intent here is to call out a few things that all service providers should include in their SLAs to maintain the flexibility of their APIs. There are many other things that should be included in any good SLA that I will not be discussing here.

      The two items that I believe should be included in all service SLAs are the requirements that the clients support both Lax Versioning and Forward Compatible Contracts.  Each of these items is discussed in some detail below.

      Lax Versioning

Lax Versioning allows us to add new, optional members to the data contract of the service without that change being considered a breaking change. Some modern service frameworks provide this behavior by default, and many of the changes we might make to a service fall into this category.  By reducing the number of changes that are considered breaking, we can lessen the burden on our implementation teams, reduce coordination requirements with service consumers, and shorten the time to market of these changes.

      One of the major impacts that Lax Versioning has is that it requires us to either avoid schema validation altogether, or to use specially designed, versionable schemas to do our validation.  I recommend avoiding schema validation wherever reasonable and possible.
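As a rough sketch of what this looks like in a WCF-style data contract (the type and member names here are hypothetical, not from any real service), a V2 contract can add a member that older clients simply ignore because it is not marked as required:

    using System.Runtime.Serialization;

    // Hypothetical V2 version of a contract. The original (V1) contract
    // contained only Name and AccountNumber; LoyaltyNumber is new.
    [DataContract(Name = "Customer")]
    public class Customer
    {
        [DataMember(IsRequired = true)]
        public string Name { get; set; }

        [DataMember(IsRequired = true)]
        public string AccountNumber { get; set; }

        // New in V2. Because IsRequired is false, existing V1 consumers that
        // never send or read this member continue to work unchanged, so under
        // Lax Versioning this is a non-breaking change.
        [DataMember(IsRequired = false, EmitDefaultValue = false)]
        public string LoyaltyNumber { get; set; }
    }

Because the new member is optional, a V1 client deserializing a V2 payload does not fail; it simply never sees the new field.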

      Forward Compatible Contracts

      Forward Compatible Contracts, also known as the Round-Tripping of Unknown Data, requires that the service round-trip any additional data it gets, but doesn’t understand, back to the client and that clients round-trip any additional data they get, but don’t understand, back to the server.  This behavior reduces the coupling between client and server for changes that are covered by Lax Versioning, but need to retain the additional data throughout the call life-cycle.

For example, suppose we were to version a contract such that we added an additional address type to an employee entity (V1 only has a home address, V2 has home and work addresses).  If we change the service to return the V2 employee prior to changing the client, the client will accept the additional (optional) address type because we have already required Lax Versioning, but it will not know what to do with the information.  If a V1 client without round-tripping support sends that employee back to the server, the additional address type will not be included.  If, however, the V1 client supports this round-tripping behavior, it will still be unable to use the data in the additional address field, but will return it to the server if the entity is sent back in a subsequent call.  These behaviors with a V1 client and a V2 service are shown in the diagram below.

[Diagram: Round-tripping behavior between a V1 client and a V2 service]

      If the same practice is used on the server side, then we can decouple the client and server from many implementation changes.  Clients would be free to implement the new versions of contracts as soon as they are ready, without having to wait for the service to roll-out.  Likewise, many changes at the service side could be made knowing that data sent down to the clients will not be lost when it is returned to the server.
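WCF, for example, provides this round-tripping behavior through the IExtensibleDataObject interface: any members the deserializer does not recognize are captured and written back out when the entity is re-serialized. A minimal sketch of a V1 client-side employee contract with this support (the names are illustrative) might look like this:

    using System.Runtime.Serialization;

    // V1 client-side view of the contract: it knows nothing about the
    // V2 WorkAddress member, but it will preserve it.
    [DataContract(Name = "Employee")]
    public class Employee : IExtensibleDataObject
    {
        [DataMember]
        public string Name { get; set; }

        [DataMember]
        public string HomeAddress { get; set; }

        // Unknown members (such as a V2 WorkAddress) are stored here during
        // deserialization and sent back to the service on subsequent calls.
        public ExtensionDataObject ExtensionData { get; set; }
    }

Other serializers and frameworks have analogous mechanisms; the point of the SLA requirement is simply that whatever stack the client uses must preserve unknown data rather than drop it.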

      Summary

Making changes to the contract of existing services is a process that carries risk and requires quite a bit of coordination with clients. Some of the risks and difficulties involved in the process can be mitigated by including just two requirements in the Service Level Agreements of our services.  By requiring clients to implement Lax Versioning and making our contracts Forward Compatible, we can reduce the impact of some changes and decouple others, significantly reducing the risk involved in making these changes and improving our time-to-market for these deployments.

Categories: Development | Posted by bsstahl on 7/28/2014 10:42 PM

While working on the OSS project mentioned in my previous post, I have run across a dilemma where two of the principles I try to work by are in conflict. The two principles in question are:

1. YAGNI - You ain't gonna need it, which prescribes not coding anything unless the need already exists. This principle is a core part of Test-Driven Development, of which I am a practitioner and a strong proponent.
      2. Standardization - Where components, especially those built for use by other developers, are implemented in a common way in order to shorten the learning curve of future developers who will use the component and to reduce implementation bugs.

      I have run across this type of decision many times before and have noted the following:

• YAGNI is usually correct: if you don't need it now, you are unlikely to need it in the future.
• Standard implementations that are built incompletely tend to be finished badly later, because there tends to be more time pressure further along in a project, and because the work is often done by someone other than the original programmer, who may not be as familiar with the pattern.
• The fact that there is less time pressure early in a project is another great reason to respect YAGNI: if we are always writing unnecessary code early in our projects, a project can quickly become late.
• Implementing code that is not currently required by the use-cases being built requires the addition of unit tests that are specific to the underlying functionality rather than to user-requested features. While such tests are often valuable, the very fact that we are writing them is a code smell.
• Since I use the FxCop Code Analysis built into Visual Studio, not supplying all features of a standard implementation may require overriding one or more analysis rules.

Taking all of this into account, the simplest solution (which is usually the best) is to override the FxCop rules in the code and continue without implementing the unneeded, albeit standard, features.
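For completeness, the override itself is trivial; a hypothetical example (the specific rule and type shown are only illustrative of the kind of suppression I mean) looks like this:

    using System;
    using System.Diagnostics.CodeAnalysis;

    // Illustrative only: this type implements IComparable<T> because a current
    // use-case needs ordering, but deliberately skips the comparison operators
    // and Equals overrides that rule CA1036 expects of the standard pattern.
    [SuppressMessage("Microsoft.Design", "CA1036:OverrideMethodsOnComparableTypes",
        Justification = "Operators are not required by any current use-case (YAGNI).")]
    public class Session : IComparable<Session>
    {
        public DateTime StartTime { get; set; }

        public int CompareTo(Session other)
        {
            return other == null ? 1 : StartTime.CompareTo(other.StartTime);
        }
    }

The Justification text also serves as documentation of the YAGNI decision for whoever completes the pattern later, if anyone ever needs to.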

      Do you disagree with my decision? Tell me why on Twitter @bsstahl.

Categories: Development | Posted by bsstahl on 7/11/2014 8:53 PM

      I recently started working on a set of open-source projects for Code Camps and other community conferences with my friend Rob Richardson (@rob_rich). In addition to doing some good for the community, I expect these projects, which I will describe in more detail in upcoming posts, to allow me to experiment with several elements of software development that I have been looking forward to trying out. These include:

      • Using Git as a source control repository
      • Using nUnit within Visual Studio as a test runner
      • Solving an optimization problem in C#
      • Getting to work on a shared project with and learning from Rob

      As an enterprise developer, I have been using MSTest and Team Foundation Server since they were released. My last experience with nUnit was probably about 10 years ago, and I have never used Git before. My source control experience prior to TFS was in VSS and CVS, and all of that was at least 6 or 7 years ago.

So far, I have to say I'm very pleased with both Git for source control and nUnit for tests. Honestly, other than the slight syntactical changes, I really can't tell that I'm using nUnit instead of MSTest. The integration with Visual Studio, once the appropriate extensions are added, is seamless. Using Git is a bit more of a change, but I am really liking the workflow it creates. I have found myself, somewhat automatically, committing my code to the local repository after each step of the Red-Green-Refactor TDD cycle, and then pushing all of those commits to the server after each full completion of that cycle. This is a good, natural workflow that gives the benefits of frequent commits without breaking the build for other developers on the project. It also has the huge advantage of being basically unchanged in a disconnected environment like an airplane (though those are frequently not disconnected anymore).
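For anyone wondering what those slight syntactical changes look like, they are mostly attribute names; a trivial nUnit test (the class under test here is hypothetical) differs from its MSTest equivalent roughly as shown below:

    using NUnit.Framework;

    // MSTest would use [TestClass] / [TestMethod] and the asserts from
    // Microsoft.VisualStudio.TestTools.UnitTesting; the structure is identical.
    [TestFixture]
    public class ScheduleBuilderTests
    {
        [Test]
        public void Build_WithNoSessions_ProducesNoTimeslots()
        {
            var timeslotCount = 0; // stand-in for a call into the real builder
            Assert.AreEqual(0, timeslotCount);
        }
    }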

The only possible downside I can see so far with the Git workflow is the risk presented by the fact that code committed to the local repository is not yet really safe. Committing code has historically been a way of protecting ourselves from disk crashes or other catastrophes. In this workflow, it is the push to the server, not the act of committing code, that gives us that redundancy protection. As long as we remember that we don't have this redundancy until we push, and make those pushes part of the requirements of our workflow, I think the benefits of frequent local commits greatly outweigh any additional risk.

      As to the other two items on my list, I have already learned a lot from both working with Rob and in working toward implementing the optimization solution. Even though we've only been working on this for a few days, and have had only 1 pairing session to this point, I feel quite confident that both the community and I will get great benefit from these projects.

      In my next post, I'll discuss what these projects are, and how we plan on implementing them.

Categories: Event, Development | Posted by bsstahl on 6/29/2014 1:08 AM

For those who saw my code camp presentation, “SOA – Beyond the Buzzwords”, you can find the slide deck here.

      There is much more to building a Service Oriented Architecture than just creating services. SOA services can be much more difficult to build, requiring more analysis and design work up-front than a non-service-enabled system or a system that relies on CRUD-style data services. In this session, we will look at real-world examples of SOAs, examining what a good SOA might look like, what conditions present a good opportunity to use a Service Oriented Architecture, and how we can make the process more agile. We will also look at some practical tips to help make your services more extensible and maintainable.

      For those who haven’t yet seen this presentation, I will be giving this session at several other code camps and user groups around the US between now and the end of the year.  Keep an eye on my Speaking Engagements page or have your user group leader request me as a speaker via Ineta.

Categories: General | Posted by bsstahl on 6/14/2014 9:57 PM

      Submitted 6/14/2014

There is no question that the Internet has played, and will continue to play, an ever-increasing role in our lives, both in terms of our daily activities and in how we guarantee and monitor our freedoms. More and more of our citizens' speech occurs on the Internet every day. Additionally, more and more new businesses are starting up on, and because of, the Internet.

      If a small number of individuals or companies are allowed to determine which speech is heard, or which companies are allowed to thrive, much of what we strive for in our society will be lost. Gone will be the opportunity for a free and open debate, the type of debate that helps our citizens protect their rights. Gone will be the ability for anyone with the skills and drive to start a business and participate in our economic growth. It is up to the Federal Communications Commission, the representatives of We the People of the United States, to protect our rights and guarantee equal opportunity for everyone to use, and be heard, on the Internet.

      I urge you to deny any proposal that would create an Internet "fast-lane" for anyone able and willing to pay bribes to the few communications providers who make up the Internet backbone in this country, and to protect the public's rights by classifying the Internet as a public utility.

Categories: Development | Posted by bsstahl on 4/4/2014 9:26 PM

One thing I've noticed during my 30 years in software engineering is that everything old eventually becomes new again.  If you have a particular skill or preferred methodology that seems to have become irrelevant, just wait a while; it is likely to return in some form or another.  In this case, it seems that recent announcements by Microsoft about how developers will be able to leverage the power of Cortana are likely to revitalize the need for text processing as an input to the apps we build.

At one time, many years ago, we had two primary methods of letting the computer know what path we wanted to take within an application: we could select a value from a displayed (textual) menu, or, if we were getting fancy, we could provide an input box that the user could type commands into.  This latter technique was often the purview of text-only adventure games, where inputs came in forms like "move left" and "look east".  While neither of these input methods was particularly exciting or "natural", to use today's parlance, it was only text input that allowed the full flexibility of executing nearly any application action from any location.  Now that Microsoft has announced that developers on Windows Phone, and likely other platforms, will be able to leverage the platform's built-in digital assistant, "Cortana", and receive inputs into their applications as text translated from the user's speech (or typed directly into Cortana's input box), it makes sense for us to start thinking about our application inputs in this way again. That is, we want to consider, for each action a user might take, how the user might trigger that action by voice command.

      It should be fairly easy to shift to this mindset if we simply imagine, on our user interfaces, a text box where the user could type a command to the app.  The commands that the user might type into this box are the commands we need to enable using the provided speech input APIs.  If we start thinking about inputs in this way now, it might help to shape our user interfaces in ways that make speech input more natural, and our applications more useful, in the coming years. Of course, this also gives us the added benefit of allowing us to reuse our old text parsing skills from that time when we wrote that adventure game…
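To make that concrete, here is a purely hypothetical sketch (not the actual Cortana or speech APIs) of what it means to treat every UI action as something that can also be triggered by a command phrase, whether that phrase arrives as translated speech or as typed text:

    using System;
    using System.Collections.Generic;

    // Hypothetical command router: each action the UI exposes is also
    // registered against the phrase a user might speak or type.
    public class CommandRouter
    {
        private readonly Dictionary<string, Action> _actions =
            new Dictionary<string, Action>(StringComparer.OrdinalIgnoreCase);

        public void Register(string phrase, Action action)
        {
            _actions[phrase] = action;
        }

        public bool TryExecute(string inputText)
        {
            if (string.IsNullOrWhiteSpace(inputText))
                return false;

            Action action;
            if (_actions.TryGetValue(inputText.Trim(), out action))
            {
                action();
                return true;
            }

            return false; // unrecognized phrase; fall back to normal UI navigation
        }
    }

    // Usage (hypothetical): router.Register("show my appointments", ShowAppointments);
    //                       router.TryExecute(textFromSpeechOrCommandBox);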


Categories: Development | Posted by bsstahl on 8/5/2013 6:04 PM

      As a follow-up to my posts here and here on the missing “Create Unit Test” feature in VS2012, I point you to this post from the Visual Studio ALM & TFS blog announcing the Release Candidate of their new Unit Test Generator for Visual Studio.  According to the post, this extension

      “…adds the “create unit test” feature back, with a focus on automating project creation, adding references and generating stubs, extensibility, and targeting of multiple test frameworks.”

      I am installing the extension now and will comment on how well it works for my TDD workflow in a future post. 

Categories: Development | Posted by bsstahl on 6/28/2013 1:05 AM

      On at least 2 occasions recently, I have heard speakers tell their audience that you cannot reference a target-specific .NET library (such as a .NET Framework 4.5 library) from a Portable Class Library. While this is technically true, it doesn't tell nearly the whole story. Even though we can't reference target-specific libraries, we can still USE these libraries. We can call their methods and access their properties under the right circumstances. We can gain access to these libraries via an abstraction. My preferred method of doing this is known as Dependency Injection.

      I'm going to give some quick background on PCLs and DI before getting into the details of how they can be used in this context. If you are familiar with Dependency Injection and .NET Portable Class Libraries you can skip these sections.

      .NET Portable Class Libraries (PCLs)

Portable Class Libraries are .NET assemblies designed to be used by multiple target platforms in the .NET application space. You can specify which targets you want to be able to use, such as .NET 4.5, Silverlight 4, Windows Phone 8, etc. The compiler then does the work to limit the APIs you have at your disposal in that library to the intersection of all of the selected targets. This guarantees that any code written in that library will work in all of those targets, but no target-specific (device-specific) functionality will be available. These libraries are great for business logic and other platform-independent services, but are not usable for code that requires direct access to device features like the UI, camera, GPS, etc. This code can be compiled and tested once, and then accessed from any of the selected target contexts.

Dependency Injection (DI)

Dependency Injection is a way of maintaining loose coupling between application components. Instead of having a piece of code have direct knowledge of one of its dependencies, the code only has knowledge of an abstraction of that dependency, usually an interface. Since the client is unaware of the implementation and only has knowledge of the abstraction, the implementation of the dependency can change and, as long as it maintains compliance with the interface, the client code is unaware of the change and continues to function normally. The correct dependency must then be "injected" into the calling code prior to being used. The client only knows that the dependency implements the needed interface, but is unaware of the actual implementation. This becomes extremely useful in unit testing, since a fake dependency such as a mock data-provider can be injected by the test context, allowing the tests to focus on the layer being tested without having to test the dependencies as well. While this is not nearly the only reason to use DI, it is an example of an excellent benefit of its use.
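A minimal sketch of the pattern (all type names here are hypothetical): the client class depends only on an interface, and a test injects a fake implementation in place of the real one.

    // The abstraction the client code depends on.
    public interface IDataProvider
    {
        string GetCustomerName(int customerId);
    }

    // The client has no knowledge of which implementation it will be given.
    public class GreetingBuilder
    {
        private readonly IDataProvider _dataProvider;

        // The dependency is "injected" through the constructor.
        public GreetingBuilder(IDataProvider dataProvider)
        {
            _dataProvider = dataProvider;
        }

        public string BuildGreeting(int customerId)
        {
            return "Hello, " + _dataProvider.GetCustomerName(customerId);
        }
    }

    // In a unit test, a fake provider is injected so only GreetingBuilder is
    // exercised:  var sut = new GreetingBuilder(new FakeDataProvider());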

      Injecting Target-Specific Code into PCLs

Let's suppose we have a .NET Portable Class Library that implements the business logic of our application. We want the application to be able to run on the web under ASP.NET, on Windows 8 as a Modern Windows Store App, and on Windows Phone 8. We built the PCL using these specific targets, so we know (the compiler guarantees) that this code will run on any of those platforms. However, this code needs to get its data from somewhere, and that somewhere is different depending on what environment we are running in. In ASP.NET, for example, we may want to get the data from Session State or from a back-end SQL Server, while in Windows Phone 8 and Windows 8 we want to use their (different) implementations of isolated storage. We can accomplish this by defining an interface, usable by all 3 targets, in a PCL. We can then create our 3 different implementations of the storage library using target-specific code and inject the appropriate one into the constructor of one or more of the classes in the business-logic PCL. This injection can be done directly by the parent application, which is target-specific and therefore knows which implementation is needed, or it can be done indirectly using a DI Container such as Microsoft Unity.
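Sketched in code (all names hypothetical), the abstraction and the business logic live in the PCL, while each platform project supplies and injects its own implementation:

    // Defined in the Portable Class Library -- visible to all 3 targets.
    public interface IStorageProvider
    {
        void Save(string key, string value);
        string Load(string key);
    }

    // Business logic, also in the PCL, written only against the abstraction.
    public class OrderService
    {
        private readonly IStorageProvider _storage;

        public OrderService(IStorageProvider storage)
        {
            _storage = storage;
        }

        public void SaveDraft(string orderData)
        {
            _storage.Save("draft-order", orderData);
        }
    }

    // Each target-specific project implements IStorageProvider its own way
    // (Session State or SQL Server in ASP.NET, isolated storage on Windows
    // Phone 8 / Windows 8) and injects it at startup, either directly or via
    // a container such as Unity:
    //   var service = new OrderService(new IsolatedStorageProvider());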

      A sample app that is available in the 3 targets previously described may look something like this. The business-logic and domain layers (interfaces, exceptions, entities, etc) are both PCLs and exist for use in all 3 targets. The UI layer and Infrastructure layers (in this case, storage) are target-specific and require a separate implementation for each target platform. A system designed in this way can maximize the use of common, shared code while still making platform specific features available in a type-safe way.

      If you are interested in seeing this implementation done live, you can come to one of my Code Camp talks on the subject, or request me as a speaker for your User Group via Ineta.