Tuesday, April 5, 2011

Network tune-up for Windows Home Server (WHS) performance

It's all about the cables

Although I've been very pleased with my Windows Home Server, one area that has been a little disappointing is performance - until today that is.

Recently I tried restoring a 120 GB backup onto a new hard drive.  Unfortunately, WHS reported that it was going to take upwards of 22 hours - yes, HOURS - to complete.  I figured something had to be wrong, so I cancelled the restore and started investigating.  Google turned up plenty of do-this-or-that suggestions, but none of them seemed to work for me.

Since my router was only capable of 100 Mbps and my WHS box has a Gigabit LAN port, I figured I would try upgrading the router.  After setting up my new Netgear gigabit router I noticed that my WHS box was only connecting to the network at 10 Mbps.  That would certainly explain the 22-hour estimate - 120 gigabytes at 10 megabits per second takes about that long to cross the network.  But I had a gigabit router and a gigabit LAN port, so why was the WHS box only connecting at 10 Mbps?
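A quick back-of-the-envelope check (ignoring protocol overhead) shows the math lands in the same ballpark:

```javascript
// Rough transfer-time estimate for 120 GB over a 10 Mbps link.
// Treating 120 GB as 120e9 bytes and ignoring protocol overhead.
var bits = 120e9 * 8;          // total bits to move
var seconds = bits / 10e6;     // link speed: 10 Mbps = 10e6 bits/sec
var hours = seconds / 3600;
console.log(hours.toFixed(1) + ' hours'); // about 26.7 hours
```

At 1 Gbps the same transfer works out to about 16 minutes on paper, which is consistent with the sub-30-minute restore described below.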

Well, it turns out, it was the cable.  My network, which I built several years ago, was wired with CAT5 cable.  Apparently cabling has come a long way since then and I was unaware.  When I swapped out the CAT5 cable between my router and my WHS box for the shielded CAT6 cable that came with the new router, the WHS box connected to the network at the full 1 Gbps.  Yeah!  And the restore of that 120 GB backup took less than 30 minutes to complete.  Wow, what a difference.

So, if you're having trouble with performance from your WHS check your LAN cables.

I would also like to mention that the Netgear N600 router I bought has an awesome feature I was unaware of when I bought it, as it doesn't seem to be described in the product literature: a button on the front that turns the wireless portion of the router off - very cool since all of my connections are currently wired.

If you want to see how to restore a backup to new/different hardware, see my post on 'Windows Home Server to the rescue'.

Sunday, March 6, 2011

Agile Thoughts : Sprint Length

team maturity and work definition are key factors

There are many factors that can/should influence sprint length, such as delivery schedules, resource availability, customer requirements, need for feedback, etc., but two often overlooked and perhaps most important factors are team maturity and how well the requirements/work are defined.

If I were putting together a new team or implementing scrum/agile processes for the first time with an existing team I would lean towards shorter sprints, perhaps on the order of a week or two.  I believe this would allow a team to mature much more quickly as there are more opportunities to exercise the full sprint process and more opportunities to use feedback to more rapidly move toward becoming a high-performing team.

Another key factor affecting sprint length is how well the work to be performed is defined and understood.  This includes both the business and technical aspects.  If the requirements are vague or unclear, or if the technologies to be used are new or not widely known by the team, then it might be a good idea to shorten the sprints to flesh out more detail and get more rapid feedback from the customer on whether the team is on or off course.  Likewise, shorter, more focused sprints might help the team determine whether technology or architecture choices were appropriate and correct, as well as helping to minimize risk and wasted effort.

As you can see from the above chart, mature, high-performing teams with poorly defined requirements and new, immature teams with outstanding requirements are in virtually the same place - they both need shorter sprints, for different reasons of course, but shorter sprints nonetheless.

See also:  agile thoughts : backlog preparation

Saturday, March 5, 2011

Windows Home Server to the rescue

restoring a PC to new hardware

Several months ago I built a Windows Home Server box partly to back up the family's PCs - one of which is an aging Windows XP machine that I built seven or eight years ago.  As luck would have it the last remaining SCSI hard drive in that old XP box started to fail last week, corrupting the OS and causing the machine to fail to boot.

Since I had this new WHS box I figured I had nothing to lose, so I decided to try my first restore.  It was dirt simple and it worked - for a day or two, until the OS was corrupted again.  I ended up swapping out my SCSI controller for a SATA controller, adding a new SATA hard drive, and performing a restore from WHS onto the new hardware.  It looked like it was going to work just fine - until the first reboot after the restore.  As most of you probably guessed, the backup image did not have the drivers for my new PCI SATA card and thus Windows failed to boot.

I tried numerous things and finally discovered the recipe that would let me successfully restore the backup of my old hardware onto my new hardware:

1.  Restore the PC from WHS onto the new hardware

2.  Boot from the Windows XP CD, pressing F6 at the right time to install the SATA drivers for the new hardware

3.  Choose to install Windows XP (do not enter the XP recovery console)

4.  When prompted, choose to 'Repair' the current installation

Windows will appear to be performing a fresh install (and to some extent it is), but all of your programs and data will be left intact.  If you goof up along the way and accidentally do a full reinstall instead of a repair don't fret, simply go back to step 1 and start over by restoring the PC from WHS again.

5.  Once the repair is complete reboot into the OS and run Windows Update to recover all the patches and updates that were lost by the repair (in my case Windows was set back to SP2 from SP3 since SP2 is the service pack level of my installation CD)

6.  I would advise performing a manual backup to WHS at this point

In hindsight it seems like a pretty simple process, and it is, but it did take some trial and error to figure out.  Needless to say I am very pleased with Windows Home Server and my decision to add a WHS box to my home network.

See my post on 'network tune-up for WHS' to find out how to make the above process much faster and smoother.

Saturday, February 26, 2011

Agile Thoughts : Backlog Preparation

Two hours can make a huge difference.

I've been working in an agile development shop using the Scrum methodology for over four years now and have a few thoughts on what works well, what doesn't work so well and some thoughts on how to improve the process. The first topic I would like to discuss is backlog preparation and where that fits/should fit into the sprint schedule. 

For those of you unfamiliar with Scrum/agile, a 'sprint' is a short iteration, somewhere in the 2 to 4 week range (± a week), that consists of work selection, planning, design, implementation/development, testing, presentation to the client and a team retrospective - usually fairly rigid and in that order.

The team works from the 'backlog' - a list of features or capabilities (called stories) that need to be researched, developed or integrated into the software.  This list is created and prioritized by the 'solution owner' in cooperation with the client/customer.  But since we are talking about agile, this list can be changed frequently based on customer feedback and changing priorities.

Usually, these stories start out as nothing more than simple one line statements or short paragraphs of the form 'as a user I need to be able to do X.'  At some point in this agile/scrum process these stories need to be fleshed out in enough detail so that (a) the story can become actionable by the team and (b) the amount and type of effort required to complete the story can be estimated with some degree of accuracy.  In my experience this usually occurs at backlog selection (the kickoff meeting for the new sprint where work is selected), and it almost without exception leads to meetings that are long, frustrating, and less productive than they need to be.

Agile/scrum teams usually try to combat this by holding 'backlog grooming' meetings throughout the sprint to flesh out some of the details of these future stories and make some preliminary design decisions.  This, however, has several shortcomings that I have seen time and again:  (1) it interrupts the flow/focus of the current sprint, (2) team members are distracted by the current sprint's work and don't fully focus/participate in the thought process for developing future stories and (3) the team often invests time in preparing stories that they will never actually work on, or that change dramatically by the time they do.

I used to work in manufacturing, and one of the key concepts was 'just in time' - you bring the materials, machinery, and manpower together at just the right time so that inventory isn't building up and people and machinery aren't sitting idle.  It's a great concept and it applies aptly to software development and agile processes.  In this context I believe there is one, and only one, place for backlog preparation: sometime between when the team has completed its work on the current sprint and the next backlog selection meeting.

The purpose of these backlog preparation meetings is for the solution owner to present the team with the stories that are to be worked in the coming sprint, for the team to ask some initial questions, and for the team to then go off and do some initial brainstorming.  The result should be stories that have a clearer 'definition of done' with some initial high-level tasking from which reasonable estimates of effort can be made.  This meeting should be short, perhaps no more than an hour with the solution owner present and perhaps another hour for the team to brainstorm and come up with an initial tasking, estimates, additional questions for the solution owner and, if need be, alternative implementations/paths forward.

The benefits to this approach are that the team is constantly focused on the work they are to be performing at any given point in time, resources are more efficiently and effectively utilized, the actual backlog selection meeting is more productive, estimates are more accurate, teams are happier and more engaged, and sprints get started off on the right foot and have a higher probability of success.

Two hours spent in backlog preparation - at the right time - can make a huge difference.

See also:  agile thoughts : sprint length

Standalone ExtJS XTemplate classes

ExtJS XTemplates are awesome!  They provide an easy way to combine custom presentation markup with simple or complex data on the client.  Sometimes that markup needs to be more dynamic than simply plugging the data straight into the template.  But, the Ext folks already thought of that and allow you to add methods to your XTemplate definition.  This is great, but can lead to gangly template definitions with scoping issues.

In a recent situation at work we had a 400+ line template definition - only about 20 lines of that was the presentation template, the rest being methods to manipulate/interpret the data (beyond the conversions we had already applied to the data).  In our situation we needed to interpret the same piece of data in different ways depending on where we were in the template (context) as well as the type of view the user wanted to see.  For those of you familiar with XTemplates you will realize that the 400+ lines of template definition are in the constructor call to the XTemplate class - basically a huge constructor parameter.  Obviously it was time for some refactoring.

I have written numerous custom components in JavaScript, but never one extending the XTemplate, so I decided to try making our template a custom class that extended the ExtJS XTemplate.  It turns out it worked beautifully with very little modification to the original template (other than relocating it to its own file and doing some minor restructuring).  The template markup became part of the call to the super constructor in my new class's constructor, and the methods became first-class citizens of my new class (which Ext accomplishes behind the scenes anyway in the original implementation).

As a result the client code using the template only needed a single line to create an instance of the template, the template is now reusable if needed, the code is cleaner all around, and the scope/context inside the template methods is more natural and easier to understand.
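The refactoring described above might look something like the following sketch, in ExtJS 3 style.  The namespace, class name, markup and helper method here are all illustrative stand-ins, not our actual 400+ line template:

```javascript
Ext.ns('MyApp');  // hypothetical application namespace

// Subclass Ext.XTemplate so helper methods become real class members
// instead of properties buried in a giant constructor argument.
MyApp.StatusTemplate = Ext.extend(Ext.XTemplate, {
    constructor : function() {
        // Only the presentation markup goes to the super constructor...
        MyApp.StatusTemplate.superclass.constructor.call(this,
            '<div class="status">',
                '<tpl if="this.isOk(status)">{name} is running</tpl>',
                '<tpl if="!this.isOk(status)">{name} needs attention</tpl>',
            '</div>'
        );
    },

    // ...while the data-interpretation helpers are ordinary methods
    // with a natural scope, callable from the markup via 'this'.
    isOk : function(status) {
        return status === 'ok';
    }
});
```

Client code then needs only a single line to create an instance - for example, var tpl = new MyApp.StatusTemplate(); followed by el.update(tpl.apply(record.data));.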

See also:  injecting extjs components via html templates

Monday, January 17, 2011

Injecting ExtJS components via an html template

Use Ajax to load an html page as a template for ExtJS and then plug ExtJS components into it.

Sometimes a web page layout may be too complicated or time-consuming to develop purely in ExtJS or perhaps you want to convert an existing html page to use ExtJS components. In either case there is a simple and straightforward way to inject ExtJS components into a complex html page. There are only a few simple steps needed to accomplish this:
  • create the html
  • fetch the html
  • load the html
  • plug in the ExtJS components
Here is a snippet from myPage.html. Notice the {idBase} included as part of the id. That is a template param that will be replaced when the ExtJS XTemplate is processed. The purpose of {idBase} is to help make sure that each div section has a unique ID and is not really germane to this article.
    
...
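For illustration only, such a snippet might look something like this (the div and class names below are made up, but note how {idBase} forms part of the id, matching the 'name_id' pattern that getCustomId() produces):

```html
<!-- illustrative markup: each injection point carries a {idBase}-based id -->
<div class="my-complex-layout">
    <div id="myButton_{idBase}"></div>
</div>
```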
The following methods are from myScript.js. The first loads the html using an Ajax request:
     initStructure : function() {
         Ext.Ajax.request({
             url : 'myPage.html',
             disableCaching : false,
             method : 'GET',
             success : this.onStructureLoaded.createDelegate(this)
         });
     } // initStructure()
This success handler puts the html text into an ExtJS XTemplate and then loads that into the body of this component (an ExtJS panel or window):
     onStructureLoaded : function(response, options) {
         var template = new Ext.XTemplate(
             response.responseText
         );

         this.body.update(template.apply({
             idBase : this.id
         }));

         this.initMyButton();
     ...
     } // onStructureLoaded()
Once the html has been loaded into the DOM we can start plugging our ExtJS components into it:
     initMyButton : function() {
         new Ext.Button({
             applyTo : this.getCustomId('myButton'),
             text : 'My Button',
             handler : this.onMyButtonClick.createDelegate(this)
         });
     } // initMyButton()

     getCustomId : function(name) {
         return String.format('{0}_{1}', name, this.id);
     } // getCustomId()

Wednesday, January 12, 2011

Spring-loading and injecting external properties into beans

Let's say you have a Spring managed bean that contains some properties that you would like to externalize from your application, say perhaps in a JBoss 'conf' folder properties file.  Apparently you can do this via annotations in Spring 3, but it's also fairly straightforward in Spring 2.5:

From the context.xml file:

    <bean id="propertyConfigurer"
          class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="location" value="classpath:my_app.properties"/>
        <property name="placeholderPrefix" value="$prop{"/>
    </bean>
 
    <bean id="someBeanWithProps" class="my.class.with.Props">
        <property name="myPropA" value="$prop{prop.file.entry.prop.A}"/>
        <property name="myPropB" value="$prop{prop.file.entry.prop.B}"/>
    </bean>
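For completeness, the referenced my_app.properties file just carries the externalized values. The entries below are illustrative, matching the placeholder keys used above:

```properties
# my_app.properties - on the classpath, e.g. the JBoss server 'conf' folder
prop.file.entry.prop.A=value for myPropA
prop.file.entry.prop.B=value for myPropB
```

The non-default '$prop{' prefix is presumably there so that Spring's placeholders don't collide with the standard ${...} expressions that JBoss resolves in its own descriptors.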

JBoss, JNDI and java:comp/env

On startup JBoss will process any xyz-service.xml files it finds in the deploy folder before it processes any war or ear files, etc.  One thing this can be useful for is preloading configuration values into JNDI, making them available to web applications when they start up.  It may sound simple, but it consists of a non-obvious four-step process:

1.  Create a JNDIBindingServiceMgr mbean in the xyz-service.xml file.

2.  In the WEB-INF/jboss-web.xml file map a resource-env-ref entry over to a JNDI value bound in step 1.

3.  In the WEB-INF/web.xml file create a resource-env-ref entry for each JNDI bound value.

4.  Access the JNDI value from somewhere, such as a servlet filter, using 'java:comp/env'.

First, the xyz-service.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE server PUBLIC "-//JBoss//DTD MBean Service 4.0//EN"
    "http://www.jboss.org/j2ee/dtd/jboss-service_4_0.dtd">
<server>

    <mbean code="org.jboss.naming.JNDIBindingServiceMgr"
           name="netcds.cas.client:service=JNDIBindingServiceMgr">

        <attribute name="BindingsConfig" serialDataType="jbxb">

            <jndi:bindings
                xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
                xmlns:jndi="urn:jboss:jndi-binding-service:1.0"
                xs:schemaLocation="urn:jboss:jndi-binding-service:1.0 resource:jndi-binding-service_1_0.xsd">

                <jndi:binding name="my/jndi/property">
                    <jndi:value type="java.lang.Boolean">false</jndi:value>
                </jndi:binding>

            </jndi:bindings>
        </attribute>
        <depends>jboss:service=Naming</depends>
    </mbean>

</server>

Next, the resource-env-ref entry in the jboss-web.xml file:

    <resource-env-ref>
        <resource-env-ref-name>my/jndi/property</resource-env-ref-name>
        <jndi-name>my/jndi/property</jndi-name>
    </resource-env-ref>

And the associated web.xml entry:

    <resource-env-ref>
        <resource-env-ref-name>my/jndi/property</resource-env-ref-name>
        <resource-env-ref-type>java.lang.Boolean</resource-env-ref-type>
    </resource-env-ref>

Finally, accessing the JNDI value from a servlet filter:
      
    // requires: import javax.naming.InitialContext;
    //           import javax.naming.NamingException;
    boolean result = false;
    try {
        final InitialContext context = new InitialContext();
        result = (Boolean) context.lookup("java:comp/env/my/jndi/property");
    } catch (final NamingException e) {
        // log the failed lookup; 'result' keeps its default of false
    }