Results To Quality From ‘Code Complete’

Dan Rumney // in Technology

07/16/2013

Earlier this year, twenty intrepid Vodorians embarked upon our first-ever reading group. Since a large part of what we do at Vodori is software development, we elected to read “Code Complete.” Granted, it’s nearly a decade old, but weighing in at more than 1,000 pages, it’s a pretty comprehensive overview of good development practices.

I have little to add to the innumerable voices on the Internet that have reviewed “Code Complete” as a piece of technical writing. The true test of its worth is the impact it has had on the way we work at Vodori, and the results are in: yes, the team has improved as a consequence of reading this book as a group.

Most obviously, having 20 people read a book about developing better-quality code has improved our code quality. At the risk of being reductive, the message of “Code Complete” is communication: clarity, consistency, and correctness in code and communication are key. When everyone understands everyone else, along with the code they write, code quality goes up.

There were less obvious benefits too. On a day-to-day basis, conversations between developers tend to be about the task at hand; developers rarely have the chance to discuss high-level development issues. By providing a weekly forum to discuss code quality and how to be a good developer, we provided much needed time and space for these types of discussions.

Finally, once you're out of school and in the workplace, it can be hard to continue to grow as a developer. Learning new libraries and languages is all very well, but there's more to being a good developer than knowing how to code; knowing what to code is just as important. A reading group turned out to be a low-cost way to keep exercising that muscle.

 

Choosing the right repository: SVN or Git?

Alex Pemberton // in Technology

07/10/2013

For a while, Vodori developers used Subversion (SVN) as our version control and code management system. We recently made the switch to Git for a variety of reasons.

SVN has a centralized repository (repo) structure, whereas Git's is distributed. While this gives SVN better out-of-the-box central repo management, the model is easily emulated in Git by designating a centrally accessible repo as the “canonical” one. Git is often considered overkill for a single developer or a small group; for larger groups, the choice often depends on the group's workflow. While not applicable to Vodori, forking a project in Git (typically useful for open source work) is trivial compared to SVN.

[Image: centralized vs. distributed version control. Source: http://betterexplained.com/articles/intro-to-distributed-version-control-illustrated/]

Development Workflow Features

A downfall of centralized repos like SVN's is that many parts of the development workflow cannot be performed if the central repo is unreachable. For instance, version history is stored only in the central repo, so it cannot be viewed offline, and committing code is impossible, since a commit must go straight to the central repo. With Git, the full repo history is copied to a newly cloned repo, and Git's multi-step process of committing code to your local repo and then pushing it to a remote repo allows you to continue committing locally if the remote is unreachable (and push the commits later). There are minor downsides to Git in this workflow. For instance, the initial clone of a Git repo takes longer than an SVN checkout because of the additional data copied, like the full repo history. Additionally, if you want to check out a subset of a Git repo, the entire repo history is still downloaded. However, these are only one-time inconveniences.
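As a quick sketch of that workflow (the file name, branch, and commit message are illustrative):

git add style.css
git commit -m "Fix header layout"    # recorded in the local repo, even offline

# Later, once the central repo is reachable again:
git push origin master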

Branching Model Benefits

Git’s branching model also aids multitasked development. Branching in Git is cheap and trivial, which is advantageous for Vodori since developers often work on multiple bug tickets at a time (and can have a separate branch for each). In SVN, if you have changes from two different tickets that affect the same file, keeping those changesets separate while also switching between them seamlessly is much more difficult. Git's separate branches also make peer code reviews easy.
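For example, juggling two tickets might look like this (the ticket numbers and branch names are invented for illustration):

git checkout -b ticket-101    # start a branch for the first ticket
# ...commit changes for ticket 101...
git checkout master
git checkout -b ticket-202    # a separate branch for the second ticket
# ...commit changes for ticket 202...
git checkout ticket-101       # switch back seamlessly; the changesets stay separate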

While neither system is universally better than the other, Git has features that make it a better choice for Vodori. Its branching abilities are particularly useful for us, while its disadvantages are of little hindrance to our workflow.


 

Helpful Mustache Template Tips

David Wolverton // in Technology

06/04/2013

On Vodori's Pepper front-end development team, one of the libraries we commonly use is the Mustache engine for rendering HTML templates in JavaScript. By design, Mustache has a simple and limited templating language; it brands itself as "Logic-less templates". For our team, this means we've had to be creative in meeting every templating challenge with only a simple set of rules. I'd like to share some of the solutions we've found for the more common challenges we've hit when using Mustache.

Note: For a full introduction to Mustache, see the official documentation.

Tip #1: Render a block ONCE if an array is not empty.

To iterate over a list of images, use the standard {{#images}}...{{/images}} section. But what if you want to render something only once, and only when there are images? You might have already tried putting it inside a {{#images}}...{{/images}} block and found that it renders multiple times, once for each image. The solution is to reference the array length, like this: {{#images.length}}...{{/images.length}}. Here's an example:

TEMPLATE
{{#images.length}}
    <h3>The images: this should only be rendered once.</h3>
    <ul>
        {{#images}}
        <li><img src="{{src}}"/></li>
        {{/images}}
    </ul>
{{/images.length}}

{{#anEmptyArray.length}}
    <h3>The empty array: this should NOT be rendered.</h3>
{{/anEmptyArray.length}}
 
DATA
var data = {
    images: [
        { src: "http://www.fpoimg.com/20x20" },
        { src: "http://www.fpoimg.com/30x30" },
        { src: "http://www.fpoimg.com/40x40" }
    ],
    anEmptyArray: []
};

var template = document.body.innerHTML;
document.body.innerHTML = Mustache.render(template, data);
 
See this example in JSFiddle. 

Tip #2: Render simple elements in a list (i.e. the current context).

The standard Mustache examples show how to iterate over a list of objects, such as the "colorObjects" in the DATA section below. But what if you have an array of simple elements, such as "colors"? Mustache provides the less-documented {{.}} syntax for accessing the current context.

TEMPLATE
Color Objects:
    {{#colorObjects}}
        <span style="color: {{color}}">{{color}} </span>
    {{/colorObjects}}<br/>
Colors:
    {{#colors}}
        <span style="color: {{.}}">{{.}}</span>
    {{/colors}}<br/>

DATA
var data = {
    colorObjects: [
        { color: "red" },
        { color: "green" },
        { color: "blue" }
    ],
    colors: [ "red", "green", "blue" ]
};

var template = document.body.innerHTML;
document.body.innerHTML = Mustache.render(template, data);
 
See this example in JSFiddle.

Tip #3: Default values.

What if you need to render a default string when a value isn't present? Use this syntax:
<h1>{{title}}{{^title}}Default Title{{/title}}</h1>
{{title}} will "print" nothing if the value is empty, and the inverted block {{^title}}...{{/title}} renders only when title is empty, so together they yield a default.

TEMPLATE
<h1>Title: {{title}}{{^title}}Default Title{{/title}}</h1>
<h2>Sub-title: {{subtitle}}{{^subtitle}}Default Sub-title{{/subtitle}}</h2>
<h3>Third title: {{thirdTitle}}{{^thirdTitle}}Default Third Title{{/thirdTitle}}</h3>
<h4>Fourth title: {{fourthTitle}}{{^fourthTitle}}Default Fourth Title{{/fourthTitle}}</h4>

DATA
var data = {
    title: "Real Title",
    subtitle: "", // blank
    thirdTitle: null
    // fourthTitle is not defined
};

var template = document.body.innerHTML;
document.body.innerHTML = Mustache.render(template, data);
 
See this example in JSFiddle.

Tip #4: Access the parent context.

Lastly, you can access variables from outside the current context. Just use the variable name, and Mustache steps up the context chain until it finds a match. See for yourself:

TEMPLATE
<ul>
    {{#slides}}
    <li>
        Title: {{title}} <br/> <!-- this comes from the current context--the current slide -->
        Color: {{color}} <br/>
        Author: {{author}} <br/> <!-- this is also available from the parent context -->
    </li>
    {{/slides}}
</ul>
<span>Author: {{author}}</span> <!-- used again directly in its context -->

DATA
var data = {
    author: "Jones",
    slides: [
        { title: "Hat", color: "black" },
        { title: "Cat", color: "red" }
    ]
};

var template = document.body.innerHTML;
document.body.innerHTML = Mustache.render(template, data);
 
See this example in JSFiddle.

We hope you find these tips useful as you tackle Mustache for your templating needs.

 

Development made faster: JavaScript build with AMD modules

Josh Newman // in Technology

01/08/2013

Prelude

Before I get into the benefits of using the Asynchronous Module Definition (AMD) format, there are a few things to note. First, this article assumes you have a working knowledge of JavaScript. Second, my namespace lives at /static/src/article, and I'm happy serving the built output from /static/dist/article (similar to Java's output convention). Third, and finally, I'm using grunt.js, mostly because it's easy to set up. I recommend checking whether your back-end build system has tools available; or, if you're short on time and my setup is OK with you, check out the source used by this article.

What is an AMD?

An AMD (Asynchronous Module Definition) module is a JavaScript module wrapped in a bit of boilerplate that allows a separate script to manage its dependencies in a sane way. It addresses one of JavaScript's greatest shortcomings: the lack of a built-in way to define modules reusably.

It is, almost certainly, the most widely adopted system for providing programmatic loading of JavaScript in the browser. The definitive API documentation describes it as:

...a mechanism for defining modules such that the module and its dependencies can be asynchronously loaded. [AMD JS API Wiki]

Fortunately, unlike most standards, AMD is capable of working with resources that fall outside the standard. Unpackaged resources are generally a bit more work to deal with, but the loader will load them. In addition, nearly all AMD loaders support an easy-to-use plugin architecture, so loading arbitrary things, and later building them, is both possible and practical.
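For instance, most loaders can pull raw files into a module through a plugin. Here's a minimal sketch using the RequireJS-style text plugin (assuming the plugin is available; the template path is hypothetical):

// Load an HTML file as a plain string via the "text!" loader plugin.
require(['text!article/templates/greeting.html'], function (greetingHtml) {
    'use strict';
    // The template arrives as a string, ready to render or cache.
    document.body.innerHTML = greetingHtml;
});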

Why do I need it?

The importance of the asynchronous component cannot be overstated. Because dependencies are explicitly listed, it's possible to make very granular inclusions. You're probably reading this because you want to compress your JavaScript, and AMD is a great platform for that; however, it's helpful for development too, since smaller source files load faster while you're implementing new features.

More important than this, though, it cuts down on what actually goes into your built file. When loading separate files in development and building them for production becomes easy, developers are free to break up a module based solely on logical groupings.

For example, if I only want to manipulate arrays, then I can simply define my array-manipulation module:

// Notice I can omit the module name. The loader assumes modules map to a file
// path unless it's told otherwise.
define(function () {
   'use strict';
   
   var methods = {},
       // There are plenty of others, but I want to keep this brief.
       EXPOSED = ['map', 'forEach', 'every', 'some', 'reduce'];

   var aProto = Array.prototype;
   var aSlice = aProto.slice;
   var methodName = '';

   var makeMethod = function (name) {
       return function (array) {
           return aProto[name].apply(array, aSlice.call(arguments, 1));
       };
   };

   for (var i = 0, len = EXPOSED.length; i < len; ++i) {
       methodName = EXPOSED[i];
       /**
        * Create a wrapper for whatever array methods are listed.
        * NOTE: This is not an appropriate real world solution.
        */
       methods[methodName] = makeMethod(methodName);
   }

   return methods;
});

Then, I can include just that module and start working with arrays in some fantastic way. Assuming I put the array tools at /static/src/article/util/array, I would write:

/*global document, alert */
require([
   'article/util/array'
], function (
   array
) {
   'use strict';

   var titleText = array.map(['Build JS, ', 'World', '!'], function (item) {
       // Interesting manipulations here.
       return item;
   });

   alert(titleText);
});

What do you need to incorporate AMD into your build?

Generally, the ideal output is a single, monolithic file that contains all the dependencies of your app. To achieve this, you'll want to define a build profile, which should look something like this:

exports.config = {
       // The name of this layer.
       name: 'article',
       // Where the output will go when we build.
       dir: 'dist',
       // Where the unbuilt JS resides.
       appDir: 'src',
       // Path from this to the path you'll use for relative declarations of packages.
       baseUrl: '.',
       // A list of packages you're using
       packages: [
           {
               name: 'article'
           },
           {
               name: 'mustache',
               // I'm setting main here, since mustache doesn't use the default, which would be mustache/main
               main: 'mustache'
           },
           {
               name: 'has',
               // Same as Mustache
               main: 'has'
           }
       ],
   
       // Code branching.
       has: {
           // All has('love-for-ie-6')'s in the code will be replaced w/ true, which lets UglifyJS or Closure-compiler remove
           // the if statement around them.
           'love-for-ie-6': true
       }
};

What's going on here? Well, first off, I'm using exports.config so that I can use this in my grunt.js file. This only matters if you're using a Node.js-based build tool; otherwise, you should be able to get away with a simple object.

Next, I'm providing the name of my app, where the sources come from, and where they go once they've been built. I then enumerate all the packages with the minimum amount of information my build tool will need (in this case r.js). Finally, I list out things I know about the layer I'm building. In this case, I know my customer loves IE6, so I set that to true.

Let's break down this last bit a little further. The order of operations is (roughly):

  1. The build tool concatenates all the modules, recursively looking up their dependencies.
  2. The resources are all moved into the dist directory.
  3. All has() tests that have an entry in the profile are replaced with the boolean values you've given.
  4. The minifier runs, and its dead-code removal takes out the short-circuited branches.


So the sources, when staged, look something like this:

// The has integration will take this from the profile above.
if (has('love-for-ie-6')) {
   alert("I dislike you.");
}
else {
   alert("You're awesome.");
}

// Then after the has replacements.
if (true) {
   alert("I dislike you.");
}
else {
   alert("You're awesome.");
}

// Finally, once the compiler finishes it off.
alert("I dislike you.");

So, what's in a built file and why do I care?

Basically, a built file is a collection of define() calls, each of which takes:

  1. The name of the module in fully qualified terms.
  2. An optional array listing the module's dependencies.
  3. A function that returns the module (or kicks off the side effects associated with the module).

The reason this is interesting is that AMD loaders have essentially one objective: to ensure that all of a module's dependencies are loaded before it executes. Whenever a define call happens, it adds another key to the loader's list of available modules. So, if all the modules our script needs are in the built file, the loader won't try to load them again.
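To make that concrete, here's a rough sketch of what a tiny built layer might contain; the module names and bodies are illustrative, not actual r.js output:

// Built modules carry their fully qualified names, so no path lookup is needed.
define('article/util/array', [], function () {
    'use strict';
    return {
        map: function (array, fn) {
            return Array.prototype.map.call(array, fn);
        }
    };
});

// This module's dependency is already registered above, so the loader
// resolves it without issuing another request.
define('article/main', ['article/util/array'], function (array) {
    'use strict';
    return {
        start: function () {
            return array.map([1, 2, 3], function (n) { return n * 2; });
        }
    };
});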

This process works a lot like an impromptu party. If Mike and James are already at your party and you call Tom to join in, you wouldn't send him to Mike's house to pick him up. Sure, you could, but that would be rude and a waste of time. It's the same with modules: once they're at the party (your client's browser), there's no reason to try to fetch them again.

What tools are available for building with AMD?

RequireJS
As I mentioned above, r.js is an excellent tool for building AMD modules, and it is normally coupled with RequireJS to provide a full range of loading solutions.

Dojo
The primary toolkit of Vodori, Dojo is available at dojotoolkit.org. It comes fully AMD-packaged and ready to build out of the box (as of 1.8). Indeed, Dojo 1.8.1's sources are rumored to be compatible with RequireJS (the issues were pretty minor previously). For AMD + Dojo examples, check out our FE challenge and dojo-boilerplate.

curl.js
Though it seems less used than Dojo and RequireJS, curl.js has a rather nice complement of plugins and solid documentation about using it with virtually any commonly deployed JS stack. One benefit of curl.js is that, by providing script loading synchronously but hidden behind an asynchronous API, it can lead to significantly faster page loads during development. [John Hann on David Walsh's blog]

What if my build project is truly massive?

The quick answer is to build many layers, which can be done with multiple profiles; then, at the expense of one or two extra requests, you can load your remaining resources conditionally. Some found this insufficient and came up with a radical alternative. Vodori has some pretty neat tricks up its sleeve and even more on the way. I'm hoping to share some of these in the near future.

Are there any gotchas you've come across?

You bet! Below are a couple that have been real sore points.

Random script blocks, with dependencies, in HTML
The most serious gotcha, because it is endemic to the pattern: if you have to deal with legacy code in which JavaScript that relies on an AMD module is buried inside the HTML, the dependency might not be loaded by the time the browser reaches that unscrupulous scripter's tag. This should never come up with new construction, but if it does, or if you're simply improving existing code, the simplest option is to concatenate your files server-side even in development mode. If you're in a slick browser and your build tool offers source maps, you can still hide the difference from yourself.

Deploying to several varied directories is more trouble than it's worth
Notice that in this project, I put all my dependencies into a single src directory and transfer them to a single dist directory. We wound up with a more complicated directory structure for Pepper, and it was too much work. If you take only one thing away from this blog post, this should be it: use one source directory and one dist directory. Trust me, the configuration overhead is not worth it.

 

Holiday gift guide for Developers

Technology Team // in Technology

12/14/2012

Searching for the perfect gift for that special developer in your life? Look no further than Vodori’s Gift Guide for Developers.

 

1. The Original Beard Hat

Never code with a cold face again. The Beard Hat offers the functionality of a balaclava while simultaneously intimidating passersby.

2. The Art of Computer Programming

Written by a pioneer of algorithms and programming techniques, this series serves as a handy companion for programmers of all skill levels. It's considered a must-have for anyone who is serious about computer science, or for those who just want a smarter-looking bookshelf.

3. Makerbot Replicator 2 3D Printer

Why settle for printing in boring 2D? This bad boy brings your 3D ideas to life, no glasses required!

4. Focal Upright Desk

Put your money where your mouse is. This ergonomic desk allows you to code in comfort and style for days on end.

5. Normal Distribution Ornaments

Research shows that 15.8% of developers love these normal distribution ornaments, while 68.2% have little opinion, and another 15.8% aren’t so keen on them. Get it?

 

Our search for the perfect search engine: How elasticsearch made its way into Pepper

Jeremy Arnold // in Technology

08/29/2012

In the world of web development, we have seen some really interesting and complex search requirements from our clients. In our ongoing effort to enhance our product, we began researching flexible search engine solutions that we could easily incorporate into the core of our platform, Pepper. Before diving into the research, we determined the most important criteria that we wanted in a search engine.

We wanted:

  • Lightning-fast speed
  • Superior Java support
  • Scalability for various customer needs
  • Extremely responsive search
  • Painless configuration

After much experimentation and research, elasticsearch emerged as the winner that met all of our needs (and let’s face it, made our lives a little easier!).

Now that we’ve found our product, how do we get started?

Once we determined what we wanted, the fun part was defining how to get there. We needed to map our existing data models to elasticsearch documents for easy integration. Two of the big reasons we chose elasticsearch over other options are its support for schema-less search and its use of JSON over HTTP.
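For instance, indexing and fetching a document is just an HTTP call with a JSON body. A minimal sketch (the index and type names are illustrative):

PUT /blog/post/1
{
    "postDate": "2012-08-29",
    "message": "Our search for the perfect search engine"
}

GET /blog/post/1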

With schema-less search, we could let our code define the schema. While we were still obligated to put some definition around the objects that we wanted to search, we were able to define that structure from within our code base, rather than storing the mappings as part of the node configuration. Through this approach, our developers could use familiar methodology and define model objects and their basic schema, all within the same Java class. 

In order to map documents for use in elasticsearch, we developed a simple set of annotations. Using these annotations, we were able to define the elasticsearch object type for a field, as well as the index analyzer, search analyzer, and field searchability.

@ElasticSearchProperty(filterable = true, freeTextSearchable = false, type = ElasticSearchPropertyType.DATE)
public Date getPostDate() {
    return postDate;
}

public void setPostDate(Date postDate) {
    this.postDate = postDate;
}

@ElasticSearchProperty(freeTextSearchable = true, type = ElasticSearchPropertyType.STRING)
public String getMessage() {
    return message;
}

public void setMessage(String message) {
    this.message = message;
}

Then, we built a simple annotation processor to generate an elasticsearch mapping from the annotations:

 "postDate":
            {"index":"not_analyzed",
             "type":"date"},
"message":
            {"index":"not_analyzed",
             "type":"string"}

Configuration Success! On to the Indexing…

Once we enabled elasticsearch mappings from within our application, we began putting documents into our index. Our solution was to write an elasticsearch service that relies on the Jackson framework for converting our model objects to JSON. After obtaining an instance of an elasticsearch client, the code to index an object looks like this (note that the caller supplies the document id):

public <T> void indexObject(T object, String id, String indexName, String objectType) throws Exception {
    if (object == null) {
        return;
    }
    // Convert the model object to JSON via our Jackson-based utility.
    String json = JSONUtil.serializeToString(object);
    // Index the document under the given id, refreshing so it is immediately searchable.
    getClient().prepareIndex(indexName, objectType, id)
            .setSource(json)
            .setRefresh(true)
            .execute()
            .actionGet();
}
 

Vodori’s expansion on elasticsearch

When adding elasticsearch to our products, we saw firsthand the value it added to our sites. However, to fully utilize the features of elasticsearch, we needed to do a little development of our own. The potential for object mapping seemed like a great opportunity to build on elasticsearch's features and share our findings with other developers through an open source community. We hope that our project space, where we provide some of the cooler parts of the code we developed, can be a great collaboration area for developers and grow into a reliable solution for document-oriented mapping in elasticsearch.

If you’re interested in learning more, check out the samples project, or send a pull request!

 

Your code is being watched: observers and singleton patterns

Salvador Gaytan // in Technology

05/11/2012

In the web development world, it's not uncommon for new projects to incorporate frameworks that facilitate the development effort, such as Spring, EJB, and Zend. Learning the framework itself usually takes new developers a while, so when I recently worked on a system that made use of Zend, I was already expecting a learning curve. However, this project proved to have more curves than San Francisco's Lombard Street. While stepping through the code, it seemed that code was getting executed whenever certain variables or conditions changed. After exhaustive debugging and several face-palms, I came to realize that my code was being observed by an entity other than myself.

The observer pattern

The observer pattern is a programming structure that lets other objects know when a certain event occurs. This is especially useful when you have several objects depending on a single object. Instead of making calls from all over your code to check whether a certain variable or condition has changed (adding overhead, since often nothing has changed at all), the object being observed simply notifies every object observing it. This keeps all observers up to date automatically, without redundant checks at random (or set) intervals trying to guess when something has changed. This project used observers so that when model data was saved, all view data using that model would be updated (part of the project's MVC structure).
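Here's a minimal JavaScript sketch of the idea (the names are illustrative; Zend's actual implementation is more involved):

function Subject() {
    this.observers = [];
}

// Register a callback to be notified whenever the subject changes.
Subject.prototype.subscribe = function (fn) {
    this.observers.push(fn);
};

// Push the change to every observer, instead of each one polling for it.
Subject.prototype.notify = function (data) {
    for (var i = 0; i < this.observers.length; i++) {
        this.observers[i](data);
    }
};

var model = new Subject();
model.subscribe(function (data) {
    console.log('View updated with', data);
});
model.notify({ title: 'New title' }); // every observing view updates at once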

The singleton pattern

Another pattern I noticed while working on this project was the singleton pattern, which allows only a single instance of an object to exist. This is useful for data that is used in several places but does not change. In this project, singletons were used to encapsulate XML configuration in an object, so that the information could be read more easily. Singletons are also used extensively in other frameworks (like Spring) to minimize the number of objects in memory when they don't need to maintain state.
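A minimal JavaScript sketch of the idea (the configuration values are made up for illustration):

var Config = (function () {
    var instance = null;

    function createInstance() {
        // In the real project this parsed XML configuration; hard-coded here.
        return { environment: 'production', cacheTtlSeconds: 300 };
    }

    return {
        // Always returns the same object, constructed on first access.
        getInstance: function () {
            if (instance === null) {
                instance = createInstance();
            }
            return instance;
        }
    };
}());

var a = Config.getInstance();
var b = Config.getInstance();
console.log(a === b); // true: only one instance ever exists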

These are only two of the many programming design patterns out there. The Gang of Four (GoF) were the first authors to document these design patterns, in their book Design Patterns: Elements of Reusable Object-Oriented Software. Once I realized that these patterns were in play within this project, and would most likely appear in future projects, I began brushing up on them. Frameworks, it seems, make heavy use of these patterns because they abstract ideas and don't tie structure to specifics. Familiarizing myself with design patterns allowed me to more readily identify the structure and flow of the framework and make better use of it. Upon concluding my short detour through design-pattern land, I was able to continue my task efficiently, now able to understand how the code was connected.

 

Delete these 10 developer pet peeves from your work habits

Technology Team // in Technology

01/27/2012

Developers have a challenging enough job without bad habits getting in the way, but everyone winds up falling victim to a few. That's why it's valuable to take a step back every so often, examine your work patterns, and remedy the faults and blind spots you're bound to find. Here are some of the biggest pet peeves our programming team identified. 

1. Wearing rose-colored blinders

Any system is susceptible to a wide range of internal and external errors. Yet many developers tend to write code that's structured to handle only successful scenarios gracefully. They fail to test for those times when user or system errors occur. When unsuccessful cases inevitably arise, large code revisions may be required to accommodate them. 

2. Adhering dogmatically to patterns/conventions

Code conventions and styles are great guidelines for ensuring your code is readable and consistent across a codebase. But as with any other system, following them inflexibly can restrict your productivity. Falling too deeply into habits can lead developers to shoehorn certain patterns or conventions into situations where they don't belong. Remember, guidelines aren't ironclad rules—you need to apply your best judgment, too. 

3. Missing the big picture

No system is an island—a code change in one portion can impact related components in ways you don't expect. The best way to stay on top of these ripple effects is to write solid unit tests. 

Managing interdependencies can be accomplished through a technique called "separation of concerns," in which methods and classes are focused toward the concrete task at hand instead of trying to accomplish a wide range of business functions. Separating concerns allows for easier testing, and gives developers confidence that their changes won't trigger adverse side effects. 
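As a quick JavaScript sketch of separated concerns (the functions and data are invented for illustration):

var allUsers = [
    { firstName: 'Ada', lastName: 'Lovelace', active: true },
    { firstName: 'Alan', lastName: 'Turing', active: false }
];

// Each function has exactly one concern and can be tested in isolation.
function filterActive(users) {
    return users.filter(function (user) { return user.active; });
}

function formatNames(users) {
    return users.map(function (user) { return user.lastName + ', ' + user.firstName; });
}

// Rendering is kept separate from filtering and formatting.
// (Assumes an element with id "user-list" exists on the page.)
function render(names) {
    document.getElementById('user-list').innerHTML = names.join('<br/>');
}

render(formatNames(filterActive(allUsers)));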

4. Over-engineering

Eager developers often try to solve problems that don't exist, or build systems to handle situations far outside the scope. While the impulse to solve a core problem in an elegant way is a good one, it needs to be tempered with a realistic grasp of further development opportunities and current resources. Limitations on a project's scope and responsibilities exist for a reason. 

5. Under-documenting

Everyone here is guilty of this at some point, so it bears repeating: Including good comments in your code makes life easier all around. Clarity is the key here. For instance, a comment such as, "this will loop over person objects and call remove on each" is terribly unhelpful, because that's clear from looking at the code. Instead, write something additive like "we need to remove this person object because the user no longer belongs to X group when Y circumstance is met." 

This doesn't just apply to team projects, either—if you're working solo, You Five Months From Now will appreciate the effort Present You puts in. 

6. Developing extraneous features

Don't simply put your head down and charge into the programming. Establish the technical design first—without understanding the problem at hand, you'll never solve it. 

7. Over-optimizing

Everyone wants to write performant code that's efficient and simple. Often they'll spend much of their initial development pass trying to optimize this function to within an inch of its life, or devising neat tricks to cut out cycles. But with modern hardware—both processors and memory—excessive optimization isn't worth the effort.  

For instance, take a developer who's determined to move a code segment from taking 0.7 seconds to taking 0.5 seconds. If that effort eats up two days' worth of working hours (roughly 57,600 seconds), the code segment would need to be run roughly 300,000 times (57,600 / 0.2 = 288,000) to pay back the time investment. Furthermore, those extra 0.2 seconds are only worthwhile if 1) the segment is the biggest speed issue in your system, and 2) the difference is noticeable to the user, neither of which you'll know until deployment.

8. Resting on your laurels

They say the half-life of any developer's skill set is two years—which means every other year, half of what you know becomes obsolete. Devote a portion of your working hours to brushing up on the latest technologies, frameworks, approaches, etc. Read blog posts and trade publications, apply new concepts in your work, swap ideas with your peers, and embrace the fact that a developer's training is never complete. 

9. Getting a bit too clever

Sometimes there's a fine line between "creative problem-solving" and "coming up with a crazy hack that nobody else understands." If your clever solution can never be replicated, improved, or repaired, was it really all that clever? 

10. Reinventing the wheel

Sure, putting your own spin on simple processes can be fun. But it's awfully easy to get carried away without adding any real value. How many more approaches to "namespace patterns" in JavaScript does the world need? 

What do you think?

Which bad habits should developers try to minimize? Which good ones should they adopt? Let us know in the comments.

 

NTLMv2 authentication from Java: A developer's odyssey

Mike Sullivan // in Technology

01/18/2012

A project in the works here at Vodori involves making a set of SOAP web service calls to an external system. This is generally a pretty routine exercise: set up the correct client code from the WSDL, integrate some kind of delivery mechanism, and then make the calls. At least, it's routine when it isn't hitting one roadblock after another. 

Getting started: Axis to Axis2 to Spring-WS

For our purposes, we started out using Apache Axis since it was a pre-existing piece of our platform. We used the existing WSDL2JAVA process to generate the client and connectivity code. 

However, when we tried to actually connect to the system, we received a 401 response: our credentials were denied for failing NTLM authentication. Digging around, I determined that the most likely culprit was a version mismatch between the NTLM support in Axis and the newer NTLMv2 support required by the web service server.

We tried switching out to the newer Axis2 project, which boasted better NTLM support. Unfortunately, a change in the WSDL2JAVA process consolidated all of the code into a massive 11 MB, 226k-line Java source file. Including that file in our project ground IntelliJ to a halt, and it started throwing OutOfMemoryException errors whenever it would try to compile. 

At this point, we switched to JAXB2's xjc jar to recreate the client files and Spring's Spring-WS package to handle the transport. This provided a simpler code setup and much greater visibility into how the calls were being made. Since we're living in 2012 and not 2006, we probably should have started here in the first place. 

[Image: Programmer vs. programming, stage 1: The die is cast.]


More problems

With the simpler setup in place, we were able to isolate the source of our troubles: Apache's HTTPClient. This library is pretty rock-solid, has been around for a while, and everyone uses it. It is the default (i.e., only) option for Axis, Axis2, and Spring-WS's latest release.

One drawback: the library doesn't support the latest authentication schemes, and Apache has since replaced it with the HTTPComponents project and its own HTTPClient class. A major side effect of this change was a wholesale break from the old 3.X HTTPClient codebase and package structure, precluding its use as a drop-in replacement. 

Of course, we aren't the only people encountering such snags. The ubiquity of these tools within the Java and Spring ecosystems has generated plenty of discussion and advice to draw upon. Looking at the Spring-WS site and its JIRA, I came across issue SWS-563, a request for exactly what I needed.

The bad news is, it won't see the light of day until 2.1 (or 2.1M1 for now), and Spring hasn't yet published that version's release calendar. The good news is, attached to SWS-563 are the three files the Spring team updated to solve this issue. Downloading them and plugging them into my local codebase overrode the packaged versions on the classpath, enabling me (for now) to use the most recent HTTPClient code.

Believing I had solved the issue, I ran my unit tests once more to validate the connection. 

They failed.

[Image: Programmer vs. programming, stage 2: This time, it's personal.]


Third, fourth... eighth time's the charm

Being stubborn—and a geek—I decided to take a deeper look at the authentication messages. I logged out the actual NTLM handshake messages, ran them through a Base64Decoder, and analyzed the structure. 

I found that part of the NTLMv2 handshake, our Type-3 message, was failing due to incorrect formatting. I chalked this up to the HTTPClient still not working correctly, so I went back to the Apache site and noticed this message in their NTLM guide:

"HttpClient as of version 4.1 supports NTLMv1 and NTLMv2 authentication protocols out of the box using a custom authentication engine. However, there are still known compatibility issues with newer Microsoft products as the default NTLM engine implementation is still relatively new."

Luckily, they include some sample code that uses the Samba team's JCIFS implementation as their NTLM engine. Pulling down that code, creating the relevant classes, and wiring it in was easier than I expected. With all of these components in place—JAXB2 2.X generated files, Spring-WS transport code, the Spring 2.1M1 files, HTTPComponents, and JCIFS—the web services connected successfully.

 

Responsive web design: How one size can fit all

Nathan Kurtyka // in Technology

01/10/2012

How many times have you tried to browse a website on your smartphone or tablet, only to find that the layout is a mess?  Text too large or too small, graphics crowding out the copy, intrusive ads constantly sneaking under your thumb as you scroll. These all-too-common sites are relics of a time when a desktop or laptop computer was the primary (or only) means of engaging with the Internet. 

That sort of web design is untenable. Today's surfers are used to shifting from one device to another without skipping a beat. If the websites they visit aren't able to shift among devices just as easily, they'll leave those websites behind. 

Enter responsive web design.

Responsive web design is an approach to developing page layouts that ditches hard-coded values in favor of fluid, situational rules. These enable a website to automatically adapt its appearance to the size of the screen displaying it, preserving an attractive and readable user interface on desktops, smartphones, tablets, and wide screens alike. Jonathan Verrecchia offers a nice summary of responsive web design on his Initializr blog, complete with a demo (resize the browser to try it out).
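Under the hood, this typically combines fluid widths with CSS media queries. A minimal sketch (the breakpoint and class name are illustrative):

/* Fluid by default: the column scales with the viewport. */
.main-column {
    width: 70%;
    float: left;
}

/* On narrow screens, stack the column full-width instead. */
@media screen and (max-width: 600px) {
    .main-column {
        width: 100%;
        float: none;
    }
}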

[Image: Fluid page design adjusts the layout as well as page elements according to screen size.]

 

The method also makes websites more inherently adaptable to the endless procession of new mobile devices coming down the pike. A responsive site doesn't need to bother with recognizing the make and model of the device it's being displayed on, and then adjusting accordingly (or failing to adjust, if that device isn't recognized). 

Yet responsive design won't always be the right solution. In some cases, a mobile site needs to present content and functionality that's entirely different from its desktop counterpart, rather than simply finding a tidier way to display the same material. In those cases, it will make more sense to build each version as a separate site.  For instance, a user visiting an airline's website may have vastly different needs if she's at home on a desktop browser than if she's on a mobile browser while waiting in the terminal.   

What does all this mean for developers?

We need to reevaluate some old habits and adopt a few new ones. With each site we build, we should consider a responsive strategy right from the earliest planning stages. Does it make sense for our client and their users? Even if a mobile experience isn't necessary at the outset, might it become so in the future?

Vodori's own website is undergoing just such a transition. Our goal is to optimize our visitors' experiences, whether they're studying Phantom Limb at the office or brushing up on Design Trends while riding the bus.

What does this mean for clients?

Responsive design requires a bit more time, planning, and effort early on, since we'll be testing multiple UIs rather than one. Done properly, though, it will save time in the long run by allowing us to make incremental tweaks to a UI rather than rebuild a site from the ground up.

Website users are more flexible than ever, so website creators must be equally flexible. Developers at Vodori are constantly learning and adapting to keep pace. In part two of this post, we'll explore how our designers incorporate the responsive approach as well.

 
