Eppur Si Muove

 
 
Everyone has their favorite mediums and places for project documentation. A sampling of what I have seen:
- Word in Visual SourceSafe
- Wikis along with PowerPoint in CVS and SharePoint
- Wikis along with Word in SharePoint
Word turns out to be a reasonable medium for documentation as long as change tracking is turned on when edits are made. I have only seen this done at one place I have worked, though, so I don't think most groups get as much value as they could from keeping docs and specifications in Word. PowerPoint as a medium for documentation and specification, at least in my experience, is a losing proposition. I find the communication power of PowerPoint low. It's a medium for entertainment, not engineering.
Wikis, as medium and storage, are an OK solution for documentation. About five years ago there was a lot of buzz about wikis, and I thought they were the future of documentation systems. I have not seen them deliver on their promise. It's hard to get people to contribute to a wiki; usually one person buys in and the rest of the team contributes only when forced to. Since everyone owns the wiki, no one has any responsibility for it, with predictable results. Wikis also get disorganized over time and accumulate out-of-date cruft.
For storage, companies seem to love SharePoint. I don't see anything to recommend in SharePoint. I suppose people can write code and create SharePoint applications, but people should have better things to do with their time. Admins can set security and access restrictions on SharePoint, but why, why, why would any group want to place access restrictions on documentation and specifications? Yet groups do, all the time. And then, like wikis, it's hard to find things in SharePoint; harder, in fact, than finding things in a wiki.
What I like to use for documentation are text-based files in a version control system. I'm sure proposing a text-based format sounds primitive, but it has the advantage of being searchable with tools like grep, and it's also diffable. Text-based formats are transformable too: there are tools that can convert them to HTML for publishing to a web site, and tools that can convert text to LaTeX and from there to rather nice PDFs. For storage, the most important thing is to put all documents and specs in one place. And if that one place is a version control system, then one can pull documentation for different points in time or for different versions of a product. For specifications, being able to diff changes as they happen, to observe how a specification evolved over time, or to check who made changes is a huge time and fuck-up saver. As developers we all seem to have found value in putting code in version control systems. I don't make a strong distinction between documentation, specifications, and code. To me they are all product development artifacts that change over time. Dev artifacts that change over time are exactly what I want in source control.
Above I listed three documentation solutions that I have seen at various companies. Word in VSS worked the best, and what I choose for my own personal projects is text files (Emacs org mode) or LaTeX files kept in git. Infinitely simpler, less costly, and more efficient than anything I have used elsewhere.
 
I've been fussing a bit today to get YUI 3.x and JsTestDriver-1.3.1 working together. Here are just some notes on what worked for me.
First we need a config file:
server: http://localhost:4224
load:
  - lib/yui/yui/yui-debug.js
  - lib/yui/oop/oop-debug.js
  - lib/yui/event-custom/event-custom-debug.js
  - lib/yui/attribute/attribute-base-debug.js
  - lib/yui/pluginhost/pluginhost-debug.js
  - lib/yui/base/base-debug.js
  - lib/yui/dom/dom-base-debug.js
  - lib/yui/dom/selector-native-debug.js
  - lib/yui/dom/selector-css2-debug.js
  - lib/yui/event/event-base-debug.js
  - lib/yui/dom/dom-style-debug.js
  - lib/yui/dom/dom-style-ie-debug.js
  - lib/yui/dom/dom-screen-debug.js
  - lib/yui/node/node-debug.js
  - lib/yui/event/event-base-ie-debug.js
  - lib/yui/dump/dump-debug.js
  - lib/yui/event/event-delegate-debug.js
  - src/*.js
  - test/*.js

I found that I had to manually add the dependencies for my classes to the config file, as I couldn't rely on them being pulled in by the YUI framework / JsTestDriver combo. The dependencies are easy enough to get, as the YUI team has a tool for working out the dependencies; just a copy and paste and a find/replace.

Next I need a class to test. This is what I used:
YUI.add("myclass", function(Y) {

    function MyClass(config){
        MyClass.superclass.constructor.apply(this, arguments);
    }
   
    MyClass.NAME = "myClass";   
    MyClass.ATTRS = {
        id: {},
        gender : {},
        age : {}
    };
   
    Y.extend(MyClass, Y.Base, {
        bark: function() {
            return "woof";
        },
       
        initializer: function(config) {
            var id = this.get("id");
            var loc = document.getElementById(id);
            if (loc) {
                var wdgt = Y.Node.create('<div class="wdgt">widget stuff here</div>');
                Y.one(loc).appendChild(wdgt);
            }
        }
       
    });
   
    Y.namespace("test").MyClass = MyClass;
   
}, "3.3.0", {requires:["base", "node"]});

And lastly there is a test class which I had to play with a bit:
YUI().use("myclass", function(Y){

    TestCase("MyClassTest", {
   
        testMyClassInitialization: function(){
            var my = new Y.test.MyClass({});
            assertNotNull(my);
            assertEquals("woof", my.bark());
        },
       
        testMyClassAttributesInitialization: function(){
            var my = new Y.test.MyClass({
                id: "1f33a2b987",
                gender: "Male",
                age: 23
            });
            assertEquals("Male", my.get("gender"));
            assertEquals(23, my.get("age"));
        },
       
        "test_the widget should add to the page": function(){
       
            /*:DOC += <div id="1f33a2b987" class="fine">you be the man</div> */
            var div = document.getElementById("1f33a2b987");
            var parent = Y.one(div);
            assertNotNull(parent);
           
            var test = Y.one('.fine');  // Y.one can find the div by class, but selecting with #1f33a2b987 doesn't work
           
            var my = new Y.test.MyClass({
                id: "1f33a2b987",
                gender: "Male",
                age: 23
            });
           
            div = document.getElementById("1f33a2b987");
            parent = Y.one(div);
            var cs = parent.get('children');
            assertNotNull(cs);
            assertEquals(1, cs.size());
        }
       
    });
   
});
Things to be aware of here: be sure to add the 'myclass' dependency. I also had no success getting Y.one("#1f33a2b987") to find the node that I created in the third test case, although Y.one had no trouble finding it by class. (I have run into this can't-find-by-id problem before.) From here everything worked fine, although I did run the tests with the --reset option. There was one time when I didn't use --reset and tests were missed, but that may very well have been unrelated to the --reset flag.
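A likely explanation for the can't-find-by-id problem: CSS identifiers cannot begin with a digit, so "#1f33a2b987" is not a valid id selector, while a class selector like ".fine" is. A small helper (my own, not part of YUI) that falls back to an attribute selector for such ids:

```javascript
// "#1f33a2b987" is rejected by CSS selector engines because identifiers
// cannot start with a digit. An attribute selector sidesteps the rule.
function idSelector(id) {
    // leading digit -> use [id="..."], otherwise a normal #id selector
    return /^[0-9]/.test(id) ? '[id="' + id + '"]' : "#" + id;
}
```

With this, Y.one(idSelector("1f33a2b987")) should behave like document.getElementById for digit-leading ids.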

do your best, Marco
 
This weekend I ran a little experiment between using Clojure and Ruby. The results were a bit surprising.
I started working on some problems in SICP; in this case it was electronic circuit simulation. I was doing the problems and following along with the text using Clojure, and I spent most of Saturday fussing with code and moving through the material. On Sunday I decided I would switch to Ruby. It was a lot easier to get readable code up and running, and I moved further through the examples, ones I hadn't gotten to the day before, using Ruby.
Now, I haven't done a lot of Clojure programming in the past few months, but I spent most of last year doing Clojure in my spare time, and I even have one web project under my belt in Clojure. I have never written anything larger than a one-file script in Ruby, I have written very little Ruby ever, and I have only been writing Ruby since mid-January.
If my next experiment between Ruby and Clojure comes out with Ruby visibly more productive than Clojure, I will still use Clojure, but I will have a hard time justifying its use outside of "for my own pleasure" programming.

 
The past month I have been in San Jose as a communication facilitator between teams, and since the real world motivates a lot of my posts, I'm going to review my thoughts on the overseas experience.
-- We are creating a formal process for QA testing in the company, mostly UI testing or whole-system integration testing. The expectation of the formalisation process is that it will improve productivity and time to market. Unfortunately I don't really think we will achieve those goals, for the following reasons:
1. Return on investment: I have nothing against full-stack, UI types of tests. They catch bugs and they bring confidence. But at the same time there is a high maintenance cost to testing software through the UI, simply because UIs tend to have high rates of change. UI tests can also be difficult to write so that they have high reliability. Often there is a much higher return on investment in focusing on unit tests with mocks to create the internal states of interest in the software. So I think the company is going to find that the more tests they write, the more time they will spend trying to keep those tests stable, eating more and more resources with no long-term gain in big-picture software quality.
2. It's already too late: UI-type tests hit the software at the end of the production cycle. That's not where we want to be figuring out whether software is OK or not. If the goal is quality software, then the emphasis has to be on teaching developers to write software correctly at the point of construction. That can be done, and it doesn't take any fancy resources. If software is not correct when it is written, quality can't be tested into it.
3. This formalization process is starting to get very custom and rigid. Everyone has to use the same tools. Everyone has to use our special home-grown repository. Everyone has to package up their software in a special in-house package format. Sorry, the reality is that none of us wants to, or has the time to, learn a special set of technologies just to test our software. JUnit -- fine -- it's simple and so it's successful. If testing is going to be more complicated than using JUnit, forget it; JUnit is already too complicated for some developers. Mandating certain tools? Well, there is a trade-off between uniformity (and its maintenance benefits) and the rigidity of demanding that everyone fit into the same mold. Productivity is improved by finding better methods and adopting those methods. My heart is with keeping the door to new, better methods open.

-- The testing system we are building is an engineering support system, but I don't believe people have thought clearly about the process.
1. Continuous integration is not just a popular phrase: We are putting together a system to support continuous integration testing, and it's strange to me how this system doesn't match up to continuous integration systems I have worked on in the past. In this company the whole focus of the testing is that there will be some deployable module; it will be tested in environment A-1, and once it passes it will go on to be tested in environment A-2, and so on through environment A-N, whereupon the module can be deployed to production (after manual testing ;)). Basically I feel our continuous integration process is "just what we do now, only more automated so it's faster". Continuous integration actually means that, unless there is good reason otherwise, software is tested in environments A-1...A-N in parallel. It's tested immediately after a commit, not a couple of hours later after it has passed test suites 1...N. We want to notify people of possible problems as close as possible to the introduction of those problems, not later, when additional confounding changes or just the mists of time obscure the problem.
2. Prefer parallel to stepwise as a general process design principle: Efficiency is gained by creating processes that are continuous and parallel, not batch and stepwise. Unfortunately the human mind seems to have batchwise and stepwise conceptual tendencies, so people start thinking in batches and steps and never get past that.

-- This month the development team spent a lot of time developing software that was later discarded as the wrong design.
1. Agile development is not an excuse to be stupid: There are problems that have well-known solutions, problems whose solution is discoverable, and problems whose solution is best discovered through an evolutionary attack. People who are enamoured of being agile want to evolve a solution to all their problems. This past month we were dealing with a well-known problem with a few discoverable elements, a distributed team, and a distributed system. The architects in charge are trying to evolve this system. There are large, knowable implementation grey areas where people are saying we'll think out the details later. There are no specification documents, especially for the distributed communication part, which really annoys me. One day of thinking could have saved us one month of coding.

-- And lastly, we are not testing our systems as we build them, and we are not using our system to test. Sorry, but people who are building a process or a system should be using that process or system as they build it. If the builders are not using what they are building, then the builders are not serious about creating a product. They are just playing with company money while they congratulate themselves on their good ideas.

Do your best, Marco


 
These days it seems hard to find enough time to read a book from cover to cover; more often I read, skim, read, skim until I'm finished. Part of the reason is that in a developer's job, every three months or so I have to learn a whole new set of technologies for the next project. That said, some books were good enough to read all the way through; they held their own against the need for skimming. Of those, here are some reviews:
Perl Testing: A Developer's Notebook (chromatic & Langworth)
Turned out to be a very good and up-to-date book on Perl testing. This book covers everything under the sun about testing with Perl. Beyond that, all the techniques are generally applicable to testing in any language, and from that point of view this is probably the most complete book on testing that I have ever read. Top marks for this book, and I strongly recommend it for people who are coding in Perl.
Getting Started With Grails (Davis & Rudolph)
I really like Groovy, and I picked this up because I was looking into Google App Engine and wanted to see what I could do with GAE and Groovy. Grails turned out to be a pretty good framework. I worked through this book in a weekend, and my reaction was "Wow, I'm getting a lot of functionality and I'm hardly writing any code." This book is one well-written tutorial on Grails. It's also free as a download from InfoQ. A very good resource.
The Art of Game Design: A Book of Lenses (Schell)
This book came across to me as a cross between New Age philosophy and software development. I'm looking at the table of contents right now, and here are some of the chapter sections: "Dissect Your Feelings", "All That's Real is What You Feel". Well, I don't like my tech books to be too touchy-feely, but I was able to put that aside and get something out of pretty much every chapter in this book. This is a good book if you're looking for a tech book that you don't have to read with a computer handy to write code along with. Try it on your next plane trip.
Programming Clojure (Halloway)
The Pragmatic Programmers books are always good quality, and this was no exception. This is a good, solid introduction to Clojure. I would recommend it for anyone starting out with the language.
The Structure and Interpretation of Computer Programs (Abelson, Sussman & Sussman)
I am working through this, implementing everything in Clojure instead of Scheme as I go. I haven't finished yet; I'm near the end of chapter 4 of 5. This is a book that one has to spend some time with, and after spending the time one will have to put a bit more time aside to digest. For people who are interested in languages, and Lisps in general, this is a good book to start a journey towards a deeper understanding of CS fundamentals.
Object Oriented Software Construction (Meyer)
I have this on my Kindle, and I have pulled it out and read a chapter here and there at different points this year. I have probably read each chapter in this book twice since I first found it about three years ago. This book is a classic. I can pick any chapter and always discover something new to think about, or better ideas to add to my toolbox. If you are writing software you owe it to yourself to read this book.
And lastly, a few non-tech books
Solo Guitar Playing (Noad)
Excellent, excellent, excellent. For those interested in classical guitar, I can't say enough good things about this book.
Inner Work (Johnson)
I'm very partial to books with a Jungian bent. This is a good, practical discussion of active imagination. I had never before read a book where active imagination was well explained; this one is very thorough. Highly recommended.
Making Great Decisions in Business and Life (Henderson)
I read this through all in one sitting, and occasionally I find myself going back to reread particular chapters. People who take an interest in economics will have an extra appreciation for this book.

So, there you are, some reading possibilities depending on your interests.... For 2011, let's continue to expand our horizons :)

Marco


Limits

12/19/2010

 
I was reading the above article this morning.  The undercurrent of the article is that certain evolved systems grow in size while at the same time growing in efficiency.  Examples are animals and cities.  On the other hand, some systems grow in size while losing efficiency; not surprisingly, one example is mature companies.
My interpretation of the results was that strong evolutionary pressure, coupled with continuous experiments in creative destruction, are the prerequisites for growing a system that won't destroy itself in the long run.
There's also another type of system that tends to kill itself off as it grows: software systems. To counteract this tendency, software engineers also introduce evolutionary pressure and creative destruction.
Unit tests are the technology that enables creative destruction, or refactoring in software talk. A lot of talk promotes unit tests as a way to cut down on defects. That's a lower-level use of the technology. A far more important use is to allow an engineer to take existing code and rework the design to support change. Without unit tests, our only option for enhancing existing code is to kludge an addition into the safest-looking place and hope we have understood all the side effects.
As for evolutionary pressures, software engineers often introduce those into systems they are working on in order to drive the creative destruction and force the systems they build to have superlinear (see the article above) growth characteristics.
Continuous deployment is a pressure that forces a system to evolve along the delivery axis.  I have read that Paul Graham keeps track of lines of code as he develops his Arc language.  He wants to create a succinct and expressive language and that's one metric that he uses to force his language to grow in fitness.  The developers of project Oberon wanted a compiler that was both powerful and simple so they forced themselves to add functionality to their compiler while maintaining limits for self compilation speed and self compilation object size.
The Haskell language is also an evolutionary experiment with corresponding pressures. Its designers wanted to see what software development innovations they could discover in an environment that would only allow functional, lazy programming techniques. I'm sure that decision has forced many discoveries, but one that I'm aware of is Haskell's use of monads.
If we think about our own software efforts, we can also find places to apply evolutionary pressure to our systems. Making sure code passes automated checks like Checkstyle's metrics can help one write code within complexity limits. Lines-of-code limits, or a tool like Simian, would be a way to force systems towards simplicity and expressive power. Code coverage targets would encourage adaptability and robustness. Performance testing with a tool like JMeter forces software to maintain computing efficiency.
Tomorrow is a new week: what forces can you apply to your systems to push them onto a long-term fitness curve?

Marco



 
I've been coding in Clojure for the past year on my personal projects. I thought I would review the language and my experiences with it, now that I have several small and medium-sized projects under my belt.
I'm pretty happy with the language itself. I have a REPL for interactive programming. I have a dynamic language, but I can add optional typing in. It seems strange to me that no one ever mentions Clojure's optional typing, as it would seem like a big selling point in a dynamic language. I like a lot that pre-conditions and post-conditions are part of the language. With the last release, protocols were added, so what was originally just a functional language has now become something of an object/functional language. I like a language that has a philosophy but also offers multiple paradigms to developers. And lastly there is good concurrency support, and there are macros. I have only used those last two features moderately; both are easy enough to use, with macros, at least for me, requiring a bit more effort to work with. IMO the above list is a fantastic set of features.
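For readers who haven't seen Clojure's pre- and post-conditions, the idea is contract-style assertions wrapped around a function body. A rough sketch of the same idea in JavaScript (the helper and names here are mine, purely illustrative, not any library's API):

```javascript
// Contract-style wrapper: check a precondition on the arguments and a
// postcondition on the result, throwing if either fails.
function withContract(pre, post, fn) {
    return function () {
        if (!pre.apply(null, arguments)) { throw new Error("precondition failed"); }
        var result = fn.apply(null, arguments);
        if (!post(result)) { throw new Error("postcondition failed"); }
        return result;
    };
}

// Analogous to Clojure's {:pre [(>= x 0)] :post [(>= % 0)]} on a sqrt fn
var sqrt = withContract(
    function (x) { return x >= 0; },   // pre: input must be non-negative
    function (r) { return r >= 0; },   // post: result must be non-negative
    Math.sqrt
);
```

In Clojure the checks live in the function's own metadata map, which is what makes them feel like part of the language rather than boilerplate.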
Another big plus of working with Clojure is that I'm running code on a mature and fast virtual machine. And I also have the entire ecosystem of Java code out there that I can leverage in my work. Calling Java from Clojure is well explained in the documentation for the language, and once the syntax is explained it's pretty intuitive. Best of all, everything just works. The only trouble I ever had with Java interop was a recent problem where I wanted to reference an inner class. It wasn't explained in the documentation how to do that (it turns out one has to add a '$' between the outer and inner class names).
All of the above are good things about Clojure, but not everything is roses and puppy dogs. I have never found a way to debug Clojure code that fully satisfies me. On the main clojure.org site RH tells us he uses JSwat. I have never been able to use JSwat and set breakpoints in Clojure code, though. There is debug-repl, which I have played with a bit, and that works... But in the end, what I find I end up using is the clojure.contrib.trace module to introspect my code. Now, I know there are IDEs for Clojure, and perhaps their debugging functionality works well. I use just Emacs and swank, so maybe others have a better way. As a final note, among those of us who write unit tests there is the common perception that once one has tests, one never has to use a debugger. I don't buy it. I just finished a first pass on an interpreter in Clojure last week. It has lots of tests, but I still ran into two situations where it really would have saved me a lot of time to walk through the recursive calls in a nice debugger. All tools have their uses.
My biggest project with Clojure has been a web application. I decided to use Clojure / Compojure server side, YUI as a javascript library, and CouchDB for storage. This was a learning experience on a lot of fronts, and if I had to do it again I would probably skip Compojure, but it's a hard call. I would at least seriously consider whether I could start with one of the existing Java frameworks and build in Clojure from there. Here are a few things that I just couldn't warm up to about Compojure. It uses an old version of Jetty, so I can't use things I might want to, like web sockets. The HTTP request/session information is abstracted away into a map. That works fine, but in some cases I felt I didn't have the information I wanted, and I didn't have the control I needed to do what I wanted to do. The documentation is just OK; I had to read the code to figure stuff out, especially when starting, and there are a lot of macros in there, which means that, for people starting out reading it, the code is not going to be all that intuitive. Lastly, even starting with Compojure, I found that I had to build a lot of the infrastructure for a web application on my own. The Compojure / Ring / Hiccup modules all together give one a set of routing procedures for URLs and a way to write HTML in Clojure. The rest one does on one's own. This is not Ruby on Rails or Django.
One more note on web application development with Clojure. My original hope in using Clojure for web development was that I could make changes on the server side and see those changes immediately on my web pages. Naturally one can reload modules in Clojure, so this quickly becomes a natural part of development. In reality, as my web application grew larger and more complex, though I was still able to reload modules, I lost the ability to see code changes immediately. In order to see changes I had to start rebooting and restarting. These days I have less and less patience for that sort of thing, especially when I'm working in Emacs. Web development is already such a polyglot of languages and kludges and Ajax and different browser behaviour; everything about web development seems to conspire towards wasting time as it is.
Well, that's my experience, good and bad. I'm glad that I have spent the time to learn Clojure. Like everything else, it's no magic bullet.

Marco

 
I'm currently working on a personal coding project which is developing into a project management tool / personal work management tool. One of the difficulties I am having with this tool is deciding what philosophy of work it will support and what type of organisation it would be appropriate for. In business speak this would be called the vision. The coincident problem I have, beyond the vision, is realising functionality in the software that supports that vision. Together these issues come down to the problem of design, and combined with the technical challenges, the project has turned out to be a good learning experience.
The ideas are still forming as I go along, but for now here are some directions that I am, or am not, heading in:
Traditional project management tools are really technology to ensure compliance and ease supervision. I don't want to go down that road. This is a tool for self-responsible teams in self-responsible organisations. The tool has to support autonomy while at the same time promoting enough individual discipline to avoid chaos. I'm not sure exactly how to achieve this goal, but my general direction is to have the tool enable project-level decisions to be made as a combination of the opinions of the concerned individuals on the team. At a higher level, though, I want team results to be visible organisation-wide through visuals that allow everyone to see what is working and what is not. For these types of visuals I will probably experiment with some multivariate statistical techniques and see how promising various displays are for providing insight.
I also want the software to have a social aspect that promotes the cross-fertilization of ideas. This is important because the primary way that individuals and organisations learn is by exposure to better ideas from the people around them and the better practices those people use. My thoughts on how to do that are to allow differences between teams to be visible, and to provide ways for teams to publish ideas and practices they have found helpful. That alone is probably not enough, so this has to be fleshed out more.
As a developer, one of the things that has always annoyed me is that I have to waste time filling out forms and dates and times for management tools that provide no benefit to me. For this tool I want the primary benefit to go to the developer. I want it to provide feedback to the individual on his tasks: how his development speed compares to his peers', how many bugs, how successful the deployment of his work was. I want the tool to provide feedback, in private, with a focus on providing data for improvement if the developer is inclined in that direction. This will probably require some integration with bug tracking and source control systems. It's a lot of work, and I have been thinking of starting with git interoperability, because the code is on github and so I can test ideas out easily there.
Lastly, I'm developing the software for teams that are using some form of continuous deployment. Continuous deployment forces development to eliminate transaction costs across the organisation; these would be the QA organisation, build organisation, and deploy organisation. The analysis end of these types of organisations is then subsumed into A/B testing and direct customer-developer feedback. The only coordination costs that remain are between the developers themselves. The downside of all this is that developers are then tasked with doing customer service, getting requirements, and maintaining the build and deploy systems, and we lose gains from specialization. So the question is: can software eliminate the coordination costs associated with more people and more organisations while mitigating the loss of specialisation? Maybe, maybe not. Again, the ideas are still being fleshed out....

 
I gave my boss a long talk on build and integration systems today. He wanted to know what we have found to be best practices and what difficulties we encounter with our build and integration systems. The reason the info was requested was so my boss could give his boss a write-up about how good our build systems are.
Yet I didn't really feel that my boss got any of the technical ideas I was trying to get across, but since they are important and I want someone to understand, here are some incomplete experiences with maintaining build and integration systems:
Developer environments -
It's very important that people can develop on self-contained environments. At the very least this means they have to have a local database that they can modify as they add functionality, or that will contain special data sets they need. A local db allows developers to do their work without impacting everyone on the team and without the rest of the team impacting them. When a developer is ready to publish his work, the common dev database can be updated and he can commit his code as well.
<experience> = In my experience a lot of developers have no interest in maintaining their own local database, or in using the command line tools that most dbs come with. They want to shovel code out the door, and shoveling code is just so much easier when someone else manages their database. They will try to do work in, and even make changes to, common-use databases.
It's also important that external services other than the database can be mocked out, and these days those external services are more and more numerous. Being able to selectively detach/attach external dependencies allows developers to continue work if some service goes down, allows them to set up situations that are hard to duplicate, and allows them to work offline, on the road, i.e. be more productive.
<experience> = Having configurable dependencies is extra work and will probably involve the build system.  The build system is not seen as interesting.  Extra work does not increase the rate of shoveling.  Developers will avoid these types of tasks and hope no one notices.
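A minimal sketch of what detachable dependencies can look like (all the names here -- GeoService, StubGeoService, ServiceLocator -- are hypothetical, not from any particular project): the external service is reached only through an interface, and a flag that would normally come from a build profile or system property picks the real client or an offline stub.

```java
// Sketch of swappable external dependencies (names are hypothetical).
// The service is only accessed through an interface; a build/config flag
// selects the real client or a deterministic offline stub.
import java.util.HashMap;
import java.util.Map;

interface GeoService {
    String countryFor(String ip);
}

class StubGeoService implements GeoService {
    private final Map<String, String> canned = new HashMap<>();

    StubGeoService() {
        canned.put("127.0.0.1", "local");
    }

    public String countryFor(String ip) {
        // deterministic answer so developers can work with the service down
        return canned.getOrDefault(ip, "US");
    }
}

public class ServiceLocator {
    // In a real build this flag would come from a system property or profile.
    public static GeoService geoService(boolean offline) {
        if (offline) return new StubGeoService();
        throw new UnsupportedOperationException("real client not shown");
    }

    public static void main(String[] args) {
        GeoService geo = geoService(true);
        System.out.println(geo.countryFor("127.0.0.1"));
    }
}
```

The point is that none of the calling code changes when the dependency is detached; only the wiring does, which is exactly why this work tends to land in the build system.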
Database -
In general one wants a structured method of updating databases, and the most practical method I have seen is a migration method.  This works for relational dbs and it also works for no-sql databases.  There is a lot of talk about how no-sql databases make migrations unnecessary.  I have not found that in my own work with no-sql dbs (CouchDB).  When I develop I want my data well defined.  I feel it leads to more robust code.
<experience> = Developers still don't want anything to do with sql and will generally try to find someone else to write scripts for them.  Most importantly though, developers will go along with a migration system and consider it a good idea.
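The core of any migration scheme fits in a few lines; this toy sketch (migration names and the Migrator class are made up for illustration) shows the idea: migrations are an ordered list, the database records the version it is at, and only the steps above that version are applied.

```java
// Toy migration runner (all names hypothetical). In practice each step
// would execute SQL or a no-sql document transform; here each step is
// just a label so the sequencing logic stands on its own.
import java.util.ArrayList;
import java.util.List;

public class Migrator {
    static final List<String> MIGRATIONS = List.of(
            "001_create_users", "002_add_email", "003_index_email");

    // Returns the steps needed to bring a db at currentVersion up to date.
    // Versions are 1-based; 0 means an empty database.
    public static List<String> pending(int currentVersion) {
        return new ArrayList<>(MIGRATIONS.subList(currentVersion, MIGRATIONS.size()));
    }

    public static void main(String[] args) {
        // A db that already ran 001 only needs 002 and 003.
        System.out.println(pending(1));
    }
}
```

Because the applied version is stored in the database itself, every environment -- each developer's local db, the integration db, production -- can be brought to the same well-defined state by the same code.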
Unit Testing -
The real story is that testing allows one to embed knowledge in the code, allows everyone to change code more confidently when requirements change, encourages developers to think about chunks of code in terms of their specification, and encourages better design.  Unit tests don't 'catch bugs'.  Unit tests do not necessarily lead to 'fewer bugs' and 'more reliable code'.
<experience> = These days all managers say their application has a suite of unit tests and their developers are writing unit tests.  This is mostly bullshit posturing.  Most developers still don't write tests, and a lot of developers will throw in a few tests for their managers during the qa, post-qa / pre-deploy stage.  Where I work, at a worldwide, top ten traffic volume web site, the unit tests have devolved into essentially assertTrue(true).  People have pressure to get stuff out, and it takes only a few people who don't want to maintain other people's test cases for it all to fall apart.  But we have tons of test cases -- and they always pass.
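Here is what "embedding knowledge in the code" means concretely -- a sketch using plain assertions so it is self-contained (a real suite would use JUnit), with a hypothetical business rule: order totals round half-up to cents.

```java
// Sketch of a unit test as embedded specification. The rule being pinned
// down is hypothetical: order totals round half-up to whole cents. The
// test's name states the rule, so the knowledge lives in the code base
// and survives when the original author moves on.
import java.math.BigDecimal;
import java.math.RoundingMode;

public class OrderTotalSpec {
    static BigDecimal roundTotal(BigDecimal raw) {
        return raw.setScale(2, RoundingMode.HALF_UP);
    }

    static void totalsRoundHalfUpToCents() {
        check(roundTotal(new BigDecimal("10.005")), "10.01");
        check(roundTotal(new BigDecimal("10.004")), "10.00");
    }

    static void check(BigDecimal actual, String expected) {
        if (!actual.toPlainString().equals(expected))
            throw new AssertionError(actual + " != " + expected);
    }

    public static void main(String[] args) {
        totalsRoundHalfUpToCents();
        System.out.println("spec holds");
    }
}
```

Note that this test asserts a decision, not an implementation detail -- which is why it helps when requirements change and why assertTrue(true) helps with nothing.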
Integration Testing -
One wants to test database code against a real database.  Mocking doesn't cut it here, and small changes to sql can often have unexpected consequences.
<experience> = It's a pain in the ass to use tools like dbunit.  Spring is the way to go here.  One wants a special database just for the integration tests, always kept in a well-defined state.  Developers will write integration tests, but it may be difficult to get them to understand the difference between an integration test and a unit test (they may not understand what a mock object is).  It's important to have a separate project just for integration tests and to tell the developers that that is where the db tests go.  Developers are more willing to write integration tests than unit tests.  Make writing integration tests easy and a team will tend to see payback from efforts in this area.
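The Spring machinery (context loading, per-test transaction rollback) isn't shown here; this stripped-down sketch (hypothetical names, an in-memory list standing in for a real table) illustrates the one discipline that matters regardless of tooling: every integration test starts from the same well-defined database state.

```java
// Sketch of the integration-test discipline (names hypothetical).
// A list stands in for a real users table; with Spring, resetToKnownState
// would be a rolled-back transaction or a reloaded fixture instead.
import java.util.ArrayList;
import java.util.List;

public class UserDaoIT {
    static List<String> table; // stands in for a real users table

    static void resetToKnownState() {
        table = new ArrayList<>(List.of("alice", "bob"));
    }

    // Returns rows affected, JDBC-style.
    static int deleteUser(String name) {
        return table.remove(name) ? 1 : 0;
    }

    static void deleteRemovesExactlyOneRow() {
        resetToKnownState(); // every test begins from the same state
        if (deleteUser("alice") != 1 || table.size() != 1)
            throw new AssertionError("delete should affect exactly one row");
    }

    public static void main(String[] args) {
        deleteRemovesExactlyOneRow();
        System.out.println("integration-style test passed");
    }
}
```

When the reset step is cheap and automatic, developers actually run these tests; when it involves hand-restoring a shared database, they don't.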
Automated testing -
I have used Selenium and it works fine.  There are other tools here that work on pattern recognition, but those are very new -- I'm interested though.
<experience> = Keep these tests simple.  These types of tests tend to be very fragile because they test the ui, which tends to change a lot on most projects.  Still, simple functionality tends to break more often than one would expect, so even simple tests here can be effective.  Keep the suite simple and of limited size.
Continuous Integration Servers -
A CIS gives one confidence that one is always ready to deploy.  Do a daily build every night, have the CIS deploy it to a dev server, and run smoke/automated tests.  Have your customers keep in touch with the latest work by checking the dev server.  In addition to a daily build, build all products and run all integration tests continuously.
<experience> = I use Hudson.  I'm used to it.  I like it.  All the CIS options pretty much do the same thing.  Management needs to buy into continuous integration, though, so that when the build fails it is everyone's first priority to fix it.  I have yet to meet a manager who made the build a first priority.
Build Systems -
Run your unit tests whenever you build your product.
<experience> = I use Maven right now.  It does what I need it to do.
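With Maven this comes mostly for free: the surefire plugin is bound to the test phase by default, so `mvn package` will refuse to produce an artifact if a test fails.  The only configuration usually worth adding is pinning the plugin version (the version below is illustrative):

```xml
<!-- Maven runs unit tests on every build by default via the surefire
     plugin (bound to the test phase). Pinning the plugin version keeps
     builds reproducible across machines. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.12</version>
    </plugin>
  </plugins>
</build>
```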
Static analysis -
Checkstyle, FindBugs, Simian, JLint
<experience> = The above are a few that I have used and integrated into Ant / Maven builds.  I currently use Simian, which I find the most worthwhile.  I like FindBugs as well.  I have used Checkstyle, but I don't integrate it into builds (I may use it in my own dev environment).  Everyone has their own style and I'm not going to impose my ideas on someone else.  I do like the statistics / metrics that Checkstyle offers (for instance the complexity measures) and I have added those to build systems, but my experience is that when one of the metrics goes high and is flagged in the build there is always strong resistance to changing the code in question.  It's fine... it works and there is no good reason... it can't be implemented any other way... I don't know how to...  There are always reasons and it's not worth the trouble.  On the other hand, FindBugs is seen as helpful by developers and tells them about problems they were not aware of.  Simian is a hard call.  For some people it's a good reminder; for other people it just makes them better cheaters.
All of the above practices are valuable, but not always in the way that we read about.  We hear a lot about these practices and how they'll help us build better products, but there is a gulf between the rosy visions we might hope for and the real world we have to develop in.  Still, good build and integration practices are one essential among the many practices that, combined, help teams put out better code.
do your best, Marco
 
I have run into a few articles on developer productivity this week.  The articles have got me thinking enough to add a few comments of my own.
First, one of the articles that I ran into was an 'ask' dialogue on Hacker News.  There were a lot of ideas proposed, but what I found particularly interesting was that as the dialogue progressed it became mostly about debugging.  For some people debuggers were useless; all they needed was 'println'.  Or maybe logs were enough.  Or for some people, once they had their test suite in place that was enough; if anything went wrong they could just look at their test suite and that took them to the root cause.  And then of course some people found debuggers pretty valuable.  Some statements that I thought were pretty valid:
Debuggers are useful for figuring out unfamiliar code when just reading the code doesn't give one any insight.
Debuggers are also useful when something has gone wrong, all the tests look fine, all the code looks fine, and one is looking for clues for where to start.
But overall, given that the topic was programmer productivity and the dialogue converged almost exclusively on debugging, my take was that however one likes to do it, program introspection is a major enabling technology.  We either automate the introspection (logs, println) or we do it manually (debugger).  Code tends to behave differently than we intended when we wrote it, and we don't end up very productive if we can't introspect our running code.  Use all the tools you have at your disposal and use them well.
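A small, contrived example of the automated flavor of introspection (the code and names are made up for illustration): the same trace one would print to stand in for a debugger session, captured into a list so it can also be inspected programmatically.

```java
// Hypothetical example of automated introspection: a binary search that
// records its intermediate state, i.e. the "println" trace that replaces
// stepping through the loop in a debugger.
import java.util.ArrayList;
import java.util.List;

public class TracedSearch {
    static final List<String> trace = new ArrayList<>();

    static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1; // >>> avoids overflow on huge arrays
            trace.add("lo=" + lo + " hi=" + hi + " mid=" + mid);
            if (a[mid] < key) lo = mid + 1;
            else if (a[mid] > key) hi = mid - 1;
            else return mid;
        }
        return -1;
    }

    public static void main(String[] args) {
        int idx = binarySearch(new int[]{1, 3, 5, 7, 9}, 7);
        System.out.println("found at " + idx);
        trace.forEach(System.out::println); // the introspection output
    }
}
```

When the search misbehaves on some input, the trace shows exactly which comparison went the wrong way -- no debugger session required, and the trace can be diffed between runs.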
The second point about productivity I want to make is that there is no single productivity secret.  In the 1980's the Japanese car producers were making major inroads in the US market, and people would ask: how can they make higher quality cars at a lower cost than US manufacturers?  What's their secret?  The secret was multiple upon multiple small changes to how they produced cars, which in the end added up to a big difference.  And I shouldn't say added up -- to put it plainly, multiple small improvements in productivity compound multiplicatively, and they start to make a noticeable difference a lot faster than one would believe based on our natural intuitions.  If one wants to be more productive, expect to improve in a lot of small ways and then have those new habits compound, at some point, into a visible difference.
To be more productive one also has to become more productive where, as developers, we spend the most time.  Most of our time is spent trying to figure out how to implement a feature, trying to figure out what existing code does, and trying to figure out why code doesn't behave as expected.  We spend very little of our time actually writing code; estimates are on the order of 10%.  So one isn't going to be more productive by coding faster.  One should concentrate one's efforts where the fat is -- thinking and understanding.  So write code that has clear architectural intent (macro level) and is understandable at the micro level as well.  Write code that is as simple as possible.  Write code with clear contracts that fails fast.  And solve every problem once (see previous post on writing reliable code).
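"Clear contracts that fail fast" is cheap to do in practice; a sketch (the Transfer class and its rules are hypothetical): preconditions are checked at the boundary, so a bad call blows up at its source instead of corrupting state three modules away.

```java
// Hypothetical fail-fast example: the method's contract is checked up
// front, with messages that say which rule was violated and by what value.
public class Transfer {
    static long transfer(long balance, long amount) {
        if (amount <= 0)
            throw new IllegalArgumentException("amount must be positive: " + amount);
        if (amount > balance)
            throw new IllegalStateException("insufficient funds: " + balance);
        return balance - amount;
    }

    public static void main(String[] args) {
        System.out.println(transfer(100, 30)); // prints 70
        try {
            transfer(100, -5);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected at the boundary: " + e.getMessage());
        }
    }
}
```

The payoff is exactly the "thinking and understanding" budget: a stack trace at the violating call site costs minutes to diagnose, while silently propagated bad state costs hours.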
The final point is that productivity is ultimately limited by our primary constrained resource: time.  One can make more time through automation, if there is enough payback.  And to the extent possible, one can also use time well by identifying tasks that are well thought out and have the most customer value.  Don't just do work; do the 'right work'. 

do your best,  Marco