Beyond Java?

Some experiences from building two CORBA IDL compilers.

by Dr Jason Trenouth

(Originally published in 2001.)


The Java Age

We have been living through the Java age. It is something of an understatement to say that there has been a great deal of interest in the Java programming language – although, as far as hype goes, XML is now overshadowing it. Initially Java, outside Sun Microsystems at least, was seen as a programming language purely for the Internet. In particular, it came with a virtual machine that could be embedded in web browsers and a security model to control the activity of Java programs running in such browsers.

These days, however, Java has moved on and developers are using the language for all sorts of applications. Java is perceived as a reasonable general purpose programming language – one that is good enough. The tools are evolving. JIT compilers like Sun’s Hotspot are making Java code run faster. UML-based design tools like Rational Rose are finding Java’s rigid syntax amenable to round-trip engineering. EJB-based Application Servers like BEA’s WebLogic are providing the background of services needed for many kinds of business applications.

So what’s the problem?


The Problems

Java may not be up to the job. Linguistically it has a number of faults that each seem minor on their own but add up to awkward, hard-to-maintain code. Some of these faults are born of engineering compromises and some were deliberately chosen to reduce the perceived complexity of the language. But, whatever the reason they exist, they are a burden to developers and they will be a burden to maintainers. Here are some of the problems:

  • single implementation inheritance

  • single argument method dispatch

  • primitive types distinct from objects

  • casting required

  • no extensible syntax

  • poor iteration/collection integration


The Solutions

The solution is to adopt a more advanced programming language instead of Java. This need not mean going back to C++. After all, Java’s popularization of garbage collection and pointer-free programming has probably significantly advanced the cause of reliable software. This also does not mean stepping sideways to C# – a similar language with many of the same problems.

The alternative programming language that I’m proposing in this article is Dylan. Dylan already has solutions to the problems listed above for Java:

  • multiple implementation inheritance

  • multiple argument method dispatch

  • pure object types

  • no casting required

  • extensible syntax

  • good iteration/collection integration


An Example

Since, according to Samuel Johnson, “example is always more efficacious than precept”, I will examine a real programming task that is made more awkward by Java and more elegant by Dylan.

In the last few years I’ve personally been working on some CORBA projects. In particular, I’ve written compilers in both Dylan and Java for CORBA’s Interface Definition Language (IDL), so I feel able to show how a language like Dylan can simplify application development compared to Java. In the sections that follow I will use IDL compiler examples while looking at the points I listed above where I contend that Dylan is an improvement over Java.


Single Implementation Inheritance

While Java supports multiple inheritance of interfaces, it only offers single inheritance of implementations. This is partly because single inheritance is easier to implement efficiently, and partly because multiple inheritance is seen, philosophically, as unnecessary. The view is that, in nearly all cases, the goal of multiple inheritance is better achieved through composition of objects and merging of interfaces rather than the merging of objects.

Let’s look at this in reality. In any compiler it is usual to convert the textual form of the input language into an internal representation consisting of a graph of objects. This graph is sometimes called an abstract syntax tree or AST.

The input language, CORBA’s IDL, consists of the usual types of construct. Some elements of the language textually contain others. For example, an IDL module can contain all the other kinds of construct, including other modules. Suppose we want to model these IDL constructs in our internal representation. Let’s assume a parser is supplying us with calls to allocate these objects. How should we implement them? Well, Java advocates would start by concentrating on the interfaces, because the implementation details don’t matter. So let’s look at two pairs of IDL constructs:

  • interface and string both define a type in IDL, members of which can be passed to and returned from operations; and

  • module and interface both introduce a new scope for identifier names.

We can define Java interfaces to model the IDL concepts of type and scope. We can subclass these interfaces individually to model string and module, and subclass them together to produce a model for an IDL interface. However, after you have defined the interfaces, you need to get down to the implementation. Whatever the interfaces, you still have to implement these objects and their associated operations.

In this case, since the concepts are orthogonal, we would like to be able to reuse and so share the implementations, otherwise we are going to have a maintenance headache. That is, we’d like the implementation of IDL’s interface to share the implementation of both type and scope.

Unfortunately, in Java, we cannot use inheritance to implement both these kinds of sharing. We will have to implement one kind of sharing through inheritance and one through composition (aka aggregation). As a colleague of mine put it, “if you want multiple inheritance then you have to get off at the next stop as the Java bus doesn’t go there.”

This asymmetry is messy. Here is some putative Java code:

public interface AbstractIDLType
{
  public TypeInfo getTypeInfo();
}

public interface AbstractIDLScope
{
  public ScopeInfo getScopeInfo();
}

class IDLType implements AbstractIDLType
{
  private TypeInfo _typeInfo;
  public TypeInfo getTypeInfo()
  {
    return _typeInfo;
  }
}

public class IDLModule implements AbstractIDLScope
{
  private ScopeInfo _scopeInfo;
  public ScopeInfo getScopeInfo()
  {
    return _scopeInfo;
  }
}

public class IDLInterface extends IDLType implements AbstractIDLScope
{
  private ScopeInfo _scopeInfo;
  public ScopeInfo getScopeInfo()
  {
    return _scopeInfo;
  }
}

public class IDLString extends IDLType
{
}
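
Had we instead chosen composition for the scope half, IDLInterface could delegate to a shared helper object rather than duplicate the field, but the forwarding boilerplate remains. A minimal sketch, assuming a hypothetical ScopeImpl helper class (not part of the original code):

public class ScopeImpl implements AbstractIDLScope
{
  private ScopeInfo _scopeInfo;
  public ScopeInfo getScopeInfo()
  {
    return _scopeInfo;
  }
}

public class IDLInterface extends IDLType implements AbstractIDLScope
{
  // Delegate the scope half of the implementation to a helper object.
  private ScopeImpl _scope = new ScopeImpl();
  public ScopeInfo getScopeInfo()
  {
    return _scope.getScopeInfo();
  }
}

Either way, Java makes us write the getScopeInfo plumbing by hand in every class that cannot inherit it.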

Contrast this with the equivalent Dylan code which can use multiple implementation inheritance:

define abstract class <IDL-type> ( <object> )
  slot get-type-info :: <type-info>;
end class;

define abstract class <IDL-scope> ( <object> )
  slot get-scope-info :: <scope-info>;
end class;

define class <IDL-module> ( <IDL-scope> )
end class;

define class <IDL-interface> ( <IDL-type>, <IDL-scope> )
end class;

define class <IDL-string> ( <IDL-type> )
end class;

The Dylan code is symmetrical, whereas achieving the same sharing in Java is asymmetrical and involves redundant code if you want to take advantage of inheritance where you can. This symmetry simplifies the code, which speeds initial development and eases later maintenance.


Single Argument Method Dispatch

Now let’s look at another of Java’s drawbacks: single argument dispatch. Many programmers may not even be aware of this limitation if they have only programmed in single-dispatch or message-passing OO languages. It can be a case of what you’ve never had you don’t miss. However, you tend to miss multiple argument dispatch if you’ve ever had it taken away. Here is an example from our IDL compiler. When building the AST from the textual input there comes a point when the program has created the new node instance and needs to add it into the tree as a new child of the current parent. This is an ideal opportunity to check some constraints on what is and is not legal IDL. For example, the grammar can tell you that arguments are attached to operations, but not that out or inout arguments make no sense for oneway operations. The latter constraint is best dealt with during tree construction. So suppose we had a method for adding new nodes into the tree called addNode. In Java the above constraint might be coded as follows:

class IDLOperation extends IDLObject
{
  ...
  void addNode ( IDLObject node ) throws IDLException
  {
    if ( node instanceof IDLArgument )
    {
      IDLArgument argument = ( IDLArgument )node;
      int dir = argument.direction();
      if ( flag() == ONEWAY && ( dir == OUT || dir == INOUT ) )
      {
        illegalOneWayOperation( this, argument );
      }
    }
    super.addNode( node );
  }
}

Contrast this with the Dylan code, which can dynamically dispatch on both arguments in the signature:

define method add-node ( operation :: <IDL-operation>, argument :: <IDL-argument> )
  let dir :: <integer> = direction( argument );
  if ( flag( operation ) == ONEWAY & ( dir == OUT | dir == INOUT ) )
    illegal-one-way-operation( operation, argument );
  end if;
  next-method();
end method;

We don’t have to complicate this Dylan method by making it deal with the case when the second argument is not of type <IDL-argument>, because the language’s dispatching mechanism takes care of that for us. And in the case when it is an instance of <IDL-argument>, we don’t have to do a cast in order to call methods on it.

The situation gets more complicated when there are other classes you have to care about. IDL operations can raise exceptions, but that doesn’t make sense for oneway operations. Suppose that we want to protect against that. With Java we have to modify the existing definition:

class IDLOperation extends IDLObject
{
  ...
  void addNode ( IDLObject node ) throws IDLException
  {
    if ( node instanceof IDLArgument )
    {
      IDLArgument argument = ( IDLArgument ) node;
      int dir = argument.direction();
      if ( flag() == ONEWAY && ( dir == OUT || dir == INOUT ) )
      {
        illegalOneWayOperation( this, argument );
      }
    }
    else if ( node instanceof IDLException )
    {
      IDLException _exception = ( IDLException ) node;
      if ( flag() == ONEWAY )
      {
        illegalOneWayException( this, _exception );
      }
    }
    super.addNode( node );
  }
}

Dylan, by contrast, lets us simply define a new modular method:

define method add-node ( operation :: <IDL-operation>, exception :: <IDL-exception> )
  if ( flag( operation ) == ONEWAY )
    illegal-one-way-exception( operation, exception );
  end if;
  next-method();
end method;

Single Argument Method Dispatch (Part II : The Visitor Horror)

The situation gets even more complicated when you want to walk the nodes as part of backends that are responsible for emitting stubs and skeletons for all the supported languages in your CORBA framework. In Java, one way of doing this would be to add new code for each backend to each of the node classes. This is clearly the wrong thing, since you would have to edit the same core files over and over for every new backend. Not very modular.

An alternative is the visitor design pattern. Suppose, as one of the backends, you want to just dump out a regurgitated form of the original IDL for debugging purposes. We can avoid having to modify the core files for each backend if we add a single mechanism that lets us visit each node with an object of the backend’s choosing.

For example, in each node class file we would have the class implement the following method:

void accept ( Visitor v )
{
  v.visit( this );
}

This accept method switches the recipient of the message from the current object to the argument and passes the current object along as an argument. It’s a kind of callback. The Visitor interface has to specify visitor methods for each node class:

interface Visitor
{
  void visit ( IDLInterface node );
  void visit ( IDLOperation node );
  void visit ( IDLModule node );
  // etc
}

Then the backend must define a visitor object that can be passed to the accept methods and must define visitor methods that can be called depending on the acceptor’s class:

class DumpVisitor implements Visitor
{
  public void visit ( IDLInterface node )
  {
    ...
  }
  public void visit ( IDLOperation node )
  {
    ...
  }
  // etc
}

So the backend calls into the tree representation as follows:

...
node.accept( this );
...

This sends the visitor (the backend) to the node and lets Java dynamically dispatch to an accept method. The accept method, as we’ve seen above, is just a trampoline that immediately calls back into the visitor:

...
visitor.visit( this );
...

This might seem pointless, but because there is a different accept method for each node class, a different visit method is statically selected on the backend. In a sense, we have converted a dynamic dispatch into a static one.

Unfortunately, we have lost the inheritance we might have used had we put the code directly into the core node class files. The visit methods are separate methods. We can’t use super.visit(node) in, say, the IDLInterface visit method to invoke the inherited visit method on, say, IDLScope. Instead we have to explicitly recode the inheritance by hand:

void visit ( IDLInterface node )
{
  visit( ( IDLScope ) node );
  ...
}

In a kind of Catch-22, Java programmers are forced to break modularity in order to avoid breaking modularity.

In Dylan, which not only has multiple-argument dispatch, but also lets you define methods which dispatch on classes from other libraries (without having access to private state), the situation is more straightforward. The dump backend can simply define a method (let’s also call it visit) that dispatches on the IDL node classes:

define method visit ( backend :: <dump-backend>, node :: <IDL-interface> )
  next-method();
  ...
end method;

And that’s it. There is nothing more that is required. The Dylan visit method can call next-method and invoke the inherited visit method on <IDL-scope> without having to encode that directly into the method.

Moreover the Dylan visit method can call visit directly on sub-objects without having to bounce there via the accept bottleneck. There are no accept methods at all in the Dylan solution.

In summary:

  • There is no need to define lots of identical accept methods. (In Java, we might even have had to introduce another accept method per class to deal with, say, value-returning visitors, as sketched after this list.)

  • There is no need to define a Visitor interface that redundantly knows all the classes.

  • There is no need to give up inheritance and then try to hack it back in!
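
To make the value-returning point concrete, here is a hypothetical sketch (not from the compiler): each node class would need a second trampoline alongside the void one, paired with a hypothetical ValueVisitor interface whose visit methods return Object.

Object accept ( ValueVisitor v )
{
  return v.visit( this );
}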


Primitive Types Distinct From Objects

Java has primitive types that lie outside its object system. This can turn intuitively simple operations into complex nightmares.

In an IDL compiler you have to process arithmetic expressions. Suppose that your front end has parsed them into nodes that contain the operations and operands. Let’s consider just a node for a binary operation: one with two sub-nodes with values of their own that need to be combined. For simplicity, let’s suppose that IDL allows only all-integer or all-float arithmetic, so we only have to code those two cases. We can record all integers we parse as long values and all floating point values we parse as doubles. To distinguish between these types we either have two different primitive fields and a flag in each node, or we can store different kinds of numeric objects: Longs or Doubles, as sketched below.
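
The two representations might look like this (a hypothetical sketch; the evaluate code below assumes the second, boxed form):

// Option 1: parallel primitive fields plus a discriminating flag.
class NumericValue
{
  long longValue;
  double doubleValue;
  boolean isFloatingPoint;
}

// Option 2: a single field holding a boxed Long or a boxed Double.
class BoxedNumericValue
{
  Object value;
}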

Given these constraints, the Java code for evaluating addition and subtraction nodes might come out looking like the following. Notice all the casting, primitive value accessing, object allocation, and repetition needed to do the actual arithmetic.

class IDLExpression
{
  ...
  Object evaluate () throws EvaluationError
  {
    Object lhs = leftSubExpression().evaluate();
    Object rhs = rightSubExpression().evaluate();
    char op = operation();
    switch ( op )
    {
      case '+':
        if ( lhs instanceof Long && rhs instanceof Long )
        {
          return new Long( ( ( Long ) lhs ).longValue() +
                           ( ( Long ) rhs ).longValue() );
        }
        else if ( lhs instanceof Double && rhs instanceof Double )
        {
          return new Double( ( ( Double ) lhs ).doubleValue() +
                             ( ( Double ) rhs ).doubleValue() );
        }
        else
        {
          throw new EvaluationError( this );
        }
      case '-':
        if ( lhs instanceof Long && rhs instanceof Long )
        {
          return new Long( ( ( Long ) lhs ).longValue() -
                           ( ( Long ) rhs ).longValue() );
        }
        else if ( lhs instanceof Double && rhs instanceof Double )
        {
          return new Double( ( ( Double ) lhs ).doubleValue() -
                             ( ( Double ) rhs ).doubleValue() );
        }
        else
        {
          throw new EvaluationError( this );
        }
      ...
    }
  }
}

Now imagine all this ugly code replicated for more arithmetic operations. And more realistic constraints on the arithmetic types will make the bloating worse.

Dylan does not have primitive types that are distinct from its object types so it can express the code above more succinctly while retaining arithmetic efficiency:

define method evaluate ( expr :: <IDL-expression> ) => ( value )
  let lhs = evaluate( left-subexpression( expr ) );
  let rhs = evaluate( right-subexpression( expr ) );
  let op = operation( expr );
  select ( op )
    '+' =>
      check-constraints( expr, lhs, rhs );
      lhs + rhs;
    '-' =>
      check-constraints( expr, lhs, rhs );
      lhs - rhs;
    ...
  end select;
end method;

define method check-constraints ( expr :: <IDL-expression>, lhs :: <integer>, rhs :: <integer> )
end method;

define method check-constraints ( expr :: <IDL-expression>, lhs :: <float>, rhs :: <float> )
end method;

define method check-constraints ( expr :: <IDL-expression>, lhs :: <object>, rhs :: <object> )
  error( make( <evaluation-error>, expression: expr ) );
end method;

In the Dylan version we only have to test that the arithmetic constraints of IDL are met, using multiple argument dispatch again, and then we can perform the operation quite naturally in a single intuitive line. By contrast the Java version must split up the cases into very long-winded, clumsy, and inefficient code.


Casting Required

In statically typed languages casting seems to be a necessary evil. You have to tell the compiler things that it needs to know. In unsafe languages, if you break that promise, the computation can become corrupt and go awry. In a safe language like Java, runtime checks prevent such unfriendly behaviour. But whatever safety net is provided for mistakes, cast-ridden code is harder to read and maintain.

The arithmetic example from the section on “Primitive Types Distinct From Objects” has already introduced some of the ugliness of casting. The evaluate method returns the result of an arithmetic (sub)expression. In both Dylan and Java we don’t know exactly what the resulting type is, but in Java we have to care, because we need to extract a primitive value from that result and the primitive accessor we use depends on the type. So in Java we have to test the result’s type and then cast to it.
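
For instance, a caller of evaluate that expects an integer size ends up writing something like this sketch (reusing the names from above; EvaluationError as before):

Object result = expression.evaluate();
long size;
if ( result instanceof Long )
{
  size = ( ( Long ) result ).longValue();
}
else
{
  throw new EvaluationError( expression );
}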

When you have several layers of abstract protocol that have to work on many data types, casting can become incredibly ugly. Here is some code that takes the name of a constant value that we know, in a certain context, is used to declare the size of an IDL array. Suppose that we want to access that size in Java:

( ( Long )
  ( ( IDLConstant )
    node.resolveIdentifier( name )
  ).expression().evaluate()
).intValue()

Now compare that with Dylan:

evaluate( expression( resolve-identifier( node, name ) ) )

In Dylan we can assume that if there is a runtime type mismatch it will be detected and signaled for us. We don’t have to put in explicit casts at the call site just to promise the compiler that we’re allowed to call a method. The mere fact that we are calling a method which expects that type is sufficient.

Dylan does not force you to pay for what you don’t use. Elsewhere in your code – away from dynamic cases – type declarations and domain sealing (constraining the extensibility of functions by argument type) can allow Dylan compilers to statically dispatch calls or warn of the lack of applicable methods.


No Extensible Syntax

Java has no macros. After the C preprocessor this was seen as a step forward. The two main uses of the C preprocessor in C and C++ programs have been naming constants and conditionalizing code. In Java, constants can be declared via static final fields and benefit from the automatic association with a class namespace. Conditionalizing code was typically done to enable source code to be ported across platforms and also to turn debugging code on and off. Java deals with cross-platform portability in other ways. For example, integer types have known fixed sizes no matter what the platform. Finally, for debugging, Java compilers will optimize away unreachable code, so debugging code can be left in and turned off by manipulating an ordinary constant in one place in the code.
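
A minimal sketch of that last idiom (names are hypothetical, not from the compiler): because the guard is a compile-time constant, the compiler is free to drop the guarded branch entirely when TRACE is false.

class Debug
{
  // Flip this constant and recompile to enable tracing everywhere.
  static final boolean TRACE = false;
}

class Parser
{
  void parseModule ()
  {
    if ( Debug.TRACE )
    {
      System.err.println( "entering parseModule" );
    }
    // ... actual parsing work ...
  }
}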

However, constants and conditionalized debugging are not the only uses of macros. Macros can also be used for syntactic extension. Unfortunately, macros defined in the C preprocessor use textual substitution, and so extending the syntax is hard to get right.

In Dylan, macros are defined using a pattern and template filling notation that is smart about the syntax of the language. This lets programmers build abstractions, encapsulate details, and map external formats to internal ones which can be verified at compile time.

As an example, let us consider the dump output of an IDL compiler again. When reproducing IDL code from the internal tree representation it is useful for the human reader if we indent the output. One way to do this would be to implement a so-called “pretty printer” for the internal representation. This could take account of the page width and have heuristics for breaking lines and so on.

A full-blown pretty printer is too much work for this IDL re-emitter which is really only for debugging purposes, although one could argue that a generalized component of this sort would come in handy in other situations.

Instead we will take care of line breaks manually, and we only need something to keep track of the indentation level. A natural abstraction is an extended I/O stream which inserts extra whitespace before each line that is printed, and that can be told to add more or less whitespace as the context demands. We will call this an “indenting stream”, although it will also add braces around the nested IDL as well as indenting it.

In Java we might use the indenting stream, wrapped up in some utility calls, for dumping out the IDL for an interface as follows:

void visit (IDLInterface node)
{
  node.dumpName();
  node.dumpInherits();
  this.stream.start(); // write out a '{' and increase the indentation
  {
    node.dumpBody();
  }
  this.stream.finish(); // decrease the indentation and write out a '}'
}

In the example we add extra indentation for the elements of the interface by calling the method start, and we remove the extra indentation by calling finish. Also, we use a trick and introduce extra indentation into the source code by using a Java block. This lets us see the structure of the emitted code in the source code. Unfortunately, we have to remember to put the call to finish in ourselves, and the Java compiler won’t notice if we miss it out. There is also no real connection between the indenting in the output and the indenting in the source – we just have to remember to put in the block braces.

An alternative might be to use an extra instance (a Start!) of an anonymous inner class that takes care of the indentation by using a body callback, something like the following:

void visit ( final IDLInterface node ) // final: the anonymous inner class captures node
{
  node.dumpName();
  node.dumpInherits();
  new Start( this.stream )
  {
    public void body ()
    {
      node.dumpBody();
    }
  }
  .finish();
}

The finish method calls the body method, wrapped with the other calls for indenting and exdenting the stream.
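
The Start class itself was not shown above; a minimal sketch of what it might look like (assuming a hypothetical IndentingStream class with the start/finish protocol used earlier):

abstract class Start
{
  private IndentingStream _stream;

  Start ( IndentingStream stream )
  {
    _stream = stream;
  }

  // Subclasses supply the code to run at the deeper indentation level.
  public abstract void body ();

  public void finish ()
  {
    _stream.start();  // write out a '{' and increase the indentation
    body();
    _stream.finish(); // decrease the indentation and write out a '}'
  }
}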

However, the Java compiler will still not notice if we miss out the .finish(). Also, this is beginning to look rather strange, and potentially inefficient if we are creating lots of these “Starts” all over the place. More fatally, there is at least one popular Java compiler that cannot cope with more than two nested levels of the above construct.

A more typical way of doing this in Java would be to use a static method and an instance of an anonymous inner class:

void visit ( final IDLInterface node ) // final: the anonymous inner class captures node
{
  node.dumpName();
  node.dumpInherits();
  Body.with( new Body( this.stream )
  {
    public void invoke ()
    {
      node.dumpBody();
    }
  } );
}

This at least lets us wrap up the call in a way that enables the Java compiler to spot missing delimiters, but we have to pay the cost of an object allocation each time.

So what would you do in Dylan using macros? Well, the following demonstrates. First we define the macro:

define macro indenting
  { indenting ( ?stream:expression ) ?body:body end }
  =>
  { begin
      let stream = ?stream;
      start( stream );
      ?body;
      finish( stream );
    end }
end macro;

This defines a macro called indenting which wraps up some code denoted by ?body with a couple of administrative calls to change the current stream’s indentation and insert braces. The :body syntax is a constraint that says ?body must match a well-formed sequence of Dylan source code statements. Similarly, the :expression syntax is a constraint that says ?stream must match a well-formed Dylan expression.

Now that we have the macro we can use it as follows:

define method visit ( backend :: <dump-backend>, node :: <IDL-interface> )
  dump-name( node );
  dump-inherits( node );
  indenting ( stream( backend ) )
    dump-body( node );
  end;
end method;

The macro expands into the administrative calls, but lets us see the indented source and lets the Dylan compiler check the balanced statements: indenting...end.

Now suppose that there is a possibility that parts of the output may go awry or be terminated and yet we want the bulk of the printing to continue. In particular, we want to be able to reset the indentation back to the original level in the event of a non-local exit.

The original Java technique would mean having to insert a try-finally block in every use of the pattern:

void visit ( IDLInterface node )
{
  node.dumpName();
  node.dumpInherits();
  try
  {
    this.stream.start(); // write out a '{' and increase the indentation
    node.dumpBody();
  }
  finally
  {
    this.stream.finish(); // decrease the indentation and write out a '}'
  }
}

Now this is beginning to get out of hand. The usual Java technique of defining a static method that calls a protocol invoker on an instance of an anonymous inner class helps, but at the cost of clumsy boilerplate in the source and instance allocation for each call.
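
That technique would at least bundle the try-finally into one place. A sketch, assuming the hypothetical Body helper used with Body.with earlier:

abstract class Body
{
  private IndentingStream _stream;

  Body ( IndentingStream stream )
  {
    _stream = stream;
  }

  public abstract void invoke ();

  static void with ( Body b )
  {
    b._stream.start();
    try
    {
      b.invoke();
    }
    finally
    {
      b._stream.finish();
    }
  }
}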

In Dylan the solution is simply to extend the macro in one place:

define macro indenting
  { indenting (?stream:expression) ?body:body end }
  =>
  { block ()
      let stream = ?stream;
      start( stream );
      ?body;
    cleanup
      finish( stream );
    end }
end macro;

And all the uses of indenting ... end remain the same.


Poor Iteration/Collection Integration

Let’s suppose we are implementing dumpInherits from the above emitter code. We need to iterate over the inherited interfaces and dump out their names. Suppose also that we want to maintain a degree of abstraction over how we implement the sequence of inherited interfaces.

In Java, we might expose the interface to the sequence of inherited interfaces as a List so that we could implement it as a LinkedList or an ArrayList or something else depending on other constraints.

Here is some possible code:

void dumpInherits ()
{
  dumpColon();
  Iterator it = getInherited().iterator();
  while ( it.hasNext() )
  {
    ( ( IDLInterface )it.next() ).dumpName();
    if ( it.hasNext() )
      dumpComma();
  }
}

However, if we knew we had an ArrayList (which can be indexed in constant time) we could use an index rather than an iterator, which is slightly more efficient:

void dumpInherits ()
{
  dumpColon();
  ArrayList inherited = getInherited();
  int nInherited = inherited.size();
  boolean isFirst = true;
  for ( int i = 0; i < nInherited; i++ )
  {
    if ( isFirst ) isFirst = false; else dumpComma();

    ( ( IDLInterface )inherited.get( i ) ).dumpName();
  }
}

And if we didn’t need the power of the collection classes then we could use a good old-fashioned array:

void dumpInherits ()
{
  dumpColon();
  IDLInterface[] inherited = getInherited();
  int nInherited = inherited.length;
  boolean isFirst = true;
  for ( int i = 0; i < nInherited; i++ )
  {
    if ( isFirst ) isFirst = false; else dumpComma();

    inherited[ i ].dumpName();
  }
}

The situation in Dylan is different. The following code covers any collection class, from the built-in ones like <vector> or <list> to custom ones that you’ve written for your application. Moreover, you don’t actually need to create an explicit iterator or index in order to loop over the elements of any collection, as that is done for you behind the scenes.

define method dump-inherits ( interface :: <IDL-interface> )
  dump-colon();
  for ( inherits in get-inherited( interface ), first? = #t then #f )
    if ( ~first? )
      dump-comma();
    end;
    dump-name( inherits )
  end;
end method;

We could program explicitly using iterators in Dylan, just as in Java, but that’s a bit of a drag when you have a for construct that can do the work for you automatically.

In fact, using Dylan’s macros we can even build an abstraction that lets us express the dump-inherits method even more clearly. First we put a line similar to the following one in the module definition.

use dylan, rename: { \for => \dylan/for };

This rebinds the original Dylan for macro on import so that we can define a new one and have it expand into the original one.

define macro for
  { for ( ?clauses:* )
      ?body:body
    between
      ?between:body
    end }
   =>
   { dylan/for ( ?clauses, first? = #t then #f )
       if ( ~first? )
         ?between
       end;
       ?body
     end; }
end macro;

This macro lets us express dump-inherits as follows:

define method dump-inherits ( interface :: <IDL-interface> )
  dump-colon();
  for ( inherits in get-inherited( interface ) )
    dump-name( inherits )
  between
    dump-comma();
  end;
end method;

Actually, our for macro interferes with the facilities of the built-in Dylan for macro more than we might wish. The new macro has deliberately been kept simple for the purposes of the example, and the real situation would require more propagation of rare syntax from our macro to the Dylan one.


Conclusion

We’ve covered some of the awkwardnesses of Java with respect to Dylan, but what is the point? Java is clearly here to stay, isn’t it? There’s been too much investment in Java to stop now, hasn’t there? We don’t need another new programming language, do we?

Well, in 1995 people thought that mainstream programming language development had peaked with the widespread use of C++. Of course there were detractors, but in the main C++ was seen as the de facto standard application development language. On the fringes Smalltalk was gaining currency (pun intended) in business application areas, and other languages had their established niches (e.g. Common Lisp for R&D).

Since 1995 Java has swept aside C++ as the de facto standard programming language for mainstream applications. Even C shops that had never moved to C++ have jumped on the Java bandwagon.

But if you’re building the future at the latest e-commerce “.com” you need a competitive advantage over the other energetic internet businesses. Games developers budget on throwing away their code base every two to three years. Internet time is bringing that time-frame to more mainstream development. You don’t want to be held back by your tool, and if you are using the same mediocre tool as everyone else in the herd then you are being held back.

Equivalently, if you are shoring up the present in a larger organisation then it is your duty to ensure that the systems you build are maintainable in the future. And you want maintainable to mean really maintainable. That means using a tool that lets you encode abstractions and model the domain in the program source, and not just hope that this happens in some surrounding cloud of documentation and diagrams.

Ultimately, you have to ask yourself: is Java really good enough for the 21st Century?


Acknowledgements

Thanks to Carl L. Gay, Hugh Greene, Scott McKay, and others associated with Functional Objects for feedback on drafts of this article.

Copyright © 1999-2001 Functional Objects, Inc. All rights reserved. All product and brand names are the registered trademarks or trademarks of their respective owners.