Monday, November 12, 2007

The Ugly Duckling of Java!

It took me a while to find a good title. It could have been something like:
  • What's so ugly about explicit type parameters?
  • Done with inference!
  • Let's pave the way for reification and promote explicit type parameters!
Chaotic Java already made a similar point: Can someone please explain type inference to me?

But all these compiler language terms are showing off for no reason. The problem is not so complicated, but the stakes are high. As Eric Burke says in his blog post A Syntax Trick I Was Not Aware Of, there is a piece of Java syntax that very few people use or encounter: explicit type arguments (or parameters). One of the rare places where you may have encountered it is in the JLS and in javac's code and test code. It looks like:
List<String> empty = Collections.<String>emptyList();
The first reaction, for most Java developers, is: WTF?!

But I remember the first time I started converting some 1.4 code to use generics and wrote:
Map<Integer, Map<String,Thing>> messages;
Wow, that's a lot of <>! And when I saw the number of bugs the compiler caught for me (wrong casts and wrong objects in put()), I really thanked the <>, and asked for more ;-)
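A minimal sketch of the kind of bug the <> catch (using String values instead of a real Thing class, for the sake of a self-contained example):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified version of the nested-generics Map above: the type parameters
// turn former runtime ClassCastExceptions into compile errors.
public class NestedGenerics {
    static Map<Integer, Map<String, String>> messages =
            new HashMap<Integer, Map<String, String>>();

    static String lookup(int id, String key) {
        return messages.get(id).get(key); // no cast needed either
    }

    public static void main(String[] args) {
        Map<String, String> things = new HashMap<String, String>();
        things.put("greeting", "hello");
        messages.put(1, things);

        // Pre-generics, both of these would have compiled and blown up at runtime;
        // with the declared type parameters they are compile errors:
        // messages.put("1", things);     // wrong key type
        // messages.put(2, "not a map");  // wrong value type

        System.out.println(lookup(1, "greeting"));
    }
}
```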

So why does the Java compiler want to hide the <> of explicit type parameters?
In IntelliJ, the <String> before the emptyList method is underlined with the remark: Explicit type arguments can be inferred. Basically, it means the compiler is smart enough to fill in the content of <> with the correct type. Great, at first. But wait...
As Stephan says in Explicit Static Types are not for the Compiler, but for the Developer - Duh, the compiler knows a lot more than we do. It could remove a lot of the types we are writing.
I love strongly typed Java, and all the type repetition. I love the types in the code because they make it more readable.

Question: Why are explicit type parameters an exception?
Answer: They got the Ugly Duckling stamp for some reason! Some said they were dangerous for kittens!

Personally, I think there are 2 reasons:
- When Java 5 came out, it looked like too many <> all over the place, and Sun tried to remove some unneeded ones.
- It lacks consistency. I would really like to know why explicit type parameters are declared BEFORE the method name when, in constructor and type declarations, they are declared AFTER. In the latter, they are enforced, and everybody uses them!

Another worrying inconsistency that may appear in Java 7 is around the "short instance creation" issue. For example, the nicest proposal so far is Neal's constructor type inference. I really like this proposal because it lets inference work, but you SEE it working. It's not hidden voodoo compiler stuff. According to this proposal, instead of:
List<String> list = new ArrayList<String>();
you'd write:
List<String> list = new ArrayList<>();

Again, a strange sense of peculiarity for the method emptyList. In the type inference section of Alex Miller's Java 7 page, you see the peculiar treatment of methods compared to types.
Why does nobody want:
List<String> empty = Collections.<>emptyList();
It's the same, no? Let's be consistent!

Now, the main big issue I have with this inference vs. explicit debate is that it's moving Java away from reification of generics. It's going in the wrong direction. I just refactored some JPA code from:
// The Class of the bean implementation, provided by
// the framework and unknown to the client
Class implClass = ...;
MyBeanInterface b = (MyBeanInterface) em.find(implClass, pk);
to:
MyBeanInterface b = em.<MyBeanInterface>find(implClass, pk);
It may look like just trading a cast for generics. But that's the point: generics are way nicer and more powerful.
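To see the shape of it, here is a toy finder (not the real JPA API; the store and the String "bean" are invented for illustration), where the raw Class parameter kills inference, so the caller's explicit type argument is what restores the type safety:

```java
import java.util.HashMap;
import java.util.Map;

// A toy "entity manager" to show why the explicit <T> helps: the raw Class
// gives the compiler nothing to infer from, but the type argument at the
// call site removes the cast while keeping the call type-checked.
public class TypedFinder {
    private final Map<Object, Object> store = new HashMap<Object, Object>();

    public void persist(Object pk, Object entity) {
        store.put(pk, entity);
    }

    // Same shape as em.find(implClass, pk) in the snippet above.
    @SuppressWarnings("unchecked")
    public <T> T find(Class implClass, Object pk) {
        return (T) store.get(pk);
    }

    public static void main(String[] args) {
        TypedFinder em = new TypedFinder();
        em.persist(42L, "I am the bean");
        // no cast at the call site; the explicit type argument does the work
        String b = em.<String>find(String.class, 42L);
        System.out.println(b);
    }
}
```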

The other point is that when generics are reified, I'll be closer to the perfect method:
MyBeanInterface b = em.<MyBeanInterface>find(pk);
or with VISIBLE type inference
MyBeanInterface b = em.<>find(pk);
These methods would find the implementation from the interface. Exactly what I need: the client code doesn't know about the implementation class, and the code is very clean and readable.

Today, because of erasure, most methods (80%) that use generic types end up taking a Class<T> or Collection<T> as a parameter. So inference works most of the time, and it saves us from having to look at the Ugly Duckling. For sure, it has worked so far.

Now, for the 20% that still resist the Roman empire of javac guru inference power, a new wave is coming. They want to write a "smarter" javac that will make inference work everywhere (check Kevin Bourrillion's and Bob Lee's comments here).

Here there is a shift in javac thinking, and it goes against readability: the "smarter" (or more magical) the compiler is, the less "readable" your code is. Furthermore, inference will never (by definition) be 100% sure. So why bother? The explicit code is more readable anyway...

IMHO: The kittens are safe, and reifying generics is more important than inference.

Thursday, November 8, 2007

Is JSF following EJB road?

First I have to be clear: I don't like JSF!

But, I did not like EJB 1.0 when it came out either. And still, I did bet on its success.

When EJB 1.0 came out I had just finished my personal implementation of a CORBA application server, so this new specification was "way below" my stuff. I was young, but still, you cannot stop the machine coming from Sun, BEA and, after quite some time, IBM.
EJB came into a world where crazy techies (like me) thought they could build great servers that could handle enormous load. So this specification really calmed down crazy development and provided a good base for containers (usage and implementation).
But, it was flawed, right from the beginning.
All the APIs, interfaces and design concentrated on showing off the "great" power of the application server: look how I can passivate/activate, look, I can do security and transactions, and look, I even save to the DB for you. And for every "look", as a poor developer, you needed to answer (implement, declare) something.
There was no escape, your code was full of unwanted pollution.
EJB 2.0 did not solve the problem and actually added more "look at my beautiful AS": Messaging and CMP.

We had to wait for the JBoss guys to take over the EJB specification for 3.0 to finally end the developer nightmare.

And now, the funny trick: please replace EJB with JSF in the text above...
Amazing, it fits.

JSF came when Web UI was a total mess (every Struts usage overrode most of the configuration and base classes) and everyone had their own "best/better" implementation.
So, JSF did some clean up in Web UI design and implementation.
But JSF is really "showing off": look at my nice lifecycle in 42 steps, look at my nice big list of JSP tags/attributes, look at my nice EL, look at my nice XML configuration, and so on.
By "showing off" I mean that you need to fill all this in with a lot of unreadable and incomprehensible repetition. And worst of all, you need to master all of it in order to:
- debug correctly,
- integrate external stuff (Ajax4JSF and so on),
- create just one JSF UI component (does someone sell a T-shirt saying: I wrote my JSF component without breaking the lifecycle!?),
- write stupid HTML pages.

A really good description of all the JSF flaws is listed in Gavin's blog: EE6 Wishlist.
In these blog entries, the first on EJB and the second on JSF, a quick look at the code examples shows you the gap. EJB is full of nice annotations and meta-annotations (the future, for sure), and JSF is full of ugly XML and Expression Language (scheduled to die).

The biggest conceptual flaw of JSF is the usage of EL.
EL breaks encapsulation; in terms of IoC, it's on the wrong side of the road. Ophir pointed me to a nice essay by Terence Parr, "Enforcing Strict Model-View Separation in Template Engines", that proves my point really nicely.
A UI definition (Web page, Swing panel, ...) should not know about the object graph. If I am a UI designer, I'm not a business modeler.
The answer lies in architectures like Wicket and JSR-295 (the EL in Beans Binding is due to the lack of property support).
In these good architectures, UI components have an ID related to the page/panel, and the Java developers bind data to these components and receive events from them. This is the right way to do UI, integrate UI design, and manage UI logic. From experience, it works great.
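Reduced to a toy sketch, the binding style looks like this (the Panel class and its methods are invented for illustration, not Wicket's or JSR-295's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Toy version of ID-based binding: the page side only ever refers to
// component IDs; knowledge of the model graph stays in the Java code.
public class IdBinding {
    static class Panel {
        private final Map<String, String> componentValues = new HashMap<String, String>();

        // the Java developer binds data to a component by its ID
        void bind(String componentId, String value) {
            componentValues.put(componentId, value);
        }

        // the "UI designer" side looks components up by ID, never by model path
        String render(String componentId) {
            return componentValues.get(componentId);
        }
    }

    public static void main(String[] args) {
        Panel panel = new Panel();
        panel.bind("firstName", "Frederic"); // model knowledge stays in Java code
        System.out.println(panel.render("firstName"));
    }
}
```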

All the other flaws are due to the age of JSF and the fact that it was written for the ugly request/response Web UI behavior.

So JSF is today at the stage of EJB 1.0. I can see JSF 2.0 pushing more in the wrong direction, and then finally a JSF 3.0 where developers will stop getting nervous shakes when they hear: JSF!

Friday, November 2, 2007

Playing with the full abstract enum!

Since Neal Gafter released the closures prototype, Ricky Clarkson has been having a lot of fun. He wrote 2 nice blog entries, both with "Java 7 Example" in their titles. So closures are going into Java 7: got that!

The Java code is a little scary at first, but after a while it works, you can read it. IMHO, all the inlining of closure syntax can get very confusing. I know it's supposed to be inlined, but I played with the code, and creating variables for the closures helps increase readability.

Anyway, I really like the fact that you can now really test, today, closures and their impact on your code.
All this gave me the idea to do the same for abstract enums.

The reasons for abstract enum (and there are more, I'm sure) are:
  1. I wanted abstract enum to solve the property binding issue in a type-safe way.
  2. I found out that you cannot use (a String) or Enum.ordinal() (an int) as annotation parameter values.
  3. I know there are way too many strings in annotations (methods, fields, scopes, states, packages, groups, ...). I don't like strings describing my code.
  4. While I was at it, Steven Coco asked for generics in enums, so that's there too. It happens to be quite an interesting feature, one that was already there from Neal (but buggy ;-).
For all those reasons I know abstract enum is a good thing. Changing javac to implement this feature is at least 2 orders of magnitude easier than implementing closures, so it was within my reach ;-)

I have exposed some Mercurial repositories for langtools and for the JDK. You need a Mercurial working copy of OpenJDK; then, inside langtools, run:
hg pull

You can "hg log" to see what's going on, then do:
hg up

The patch for the JDK is very small, and actually (in my view) it is just another valid way (in a Java 5 environment) to do Class.isEnum() and annotation parsing. The new JDK is needed for the tests to pass, because otherwise abstract enums are not considered enums by java.lang.Class :-(.

Type safe reflection

The solution for field binding is straightforward. The abstract enum looks like:
public abstract enum FieldDefinition {
    private Field reflectField;

    public Field getField() {
        if (reflectField == null) {
            Class modelClass = null;
            try {
                modelClass = this.getClass().getEnclosingClass();
                reflectField = modelClass.getDeclaredField(name());
            } catch (NoSuchFieldException e) {
                throw new ObjectMappingException("Field " + this +
                        " does not exist in " + modelClass, e);
            }
        }
        return reflectField;
    }
}

And using it in my Value Object looks like:
public class ModelMock {
    private String firstName;
    private String lastName;
    private int age;

    public static enum fields extends FieldDefinition {
        firstName, lastName, age;
    }
}
Of course it's not as good as properties as language support, but it solves my problem. From the above, I extended it to properties using generics: PropertyDefinition takes the property type as a generic parameter, so the generic getter/setter methods are typed. I took (types 1 and 2) from Stephen Colebourne's Weblog, Java 7 - Properties terminology, and converted it.
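For comparison, the closest you can get with today's plain enums is an interface plus a per-model enum (my own sketch, without the abstract enum prototype; the getField() body has to be repeated in every model class, which is exactly what abstract enum factors out):

```java
import java.lang.reflect.Field;

// The FieldDefinition idea with plain Java 5 enums: each model class declares
// its own enum implementing a common interface, with name() matching the
// field name by convention. Sketch only, for illustration.
public class PlainEnumFields {
    interface FieldDefinition {
        Field getField();
    }

    static class ModelMock {
        private String firstName;
        private String lastName;
        private int age;

        enum fields implements FieldDefinition {
            firstName, lastName, age;

            public Field getField() {
                try {
                    // the enum constant's name() is the field name
                    return ModelMock.class.getDeclaredField(name());
                } catch (NoSuchFieldException e) {
                    throw new RuntimeException("Field " + this + " does not exist", e);
                }
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(ModelMock.fields.firstName.getField().getName());
    }
}
```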

Before abstract enum there was:
public class MyBeanBefore {
    private String firstName;
    private String lastName;
    private BigDecimal height;
    private Date dob;

    public PropertyDefinition<MyBeanBefore, String> firstNameProperty() {
        return ReflectionAttachedProperty.create(this, "firstName");
    }
    public PropertyDefinition<MyBeanBefore, String> lastNameProperty() {
        return ReflectionAttachedProperty.create(this, "lastName");
    }
    public PropertyDefinition<MyBeanBefore, BigDecimal> heightProperty() {
        return ReflectionAttachedProperty.create(this, "height");
    }
    public PropertyDefinition<MyBeanBefore, Date> dobProperty() {
        return ReflectionAttachedProperty.create(this, "dob");
    }
}
always using strings :-( And now:
public class MyBean {
    private String _firstName;
    private String _lastName;
    private BigDecimal _height;
    private Date _dob;

    public static enum properties<V> extends PropertyDefinition<MyBean,V> {
        <String> firstName,
        <String> lastName,
        <BigDecimal> height,
        <Date> dob
    }
}
This code actually compiles and runs (with the strange _ prefix ;-). Cool, no? The full code is under subversion here.
Now, with this, I can find usages, refactor, and get compilation errors for every binding using a field that does not exist. Like here:
MyBean bean = new MyBean();
PropertyAdaptor<MyBean, Date> dobProperty = MyBean.properties.dob;
PropertyInstance<Date> dobPropertyInstance = dobProperty.getPropertyInstance(bean);
The type safety is not entirely true, since in my own model class I get a runtime error for mismatching fields in the "fields" enumeration. This can be solved in 2 ways: make fields a keyword in Java and generate it automatically ;-), and/or test your model class and fields enumeration properly.

abstract enum in Annotations

Like with closures, today when I encounter a problem in a project, I often end up thinking: that would have been a lot faster and nicer with closures, and that would be solved perfectly with abstract enums. Now, I know it's the standard disease of being too much into it ;) But still, I think it's true.

And abstract enum in Annotations is, for me, a really powerful feature.

So, one problem I had was with JPA schema names, cache names, and field associations. All this information needs to be provided as strings inside annotations. And of course they repeat themselves a lot, they get misspelled, and basically you lose the strong typing.
So I wanted to use enums (Schemas, Caches, Fields) to control the strings. But I hit a dead end: an enum's name() is not a string literal (not a compile-time constant), so you cannot use it in annotations.
Now, with abstract enum, I have way more flexibility and type safety.
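For contrast, the best today's Java offers is shared String constants, which keep the values in one place but lose the enum's type and behavior entirely (a minimal sketch; the @Cache annotation and the "actions" name are invented here):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Today's workaround: annotation members must be compile-time constants,
// so shared String constants are as good as it gets -- no enum type safety.
public class CacheNamesDemo {
    static final class CacheNames {
        static final String ACTIONS = "actions"; // hypothetical cache region name
    }

    @Retention(RetentionPolicy.RUNTIME)
    @interface Cache {
        String region();
    }

    @Cache(region = CacheNames.ACTIONS) // a constant variable, so this compiles...
    static class Action {
    }

    public static void main(String[] args) {
        // ...but nothing stops a misspelled literal elsewhere from drifting apart
        Cache cache = Action.class.getAnnotation(Cache.class);
        System.out.println(cache.region());
    }
}
```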
For caches, for example, I could have Hibernate declaring something like:
public abstract enum CacheDefinition {
    private CacheConcurrencyStrategy defaultUsage;
    private String defaultInclude;

    CacheDefinition(CacheConcurrencyStrategy defaultUsage, String defaultInclude) {
        this.defaultUsage = defaultUsage;
        this.defaultInclude = defaultInclude;
    }

    public CacheConcurrencyStrategy getDefaultUsage() {
        return defaultUsage;
    }

    public String getDefaultInclude() {
        return defaultInclude;
    }
}
and then in my application:
public enum MyCaches extends CacheDefinition {
    // hypothetical cache region with its defaults
    actions(CacheConcurrencyStrategy.READ_WRITE, "all");
}
to use in:
@Cache(MyCaches.actions) // hypothetical: the enum constant used as an annotation value
class Action {
    ...
}

I wrote a blog entry (JPA NamedQueries and JDBC 4.0) about how to solve the string association issue in JPA named queries. Frank Cornelis answered with an addition that enables the reuse of query methods. The main problem with this addition is that it adds more strings in annotations.
This case can also be solved with abstract enum.

So, if you made it here and you like the abstract enum language feature: vote for the RFE: 6570766.

Sunday, October 28, 2007

Web Beans and Modules!

After some good work, the JSR-299 group released an early draft, first published on Gavin King's blog, and now on the JCP site.

The spirit of Web Beans is really good, and the way the annotations came out is very promising. I have nothing but compliments for the Component Types, Component Bindings, Scopes, Injection, and Interceptors. The Event part takes the right approach (event type filters), but looks young at the moment.

Web Beans is really the state of the art "Educational Framework". The technical base has already been explored (Guice, Spring, Seam) and is not so complex, but the impact on developer thinking is really significant. Like every good "Educational Framework" (Struts, Spring, Seam), it makes it harder to do the "bad thing" (hacking, ugly coupling) than the right one (framework-driven injection and loose coupling).
I know that if developers start to use these Annotations their code will get cleaner, more readable and manageable.

But, reading through the spec, especially "4.4.2 Interceptor bindings" and "8. Packaging and configuration", I felt like something was wrong: where are my Web Beans modules?
I don't want to get into the OSGi vs. JSR-277 argument, but I was expecting a more modular approach to Web Beans injection.
What I mean is that I would like to see a Web Beans "core" defining all the above annotations for a pure J2SE environment. This would be the specification of how to use Dependency Injection annotations. On top of that, another specification would describe how to provide "Component Providers/Injectors" for JEE, JSF, EJB, MDB and so on.
For example, I want to be able to do:
@Log
private Logger log;

just by adding log4j-web-beans.jar to my classpath. In this jar, Log4J would provide the Web Beans component for my class and my environment.
So, with this approach, all the EJB, JSF & JEE stuff that really doesn't have to be there can be specified separately as: jee5-web-beans.jar, ejb-web-beans.jar, mdb-web-beans.jar, etc.
I think it would make the specification easier to read and a lot more flexible for all the great future components ;-)
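A rough sketch of what such a provider jar could do behind the scenes (the @Log annotation and the reflective injector are invented here, not part of any spec; a real provider would hook into the container instead of being called by hand):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.logging.Logger;

// Invented @Log annotation plus a trivial reflective injector, only to show
// the idea: a log4j-web-beans.jar style provider would do this for you.
public class LogInjection {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Log {
    }

    static class MyService {
        @Log
        private Logger log;

        Logger getLog() {
            return log;
        }
    }

    // populate every @Log field of the target with a logger for its class
    static void injectLoggers(Object target) {
        try {
            for (Field f : target.getClass().getDeclaredFields()) {
                if (f.isAnnotationPresent(Log.class)) {
                    f.setAccessible(true);
                    f.set(target, Logger.getLogger(target.getClass().getName()));
                }
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        MyService service = new MyService();
        injectLoggers(service);
        System.out.println(service.getLog() != null);
    }
}
```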

Since I don't like people who complain without proposing something of their own, I tried to see how to do this with the current specification. And basically, not much is missing.
Today, in an EJB3 environment, the first thing I do is create an interceptor that can inject components from Spring or other sources into my Session Beans. The injection is based on my project-level annotations, and it bridges Spring and EJB3 nicely.
So a first solution (not good, but...) is to use javax.interceptor.InvocationContext. If the Web Beans container adds some entries to getContextData(), I can find out the Component Type, the Scope, and other information, and decide how to populate all the @Log fields.

A nicer solution would be to have annotations specified by JSR-299 that would allow me to write something like:
public class Jee5ComponentProvider {

    public Object getEJB(WebBeansContext wbc) {
        try {
            // hypothetical: the JNDI name would come from the WebBeansContext
            return new InitialContext().lookup(wbc.getJndiName());
        } catch (NamingException e) {
            throw new RuntimeException(e);
        }
    }
}

Here I really did not investigate enough, but I know I'm missing @ComponentProvider and WebBeansContext in the current specification.
What do you think?

Wednesday, October 17, 2007

JSR-299 or Web Beans

After postponing for too long, I finally decided to look at Web Beans or JSR-299.

The story is that for our JavaEdge 2007 seminar we invited Gavin King. He accepted to come and talk about his new JSR, and this was the first time I encountered "Web Beans".
And to tell the truth: The name gave me a cold shower ;-)
What? Gavin started as a backend guru for O/R mapping, then decided to target the issue of Web development with Seam, and now is "maybe" going even higher in the stack with some UI components spec!
It really sounded like Gavin was leading the new spec for some JSF UI widgets :-(

This name really put me off, and so I postponed reading about "Web Beans".
That was a mistake, and a really bad move.

When I saw in Gavin's blog that he was excited about exposing the work done by the JSR-299 group, I decided to get into it.

And the conclusion is:
  1. From my personal technical glossary, "Web Beans" has nothing to do with Web, and Beans are not UI JavaBeans.
  2. Gavin did it again.
Before Hibernate, O/R mapping tools concentrated more on showing off their feature lists than on helping developers have a persistent model. For Gavin, what's important is how the user (the poor developer) will communicate with the framework. Powerful features should not show off and complicate the API or the tool. Once the usage is clear, the framework follows.
So after Seam and easy @Conversation, he's leading this great JSR.

"Web Beans" is basically a good (meaning pushing forward) standardization of IoC and DI concepts using Annotations. The overview from Gavin slides of the Silicon Valley JUG, is a really good start to understand what JSR-299 is about.

"Web Beans" really uses the good Guice framework, removes the issue of static scopes in Seam, and helps you create your own meaningful project Annotations. And all this is done true to the Java spirit: in a readable way.
JSR-299 answers some of my needs I had in AADA and I hope it will help projects moving towards creation of more custom Annotations.

So, please, change the name...
  • First, for once the JSR number is very easy to remember (300 - 1, an easy association to find).
  • Second, I have always associated Beans with JavaBeans, Swing UI or Struts, and I never felt it was connected with the concept of components used in Spring, Guice, Seam or JSR-299. By the way, in "Web Beans" there is no @Bean, only @Component.
  • Third, I find that the term API (Application Programming Interface) does not match today's specification code and techniques. EJB3, JAX-WS and so on don't export a DLL-style API; they help you code. For example, in JPA (Java Persistence API) more than 90% of the code is annotations, not interfaces. It should be Java Persistence Annotations, no? You can argue that annotations are indeed interfaces in Java, and that it would not change the acronym ;-)
Anyway here are 2 possible names:
- Dependency Injection API (or Annotations)
- Components Injection API (or Java Components Injection Annotations ;-)

Finally, JSR-299 DIA also generalizes the concept of Injector injected by injection from the Dependency Injection framework, and I'm really happy about it ;-)

Wednesday, August 1, 2007

JPA NamedQueries and JDBC 4.0

In a project doing a migration from EJB 2.0 to EJB 3, I found this:
@Entity(name = "Action")
@NamedQueries({
    @NamedQuery(name = "Action.findAll", query = "SELECT o FROM Action o"),
    @NamedQuery(name = "Action.findByExtCode", query = "SELECT o FROM Action o WHERE o.externalCode = ?1"),
    @NamedQuery(name = "Action.findByDescription", query = "SELECT o FROM Action o WHERE o.description = ?1"),
    @NamedQuery(name = "Action.findManualActions", query = "SELECT o FROM Action o WHERE o.manual=?1"),
    @NamedQuery(name = "Action.findSelectedActions", query = "SELECT o FROM Action o WHERE <> 3"),
    @NamedQuery(name = "Action.findFlowActions", query = "SELECT o FROM Action o WHERE"),
    @NamedQuery(name = "Action.findByActionFlowId", query = "SELECT o FROM Action o JOIN o.actionFlows af WHERE = ?1")
})
The code looks like this because named query names in JPA need to be unique across the WHOLE persistence unit. So we agreed on the naming convention "[entity name].[finder name]" for the query names.
It is actually quite a bit nicer and more manageable than EJBQL in XML files, but there is still quite a lot of copy/paste, Strings that are not constants, and using the named queries here is more problematic than EJB 2.0 home interfaces.
The usage looks like:
Query namedQuery = em.createNamedQuery("Action.findByExtCode");
namedQuery.setParameter(1, "001");
ActionBean actionBean = (ActionBean) namedQuery.getSingleResult();
And this is for only one parameter...

The possible code errors (due to the lack of static typing) we get here are:
  1. errors in the string name of the namedQuery,
  2. errors in parameter positions (the named query annotations are in the model, not close to the business logic executing the queries),
  3. errors in casting.
So, looking at this, I thought about JDBC 4.0 (JSR 221, chapter 20 of the spec) and finally managed to create a nice dynamic proxy doing the work for JPA.
Looking at the code above, it is quite clear that a JPA named query can be defined as an interface method. It has:
  • a name (entityName + methodName),
  • a list of parameters (ordered or named),
  • and a result (list or single).
So, with the dynamic proxy the usage code looks like:
ActionQuery actionQuery = NamedQueriesFactory.getQueryProxy(ActionQuery.class, em);
ActionBean action = actionQuery.findByExtCode("001");
And the Queries interface:
@JpaQueriesInterface(prefix = "Action")
public interface ActionQuery {
    public Collection<ActionBean> findAll();
    public ActionBean findByExtCode(String extCode);
    public ActionBean findByDescription(@JpaParamName("description") String description);
    public Collection<ActionBean> findManualActions(boolean manual);
    public Collection<ActionBean> findSelectedActions();
    public Collection<ActionBean> findFlowActions();
    public Collection<ActionBean> findByActionFlowId(long actionFlowId);
}
This gets all the advantages of strong Java typing.
The code of my small running example is here: and it's using Maven, of course...
Now, the next step is to use the annotated query interface as a NamedQuery provider, so it will really look like JDBC 4.0.
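The factory itself is little more than java.lang.reflect.Proxy. Here is a stripped-down sketch of the mechanism (my reconstruction, showing only the method-name-to-query-name mapping; the real code would call em.createNamedQuery(), bind the arguments, and dispatch on the return type):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Stripped-down sketch of the NamedQueriesFactory idea: the proxy turns a
// call to findByExtCode() into the query name "Action.findByExtCode",
// following the "[entity name].[finder name]" convention from the post.
public class QueryProxyDemo {
    interface ActionQuery {
        String findByExtCode(String extCode);
    }

    @SuppressWarnings("unchecked")
    static <T> T getQueryProxy(final String prefix, Class<T> queryInterface) {
        return (T) Proxy.newProxyInstance(
                queryInterface.getClassLoader(),
                new Class<?>[] { queryInterface },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) {
                        String queryName = prefix + "." + method.getName();
                        // the real implementation would run the named query here;
                        // we return the computed name just to show the mapping
                        return queryName;
                    }
                });
    }

    public static void main(String[] args) {
        ActionQuery q = getQueryProxy("Action", ActionQuery.class);
        System.out.println(q.findByExtCode("001"));
    }
}
```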

Saturday, July 14, 2007

The need for RAE

With all the discussions around language changes in Java 7, there is a need for a democratic vote on what should or should not be included.
But in today's Sun Bug Database you can only vote for an RFE (Request For Enhancement); you cannot vote against. And when you look at the Top 25 RFEs, you can see that there is big controversy around most of them, and that Sun acts as the final "benevolent dictator".
For example here is an extract of the top 25 RFEs that may have an impact on Java7/OpenJDK and the language:
Votes Bug ID Synopsis
580 4449383 Support For 'Design by Contract', beyond "a simple assertion facility"
341 4820062 Provide "struct" syntax in the Java language
303 4267080 break up rt.jar into downloadable-on-demand components to reduce jre size
197 4093687 Extension of 'Interface' definition to include class (static) methods.
172 4905919 RFE: Operator overloading
171 4801527 Support Repository in Java Web Start
152 4727550 Advanced & Raw Socket Support (ICMP, ICMPv6, ping, traceroute, ...)
125 4313887 New I/O: Improved filesystem interface
122 4129445 An API to incrementally update ZIP files
122 4650689 RFE: Java needs public API for FTP
113 4648386 Simplify deployment and versioning by embedding JAR files within each other

From this list, I'm personally for "java kernel: 4267080", "New IO: 4313887", against "Operator overloading: 4905919", "Design By Contract: 4449383", and still undecided on "struct: 4820062".
So, how can we tally all the votes?
Sun needs to create an RAE (Request Against Enhancements) in parallel to each controversial RFE.
To tally the votes, we would need better access to the Bug Database, since listing all RFEs and RAEs by importance is impossible from the current Web interface.
The need for RAE came from the big controversy around checked exceptions, where I was really missing a nice tally of the for/against votes.

Today, Java is open source and the full activation of the OpenJDK Governance Board is in progress. I would really like to know what to expect in the language. Having a tally of popular RFEs/RAEs would help.
But until the process of the Governance Board is clarified, Sun's position will be the decisive one. And here the current position of Danny Coward, Java SE platform lead, is very confusing: "Seeking a small number of changes for Java SE 7 platform" (from his JavaOne presentation).

Tuesday, July 10, 2007

Voting for Good Exception Handling

While getting ready for a conference I gave for Sun (Java 7 - A lot to be waiting for!), I came across this blog entry: Voting for Checked Exceptions. It was in response to Neal Gafter's blog entry: Removing Language Features?.
The list of comments is very long, and I never found the time to read them, until now. I really liked what went on there (except the personal attacks), and I want to try to summarize the arguments as I view them. The arguments are also exposed in Neal's blog comments, but there were a lot fewer of them there...
I'm not trying to be impartial, since for me it's clear: checked exceptions should go.
Page 49 of the presentation has the bullets that summarize my view on Checked Exceptions.

So here are the valid arguments I saw in the blog comments:

Checked Exceptions avoid the digging of what can go wrong


In other languages that do not have checked exceptions, it takes a good amount of time, thinking, QA cycles, and production crashes before you know which exception types can be thrown by a specific method.
Basically, since the API does not declare what it can throw, as a caller you need to "guess" what can happen, or catch everything and try to re-throw what should not have been caught.


The point is totally valid, but for me it does not tip the balance to either side of the checked/unchecked issue.
A good API should declare a good throws clause listing what can happen. This is good design and OO practice. And a caller that knows what to do in case of an exception should catch them and handle them correctly. This is valid for exceptions in general, checked or unchecked. The issue is that in Java nobody declares unchecked exceptions in the throws clause; it looks stupid. I don't know why; I think it's just a stupid habit.

Still, there is a point against checked exceptions here. As an API writer, you are "forcing" the caller to catch ALL the "checked" exceptions declared in your throws clause. This simple fact breaks OO, forces some design decisions on the caller (where to put the exception handlers and method throws clauses), and generates all the "bloat".

Checked Exceptions help you remember that you need to handle them

While coding, you encounter methods throwing exceptions, and so you need to handle them at some point. With checked exceptions, for a given class or module, the list of exceptions to handle is well defined.
This point was made especially against lazy developers who would otherwise ignore unchecked exceptions altogether: if the compiler does not bother them, exceptions get ignored.

Here the point fails on multiple accounts:
1) Forcing a lazy developer to catch an exception is worse than letting him go along. You end up with things like:
try {
    o = getObject(id);
} catch (PersistenceException e) {
    // Should not happen, I just saved it before
}
This is my personal nightmare on many projects. When the code is full of these, it's totally impossible to debug, understand the behavior, or move forward. It's desperate. Simply put:
"Empty catch block are worse than no catch at all"

2) Listing and catching all checked exceptions does not cover your exception-handling needs. Even in Java there are unchecked exceptions, and in today's frameworks you have more and more of them. So exceptions will slip through anyway.

3) Putting the error handling in the middle of the logical code is a very bad practice. You end up copy/pasting catch blocks all over, and if you need a small change in error handling, well... forget it.
Error handling is a layer of your module/component/architecture that should be transparent and decoupled from your code.
This is the only way to make sure you are handling ALL exceptions correctly. This is the power and design of modern frameworks using AOP interceptors. So, as in Spring, JBossMC, EJB3 and Hibernate, you need a framework layer to catch Throwable and find out which exception handler should take care of what, at which layer. Perfect, unbeatable, robust, user friendly, HA, and more.
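The "framework layer" idea, reduced to a toy sketch (my own illustration, not the actual Spring or EJB3 interceptor API): one boundary catches everything the business code throws and dispatches to a registered handler by exception type.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy version of a single error-handling boundary: business code throws
// freely, one layer catches Throwable and routes it to the right handler.
public class ExceptionBoundary {
    interface Handler {
        String handle(Throwable t);
    }

    private final Map<Class<? extends Throwable>, Handler> handlers =
            new LinkedHashMap<Class<? extends Throwable>, Handler>();

    void register(Class<? extends Throwable> type, Handler handler) {
        handlers.put(type, handler);
    }

    String run(Runnable businessLogic) {
        try {
            businessLogic.run();
            return "ok";
        } catch (Throwable t) {
            // first registered handler whose type matches wins
            for (Map.Entry<Class<? extends Throwable>, Handler> e : handlers.entrySet()) {
                if (e.getKey().isInstance(t)) {
                    return e.getValue().handle(t);
                }
            }
            return "unhandled: " + t;
        }
    }

    public static void main(String[] args) {
        ExceptionBoundary boundary = new ExceptionBoundary();
        boundary.register(IllegalStateException.class, new Handler() {
            public String handle(Throwable t) {
                return "recovered from: " + t.getMessage();
            }
        });
        System.out.println(boundary.run(new Runnable() {
            public void run() {
                throw new IllegalStateException("stale data");
            }
        }));
    }
}
```

The business code stays free of catch blocks, and changing the error-handling policy touches one place instead of every call site.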

You don't have to handle checked exceptions, just pass them on

If, inside a method, you are calling a method with checked exceptions and you don't know how to handle them, just add them to your (big) throws clause; someone up the call stack will take care of it.

This is the least valid argument for checked exceptions, and the main reason why Java should have only unchecked exceptions.
I don't know many projects where any developer can just add a checked exception to a signature without blinking (the same goes for removing one). Checked exceptions make an API's throws clause fixed for eternity. If you add, you suffer; if you remove, you're lynched...
So, the only way out is to create meaningless generic super exception classes (like IOException) that every method throws.
Basically, this argument contradicts the 2 previous ones, which are a lot more valid.
On the other hand this feature is mandatory for normal coding. Everybody agree:
"If you don't know what to do with an exception, don't handle it"

This is critical for "robust" applications, so you need to find a way to answer this argument. Here is the proof that checked exceptions contradict their own benefits.
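One common workaround, sketched here with hypothetical names, is to wrap the checked exception in an unchecked one at the module boundary (java.io.UncheckedIOException, added in later JDKs, exists for exactly this purpose). The public signature stays stable, and callers who don't know what to do simply let it fly.

```java
import java.io.IOException;
import java.io.UncheckedIOException;

public class WrapChecked {

    // Public API keeps a clean signature; the checked IOException is wrapped
    // into an unchecked one, so the throws clause never leaks to callers.
    static String load(String name) {
        try {
            return doLoad(name);
        } catch (IOException e) {
            throw new UncheckedIOException("Error loading " + name, e);
        }
    }

    // Internal method still using a checked exception.
    static String doLoad(String name) throws IOException {
        if (name.isEmpty()) throw new IOException("empty name");
        return "data:" + name;
    }

    public static void main(String[] args) {
        System.out.println(load("demo"));
    }
}
```

The original cause is preserved for the central handler; only the compile-time obligation is removed.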

Checked Exceptions are recoverable, Unchecked are bugs

The principle of the separation between checked and unchecked is well defined by ELH:
"as in section 8 of Effective Java. [...] checked exceptions are for unpredictable environmental conditions such as I/O errors and XML well-formedness violations while unchecked exceptions are for program failures that should be caught during testing, such as array index out of bounds or null pointers."

Classifying, and understanding what the error IS, is a critical step in good error handling. You need to throw the right type of exception, for the right type of error, with the maximum amount of information (which file, read or write, which DB, which XML tag, ...).
Forcing yourself to create meaningful exceptions is a difficult step, but one that always pays off. Add all the parameters you can, create meaningful messages, and use the right exception class.
But the separation "unpredictable environmental conditions" vs. "program failures" is highly subjective, and always breaks down once you climb one layer in your architecture. The kind of comment "Should not happen, I just saved it before" proves my point.
Typing is good (IOException, NPE), but enforcing the global decision, for all Java software ever written, that IOException is an "unpredictable environmental condition" does not make any sense.
Furthermore, the "forcing" of the handling of "unpredictable environmental conditions" generates a lot of "development by exceptions". Basically, if you know that the file may not be there, I really prefer that the developer just create a File object and test file.exists() instead of waiting for an IOException.
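A minimal sketch of that preference (the describe method is invented for the example): a missing file is treated as a normal, testable outcome rather than driving the control flow through an exception.

```java
import java.io.File;

public class ExistsFirst {

    // If a missing file is an expected, testable condition, check for it
    // up front instead of routing the "not found" case through IOException.
    static String describe(String fileName) {
        File file = new File(fileName);
        if (!file.exists()) {
            return "missing: " + fileName; // normal outcome, no exception
        }
        return "found: " + fileName + " (" + file.length() + " bytes)";
    }

    public static void main(String[] args) {
        System.out.println(describe("no-such-file.txt"));
    }
}
```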

Experiences with Checked Exceptions

1) In my experience, to handle an exception correctly you need a lot of infrastructure information: how to display errors to the user (Web or fat client), how to log, how to translate (code, language), how to trap, severity detection, transaction access, ...
Providing all this information to the inner logical code that needs to wrap checked exceptions is a big burden for the application. Just pass the exception on; my exception handler will handle it correctly.
Creating an exception handler with IoC and/or AOP that has access to all the above managers is quite easy and clean.

2) We are doing a lot of migration to JPA lately, and more than 50% of our headaches are about checked exceptions. We found out that checked exceptions indirectly promote the bad habit of development by exceptions:
try {
    o = getObject(id);
} catch (PersistenceException e) {
    // Object not found, create it
    o = create();
}
And changing the API signatures, removing FinderException from EJB 2.0, or the annoying CloneNotSupportedException, is a true nightmare.

3) Good coding with checked exceptions is sometimes very, very tedious. For example, the correct management of FileInputStream.close() and its IOException gets out of hand:
FileInputStream is = null;
try {
    is = new FileInputStream(fileName);

    // Some work...
    int n = is.read(buffer);

    // Need to close here so the caller will know the file is in an unstable state
    is.close();
    // Need to set null to avoid a double close
    is = null;
} catch (IOException e) {
    throw new PersistenceException("Error reading file " + fileName, e);
} finally {
    if (is != null) {
        try {
            is.close();
        } catch (IOException ignore) {
            // Well, I tried to close, but it does not close
            // So maybe I should close again ;-)
            log.debug("Ignoring exception on close of " + fileName, ignore);
        }
    }
}

Independently of the discussion on closures, I really hope that the compiler enforcement of catching checked exceptions will be removed in the OpenJDK very soon...