Saturday, December 19, 2009

Structuring An Android Application For Reuse

The Android application I'm working on consists mostly of code that's not Android-specific:

- Database schema creation.
- SQL queries.
- Data access (objects mapped to and from database).
- Business logic.

There is only a small amount (less than 10%) of Android-specific GUI and notification code.

Other than the code that creates the database schema (which is database-specific), all of the non-Android code is device- and OS-agnostic. It just needs a Java runtime.

I wanted to be able to easily reuse the portable code to run the application on a website, and on other Java-based handhelds.

The first step involved abstracting the database so that any database can be used, instead of just Android's binding to SQLite:

http://code.google.com/p/blog-code-hosting/source/browse/#svn/trunk/android/reuse/CommonPortable/src/main/java/com/jimandlisa/common/database

and also abstracting the logger:

http://code.google.com/p/blog-code-hosting/source/browse/#svn/trunk/android/reuse/CommonPortable/src/main/java/com/jimandlisa/common/logging
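As a rough sketch, the abstraction amounts to a small interface that an Android/SQLite binding, a JDBC binding, or a test double can each implement. The names here are hypothetical, not the ones in the linked sample code (the real code differs in detail):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical portable database interface; the real code linked above
// differs in detail.
public interface PortableDatabase
{
    void execSql(String sql);
    List<Object[]> query(String sql);
    void close();
}

// A trivial recording implementation, handy for exercising business logic
// in plain-JVM unit tests; real bindings would wrap SQLiteDatabase or JDBC.
class RecordingDatabase implements PortableDatabase
{
    final List<String> statements = new ArrayList<String>();

    public void execSql(String sql)
    {
        statements.add(sql); // record the statement instead of running it
    }

    public List<Object[]> query(String sql)
    {
        return new ArrayList<Object[]>(); // stub: no rows
    }

    public void close()
    {
    }
}
```

The business logic only ever sees PortableDatabase, so it neither knows nor cares whether it's running on a handheld, a website, or in a unit test.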

Once that was done, I could test the database and logger independently from Android:

http://code.google.com/p/blog-code-hosting/source/browse/#svn/trunk/android/reuse/CommonPortableTestUtils/src/main/java/com/jimandlisa/common/testutils

http://code.google.com/p/blog-code-hosting/source/browse/#svn/trunk/android/reuse/CommonDatabaseTestUtils/src/main/java/com/jimandlisa/common/testutils/database

http://code.google.com/p/blog-code-hosting/source/browse/#svn/trunk/android/reuse/CommonPortableTests/src/test/java/com/jimandlisa/common/tests

The portable business logic is in another set of projects:

MyAppPortable
com.jimandlisa.myapp.common
com.jimandlisa.myapp.controller
com.jimandlisa.myapp.database
com.jimandlisa.myapp.model

MyAppPortableTestUtils
com.jimandlisa.myapp.testutils

MyAppPortableTests
com.jimandlisa.myapp.tests


Finally, the Android-specific code is in two more projects:

MyAppAndroid
com.jimandlisa.myapp

MyAppAndroidTests
com.jimandlisa.myapp.android.tests


The inter-project dependencies look like this:

CommonPortable:
n/a

CommonPortableTestUtils:
CommonPortable

CommonDatabaseTestUtils:
CommonPortable
CommonPortableTestUtils

CommonPortableTests:
CommonPortable
CommonPortableTestUtils
CommonDatabaseTestUtils

MyAppPortable:
CommonPortable

MyAppPortableTestUtils:
CommonPortable
MyAppPortable

MyAppPortableTests:
CommonPortable
CommonPortableTestUtils
CommonDatabaseTestUtils
MyAppPortable
MyAppPortableTestUtils

MyAppAndroid:
CommonPortable
MyAppPortable

MyAppAndroidTests:
CommonPortable/target/classes
MyAppPortable/target/classes
MyAppAndroid/bin
CommonPortableTestUtils
MyAppPortableTestUtils


The CommonDatabaseTestUtils are broken out from CommonPortableTestUtils because they depend on JARs that aren't loaded on the device, so they can't be included in the dependencies for MyAppAndroidTests.

MyAppAndroidTests depends on target/classes and bin for reasons explained in http://jimshowalter.blogspot.com/2009/10/developing-android-with-multiple.html.

So was it worth it?

Well, yes, and no.

There are definitely advantages:
  • I can write any other database-intensive application by cloning this setup and reusing the code in com.jimandlisa.common.*.
  • The database-abstraction layer doesn't have the problems reported in Google Android issues #3302, 3296, and 3304.
  • Having much of the code be regular, portable Java makes it easier to write coverage tests because EasyMock, JMockit, and Cobertura are available.
But there are also disadvantages:
  • This approach creates a lot of separate projects, and that's kind of a pain to set up.
  • I have to maintain the database-abstraction layer.
  • More work is required to mature the database-abstraction layer. (It currently doesn't support binding args or compound keys, all keys must be longs, etc.)
  • Because the database-abstraction layer isn't a native binding, it's slower. For a cellphone app with a few database operations per hour, this doesn't matter, but it will matter on a website.
On balance, the approach makes sense for the particular application I'm working on, because it is almost entirely not tied to Android. But the approach wouldn't be appropriate for a GUI-intensive application that makes a lot of calls to Android APIs.

It might be better to model the application, and generate the code. With so many devices, OSs, and vendors, code generation might be the only pragmatic way to develop a cross-platform application.

Update: The sample code has been updated to add createSchema and updateSchema to AbstractDatabase in order to make execSql private.

Sunday, October 4, 2009

Developing Android With Multiple Eclipse Projects

The Android application I'm developing at home has non-Android-specific schema, database access, and business logic, so it makes sense to partition the application into multiple Eclipse projects, some Android-specific and others not.

One benefit of using separate projects is that I can write a second version of the application that runs on a website instead of on a handheld without having to rewrite the bulk of the code.

But there's a catch: when the Android-specific unit tests are moved into a separate Eclipse project, the loader reports "Class resolved by unexpected DEX error" and the unit tests fail with "java.lang.IllegalAccessError: cross-loader access from pre-verified class". (In 2.1 the error remains, but the message has changed to "Class ref in pre-verified class resolved to unexpected implementation".)

I first ran into this problem in Android 1.5, and upgraded to 1.6 hoping it would go away, because in 1.6 using separate Eclipse projects for unit tests is recommended:

http://mobilebytes.wordpress.com/2009/09/19/new-eclipse-android-test-features


http://www.danielswisher.com/2009/06/as-new-android-developer-i-have-been.html

There is even a toolbar icon and wizard to create an Android test project that points to a separate Android project to be tested.

Googling around, I found other programmers having the same or similar problems:

http://groups.google.com/group/android-platform/browse_thread/thread/20ff41b925e04dd4

But googling also turned up programmers who use multiple projects without hitting the problem:

http://www.anddev.org/unit_testing_private_methods-t7847.html

What makes some programmers have no problem using multiple Eclipse projects to separate Android application code from Android unit-test code, while other programmers get the loader errors?

The problem shows up when the loader sees the same class loaded more than once:

http://groups.google.com/group/android-developers/browse_thread/thread/3440dd8e11a1b481

which can happen if the main Android code and the Android unit tests share code from another Eclipse project:

http://groups.google.com/group/android-developers/browse_thread/thread/5537ae10e4143240

which is the setup I have.

To reproduce the problem:
  1. Create an ordinary Java project called LoaderProblemCommon, and create this class in it:

     package com.loaderproblem;

     public class LoaderLogger
     {
         public static void log(String message)
         {
             System.out.println(message);
         }
     }

  2. Create an Android project called LoaderProblem, add the LoaderProblemCommon project to its build path, and create this class in it:

     package com.loaderproblem;

     import android.app.Activity;
     import android.os.Bundle;

     public class Hello extends Activity
     {
         @Override
         public void onCreate(Bundle savedInstanceState)
         {
             super.onCreate(savedInstanceState);
             setContentView(R.layout.main);
             logHello();
         }

         public static void logHello()
         {
             LoaderLogger.log("Hello");
         }
     }

  3. Add the Hello activity to the LoaderProblem manifest.
  4. Create an ordinary Java project called LoaderProblemTestUtils, and create this class in it:

     package com.loaderproblem.testutils;

     public class TestUtil
     {
         public static final void doNothing()
         {
         }
     }

  5. Create an Android test project called LoaderProblemTests, add the other three projects to its build path, and create this class in it:

     package com.loaderproblem.tests;

     import android.test.AndroidTestCase;

     import com.loaderproblem.Hello;
     import com.loaderproblem.LoaderLogger;
     import com.loaderproblem.testutils.TestUtil;

     public class LogHelloTest extends AndroidTestCase
     {
         public void testLogHello()
         {
             LoaderLogger.log("Hello");
             Hello.logHello();
             TestUtil.doNothing();
         }
     }
The result should look like this:

Compile all four projects, create a brand-new AVD, and run LoaderProblem as an Android application. It should deploy and display:

Now try to run LoaderProblemTests as an Android JUnit test. It will fail with:


Or, for 2.1, it will fail with:

I first tried to fix this problem by editing the build path for LoaderProblemTests so it only depends on the LoaderProblemTestUtils Eclipse project, and then adding back the necessary classes by adding dependencies on the other projects' class folders:



This "worked", but it introduced several unwanted side effects:
  • Now that the dependent project's source is seen as a class folder, one can no longer smoothly refactor across projects or navigate into classes (via F3, for instance), because the files are treated as compiled objects. Attaching the source to the class-folder library sort of works, but it brings up an uneditable version of the source. [reported by a reader]
  • It screws up EclEmma. For details see https://sourceforge.net/tracker/?func=detail&atid=883351&aid=2934081&group_id=177969.
The proper way to fix this problem was reported by another reader:
  1. In the build path for LoaderProblem, click on "Order and Export".
  2. Check the box for LoaderProblemCommon, to let LoaderProblem export the LoaderProblemCommon classes as well as its own.
  3. In the build path for LoaderProblemTests, remove the dependency on the LoaderProblemCommon project. That dependency is now covered by the LoaderProblem dependency.
  4. Clean, build, rerun LoaderProblem, then rerun LoaderProblemTests.
This solution also fixes the EclEmma problem.
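For reference, checking Order and Export typically shows up in LoaderProblem's .classpath as an exported project entry along these lines (the exact attributes vary by Eclipse version, so treat this as an approximation):

```xml
<classpathentry exported="true" combineaccessrules="false" kind="src" path="/LoaderProblemCommon"/>
```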

Saturday, September 26, 2009

Using Eclipse With Large Code Bases, Part V

Previous posts described how to use file-system links and various Eclipse features to make a large, single-rooted source tree look like a set of smaller, multi-rooted Eclipse projects.

But there's a catch: the Perforce Eclipse plugin doesn't work, because the Perforce client spec specifies the root of the actual source tree on disk, but the Eclipse Perforce plugin thinks the source tree is rooted at the directory that contains the Eclipse projects.

For example, if we start with this client spec:
Client: originalclient
Root: C:\
View:
    //trunk/src/... //originalclient/src/...
and use links to create this Eclipse structure:
C:\projects\
    Child1\
        .classpath
        .project
        src\
            com\
                parent\
                    child1 → linked to C:\src\com\parent\child1
    Child2\
        .classpath
        .project
        src\
            com\
                parent\
                    child2 → linked to C:\src\com\parent\child2
then the Perforce root is C:, but the Eclipse Perforce plugin thinks the root is C:\projects.

To fix this, we updated our link-generation script to generate a second client spec, where the client spec's root is the directory that contains the Eclipse projects, and the client spec maps individual sub-roots to their counterparts in the Eclipse projects (a one-for-one mapping between Perforce and the links):
Client: myclient
Root: C:\projects
View:
    //trunk/src/com/parent/child1/... //myclient/Child1/src/com/parent/child1/...
    //trunk/src/com/parent/child2/... //myclient/Child2/src/com/parent/child2/...
Note that although we have three different "views" of the source tree (two in Perforce and one in Eclipse), there is only one actual occurrence of each source file on disk, which means changes are kept in sync among all three views.

So if Perforce can do this mapping, why do we need the links at all? In many cases, you might not. In our case, though, the file layout in Perforce is really complicated, and it would be painful to create a client spec that maps all of the files to the file system. For example, there are sixty or so files at the very root of the Perforce tree, and these would have to be mapped to a directory on disk while using client-view exclusions to prevent mapping everything else in the root to that same directory. For us, it's easier to use the second client spec to map only the files we need in Eclipse--and to map them to directories that are actually links into the full tree.

A Powerful And Flexible Java Dependency Analyzer

An earlier post evaluated two Java dependency-analysis plugins for Eclipse. Both plugins are very good at providing an interactive, dynamic view of the code, which allows a programmer to explore dependencies and evaluate the impact of changes.

Recently I had a different requirement: generate reports that summarize dependencies, and provide those reports to management in a format that doesn't require having the code or Eclipse available.

A bit of googling brought up Dependency Finder, which turned out to be an excellent tool--scalable, fast, flexible, well-documented, and easy to install. It comes with a set of reports with numerous options, and it can be customized to produce specialized reports. It also has a GUI, although I didn't need it for this task.

Support is excellent--a question to the author was answered within a few minutes.

Saturday, September 12, 2009

Test Utils are now open-source

At work, we've been encouraged to find ways to open-source technologies that are useful but aren't core to our business.

While writing unit tests over the past several years, I developed some utilities that make it easier to perform some common test tasks, particularly when used with JMockit and EasyMock.

The utilities are now available as open source.

Update: The genius who develops JMockit has added so many features that it's no longer necessary to use reflection to populate fields, so TestUtils isn't needed with JMockit. TestUtils are still useful with other mocking tools, and when you just want to use reflection for something in a test that doesn't use mocking.
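To give a flavor of the reflection-based field population mentioned above, here is a minimal sketch (not the actual TestUtils API):

```java
import java.lang.reflect.Field;

public class FieldSetter
{
    // Sets a (possibly private) field by name, walking up the class
    // hierarchy so inherited fields work too.
    public static void set(Object target, String fieldName, Object value)
    {
        for (Class<?> clazz = target.getClass(); clazz != null; clazz = clazz.getSuperclass())
        {
            try
            {
                Field field = clazz.getDeclaredField(fieldName);
                field.setAccessible(true);
                field.set(target, value);
                return;
            }
            catch (NoSuchFieldException e)
            {
                // Not declared at this level; keep walking up the hierarchy.
            }
            catch (IllegalAccessException e)
            {
                throw new RuntimeException(e);
            }
        }
        throw new IllegalArgumentException("No field named " + fieldName);
    }
}

// Example target with a private field, as a test might use it.
class Widget
{
    private String name = "default";

    String name()
    {
        return name;
    }
}
```

A test can then inject a mock or fixture into a private field without the class under test needing a setter.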

Sunday, August 30, 2009

Selecting A Coverage Tool

At work, we use Clover to track coverage.

For programming at home, a Clover license is way too expensive (not as bad as those software vendors that used to not list prices and instead said "Call for quote", but close).

Also, we've had some issues with Clover:
  1. It sometimes falsely claims that code is covered when it isn't.
  2. It doesn't support branch coverage.
  3. It can't instrument assignments in conditionals.
(The Clover rep confirmed the above problems a couple of years ago. These may have been fixed by now. However, because the licenses are so expensive, we haven't upgraded, so we're stuck with the problems even if they're fixed in the current version. Also, the Clover rep didn't think the third issue would ever be fixed due to the way they instrument the code.)

For a simple example of problem #2, start with a function like this:
public boolean eval(boolean x, boolean y, boolean z)
{
    return x && (y || z);
}
Clover scores 100% coverage if the entire expression evaluates at least once to true and at least once to false, so this is sufficient:
assertTrue(eval(true, true, false));
assertFalse(eval(true, false, false));
But exercising all of the meaningfully distinct ways the expression can evaluate requires more tests:
assertTrue(eval(true, true, false));
assertTrue(eval(true, false, true));
assertFalse(eval(true, false, false));
assertFalse(eval(false, true, false));
Problem #3 means that this code can't be covered:
while ((currentLine = stream.readLine()) != null)
Covering that code with Clover requires rewriting it as a do/while.
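That rewrite might look like this sketch (wrapped in a method over a BufferedReader so it runs standalone; the loop body is a stand-in for whatever the real code does with each line):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class DoWhileCoverage
{
    // Clover-friendly form: the assignment happens in the loop body,
    // not inside the while conditional.
    public static List<String> readAll(BufferedReader stream) throws IOException
    {
        List<String> lines = new ArrayList<String>();
        String currentLine;
        do
        {
            currentLine = stream.readLine();
            if (currentLine != null)
            {
                lines.add(currentLine);
            }
        } while (currentLine != null);
        return lines;
    }

    public static void main(String[] args) throws IOException
    {
        BufferedReader stream = new BufferedReader(new StringReader("line1\nline2"));
        System.out.println(readAll(stream)); // prints [line1, line2]
    }
}
```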

For the above reasons, I went in search of a coverage tool other than Clover for programming at home.

Googling around, EMMA and EclEmma (an Eclipse plugin for EMMA) kept showing up, so I tried them.

Unfortunately, where Clover incorrectly says that uncovered code is covered, EMMA incorrectly says that covered code is not covered:



True, it only screws up occasionally, but if you're five lines short of 100% coverage, and the five lines are spurious tool errors, it's annoying.

This was unfortunate, because the EclEmma plugin is very fast and completely non-intrusive, and would be great to use. Plus, it's currently the only coverage tool that works with Android.

After reporting these problems, I looked around for alternatives. A bunch of coverage tools were listed here, and I tried each of them out. Most didn't install, or weren't compatible with the latest Eclipse, or hadn't been maintained for a number of years, etc.

But Cobertura works really well.

Unfortunately, there's no mature Eclipse plugin for Cobertura, and it's slower than the other two tools, but compared to getting the wrong answer, that's not too much to give up. The dream tool would combine Cobertura's accuracy with EclEmma's speed and usability, but no such tool exists.

This table compares the three tools:

                          Clover    EMMA/EclEmma    Cobertura
Statement coverage        yes       yes             yes
Branch coverage           no        no              yes
No quirks or bugs         no        no              yes
Fast                      yes       yes             no
Reports                   yes       yes             yes
Maven integration         yes       yes             sort of
Works with mocking tools  yes       yes             yes
Eclipse plugin            yes       yes             sort of
Free                      no        yes             yes
Active community          yes       yes             sort of


Notes:
  • Maven integration for Cobertura is provided by a separate open-source project. This project has been around for a while. There are some problems using the integration.
  • Eclipse integration for Cobertura is provided by a separate open-source project. This project is new and not very mature yet.
  • Cobertura's community support is rated "sort of" because the only forum is via an email distribution list, and responses can take a couple of days. On the other hand, the code is mature and easy to use, so there isn't much need for help (most of the emails I sent were questions about setting up the ant scripts).
  • JMockit can be configured to generate coverage reports, but currently only statement coverage is supported. The author plans to add additional features (including branch coverage), at which point it should be evaluated alongside the other three tools.
  • For a comparison of coverage tools that arrives at the opposite conclusion to mine (partly due to other requirements), see http://javapulse.net/2008/09/02/coverage-emma-cobertura-maven.

Tuesday, August 25, 2009

Eclipse Plugins For Java Dependency Analysis

Once I got our project's code imported into Eclipse in a usable way, the real work started.

The plan is to upgrade the code to use newer APIs and services. To do that, we need a way to analyze the dependencies, to identify cycles, determine the easiest places to refactor, etc.

A web search found a useful evaluation of various dependency-analyzers for Eclipse.

I tried all of the tools listed in the evaluation, and liked STAN the best.

STAN installs easily, is intuitive to use, and doesn't choke on our million-plus lines of code (after configuring it to analyze class-to-class dependencies instead of at the method level). Plus, questions emailed to STAN are answered quickly--the support is good. We bought a license.

But STAN is a commercial product, and I also needed something to use at home. Something inexpensive.

That's when I noticed a comment at the bottom of the evaluation. (It had been there all the time, but who reads comments, right?)

The comment recommends CAP, and it's a good recommendation. CAP is similar to STAN in terms of ease of installation and use, it doesn't choke on our million-line project, and it's faster than STAN and uses less memory. Plus it's free, which is hard to beat.

However, CAP hasn't been upgraded for a while, and the author is intermittently difficult to contact. Also, STAN has a better display of rolled-up package dependencies (for example, if you have com.abc and com.abc.def, you can see dependencies on com.abc, or com.abc.def, or com.abc*, but in CAP you can only see com.abc or com.abc.def individually).

STAN doesn't offer floating licenses, which makes it pretty expensive if more than a couple engineers will be using it. Dependency analysis is something an engineer might do while learning or refactoring a code base, and then not do again for months, so floating licenses would make sense.

Both tools are good. STAN's package roll-ups are really useful when you have a lot of subpackages. CAP's price is hard to beat.

If you do use CAP, please support open source and send the author a donation. I was the first person to do that, which is kind of a shame.

Update: See this post for another good Java dependency analyzer.

Sunday, August 23, 2009

Using Eclipse With Large Code Bases, Part IV

Previous posts described how we created a set of Eclipse projects that break up our million-line, single-rooted source tree into manageable chunks.

Today, I'll describe how we solved a similar problem with our runtime classes:

5. JAXB is used to generate some .class files into a runtime directory "rt", but that same directory contains all of the .class files for the system

Some details on the situation:
  • Our project currently uses gmake to compile the million-plus lines of code.
  • Builds output to a runtime directory on the developer's machine called "rt". All classes required to launch the application are either in rt, or in JARs in a "lib" directory.
  • Many of the classes in the rt directory are duplicates of classes Eclipse compiles in the projects, but some of the classes in the rt directory are generated by JAXB (which is executed by gmake as part of the builds), and are not in the source tree.
  • Eclipse needs to see the generated classes in order to compile.
For example:
package com.parent.child1;

import com.parent.Parent;
import com.parent.child1.generated.Gen1;

public class Child1 extends Parent
{
    private Gen1 gen1;
}
Eclipse supports linked resources, so at first it seemed like we just needed to define a linked resource for external classes that pointed to rt. Unfortunately, there are so many classes in rt that Eclipse again ground to a halt.

Fortunately, by now we were experienced with using links to subset a source directory hierarchy, so I just used the same approach to subset the runtime directory hierarchy.

First, run commands to add a link into the rt directory at the desired location:

mkdir C:\projects\Child1\rt\com\parent\child1\generated
junction C:\projects\Child1\rt\com\parent\child1\generated C:\rt\com\parent\child1\generated


Then add the rt directory to the .classpath:
<classpathentry kind="lib" path="rt"/>
Refresh in Eclipse, and the code builds:



Notes:
  • As was the case with source files, multiple links to runtime-class directories can be created for a single project.
  • If multiple projects need the same .class files from an rt directory, the rt link should be set only in the project that generates the .class files. Other projects should point to that rt link via Build Path → Add Class Folder...
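For example, if Child1 owns the rt link, another project's .classpath would reference it roughly like this (an approximation; the exact entry Eclipse writes may differ):

```xml
<classpathentry kind="lib" path="/Child1/rt"/>
```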

In Part V, I'll describe how we fixed a problem with the Perforce Eclipse plugin caused by using links.

Saturday, August 22, 2009

Using Eclipse With Large Code Bases, Part III

In Part II, we saw how to use links to break a large single-rooted source tree into separate Eclipse projects.

In those examples, the packages didn't have mutual dependencies--they were a directed, acyclic graph.

But in my project's legacy code base, there is another complication:

4. Packages and layers have mutual dependencies (for example, business logic in the UI layer), but Eclipse treats cycles among projects as compile errors

Of course, it would be better not to have cycles, but in a legacy code base it's not always easy, or even tractable, to remove them. Remember that my project has to work with this constraint:

6. None of this can be changed, at least not any time soon

So, a more realistic example has child1 depend on child2, and vice-versa:
package com.parent.child1;

import com.parent.Parent;
import com.parent.child2.Child2;

public class Child1 extends Parent
{
    private Child2 child2;
}

package com.parent.child2;

import com.parent.Parent;
import com.parent.child1.Child1;

public class Child2 extends Parent
{
    private Child1 child1;
}
This is allowed by Java, but not allowed (by default) by Eclipse for packages in separate projects, even after adding the project dependencies:



Fortunately, Eclipse allows cycles to be a warning instead of an error:


Unfortunately, that just converts the errors into warnings, which clutter the window (we already know we have cycles):


Fortunately, Eclipse supports filtering out specific warnings. In the Problems window, click the down-arrow icon on the right (View Menu), select Configure Contents..., and add a filter:


And now the Problems window is empty.

Part IV describes how we fixed this remaining issue:

5. JAXB is used to generate some .class files into a runtime directory "rt", but that same directory contains all of the .class files for the system

Friday, August 21, 2009

Using Eclipse With Large Code Bases, Part II

In Part I, linked source with excludes failed to solve the problem of how to break up a large, single-rooted source tree into multiple projects in Eclipse.

After a night off, I thought of using file-system links to create the illusion of multiple directory roots. But we needed directory-to-directory links, not file-to-file links, and Windows doesn't directly support those.

A coworker found the Windows-specific "junction" command, which can be downloaded from Microsoft. It's an add-on to Windows, not part of the standard set of shell commands. With the junction command, I was able to create a multi-rooted source tree that points to the source it needs from the single-rooted source tree.

For example, starting with:
C:\src\
    com\
        parent\
            Parent.java
            child1\
                Child1.java
            child2\
                Child2.java
Create a parallel structure:
C:\projects\
    Child1\
        .classpath
        .project
        src\
    Child2\
        .classpath
        .project
        src\
The .project files don't have linked-source directives in them, and the .classpath files just have the standard <classpathentry including="**/*.java" kind="src" path="src"/> entries.

From a command prompt, execute commands to create links from the project src directories into the real source tree:


mkdir C:\projects\Child1\src\com\parent\child1
junction C:\projects\Child1\src\com\parent\child1 C:\src\com\parent\child1

mkdir C:\projects\Child2\src\com\parent\child2
junction C:\projects\Child2\src\com\parent\child2 C:\src\com\parent\child2


(The commands for creating directory and file links in linux/Mac are of course different, but the concepts are the same.)

The resulting directory structure looks like this:
C:\projects\
    Child1\
        .classpath
        .project
        src\
            com\
                parent\
                    child1 → linked to C:\src\com\parent\child1
    Child2\
        .classpath
        .project
        src\
            com\
                parent\
                    child2 → linked to C:\src\com\parent\child2
After refreshing the projects in Eclipse, the unwanted packages and source files are gone:


Unfortunately, the code doesn't compile:


Remember the third item in the list of problems?:

3. Some source files used throughout the code are located in the top of the source tree

It has come back to haunt us. We have to have visibility to Parent.java in both projects, but we can't link to the root of the source tree, because that's the problem we're trying to solve with links.

To fix this, create another project, Parent, but use a file link instead of a directory link:

mkdir C:\projects\Parent\src\com\parent
fsutil hardlink create C:\projects\Parent\src\com\parent\Parent.java C:\src\com\parent\Parent.java


Then add the Parent project to the dependencies of Child1 and Child2, and now it does compile:


This approach works very well--the entire million-plus-line code base is broken up into 40+ projects in Eclipse, and the code compiles quickly after the initial import.

You can envision an Eclipse plugin that would semi-automate this process. At a minimum it would be nice to generate the projects and link scripts from some kind of description, instead of editing the files by hand. Unfortunately, by the time I had worked out the pattern, the projects and links were mostly already finished.
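A generator along those lines could be as simple as emitting the mkdir/junction commands from a project-to-package mapping (a sketch; the paths are the example ones from this post, and a real version would also emit the .project and .classpath files):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkScriptGenerator
{
    // Emits the mkdir/junction commands for each project -> package mapping.
    public static String generate(Map<String, String> mappings, String srcRoot, String projectsRoot)
    {
        StringBuilder script = new StringBuilder();
        for (Map.Entry<String, String> entry : mappings.entrySet())
        {
            String link = projectsRoot + "\\" + entry.getKey() + "\\src\\" + entry.getValue();
            String target = srcRoot + "\\" + entry.getValue();
            script.append("mkdir ").append(link).append("\n");
            script.append("junction ").append(link).append(" ").append(target).append("\n");
        }
        return script.toString();
    }

    public static void main(String[] args)
    {
        Map<String, String> mappings = new LinkedHashMap<String, String>();
        mappings.put("Child1", "com\\parent\\child1");
        mappings.put("Child2", "com\\parent\\child2");
        System.out.print(generate(mappings, "C:\\src", "C:\\projects"));
    }
}
```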

Note: Although it's not shown in the examples above, this approach can also be used to link to multiple child nodes in a source tree to produce a combined tree for a project. For example:

mkdir C:\projects\Child1\src\com\parent\child1
junction C:\projects\Child1\src\com\parent\child1 C:\src\com\parent\child1


mkdir C:\projects\Child1\src\com\parent\otherChild
junction C:\projects\Child1\src\com\parent\otherChild C:\src\com\parent\otherChild


In Part III, I'll describe how we dealt with this issue:

4. Packages and layers have mutual dependencies (for example, business logic in the UI layer), but Eclipse treats cycles among projects as compile errors

Thursday, August 20, 2009

Using Eclipse With Large Code Bases, Part I

The project I'm currently working on has more than a million lines of source code. Some of the code was written as long ago as 1998, so as odd as it sounds to call anything involving Java "legacy", this is a legacy Java codebase.

I wanted to bring the code into Eclipse, but not as one giant million-line project. Instead, I wanted to break it up into smaller projects.

But there were complications:
  1. The source tree has a single root directory
  2. Eclipse can't nest projects
  3. Some source files used throughout the code are located in the top of the source tree
  4. Packages and layers have mutual dependencies (for example, business logic in the UI layer), but Eclipse treats cycles among projects as compile errors
  5. JAXB is used to generate some .class files into a runtime directory "rt", but that same directory contains all of the .class files for the system
  6. None of this can be changed, at least not any time soon
Items #1 and #2 mean that the Eclipse projects have to be located outside the source tree, and point to source in the source tree.

Fortunately, Eclipse supports linked source, so I created the Eclipse projects in a different location, and set their build paths to have linked-source entries that pointed to the source tree.

The first step is to create a global linked-resource variable that points to the root of the source tree:


Then right-click on each project and select Build Path → Configure Build Path... → Link Source... → Variables..., and select the global linked-resource variable.

Saving the changes results in a .classpath entry like this:
<classpathentry kind="src" path="src"/>
and a .project entry like this:
<linkedResources>
    <link>
        <name>src</name>
        <type>2</type>
        <locationURI>src</locationURI>
    </link>
</linkedResources>
for each project.

(Because I had so many projects to manage, I edited the .project files directly, instead of interactively.)

Unfortunately, item #1 complicated linking to the source, because every Eclipse project linked to the same root directory, which meant every Eclipse project saw the same source instead of just seeing the source for that project.

For example, if the source tree looks like:
C:\src\
    com\
        parent\
            Parent.java
            child1\
                Child1.java
            child2\
                Child2.java
and we want two Eclipse projects, "Child1" and "Child2", they both have to start their source trees at C:\src. So Child1 sees Child2's code in the child2\ directory, and Child2 sees Child1's code in the child1\ directory:


Fortunately, Eclipse supports source exclusion. To exclude source, right-click on a project and select Build Path → Configure Build Path... → Source, select the Excluded entry, click Edit..., and under Exclusion Patterns click Add... for every package and/or file you don't want included.

This modifies the .classpath files to have entries like:
<classpathentry kind="src" path="src" excluding="com/parent/child1/"/>
Using this approach, I was able to exclude unwanted packages from each project. Because there are a lot of packages, this was tedious and took hours, but it worked:


Unfortunately, once everything was configured and I launched Eclipse, it took 35 minutes to load.

35. Minutes.

It turned out that Eclipse bogs down if it has to import a lot of source code and then filter it out. Ideally it would filter it out first and only load the remainder, but it doesn't seem to do that.

After reporting this problem, I switched to plan B, which is described in Part II.