David R. Heffelfinger

  Ensode Technology, LLC

 

Confessions of a Java Snob


For the past couple of months I've been working on porting a PHP application to Java EE using JSF, JPA and EJB 3 (in case you are wondering, yes, I've been using NetBeans and GlassFish).

I've never had any real exposure to PHP, so this is a new experience for me. I'm not sure if what I'm seeing is typical PHP code, but comparing this legacy system to the typical enterprise Java system shows some striking differences in architecture. When analyzing the PHP code, it became obvious to me that the mindset of PHP developers is very different from that of your typical Java developer.

In the Java world, we love our design patterns; we can't live without our DAOs and MVC. In PHP, it seems to be no big deal to mix presentation, business logic and data access in a single file.

Having worked with Java for over 13 years and Java EE/J2EE for about 10 years, I have to confess that the architecture (if you can call it that, more like "lack of architecture") seemed appalling to me. In the enterprise Java world, we've been conditioned to think that separation of concerns is a good thing.

Our presentation logic should contain presentation only; that way, if in the future we want to switch, say, from straight JSPs and servlets to JSF, the rest of the code shouldn't be affected. Additionally, if we want to convert our web application to a desktop application using Swing, it should be fairly straightforward to do so.

Data access logic should be done via Data Access Objects (DAOs); that way, if today we are using straight JDBC and tomorrow we want to use an object relational mapping tool such as JPA or Hibernate, all we need to do is change the data access layer, and the rest of the code should not be affected.
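
To illustrate what I mean, here is a minimal sketch (CustomerDao and CustomerVO are made-up names, not code from the application I'm porting). A DAO is simply an interface, so swapping the implementation doesn't affect its callers:

import java.util.List;

// Hypothetical DAO interface; CustomerVO is the value object sketched a bit further down.
public interface CustomerDao {

    CustomerVO findById(Long id);     // look up a single customer

    List<CustomerVO> findAll();       // retrieve all customers

    void save(CustomerVO customer);   // insert or update

    void delete(CustomerVO customer); // remove
}

// Today a JdbcCustomerDao could implement this interface using straight JDBC;
// tomorrow a JpaCustomerDao could replace it using an EntityManager.
// The rest of the code only ever sees the CustomerDao interface.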

Communication between layers in our applications should be done via Value Objects, which shouldn't really change if we change our data access layer or presentation layer.
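
A value object is typically nothing more than a serializable class with fields and accessors; a minimal sketch (again, CustomerVO is a made-up example):

import java.io.Serializable;

public class CustomerVO implements Serializable {

    private Long id;
    private String name;
    private String email;

    // no business, presentation or persistence logic here, just data
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}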

Controllers should manage flow from one page to another; again, these should only be rewritten if we change our presentation layer. Most Java web application frameworks provide their own controllers, however they are not self-sufficient: for example, in Struts we need to use Actions and in JSF we need to write managed beans, therefore changing the presentation layer would usually involve changing the controller as well.
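
In JSF terms, for example, the controller role ends up in a managed bean. A minimal sketch using the made-up classes above (the bean and its navigation outcome would be declared in faces-config.xml):

// Hypothetical JSF managed bean acting as a page controller.
// It only moves data between the view and the DAO, and decides where to navigate next.
public class CustomerController {

    private CustomerDao customerDao;                // injected as a managed property
    private CustomerVO customer = new CustomerVO(); // backs the input fields on the page

    public CustomerVO getCustomer() {
        return customer;
    }

    public void setCustomerDao(CustomerDao customerDao) {
        this.customerDao = customerDao;
    }

    public String saveCustomer() {
        customerDao.save(customer); // persistence stays behind the DAO
        return "customerSaved";     // navigation outcome mapped in faces-config.xml
    }
}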

After analyzing the code for the legacy system I got the impression that PHP is a language for amateurs and Java/J2EE/Java EE is for professional software engineers and architects. Am I right? Or am I just a Java snob? Feel free to set me straight.

 
 
 
 

Ubuntu Jaunty Jackalope on an HP dv6000 laptop


Ubuntu 9.04 (aka "Jaunty Jackalope") was released earlier this week.

Today I set aside some time to install it on my laptop, an HP dv6810us, part of the Hewlett-Packard dv6000 series.

Almost everything worked "out of the box"; unfortunately, the wireless still takes some work to set up.

In the past I had been using ndiswrapper to get it to work. This time it wasn't necessary, but it still took some effort to get it going. It would be nice if the wireless would work out of the box.

In any case, lspci -v returns the following information for my wireless card:

03:00.0 Ethernet controller: Atheros Communications Inc. AR242x 802.11abg Wireless PCI Express Adapter (rev 01)
    Subsystem: Hewlett-Packard Company Device 137a
    Flags: bus master, fast devsel, latency 0, IRQ 19
    Memory at f6000000 (64-bit, non-prefetchable) [size=64K]
    Capabilities: [40] Power Management version 2
    Capabilities: [50] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable-
    Capabilities: [60] Express Legacy Endpoint, MSI 00
    Capabilities: [90] MSI-X: Enable- Mask- TabSize=1
    Capabilities: [100] Advanced Error Reporting <?>
    Capabilities: [140] Virtual Channel <?>
    Kernel driver in use: ath5k
    Kernel modules: ath_pci, ath5k

I googled around to see if I could find a solution, and bumped into this thread in the Ubuntu forums. The thread is for Intrepid, but I thought I would adapt the solution to Jaunty and see if it worked.

apt-get install linux-backports-modules-jaunty

Rebooted and... nothing!

Since the solution didn't work, I uninstalled the above package and, lo and behold, like magic and for no apparent reason, the wireless started working!

I suspect that one of the dependencies of that package did the trick; I'm not sure which one (I can't even remember which dependencies were automatically downloaded), but installing the above package and then uninstalling it is what got the wireless going. Weird, but it worked.

Now wireless is working without ndiswrapper.

Other than the wireless, the installation was very smooth. Ubuntu automatically detected my Nvidia card on the first boot, and asked me if I wanted to install the restricted drivers. I did, rebooted and the driver "just worked".

Also, boot time is amazingly fast, which is very nice.

 
 
 
 

Preventing Trackback Spam in Apache Roller


This morning I woke up to find 150+ comments in one of my blog entries. I have email notification of comments set up in Roller, so the 150 emails notifying me of comments in my blog indicated that something was obviously not right.

I logged in to my blog to see what was going on, and sure enough, I had over 150 bogus trackbacks in one of my blog entries.

I googled around and found a way to prevent trackback spam in Apache Roller: going to "Main Menu", then clicking on "Server Administration", then checking "Enable verification of trackback links?" and "Enable referrer linkback extraction?" seems to have taken care of the problem.

Roller should really have those two settings checked by default.

Also, I noticed all the bogus trackbacks were coming from the same IP address (83.233.30.32). I googled around, and it looks like many others are having problems with spam from that IP address as well. Just to make extra sure, I dropped any incoming traffic from that IP by configuring iptables:

iptables -A INPUT -s 83.233.30.32 -j DROP

iptables-save > /etc/sysconfig/iptables

Hopefully the problem is taken care of for good.

 
 
 
 

How do kids these days get started in programming?


Back in the 80's, when I was growing up, all personal computers came with a BASIC interpreter which you could use to write your own software. As a matter of fact, end users were expected to write their own applications.

My very first personal computer was an Atari 800. I was in my early teens when I got it; it was a hand-me-down from my uncle, who had gotten himself a shiny new IBM PC.

 

During that time, computer magazines came with games and applications in source code form that you had to type into your computer in order to "install" them. A lot of us didn't know exactly what all those lines of code meant, but we wanted the game or application, so we typed away. Unfortunately, typos were an issue, since we were just blindly copying what seemed like Greek into our BASIC prompt. Fortunately, BASIC was interpreted, so it would catch syntax errors immediately, but many times the syntax was correct yet there was still a typo in the line, making the program not run as expected. It could be frustrating at times, but it was very satisfying to finally get the code to work exactly right. You could also experiment and make little changes here and there to see if you could change the behavior of the software. I remember eagerly waiting for the next issue of A.N.A.L.O.G. magazine to arrive in the mail every month to see what goodies it would bring.

It is worth mentioning that at this time there wasn't yet a dominant computer architecture for personal computers. Some of us had Ataris (8 bit and/or ST), others had Commodores (PET, Commodore 64 or 128, Amiga), others had IBM PCs, other architectures existed as well. What all of these architectures had in common was that they all came with a BASIC interpreter. As a matter of fact, in most cases, the machine would boot directly into a BASIC prompt. The BASIC versions of the machines were not 100% compatible across one another, since vendors modified them to highlight specific features of their own products, but in general your BASIC skills could be used across architectures.

I remember being amazed at the wonderful things you could make these machines do; it got me really motivated to learn to write my own software, not simply blindly type in code listings from magazines. A lot of software developers from that era got our start that way; at the time, the barrier to entry for software development was very low. I derived a lot of satisfaction from creating software, and I would proudly show my creations to my friends and relatives. All of this motivated me to pursue a career in software development, and it is what led me to major in computer science when I went to college.

Somewhere in the 90's most of these various architectures disappeared, and the one true personal computer platform emerged, the IBM PC, or what we simply call a PC today. Just like all the platforms of the time, the IBM PC came with a BASIC interpreter, but unlike the others, BASIC wasn't built into the operating system, it was something you had to look for if you wanted to use it. When the PC became the de facto standard, the focus of having end users as programmers started to decline. Magazines stopped coming with BASIC listings for you to type in. When DOS 6.0 came out, PCs even stopped coming with a BASIC interpreter altogether. Now if you wanted to develop software, you had to install a compiler or interpreter yourself, which, sadly, is still the case today.

So I wonder, how do new generations of software developers get their start? It is not as easy to "get your feet wet" these days as it was back in the day. I wonder if they pick computer science without knowing exactly what they are getting into? It's a shame that software development is not as accessible as it once was.


 
 
 
 

OpenOffice.org Document Version Control With Mercurial


I've always wanted to put my documentation under version control, just like I do with my source code. However, word processor files are binaries, and therefore not that well suited for version control (track changes aside). Of course, they can be committed, but being binaries they can't be diffed very easily.

Standard OpenDocument Text files (the default format for OpenOffice.org Writer since version 2) are nothing but zipped XML. I searched around for an easy, automated way to unzip and re-zip them "on the fly" as necessary, thinking that I could put the "raw" XML files under version control. However, I couldn't find anything that would help in that regard, and manually zipping and unzipping files seemed like more trouble than it was worth.

OpenOffice.org's word processor, Writer, allows us to save in formats that are text based, such as DocBook XML, Microsoft Word 2003 XML, and OpenDocument Text Flat XML (.fodt). I figured I could try to use one of these formats internally; since they are text based, they would be "diffable" by Mercurial (or any other version control tool), and when I needed to distribute the document I could export to Word format, PDF or what have you.

I hadn't had the opportunity to work with DocBook in the past, and I admit I've been kind of curious about it, so I tried this option first. Unfortunately, it turned out I couldn't use this format, since I frequently work with Word templates (even though I work with OpenOffice.org, Word templates work fine in Writer) and it doesn't seem like DocBook supports them.

I then turned my attention to the OpenDocument Flat XML (.fodt) format; this format can work with Word templates, and it is saved as a plain text (XML) file. It looked like the perfect solution. To test it out, I created a simple document, saved it as OpenDocument Flat XML, and committed it to a Mercurial repository. I then made a simple change to this document, and did an hg diff on it.

To my dismay, this very simple change (I just added a new paragraph with a single sentence in it) resulted in quite a number of diffs between the two versions. Apparently this format contains a bunch of metadata such as creation time, creator, the time the file was saved, etc. This metadata was creating a number of diffs that were irrelevant to the task at hand, which was to find out what change I had actually made to the file.

At this point I considered using the oodiff trick described on the Mercurial site's "Handling OpenDocument Files" page, however this trick seemed to me more like a hack than a proper solution. When using this approach, files are checked in as binary; then, when diffing, a tool called odt2txt converts the document to plain text "on the fly" and the plain text versions are diffed. The problem with this approach is that the files are still committed to version control as binary, and most version control tools are not very efficient at storing binary files.

At this point I started using the above trick, however recently I found the color extension for Mercurial, which allows diffs to be color coded. After I installed this extension, I gave the .fodt format a try again, and I started to notice patterns of what to look for when reading diffs. For example, paragraphs are nested inside a <text:p> tag, which makes it easy to find text changes. Images are stored inside a <draw:image> tag, which makes it straightforward to see if an image was added, deleted or moved. Tables use the <table:table>, <table:column> and <table:cell> tags, making it fairly easy to identify them.

This seemed like a good solution, however after a while I noticed that sometimes making a simple change in the document (for example, adding a heading somewhere in the middle) created a bunch of diffs on the document again; for example, lines that were now farther down in the document were being reported as deleted from one place and added in another, which is inaccurate.

For now, I went back to the oodiff trick; even though it bothers me a bit that I am checking binary files into the repository, this approach results in sane diffs that actually allow me to track what was changed in the document.


 
 
 
 

Excluding directories from zip files on Linux


I frequently have to turn in source code to one of my customers in zip files (neither fancy nor sophisticated, but that's life).

Lately, I've been working on a project that uses good old plain ANT build files. I load this project into NetBeans as a free form project so that I can have a decent working environment. NetBeans of course creates its own folders and files so that it can open the project. I am also using Mercurial for version control, which creates an .hg folder that I don't want to distribute.

I wanted to zip up the code, while excluding the directories and files that were not meant to be distributed (.hg and the NetBeans-specific files and folders). I'm on Linux, therefore I usually use File Roller, a graphical archive management tool for the GNOME desktop, to create my zip files. File Roller is very easy to use: just right-click the directory to be archived and select "create archive".

Unfortunately there is no easy way to exclude files or directories from the zip file. I thought I could zip up the whole thing, then delete the unwanted files and directories. This worked fine for files, but for directories it deleted the files in the directory, yet left the directory itself in the zip file.

Obviously File Roller wasn't meeting my needs here; it was time to go to the good old command line. Most Linux distributions come with a command line zip utility appropriately named "zip". I read the man page and found a way to tell zip to exclude files and directories from the created archive: all that needs to be done is to use the -x switch and list the files and directories to be excluded, separated by spaces, for example:

zip -r filename.zip directoryname/* -x directoryname/.hg\* directoryname/nbproject\* directoryname/catalog.xml

The above command will do exactly what I needed, which is to create a zip file without the Mercurial and NetBeans specific files and directories. Of course any file or directory name can be passed as a parameter to the -x switch.

 
 
 
 

Thoughts on Distributed Version Control



Nowadays, distributed version control systems such as Mercurial, Git and Bazaar are all the rage. Lately, for a couple of my projects I have been using Mercurial.

Some frequently mentioned advantages of distributed version control are that it is not necessary to be connected to the network to commit your changes, and that all repositories are "equal".

Now that I've been using Mercurial for a while, I don't care that much about these advantages; however, there is one thing in Mercurial that I really, really like and that I miss very much when using other, centralized version control systems. What I love about Mercurial is that branching and merging are simple and trivial (other distributed version control systems like Git and Bazaar probably share this advantage, however I don't have any experience with them).

When using traditional, centralized version control systems such as Subversion or CVS (or, $DEITY forbid, Harvest), many times I have found myself making some changes that will potentially introduce major breakage to the project. In cases like this I am "forced" to work without version control until all the kinks are ironed out, since committing my changes would prevent my coworkers from having a buildable source tree to work from.

The ideal solution for these cases is to create a branch in which I would make my changes safely, without affecting other developers, then merge my branch into the trunk when my changes are done. The problem with this is that, for some reason, branching and merging are not something "mere mortals" can do with a centralized version control system. In order to do this, I would have to talk to a "CM" person to create the branch for me, which would probably take at least a few hours (if not days), and when I'm done the procedure to merge my changes would be just as painful.

When using Mercurial (and, I assume, other distributed version control systems as well), I can create a branch with ease; in Mercurial, all I have to do is use the hg clone command, passing the path of the repository to clone as a parameter. After doing this I have my own, private branch that I can work with, without fear of breaking the build and preventing anyone else from making progress.

When I am done, all I have to do is an hg push to merge my changes back into the "trunk" (or my main branch).

If it turns out that I don't want to make the changes after all, all I need to do is delete my cloned repository using standard operating system commands (rm -rf in Linux/Unix), and the branch "never existed". This capability of easily making branches for experimental features and simply "nuking" them if it turns out to be a bad idea is what really makes Mercurial great for me. It is a very liberating feeling that I don't think can be expressed in words; you need to experience it to know what I mean.

 
 
 
 

Solving JasperReports Dependencies with Ivy


Lately, I've been doing some work with JasperReports. On my previous JasperReports projects, I've either used Maven, which automatically takes care of resolving dependencies, or I have simply downloaded the project's dependencies by hand.

Using Maven is nice, since it resolves dependencies; however, JasperReports comes with a series of useful ANT tasks to compile reports, preview them, etc. I wanted access to these tasks, and I also wanted dependency management.

There were a couple of ways I could achieve both. I know ANT targets can be called from Maven, so that could be one approach. Also, there is a dependency manager for ANT called Ivy. I had briefly used Ivy before, so I thought I would try this approach.

Ivy works by adding a series of custom ANT tasks. Installing Ivy is very simple: all that needs to be done is to copy the Ivy jar file to ${ANT_HOME}/lib. Once Ivy is installed, the custom Ivy tasks are available in our ANT build files.

Some additional configuration needs to be done in an Ivy-specific XML file named ivy.xml. This file is where we actually specify our dependencies.

I set up my project to depend on JasperReports and tried to have Ivy automatically download all of JasperReports' dependencies; unfortunately, the build failed, complaining about some missing dependencies, specifically mondrian and commons-javaflow.



[ivy:retrieve] 		::::::::::::::::::::::::::::::::::::::::::::::
[ivy:retrieve] :: UNRESOLVED DEPENDENCIES ::
[ivy:retrieve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:retrieve] :: commons-javaflow#commons-javaflow;20060411: not found
[ivy:retrieve] :: mondrian#mondrian;2.3.2.8944: not found
[ivy:retrieve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:retrieve]
[ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS


After some googling, some head scratching (and some hair pulling and banging on the table for good measure), I found out what was going on.

JasperReports has some optional dependencies declared in its pom.xml. These dependencies are not downloaded by default when using Maven, however Ivy attempts to download them. For some reason these dependencies do not exist in the repository, and because of this the ANT build fails.

After some research, I found the necessary modifications to ivy.xml to make the build succeed:



<ivy-module version="2.0">
    <info organisation="ensode" module="mymodule"/>  
    <dependencies>
        <dependency org="jasperreports" name="jasperreports" rev="3.1.2" conf="*->default"/>
    </dependencies>
</ivy-module>

What I had to do was use the conf attribute of the <dependency> tag to specify which configuration of the dependency I wanted. The *->default means that all of my module's configurations depend on the 'default' configuration of JasperReports, as explained by Xavier Hanin in this message of the Ivy users mailing list.

After making this change, Ivy was able to successfully download all JasperReports dependencies.

[ivy:retrieve]     commons-beanutils#commons-beanutils;1.8.0 by [commons-beanutils#commons-beanutils;1.7.0] in [runtime]
    ---------------------------------------------------------------------
    |                  |            modules            ||   artifacts   |
    |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
    ---------------------------------------------------------------------
    |      runtime     |   14  |   12  |   12  |   2   ||   11  |   11  |
    ---------------------------------------------------------------------

I figured I would document this procedure in case others are having similar issues.

 
 
 
 

Yet another "My Favorite Firefox Extensions" Post


It seems like every other day there is a link on DZone about someone listing their own favorite Firefox extensions.

Well, I didn't want to be left out, so here are mine:

  1. Web Developer: provides a lot of functionality useful when developing web applications. One of my favorite features is the ability to outline elements on the page, very useful.

  2. Firebug: I can't live without the ability to edit JavaScript in real time and see the results on the page immediately. Also, the Firebug console is invaluable when debugging JavaScript, freeing us from the awful alert() calls we used to have to use all the time in the past. Working with JavaScript without Firebug is like working with a hand tied behind your back.

  3. iMacros: when developing web applications, we often have to go through tedious, repetitive steps to get to the page we are developing. iMacros allows us to record macros to go through the repetitive, boring stuff for us, a real time saver and sanity preserver.

There you have it. If you develop web applications for a living, the above Firefox add-ons will make your life a lot easier, and will also make you a lot more productive.


 
 
 
 

Common JPA Questions



  1. How can I have a composite primary key in JPA?

    There are several ways to do it; this page explains them.

  2. How can I prevent users from overwriting each other's changes to the database?

    Use the @Version annotation (see the sketch after this list).

  3. Is there a way to do bulk updates and/or deletes with the Java Persistence Query Language (JPQL)?

    Yes, JPQL supports bulk UPDATE and DELETE statements (also shown in the sketch below).
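
To illustrate the last two answers, here is a minimal sketch (the Account entity and its fields are made up for the example):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {

    @Id
    private Long id;

    private double balance;

    // JPA increments this field on every update; if two users load the same row
    // and both try to save changes, the second commit fails with an
    // OptimisticLockException instead of silently overwriting the first user's changes.
    @Version
    private Long version;

    // getters and setters omitted
}

Bulk updates and deletes are done with JPQL UPDATE and DELETE statements, which run directly against the database and must be executed inside a transaction, for example:

import javax.persistence.EntityManager;

public class AccountBulkOperations {

    // credit 1% interest to every account with a positive balance
    public int applyInterest(EntityManager em) {
        return em.createQuery(
                "UPDATE Account a SET a.balance = a.balance * 1.01 WHERE a.balance > 0")
                .executeUpdate();
    }

    // remove accounts with a zero balance
    public int purgeEmptyAccounts(EntityManager em) {
        return em.createQuery("DELETE FROM Account a WHERE a.balance = 0")
                .executeUpdate();
    }
}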

Upgraded to Roller 4.0.1


Since I just upgraded JavaDB, I figured this was a good time to upgrade to the latest version of Roller as well. I was using Roller 4.0, which is fairly up to date, but I figured it wouldn't hurt to upgrade to the latest, 4.0.1.

I simply downloaded Roller 4.0.1, created a war file from the roller directory and deployed it to GlassFish, but for some reason the upgrade didn't go smoothly; I kept getting a "Roller Weblogger has not been bootstrapped yet" error.

I tried various ways of deploying (using asadmin, copying the war file to the autodeploy folder, etc.), but I kept getting the same error. I restarted GlassFish several times, I restarted the database (JavaDB/Derby) several times, and nothing seemed to solve the problem.

I then decided to reinstall Roller 4.0 (thank goodness I made a backup), and it came back up successfully. After doing this, I redeployed Roller 4.0.1 using a different, temporary context root, and this time it installed successfully, asking me to upgrade the database. I did upgrade the database (the only thing that needs to be done when going from Roller 4.0 to Roller 4.0.1 is to change the version number), and I had both Roller 4.0 and 4.0.1 running in parallel with different context roots.

I then undeployed Roller 4.0 and changed the context root of 4.0.1 to match the one I had for 4.0 (/roller); I am now in business.

Why Roller 4.0.1 wouldn't just install over 4.0 is anyone's guess; however, I was glad I was able to work around the issue.

Java DB / Apache Derby Upgraded


Like I mentioned in my very first entry in this blog, I am using Apache Roller to run this blog.

I'm using GlassFish and Java DB, which isn't much more than a rebranded version of Apache Derby.

The setup was working OK, but every now and then Roller would throw some weird errors, valid URLs would 404, and sometimes it would go completely "out to lunch", generating error 500's.

I started looking through the GlassFish logs and noticed some entries similar to the following:


Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.RangeCheck(ArrayList.java:547)
at java.util.ArrayList.get(ArrayList.java:322)
at org.apache.derby.client.net.NetCursor.findExtdtaData(Unknown Source)
at org.apache.derby.client.net.NetCursor.getClobColumn_(Unknown Source)
at org.apache.derby.client.am.Cursor.getString(Unknown Source)

I googled around and found out this issue is caused by a bug in JavaDB/Derby version 10.3.2.1 and earlier. I happened to be running 10.3.2.1, so I just upgraded to version 10.3.3.0 of JavaDB/Derby. I expect the weird errors I've been seeing to go away now.

 
 
 
 

Running Graphical Applications on a remote server


Like many, I host my web site on a remote server; it is a Virtual Private Server running Linux (CentOS 5).

I have full SSH access to my server, which gives me a bash shell to do anything my heart desires. Well, almost anything, or so I thought.

One thing I couldn't do was run graphical applications on the server, since it doesn't have an X server such as X.org. This didn't bother me too much, since there weren't many graphical applications I wanted to run remotely, with one exception: the GlassFish update tool.

I remember that back in the good old days when Unix ruled, you could set the DISPLAY environment variable to run an application on one workstation or server while displaying it on another. I tried setting the DISPLAY variable to the display of my local Linux laptop, but it didn't work.

A few days ago while browsing Slashdot, I ran into a comment that explained how to achieve this:

ssh -YtC user@remote_host /path/to/graphical_app

I tried the above with the correct credentials, server and path, and lo and behold, I was able to run the GlassFish update tool on my server while displaying it on my laptop.

Thanks Doug!

 
 
 
 

Installing Ubuntu Intrepid Ibex in an old laptop


I have an old Averatec 3250H1 laptop that is still being used (a 32-bit AMD 2200+ processor, 60GB hard disk, 512MB RAM). This laptop was running Ubuntu 5.04 (Hoary Hedgehog).

I wanted to upgrade the laptop to a more modern version of Ubuntu. I downloaded the 32-bit version of the Intrepid Ibex install ISO, burned it to a CD and got to work.

Unfortunately, the installer CD wasn't running properly; early into the install process it would just dump me into a command line and the install would abort.

I tried googling around, but most information out there for this laptop is from circa 2005; it seems like nobody is trying to install a modern version of Linux on this specific laptop model.

After some messing around, I started to suspect that what was making the install abort was a lack of drivers for the video card on this machine (S3 Unichrome Integrated Graphics with 64MB Shared Memory). I tried looking for an alternate way to install; the only way I could find was to download an alternate install ISO. So I downloaded this alternate install CD and proceeded to install Ibex on this "classic" laptop. Unfortunately, the installer froze again at 25% of the "Select and Install Software" stage, in the "Preparing gnome-icon-theme" step. At this point I had to abort the installation by forcibly turning off the laptop, which became unbootable after this mishap.

At this point I tried the Ubuntu alternate install CD once again, this time choosing the "command line system" (or something along those lines) option. I figured I could try to hack my way into installing X later; after all, it was running when the laptop had Hoary Hedgehog. This time the install finished successfully, and I had a fully functional (albeit text-only) system.

I booted up to my new install and upgraded all the software by running "apt-get update", followed by "apt-get upgrade" (thank goodness I am a Debian veteran and a few years of graphical only package management in Ubuntu didn't make me forget how to update software from the command line). At this point I had a fully updated, command line only system.

After some more googling around, it became obvious that I wasn't the only one having problems with the Unichrome integrated video card included in the laptop. Thankfully, I was able to figure out that I needed the "openchrome" X driver, and I found a usable xorg.conf that I could just download and use.

After figuring out how to configure the video card, I was able to successfully run X; however, saying that it was ugly doesn't even begin to describe it. All I had was a terminal window and a gray background; of course, GNOME wasn't yet installed.

I did an "apt-get install gnome", which resulted in approximately a million packages being downloaded and installed. After a very long wait I tried to boot to X again, this time running GNOME, but for some reason it seemed to be using a Debian theme, as opposed to the default Ubuntu "brown" theme.

At this point I have the laptop almost fully functional. The only thing that is not working yet is the wireless. I'm pretty sure I'll be able to get it to work; after all, it was working in Hoary. However, life got in the way and I had to stop setting it up. To be continued, I guess.

 
 
 
 

Java EE 5 Reference Material




My Books

Two of my books cover Java EE 5; one focuses on GlassFish deployment, the other on developing using NetBeans.

  • Java EE 5 Development using GlassFish Application Server - free chapter on JSF
  • Java EE 5 Development with NetBeans 6 - free chapter on JSF

Java EE

The Java EE 5 Tutorial covers most aspects of Java EE 5. Examples can be downloaded as NetBeans projects.

  • Java EE 5 Tutorial
  • Java EE 6 (JSF 2.0, EJB 3.1)

JSF

JavaServer Faces is the standard component framework for Java EE 5.

  • JSF HTML Tag reference
  • JSF Custom Conversion and Validation
  • JSF Phase Listeners
  • Facelets
  • JSF Component Library Matrix
  • JSF Localization

JPA

The Java Persistence API is the standard Object Relational Mapping (ORM) tool for Java EE 5. It takes the best ideas from Hibernate and other ORM tools and incorporates them into the standard.

  • JPA Composite Primary Keys
  • JPA Optimistic Locking using @Version annotation
  • JPQL Reference


     
    © David R. Heffelfinger