Wednesday, November 08, 2006

Solve Business Problems Now, Technical Later

In working with a combination of C-level management, open source projects and developers, commercial projects and their support staff, and my own team, I have come to a pretty firm resolution that clearly defines where I stand on certain topics:

  • Solve the business problem now; if there are technical issues that will take a while to resolve, solve them later.

Obviously, there are certain conditions that should be met, such as solving the business problem with enough architecture/engineering to help with maintenance, future changes, etc. But the point is to not over-engineer, and to not wait for a technical 'fix' unless it clearly fits within the timeframe for solving the business problem, or unless skipping it would compromise the quality of the business solution.

Cost: It *will cost more* to solve the business problem now and take care of the technical issues later. This is something most management and customers do not really want to hear and, frankly, usually do not care about as long as the business problem gets solved.

Revenue Stream: However, as a technical representative to an organization, and more importantly as an employee or consultant, the sooner you can bring in and/or maintain the revenue stream, the better for the organization overall. This may or may not cover the additional cost noted above, but quicker time-to-market is usually a good thing as long as the quality *of solving the business problem* is not compromised.

Why this rant? It's not a rant; this has been a thorn for a lot of individuals and teams. Some people are very good at so-called 'quick and dirty' solutions that get something up and running, then spend *enormous resources* maintaining that solution. Over-engineered solutions, on the other hand, may miss deadlines and run over budget, but once deployed *may* cost much less over time (TCO) than an equivalent quick-and-dirty solution.

There is no perfect answer, other than this: no matter what, a technical project will have costs during development and costs for maintenance -- but it has no value unless it solves a business problem.

Friday, October 06, 2006

Accounting with EJB3, JPA-QL, and Databases

Just an update post on what I have been working on: EJB3.

I started learning/using EJB3 back in Oct '05 and deployed an application based on JBoss EJB3-embedded (actually, it was just the Hibernate/JPA persistence side of it).

A new project has allowed me to jump back into EJB3, and I am glad to see there is still some momentum. The persistence side (JPA) is working very well for me. The concept of using Java objects through O/R mapping to deal with database structures is wonderful (and significantly improved since my days of working with EJB 1.1). The new JPA-QL query enhancements over the old EJB-QL are great.

However, one problem still isn't solved completely: working with monetary figures. I'm working on an application that needs to do financial calculations. Naturally, I'm using the Java-side type BigDecimal so I can set the precision/scale to allow only two digits after the decimal point. On the application side, this works great. Storing the data in database fields of type Decimal with the same precision/scale also works great.

That's when I ran into big problems. Calculations on those values through a query were not working correctly, and asking a Java object for its value returned numbers at the wrong scale.

This JPA-QL statement:
  • "select node from Balances node where (node.totalPaid - node.price != 0)"
would give incorrect results. Why incorrect results? Floating point optimizations.

Floating point optimizations: computer hardware and underlying operating systems take shortcuts to improve floating point performance, such as falling back to single-precision floating point calculations. On top of that, binary floating point cannot represent most decimal fractions exactly. The result is that something simple like '58.11' can actually become '58.10997856788934...'. Big problem.
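You can see the representation side of this directly in Java (a minimal illustration, not code from the application): a BigDecimal built from a double inherits the inexact binary value, while one built from a String keeps the intended decimal value and scale.

```java
import java.math.BigDecimal;

public class FloatingPointDemo {
    public static void main(String[] args) {
        // The double literal 58.11 has no exact binary representation,
        // so this BigDecimal captures the real stored value -- a long
        // string of digits that is close to, but not equal to, 58.11.
        BigDecimal fromDouble = new BigDecimal(58.11);
        System.out.println(fromDouble);

        // Built from a String, the value and scale are exactly as written.
        BigDecimal fromString = new BigDecimal("58.11");
        System.out.println(fromString);

        // The two are not numerically equal.
        System.out.println(fromDouble.compareTo(fromString) != 0);
    }
}
```
This is why BigDecimal's own documentation steers you toward the String constructor (or `BigDecimal.valueOf`) for monetary values.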

  • Enable double-precision floating point calculation at the hardware/OS level (usually requiring Xeon-type processors) -- specifically on the database server that is running these queries (and it would not hurt on the application server, either).

  • Do IEEE-754 floating point calculations at the application level -- i.e., get all the record results from the database and have the application itself do controlled calculations (in Java, via the strictfp keyword or the BigDecimal functions in java.math).
  • Carefully choose/configure your database server with respect to how it handles Decimal-field calculations.
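The application-level option can be sketched like this (field names mirror the hypothetical Balances query above; a minimal sketch, not the original code): pull the values back and do the arithmetic in BigDecimal, where subtraction is exact and the scale stays under your control.

```java
import java.math.BigDecimal;

public class BalanceCheck {
    // Stand-ins for node.totalPaid and node.price from the query.
    // subtract() is exact for BigDecimal, and compareTo() treats
    // 0.00 and 0 as equal, so there is no floating point surprise.
    public static boolean isSettled(BigDecimal totalPaid, BigDecimal price) {
        return totalPaid.subtract(price).compareTo(BigDecimal.ZERO) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isSettled(new BigDecimal("58.11"), new BigDecimal("58.11")));
        System.out.println(isSettled(new BigDecimal("58.11"), new BigDecimal("58.10")));
    }
}
```
Note the use of compareTo rather than equals: BigDecimal.equals also compares scale, which is rarely what you want in a balance check.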

I went with the third option once I understood where my problem was coming from. I was developing on MySQL 4.1 which, even though it stores the Decimal field with the correct precision, runs queries over those fields with the floating point problem. MySQL 5.0.3 and later corrected this, and testing on PostgreSQL 8.1.1 showed correct results as well.

As for asking for a specific field value, I was still having problems. In EJB3, you can configure a BigDecimal with this annotation for persistence:
  • @javax.persistence.Column(precision=8, scale=2)

As of JBoss EJB3 RC9, this only helps when saving the data to the database (i.e., the value stored in the database is correct), but not when displaying a retrieved value. This comes down to how the underlying JDBC driver works with the persistence engine, and how JDBC methods like 'getObject' may return something different than expected. Two workarounds:

  • Carefully choose your JDBC driver/persistence engine for BigDecimal support (i.e., getObject returns a double or BigDecimal instead of a float).
  • Handle the returned data in your getter (re-set the BigDecimal scale, and have any UI display the BigDecimal as a formatted string).
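The getter workaround can be sketched as follows (the entity and field names are illustrative, not the original code): normalize the scale on every read, so whatever the JDBC driver handed back comes out with exactly two decimal places.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Balance {
    private BigDecimal totalPaid;

    public void setTotalPaid(BigDecimal totalPaid) {
        this.totalPaid = totalPaid;
    }

    // Normalize on read: regardless of the scale the persistence layer
    // returned, callers always see two digits after the decimal point.
    public BigDecimal getTotalPaid() {
        return totalPaid == null
                ? null
                : totalPaid.setScale(2, RoundingMode.HALF_UP);
    }
}
```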

As you can see, with financial applications you need to be very careful about the floating point problem, and that requires diligence across the entire stack: hardware, OS, database, and application.

6/20/2011 update:

Friday, September 01, 2006

Parsing X12 EDI HIPAA 835 files

At work, I have been tasked with parsing and creating 835 files. I already have a previous system that parses 835 files for use in a sales/payment reconciliation program, but it relies on an external commercial program to convert them to XML, after which I process the XML results.

First, there are a number of solutions already out there. On the open-source side, the Python-based pyx12 was one of the first I started playing with, because there was a lack of non-GPL Java-based options. The Java-based ones include EDIReader and OBOE.

Now, one of the challenges I ran into was this whole conversion to XML. I will need to store the 835 EDI files, and if you use a different tool later for XML conversion, it may convert them differently, which may lead to different results from the same 835 EDI file (or from the same original data sources when creating an 835 file).

So, I have written my own purpose-specific parser/renderer based on a simplified, Java-based domain model (domain objects that could be extended into EJB3 Entities, to be precise). No XML. And, to be more specific, it only works with pharmacy-based 835s, making it smaller and more efficient for my particular purpose. How efficient? I can parse a 2000-transaction 835 EDI file from local disk into Java classes in about 200ms -- and that is prototype, non-optimized code with no external library dependencies ;-)
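The core of any hand-rolled X12 parser is just delimiter handling. A hedged sketch (class name is mine, and it assumes the common default delimiters of '~' terminating segments and '*' separating elements; a production parser must read the real delimiters from the ISA header, since a sender can choose others):

```java
import java.util.ArrayList;
import java.util.List;

public class X12Splitter {
    // Split raw X12 text into segments, then each segment into elements.
    // Assumes '~' ends a segment and '*' separates elements (the common
    // defaults); real code should take these from the ISA envelope.
    public static List<String[]> split(String edi) {
        List<String[]> segments = new ArrayList<String[]>();
        for (String segment : edi.split("~")) {
            segment = segment.trim();
            if (segment.length() > 0) {
                // -1 keeps trailing empty elements, which X12 allows.
                segments.add(segment.split("\\*", -1));
            }
        }
        return segments;
    }

    public static void main(String[] args) {
        String sample = "ST*835*0001~BPR*I*58.11*C~SE*3*0001~";
        for (String[] seg : split(sample)) {
            System.out.println(seg[0] + ": " + (seg.length - 1) + " elements");
        }
    }
}
```
From there, each segment ID (ST, BPR, CLP, SVC, ...) maps onto a domain object; that mapping is where the 835 domain knowledge actually lives.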

What have we learned? I left some details out, but you have to already know and understand 835 EDI files to accomplish this. If you do not have domain knowledge of 835s, there are plenty of software packages to help you, and you will then have to integrate one into your solution. I took that packaged approach first and it did work, but it proved inefficient and dependent on a commercial solution. With more knowledge, you are able to take control of your destiny (with software, at any rate)!

Friday, July 14, 2006

File Cabinets and Solutions (JCR/WEBDAV)

I'm back with my first 'official' blog post in the development realm. I'm a Java developer, an open-source follower, and a big fan of consistency and standards.

My off-coding time is actually spent as a CDIA+ certified document imaging architect. One of the problems that usually falls under this role is in the arena of Document Management.

Document Management: Very simply, look at file cabinets. There are documents (a document is a collection of pages usually either single or collected together by a staple or a paper clip). Documents are stored in Folders. The folders usually have tabs that 'index' the type of documents it contains. Drawers are labeled usually with a portion of the index (i.e. if by last name, maybe the first two letters) for faster manual lookup. Everyone understands filing cabinets.
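The filing-cabinet description above maps naturally onto a tiny domain model (a purely illustrative sketch of my own, not any product's API):

```java
import java.util.ArrayList;
import java.util.List;

// A Document is a collection of pages kept together.
class Document {
    final String name;
    final List<String> pages = new ArrayList<String>();
    Document(String name) { this.name = name; }
}

// A Folder holds documents, with an index tab describing what it contains.
class Folder {
    final String indexTab;
    final List<Document> documents = new ArrayList<Document>();
    Folder(String indexTab) { this.indexTab = indexTab; }
}

// A Drawer is labeled with a portion of the index (e.g. "A-B" for last
// names) so a person -- or a program -- can find the right folder faster.
class Drawer {
    final String label;
    final List<Folder> folders = new ArrayList<Folder>();
    Drawer(String label) { this.label = label; }

    Folder findFolder(String tab) {
        for (Folder f : folders) {
            if (f.indexTab.equals(tab)) return f;
        }
        return null;
    }
}
```
The hard part is not this model -- everyone understands it -- but agreeing on one shared way to expose it, which is where the rest of this post goes.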

So, why is transforming the manual filing cabinet into electronic form so difficult? The information and ideology are right there in front of us!

The answer, in my opinion, is that no one solution does it the same as another: they are all proprietary and, as such, what people learn in one proprietary document management solution does not carry over to a different one.

Solution: I have been tracking a protocol that allows you to store documents, archive documents, index documents, and then search for those documents later, all in a consistent fashion. This protocol is HTTP-based and is not tied to Java/.NET/Ruby/Perl, but can be used and understood from all of them. It has been documented and RFC'd a number of times. What is this magical protocol? An old one called WebDAV.
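Because WebDAV is just HTTP with additional verbs, storing a document is an ordinary PUT that any language's HTTP client can issue. A minimal Java sketch (the server URL and path are hypothetical):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class WebDavPut {
    // Configure (but do not yet send) a PUT request, the verb WebDAV
    // uses to store a resource. Calling getOutputStream() and then
    // getResponseCode() on the returned connection performs the
    // actual transfer against a live WebDAV server.
    public static HttpURLConnection preparePut(URL resource) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) resource.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/plain");
        return conn;
    }
}
```
A successful PUT of a new resource comes back 201 Created (204 No Content when overwriting), and the indexing side is handled by WebDAV's property verbs, PROPFIND and PROPPATCH.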

I have been anxious for a solid, working, Java-based open-source WebDAV server. Don't hold your breath: after two years, I still haven't been impressed enough to actually deploy the one I've been watching, but I am using it as a learning tool for WebDAV. The last binary release was from 2004, and I have had sporadic success with building from CVS (i.e., something that used to work would break -- not fun).

There is a newly active content repository in development based on the Java Content Repository API (JCR), also known as JSR-170. I'm not overly excited about JCR/JSR-170, as it is language-specific, but they are working on a WebDAV interface, which I am very excited about. Come on, Apache, don't let me down a second time!

First blog post

Welcome everyone!

I already had my first blog planned out. Unfortunately, I will be posting that blog second.

Two influential developers that I follow are Matt Raible and Rick Hightower. They use jroller for their blogs. I waited two months, and jroller still hasn't fixed their new user registration. My second choice, every time I've come here, has been reliable and always up. Guess what: my second choice got my business :-)