OpenSubsystems

Business Components for Java Applications

Open Core

Tutorial

Documentation
Last modified 2007/03/11 06:30:45 by bastafidli (tutorial_persistence.html, revision 1.25)

Implementing the persistence layer

Once we have decided on the data model, it might be a good idea to implement the persistence of the data objects. This way we will be able to create, read and save the data objects and make everything ready for the implementation of the application specific business logic.

Best practices tell us to separate the business logic from the persistence tier with interfaces, making it independent of the specific persistence mechanism implementation. To make the design of the persistence tier interfaces easier, Open Core provides a set of generic high level interfaces containing the methods necessary to access and persist data objects. These interfaces implement the concept of data factories that create, access and modify the data entities.

Most business applications utilize relational databases as their persistence store. Open Core provides several interfaces and classes to speed up the implementation of the persistence layer of our application and to make accessing relational databases easier. Open Core also establishes an implementation pattern that separates database dependent code from database independent code, allowing applications to easily support multiple databases. The database independent code is placed into the database factory implementation classes. All code that is or could potentially be database dependent is placed into one or more database schema implementation classes. The database factory uses the database schema manager to create and access the database schema suitable for the currently active database type (such as Oracle or DB2). The database factory also uses a connection factory to retrieve an active connection to the database, usually from an underlying connection pool, and a transaction factory to manage database transactions. Even though we will be writing all the code ourselves and nothing will be generated, the utility classes make the coding simple and effortless.

Our tasks

We will decide which Open Core data factories fit our needs best and derive our interfaces from them. Next, we will add to our newly defined interfaces any methods to load or modify data that will be required by our application specific business logic. We will create implementations of these interfaces using the base classes provided by Open Core. Since we will use relational databases to store our data, we will use the Open Core database operation classes to greatly simplify the implementation of our factories. Finally, we will decide which databases our application needs to support and provide database schemas for each of them.

DataFactory - base interface for all data factories responsible for loading and persisting data. A data factory implements the Data Access Object pattern or, more generically, the Abstract Factory pattern as described by GoF95. Its main purpose is to create, retrieve and change the persisted data objects in the underlying persistence store without exposing any persistence store technology dependent information to the rest of the application. This interface doesn't dictate the implementation, therefore it is possible to have FileDataFactory, DatabaseDataFactory, etc.

BasicDataFactory - base interface for all data factories responsible for persisting BasicDataObject derived data objects. It allows creating and deleting the read only data objects. The delete method, even though it is not applicable to all data objects, is required because in order to implement unit tests every data object must be created during setup and deleted during cleanup.

ModifiableDataFactory - base interface for all data factories responsible for persisting ModifiableDataObject derived data objects. It allows updating data objects that already exist in the persistence store.

DataFactoryManager - factory class responsible for the instantiation of data factories. Business logic uses the data factory manager to access a data factory instance to retrieve or persist data without being exposed to the actual persistence mechanism used. This class determines what data factory should be instantiated for a specified interface based on the currently used persistence mechanism and creates the factory instance. It utilizes a ClassFactory derived class to implement the Strategy pattern expressing the exact mechanism for determining what class to instantiate.
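The lookup idea can be sketched in miniature. The classes below are hypothetical stand-ins, not Open Core code: the real DataFactoryManager and ClassFactory are richer, but the core mechanism is the same, a mapping from factory interface to implementation class that is resolved and instantiated reflectively, so business logic never names a concrete persistence class.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-ins illustrating the manager idea only.
interface StoreFactory
{
}

class DatabaseStoreFactory implements StoreFactory
{
}

class SimpleFactoryManager
{
   // The registry is the pluggable "strategy": swapping the mapping swaps
   // the persistence mechanism without touching the business logic.
   private static final Map<Class<?>, Class<?>> s_registry
      = new HashMap<Class<?>, Class<?>>();

   public static void register(Class<?> clsInterface, Class<?> clsImpl)
   {
      s_registry.put(clsInterface, clsImpl);
   }

   public static Object getInstance(Class<?> clsInterface) throws Exception
   {
      // Instantiate the implementation registered for the interface
      return s_registry.get(clsInterface).newInstance();
   }
}

public class FactoryManagerDemo
{
   public static void main(String[] args) throws Exception
   {
      SimpleFactoryManager.register(StoreFactory.class, DatabaseStoreFactory.class);
      StoreFactory factory
         = (StoreFactory)SimpleFactoryManager.getInstance(StoreFactory.class);
      // The caller only ever sees the StoreFactory interface
      System.out.println(factory instanceof DatabaseStoreFactory);
   }
}
```

Because the caller depends only on the interface, replacing the database backed implementation with, say, a file backed one is a configuration change rather than a code change.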

Factory interface design

We will need a separate factory for each data object because the factory interfaces provided by Open Core are meant to support only a single data object type at a time. Both BlogFactory and EntryFactory will be derived from ModifiableDataFactory since both data object types can be modified. This immediately provides us with method declarations for creation (DataObject create(DataObject), Collection create(Collection)), modification (ModifiableDataObject save(ModifiableDataObject)), deletion (delete(int, int)) and basic retrieval (DataObject get(int, int)) of data objects.

The only other methods we will have to add are those required by the user interface to retrieve the data it needs. The initial page will display a list of all available chronicles and therefore BlogFactory must provide a way to retrieve them. We will therefore add the method List getAll() to the BlogFactory interface.

   /**
    * Return collection of all Blogs in the persistence store.
    *
    * @return List  - list of Blog objects sorted alphabetically
    * @throws OSSException - an error has occurred
    */
   List getAll(
   ) throws OSSException;
               

Similarly, our user interface provides a page where the user can see a list of all entries for the selected chronicle, therefore EntryFactory will need to provide the method List getAll(int iBlogId).

   /**
    * Get all entries from the specified blog
    *
    * @param  iBlogId - ID of the blog to get entries from
    * @return List - list of entries sorted from the most recent to the oldest 
    *                one or null if none exists
    * @throws OSSException - an error has occurred
    */
   List getAll(
      int iBlogId
   ) throws OSSException;
               

We can add additional methods later once we need them.

Notice that these interfaces are completely agnostic to the particular persistence mechanism. Using the same interfaces, the data objects can be persisted to a file or to a relational database. The user of these interfaces doesn't (and shouldn't) have any idea where the data objects come from.

Factory implementation

After the interfaces are designed, it is time to decide on a specific persistence mechanism and implement it.

DatabaseFactory and DatabaseFactoryImpl - interface and base class for all data factories responsible for persisting data objects using a relational database. They provide everything necessary to access the active database, such as an instance of a connection factory to get connections to the database or an instance of a transaction factory to manage transactions on the database connections.

DatabaseSchema and DatabaseSchemaImpl - interface and base class representing an abstraction of a database schema, which is a set of related tables, indexes, and stored procedures that a subsystem requires to run. A subsystem will usually implement at least one database schema containing database independent code and multiple derived schemas, each optimized for a specific supported database. A database schema is responsible for the creation and management of all database structures for a given subsystem; derived classes then encapsulate all database dependent information (such as database dependent queries). Each database schema can also define what other database schemas it depends on (e.g. it has foreign keys referring to tables managed by that schema) and Open Core ensures that those schemas are created before the current schema needs them.

DatabaseSchemaManager - factory class responsible for the instantiation of database schemas. A database factory uses the database schema manager to access a database schema instance to construct efficient queries or efficiently execute database operations without being exposed to the currently used database management system (such as Oracle or DB2). This class determines what database schema should be used for a specified interface based on the currently used database and creates the schema instance. It utilizes a ClassFactory derived class to implement the Strategy pattern expressing the exact mechanism for determining what class to instantiate.

Database and DatabaseImpl - interface and base class representing an abstraction of the database the application is accessing. They encapsulate all database specific behavior, such as the syntax of the function returning the current date and time, the default transaction isolation, and whether and how to bring the database online. The current database is determined automatically based on the configured JDBC driver. One database instance will usually consist of multiple database schemas corresponding to the subsystems used in the application.

DatabaseConnectionFactory and DatabaseConnectionFactoryImpl - interface and base class that define how an application acquires and manages database connections regardless of the platform or middleware it runs on. They implement the Abstract Factory pattern as described by GoF95. The main reason is that database connections can be managed in different ways: they may be opened whenever requested or pooled and retrieved from the pool, and they might be created by the driver, retrieved from a data source or provided by the middleware. This interface and base class provide a unified way to access database connections regardless of the actual implementation strategy used.

TransactionFactory, DatabaseTransactionFactory and DatabaseTransactionFactoryImpl - interfaces and base class that define how an application manages logical transactions regardless of the platform or middleware it runs on, with an emphasis on database transactions. They implement the Abstract Factory pattern as described by GoF95. The main reason is that transactions can be managed in multiple different ways. An application can use a middleware transaction manager that manages all transactions across multiple databases and systems in a way compatible with the JTA/JTS specification. An application can also manage transactions using a database connection as specified by JDBC, etc. These Open Core interfaces and base class provide a unified way to access transactions regardless of the actual implementation strategy used.

DatabaseReadOperation, DatabaseReadMultipleOperation, and DatabaseUpdateOperation - classes that make the implementation of database operations easier. Instead of coding the entire operation and typing the same JDBC constructs over and over, you just need to instantiate the proper operation, supply a few arguments and define any special behavior, and the database operation will do the rest. The provided operation classes make the implementation easier, faster and safer since they take care of most resource (connection, statement, result set) allocation and deallocation, letting you focus on the specifics of your persistence call.

DatabaseReadSingleDataObjectOperation, DatabaseCreateSingleDataObjectOperation, DatabaseCreateMultipleDataObjectsOperation, DatabaseUpdateSingleDataObjectOperation, DatabaseUpdateMultipleDataObjectsOperation, DatabaseDeleteSingleDataObjectOperation - classes that make the implementation of common database tasks trivial. It usually takes only two lines to create, update, read or delete data in the database using these database operations.

OpenChronicle persistence layer

OpenChronicle will use relational databases as its persistence store. We want to support all Open Core supported databases as efficiently as possible, and the Open Core architecture provides a nice division between database dependent and independent code. All the database independent code will be implemented in two classes, BlogDatabaseFactory and EntryDatabaseFactory, because as we have mentioned before, each database factory supports only one data object type. The database dependent code will be placed into the common class BlogDatabaseSchema and its derived subclasses.

BlogDatabaseFactory and EntryDatabaseFactory are quite similar, so we will discuss only one of them; the same concepts apply to both. BlogDatabaseFactory has to implement the BlogFactory interface we defined earlier. It will be derived from DatabaseFactoryImpl because this class provides most of the common functionality needed by all database factories and saves us a lot of coding. The first thing the factory should do is get an instance of a schema. The schema will provide all the queries and database dependent operations. This can be done in the constructor and stored in a member variable since all factories are stateless and therefore thread safe.

   public BlogDatabaseFactory(
   ) throws OSSException
   {
      super(DataConstant.BLOG_DATA_TYPE);

      m_schema = ((BlogDatabaseSchema)DatabaseSchemaManager.getInstance(
                                         BlogDatabaseSchema.class));
   }
               

Data retrieval

The interface requires implementations of several read operations in the form of getXYZ methods. We can divide them based on how many results they can return. The operations returning at most one data element, such as DataObject get(int, int), can be easily implemented using either the DatabaseReadSingleDataObjectOperation or the DatabaseReadOperation class.

The DatabaseReadSingleDataObjectOperation class is specialized for the retrieval of a data object by its id since this is such a common task. The entire implementation consists of two commands constructing and invoking the operation, which does all the work for us automatically. No more handling connections, statements, result sets, etc.

   public DataObject get(
      final int iId,
      final int iDomainId
   ) throws OSSException
   {
      DataObject data = null;
      
      // If the ID is supplied try to read the data from the database, if it is not,
      // it is new blog which doesn't have ID yet
      if (iId == DataObject.NEW_ID)
      {
         // This blog doesn't exist yet so just create a new one
         data = new Blog(iDomainId);
      }
      else
      {
         DatabaseReadOperation dbop = new DatabaseReadSingleDataObjectOperation(
            this, m_schema.getSelectBlogById(BlogDatabaseSchema.BLOG_COLUMNS),
            m_schema, iId, iDomainId);
         data = (DataObject)dbop.executeRead();
      }         
      
      return data;
   }
               

Notice that we have also provided an implementation for the case when the requested data object id is the special constant DataObject.NEW_ID. This constant signals that a new object initialized to default values is requested. This way the data factory is the only place where a Blog data object is created (instantiated). Later on this method can be modified to provide additional functionality for this special case, such as initialization of the data object based on a default template for a given user.

The DatabaseReadOperation class allows retrieving any data using any criteria but requires a little (though not too much) more coding. The operation will obtain the connection and prepare the query for us so that the only thing left is to set the arguments for the query and execute it. The execution and data retrieval is one line of code using one of the loadXYZ methods provided by the DatabaseUtils class.

   public Blog get(
      final String strFolder
   ) throws OSSException
   {
      if (GlobalConstants.ERROR_CHECKING)
      {
         assert strFolder != null : "Blog folder cannot be null";
      }
      
      DatabaseReadOperation dbop = new DatabaseReadOperation(
         this, m_schema.getSelectBlogByFolder(BlogDatabaseSchema.BLOG_COLUMNS), 
         m_schema)
      {
         protected Object performOperation(
            DatabaseFactoryImpl dbfactory,
            Connection          cntConnection,
            PreparedStatement   pstmQuery
         ) throws OSSException,
                  SQLException
         {
            int iDomainId = CallContext.getInstance().getCurrentDomainId();            
            pstmQuery.setString(1, strFolder);
            pstmQuery.setInt(2, iDomainId);
            return DatabaseUtils.loadAtMostOneData(dbfactory, pstmQuery,
                      "Multiple records loaded from database for domain ID "
                      + iDomainId + " and folder " + strFolder);
         }         
      };
      return (Blog)dbop.executeRead();
   }
               

The query passed to the operation constructor is retrieved from the schema. This is done even if the same query works correctly on all databases. By retrieving the query from the schema, the schema can later provide a query optimized for the active database (for example, supplying database specific hints about what indexes to use to retrieve the results faster) if such a need occurs.
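For instance, if profiling later showed that one database picks a poor plan for the folder lookup, a database specific schema subclass could override the query method to add an optimizer hint while the factory code stays untouched. A hypothetical sketch (the class name, index name and hint are made up for illustration; a real override would extend BlogDatabaseSchema):

```java
// Hypothetical database specific schema class; the subclass name, the index
// name and the Oracle hint are illustrative only, not part of Open Core.
public class OracleBlogDatabaseSchemaSketch
{
   public String getSelectBlogByFolder(
      String strColumns
   )
   {
      // Logically the same query as the generic one, with an Oracle style
      // optimizer hint added; the factory calling this method never changes
      return "select /*+ INDEX(BF_BLOG BF_BLOG_FOLDER_IDX) */ " + strColumns
             + " from BF_BLOG where FOLDER = ? and DOMAIN_ID = ?";
   }
}
```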

A read operation returning multiple data objects can be implemented just as easily, in a similar fashion, using the DatabaseReadMultipleOperation class.

   public List getAll(
   ) throws OSSException
   {
      DatabaseReadOperation dbop = new DatabaseReadMultipleOperation(
         this, m_schema.getSelectAllBlogs(BlogDatabaseSchema.BLOG_COLUMNS), m_schema)
      {
         protected Object performOperation(
            DatabaseFactoryImpl dbfactory,
            Connection          cntConnection,
            PreparedStatement   pstmQuery
         ) throws OSSException,
                  SQLException
         {
            int iDomainId = CallContext.getInstance().getCurrentDomainId();
            pstmQuery.setInt(1, iDomainId);
            return DatabaseUtils.loadMultipleData(dbfactory, pstmQuery);
         }         
      };
      return (List)dbop.executeRead();
   }
               

You may wonder how the DatabaseUtils class knows how to create a Blog data object. When we call its methods, we pass a reference to the calling factory using the dbfactory variable. The DatabaseUtils class calls back the load(ResultSet, int) method of the DatabaseFactory interface, which constructs the actual data object from the result set. BlogDatabaseFactory implements this method for the Blog data object.

   public DataObject load(
      ResultSet rsQueryResults,
      int       initialIndex
   ) throws OSSDatabaseAccessException      
   {
      Blog data;
      
      try
      {
         // The order must exactly match the order in COLUMNS constant
         data = new Blog(rsQueryResults.getInt(initialIndex),
                         rsQueryResults.getInt(initialIndex + 1),
                         rsQueryResults.getString(initialIndex + 2),
                         rsQueryResults.getString(initialIndex + 3),
                         rsQueryResults.getString(initialIndex + 4),
                         rsQueryResults.getTimestamp(initialIndex + 5),
                         rsQueryResults.getTimestamp(initialIndex + 6));
         data.setFromPersistenceStore();
      }
      catch (SQLException sqleExc)
      {
         throw new OSSDatabaseAccessException("Failed to load data from the database.",
                                             sqleExc);
      }
      
      return data;
   }
               

This is the only method that constructs the data object from the result set. It is used regardless of whether one or multiple data objects are being retrieved. To provide more flexibility for cases when we want to retrieve various data objects using the same query (for example using joins), it accepts an initialIndex argument that tells it where in the result set to start retrieving the data for the Blog.

Data creation

The factory implementation of the method to create a single data object is very straightforward. Open Core provides the DatabaseCreateSingleDataObjectOperation class to simplify this task into two commands: construct the operation and execute the update.

   public DataObject create(
      final DataObject data
   ) throws OSSException
   {
      if (GlobalConstants.ERROR_CHECKING)
      {
         assert data.getId() == DataObject.NEW_ID 
                : "Cannot create already created data.";
      }
      DatabaseUpdateOperation dbop = new DatabaseCreateSingleDataObjectOperation(
         this, m_schema.getInsertBlogAndFetchGeneratedValues(), m_schema, data);
      dbop.executeUpdate();
     
      return (DataObject)dbop.getReturnData();
   }
              

The definition of the DataObject create(DataObject) method requires returning the id and the creation timestamp of the newly created data object. The best way to generate a unique id and a timestamp is to let the database do it. Since each database implements sequences differently, the factory again asks the schema for the best query for the active database to do this task. The operation then knows how to optimally execute the query and retrieve its results for each supported database.

Creation of multiple data objects at once is just as simple. Open Core provides the DatabaseCreateMultipleDataObjectsOperation class, which is optimized for this task.

   public int create(
      final Collection colDataObject
   ) throws OSSException
   {
      if (GlobalConstants.ERROR_CHECKING)
      {
         assert colDataObject != null && !colDataObject.isEmpty() 
                : "Cannot create empty data list.";
      }

      DatabaseUpdateOperation dbop = new DatabaseCreateMultipleDataObjectsOperation(
         this, m_schema.getInsertBlog(), m_schema, colDataObject, false);
      dbop.executeUpdate();
      
      return ((Integer)dbop.getReturnData()).intValue();
   }
               

You may again wonder how the database operation knows how to insert our data object. In a similar fashion to the load method, the database factory provides the setValuesForInsert method that is used to set values from the data object into the statement in order to insert it into the database. The same method is used regardless of whether we are inserting one data object or multiple objects at the same time.

   public int setValuesForInsert(
      PreparedStatement insertStatement,
      DataObject        data,
      int               iIndex
   ) throws OSSException, 
            SQLException
   {
      Blog blog = (Blog)data;
      
      // Here you must pass the domain id sent to you in the blog object.
      // If you want to check whether this id is the same as the current
      // domain id, do it at the controller level.

      insertStatement.setInt(iIndex++, blog.getDomainId());
      insertStatement.setString(iIndex++, blog.getFolder());
      insertStatement.setString(iIndex++, blog.getCaption());
      insertStatement.setString(iIndex++, blog.getComments());

      return iIndex;
   }
               

In case the application requires special handling of inserts (such as an INSERT ... SELECT combination), Open Core provides the generic DatabaseUpdateOperation class. The value of the update type argument should specify that this update is in fact an insert operation using the DatabaseUpdateOperation.DBOP_INSERT constant. The application can then override performOperation to populate or execute any query any way it wants. The operation class will provide a connection, may automatically prepare the query, and provides transaction and exception handling.

Data modification

Updating a data object in a database is as simple as its creation, using the DatabaseUpdateSingleDataObjectOperation class provided by Open Core.

   public ModifiableDataObject save(
      final ModifiableDataObject data
   ) throws OSSException
   {
      if (GlobalConstants.ERROR_CHECKING)
      {
         assert data != null 
                  : "Data can not be null";
         assert data.getId() != DataObject.NEW_ID
                  : "Cannot save data which wasn't created yet.";
      }

      DatabaseUpdateOperation dbop = new DatabaseUpdateSingleDataObjectOperation(
         this, m_schema.getUpdateBlogAndFetchGeneratedValues(), m_schema, data);
      dbop.executeUpdate();
     
      return (ModifiableDataObject)dbop.getReturnData();
   }
               

The database operation class calls back to the factory to populate the values from the data object into the statement using the setValuesForUpdate method.

   public int setValuesForUpdate(
      PreparedStatement updateStatement,
      DataObject        data,
      int               iIndex
   ) throws OSSException, 
            SQLException
   {
      Blog blog = (Blog)data;
      
      updateStatement.setString(iIndex++, blog.getFolder());
      updateStatement.setString(iIndex++, blog.getCaption());
      updateStatement.setString(iIndex++, blog.getComments());
      updateStatement.setInt(iIndex++, blog.getId());
      // Here you must pass the domain id sent to you in the blog object.
      // If you want to check whether this id is the same as the current
      // domain id, do it at the controller level. Otherwise the test fails
      // if run on an empty database.
      updateStatement.setInt(iIndex++, blog.getDomainId());
      updateStatement.setTimestamp(iIndex++, blog.getModificationTimestamp());

      return iIndex;
   }
               

If the application does not want to update the entire data object but perhaps just a portion of it, it can override the setValuesForUpdate method of the DatabaseUpdateSingleDataObjectOperation class to set only those columns it needs. For even more custom modifications it can use the generic DatabaseUpdateOperation class. The value of the update type argument should specify that this is in fact an update operation using the DatabaseUpdateOperation.DBOP_UPDATE constant.

In case there is a need to update multiple data objects at once, Open Core provides a specialized operation in the form of the DatabaseUpdateMultipleDataObjectsOperation class. Usage of this operation is very similar to the usage of the DatabaseCreateMultipleDataObjectsOperation class demonstrated earlier.

Data deletion

As you are probably expecting, deleting a single data object is as simple as its creation and modification, using the DatabaseDeleteSingleDataObjectOperation class.

   public void delete(
      final int iId,
      final int iDomainId
   ) throws OSSException
   {
      if (GlobalConstants.ERROR_CHECKING)
      {
         assert iId != DataObject.NEW_ID 
                : "Cannot delete data, which wasn't created yet.";
      }

      DatabaseUpdateOperation dbop = new DatabaseDeleteSingleDataObjectOperation(
         this, m_schema.getDeleteBlogById(), m_schema, iId, iDomainId);
      dbop.executeUpdate();
   }
               

Implementation of custom deletions is very similar to the custom inserts and updates described earlier, this time using the DatabaseUpdateOperation class and specifying the update type as delete using the DatabaseUpdateOperation.DBOP_DELETE constant.

At this time the entire factory is implemented. What remains is to provide the actual SQL queries the factory needs.

Schema implementation

As we have demonstrated in our code, the database factory is mainly concerned with JDBC code execution and doesn't care about differences that might be caused by each supported database's proprietary implementation of the SQL and JDBC standards. The main purpose of the database schema classes is to provide database dependent functionality that hides these differences from the factory. Since many databases do not require any special treatment, the database schema is implemented at two levels. The base class BlogDatabaseSchema provides functionality common to all databases and the derived [DatabaseIdentifier]BlogDatabaseSchema classes provide functionality specific to a given database. The same schema can provide queries for multiple factories and therefore multiple data objects; BlogDatabaseSchema will provide queries for blogs as well as for entries.

The BlogDatabaseSchema class is derived from DatabaseSchemaImpl, which provides most of the functionality required by all database schemas and saves us a lot of coding. The constructor passes to the parent class several values describing the schema to Open Core. Among them are the name and version of the schema, the names of the tables it contains, and the list of dependent schemas containing database objects, such as tables, that this schema depends on (e.g. has foreign keys referring to). Open Core makes sure that the dependent schemas are initialized before this schema. If you modify your database structures (e.g. add a column or index) in a new version of your application, you can just increase the version number and provide simple code in the upgrade(Connection, String, int) method to upgrade your schema. Open Core makes sure that all the schemas are upgraded when the application starts.

   /**
    * Name identifies this schema in the database. 
    */
   public static final String BLOG_SCHEMA_NAME = "BLOG";
   
   /**
    * Version of this schema in the database.
    */
   public static final int BLOG_SCHEMA_VERSION = 1;

   ...
   

   public BlogDatabaseSchema(
   ) throws OSSException
   {
      super(null, BLOG_SCHEMA_NAME, BLOG_SCHEMA_VERSION, true, TABLE_NAMES);
   }   
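The upgrade method mentioned above is typically written as a fall-through switch on the schema version found in the database, so that a schema several versions behind replays every remaining step. A self-contained sketch of the control flow (the version numbers and DDL statements are hypothetical; the real upgrade(Connection, String, int) method would execute the statements on the passed connection instead of collecting them):

```java
import java.util.ArrayList;
import java.util.List;

public class SchemaUpgradeSketch
{
   // Hypothetical upgrade steps collected into a list so that the
   // fall-through control flow is visible without a live database.
   public static List<String> upgrade(int iOriginalVersion)
   {
      List<String> lstStatements = new ArrayList<String>();
      switch (iOriginalVersion)
      {
         case 1:
            // Upgrade from version 1 to version 2
            lstStatements.add("alter table BF_BLOG add SUBTITLE varchar(1024)");
            // intentional fall through so later steps run as well
         case 2:
            // Upgrade from version 2 to version 3
            lstStatements.add(
               "create index BF_BLOG_FOLDER_IDX on BF_BLOG (FOLDER)");
            // intentional fall through
         default:
            break;
      }
      return lstStatements;
   }

   public static void main(String[] args)
   {
      // A schema found at version 1 needs both steps to reach the current version
      System.out.println(upgrade(1).size());
   }
}
```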
               

As we have mentioned during the data model design, the limits for the attributes of the data objects are dictated either by application business rules or by the limits of the persistence store. In our case we do not have any business rules that would require us to limit the data sizes, and therefore the databases will dictate how much data a user can enter for each attribute. Data type limits differ between databases. We will provide the most conservative limits in the base class and expand them in the database specific classes whenever possible.

   /**
    * Maximal length of blog folder.
    */
   public static final int BLOG_FOLDER_MAXLENGTH = 50;

   /**
    * Maximal length of blog caption.
    */
   public static final int BLOG_CAPTION_MAXLENGTH = 1024;

	...
	
   /**
    * Maximal length of blog entry target URL.
    */
   public static final int BLOGENTRY_TARGETURL_MAXLENGTH = 1024;

   // Constructors /////////////////////////////////////////////////////////////
   
   /**
    * Static initializer
    */
   static
   {
   	...
      
      // Setup maximal length of individual fields for entities
      Blog.setFolderMaxLength(BLOG_FOLDER_MAXLENGTH);
      Blog.setCaptionMaxLength(BLOG_CAPTION_MAXLENGTH);
      ...
      Entry.setTargetURLMaxLength(BLOGENTRY_TARGETURL_MAXLENGTH);

   }
               

The bulk of the base class schema implementation consists of methods returning the queries used by the factory. It is a good practice to implement these in the BlogDatabaseSchema base class by returning a generic query that works on all databases. Later we can override some of these methods in a database specific schema class to provide optimized behavior if needed.

   public String getSelectBlogById(
      String strColumns
   ) throws OSSException
   {
      StringBuffer buffer = new StringBuffer();
      
      buffer.append("select ");
      buffer.append(strColumns);
      buffer.append(" from BF_BLOG where ID = ? and DOMAIN_ID = ?");
      
      return buffer.toString();   
   }

   public String getDeleteBlogById(
   )
   {
      return "delete from BF_BLOG where ID = ? and DOMAIN_ID = ?";
   }

   public String getInsertBlog(
   ) throws OSSException
   {
      StringBuffer buffer = new StringBuffer();
      
      buffer.append("insert into BF_BLOG(DOMAIN_ID, FOLDER, CAPTION, COMMENTS," +
                    " CREATION_DATE, MODIFICATION_DATE)" +
                    " values (?, ?, ?, ?, ");
      buffer.append(DatabaseImpl.getInstance().getCurrentTimestampFunctionCall());
      buffer.append(",");
      buffer.append(DatabaseImpl.getInstance().getCurrentTimestampFunctionCall());
      buffer.append(")");

      return buffer.toString();
   }
               

Notice how a reference to the currently active database is used to retrieve database specific function calls. This allows us to provide one common method implementation for all supported databases even if the specific call name differs.
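To make the idea concrete, here is a minimal self-contained sketch of what such a database dependent function call lookup could look like. The enum, class and method names below are illustrative assumptions for this tutorial, not the actual Open Core API; in Open Core the call is made through DatabaseImpl.getInstance().

```java
// Hypothetical sketch: map the currently active database type to the
// vendor specific SQL function returning the current timestamp.
public final class TimestampFunctionSketch
{
   /** Illustrative subset of supported database types. */
   public enum DatabaseType { ORACLE, DB2, MSSQL, POSTGRESQL, HSQLDB }

   /**
    * Return the vendor specific function call for the current timestamp.
    * The exact function names per vendor are assumptions for illustration.
    */
   public static String getCurrentTimestampFunctionCall(DatabaseType type)
   {
      switch (type)
      {
         case MSSQL:  return "GETDATE()";
         case ORACLE: return "SYSDATE";
         // The remaining databases accept the SQL standard form
         default:     return "CURRENT_TIMESTAMP";
      }
   }
}
```

A query building method can then stay database independent by appending the result of this call instead of hardcoding a vendor specific function name.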

For the queries that do not have a form that would work on all databases, we have to create abstract method definitions that will force the database specific classes to provide the correct implementation.

   /**
    * Get query that inserts a blog to the database and fetches database 
    * generated values such as the generated id and creation timestamp 
    * 
    * @return String - query for simple insert or stored procedure call
    * @throws OSSException - an error has occurred
    */
   public abstract String getInsertBlogAndFetchGeneratedValues(
   ) throws OSSException;
               

The last piece of code we implement in the common base class is exception handling. A correctly designed database schema gives names to all the database objects, such as foreign keys and constraints. It is often much easier and faster to let the database check such constraints (for example, that a folder name is unique) rather than perform an additional programmatic check ourselves. The only issue with this approach is that databases usually do not generate very user friendly error messages when they detect a constraint violation. The purpose of the error handling implementation at the database schema level is to convert the database specific error messages into a user friendly representation. Open Core provides a default implementation, but more specific messages can be provided by overriding the handleSQLException method. This method is called automatically by every database operation class we have discussed earlier if an error occurs during the execution of the operation.

   public void handleSQLException(
      SQLException exc, 
      Connection   dbConnection, 
      int          iOperationType, 
      int          iDataType,
      Object       data
   ) throws OSSException
   {
      OSSInvalidDataException ideException = null;
      
      switch(iOperationType)
      {
         case DBOP_INSERT:
         {
            if (iDataType == DataConstant.BLOG_DATA_TYPE)
            {
               if ((exc.getMessage().toUpperCase().indexOf("BF_BLOG_FLDR_UQ") > -1)
                    // MySQL handles blog folder unique constraint exception as 'KEY 2'
                    || ((exc.getMessage().toUpperCase()).endsWith("KEY 2"))
                    // IBM DB2 handles blog folder unique constraint exception as "2"
                    || ((exc.getMessage().toUpperCase()).indexOf("\"2\"") > -1)
                   )
               {
                  ideException = OSSInvalidDataException.addException(ideException,
                                    Messages.NONSPECIFIC_ERRORS,
                                    "Folder has to be unique.",
                                    exc);
               }
            }
            else if (iDataType == DataConstant.BLOGENTRY_DATA_TYPE)
            {
               if (exc.getMessage().toUpperCase().indexOf("BF_BLOGENTR_FK") > -1)
               {
                  throw new OSSInvalidContextException(
                        "Blog to create entry in does not exist.", 
                        exc);
               }
            }
            break;
         }
         case DBOP_UPDATE:
         {
         ...
         }
         default:
         {
            // Some operations do not need any specific handling
         }
      }
      if (ideException != null)
      {
         // If a specific exception was created throw it back to client
         throw ideException;
      }
      else
      {
         // No special handling was needed so execute the default exception handling
         super.handleSQLException(exc, dbConnection, iOperationType, iDataType, data);
      }
   }
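The core of the technique above is plain substring matching on the vendor specific exception message. The following self-contained sketch isolates that idea so it can be tried outside of Open Core; the constraint name comes from the tutorial's schema, while the class and method names are illustrative assumptions.

```java
import java.sql.SQLException;

// Sketch of translating a vendor specific constraint violation message
// into a user friendly one, as done inside handleSQLException.
public final class ConstraintMessageSketch
{
   public static String friendlyMessage(SQLException exc)
   {
      String strMessage = exc.getMessage().toUpperCase();
      // The unique constraint on the blog folder is named BF_BLOG_FLDR_UQ
      // in the tutorial's schema; most databases echo the constraint name
      if (strMessage.indexOf("BF_BLOG_FLDR_UQ") > -1)
      {
         return "Folder has to be unique.";
      }
      return "Unexpected database error.";
   }
}
```

Note that, as the real handleSQLException shows, some databases (such as MySQL or DB2) report the violated constraint only by its ordinal position rather than by name, so a production implementation needs additional database specific patterns.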

At this time we have implemented as much common functionality as possible and we can take a look at what needs to be done separately for each supported database.

Database dependent schemas

In this tutorial we will provide an implementation for each database supported by Open Core. In reality, when developing your applications, you may decide to support only one or a subset of all possible databases. This will significantly decrease the scope of work that needs to be done during development and testing. It is still useful to follow the described architecture and separation of responsibility, since it makes the code more modular and maintainable and allows you to add support for additional databases later.

The most important functionality that has to be implemented for each database separately is the creation of database objects, such as sequences, tables, views, indexes and stored procedures. This is essential because most vendors define extensions to the SQL standard to fully utilize the features of their DBMS. Open Core defines two methods that take care of the creation of database objects. When it detects that a given database schema is being instantiated for the first time, it calls the database schema method create. When it detects that an earlier version of the current database schema is already present, it calls the method upgrade. The advantage of this approach is that the application can prepare the database exactly to your specification and without intervention from the user. That improves the user experience and makes the application accessible even to novice users.
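The decision between create and upgrade can be sketched as follows. This is an illustrative assumption about the dispatch logic, not Open Core's actual implementation; the class name, the version bookkeeping and the init method are hypothetical.

```java
// Sketch: a schema base class deciding whether to create the database
// objects from scratch or upgrade an earlier version of the schema.
public abstract class VersionedSchemaSketch
{
   /** Version of the schema implemented by this class. */
   private final int iCurrentVersion;

   protected VersionedSchemaSketch(int iCurrentVersion)
   {
      this.iCurrentVersion = iCurrentVersion;
   }

   /** Create all database objects from scratch. */
   protected abstract void create();

   /** Upgrade existing database objects from the given version. */
   protected abstract void upgrade(int iFromVersion);

   /**
    * @param iInstalledVersion version found in the database,
    *                          0 if the schema was never installed
    */
   public final void init(int iInstalledVersion)
   {
      if (iInstalledVersion == 0)
      {
         create();
      }
      else if (iInstalledVersion < iCurrentVersion)
      {
         upgrade(iInstalledVersion);
      }
      // otherwise the installed schema is up to date
   }
}
```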

Since this is the first version of the tutorial, all database specific schemas will provide only the method create. The body of this method executes the SQL needed to prepare the database for use by the application, which would otherwise have to be run interactively or using a script. Open Core passes an active connection to the database to the method, and the entire implementation is just a matter of cut and paste from the SQL scripts. Please refer to the source code of the individual database dependent blog schemas to see how the database objects are created for the various databases: DB2 View source, HSQLDB View source, MaxDB View source, MS SQL Server View source, MySQL View source, Oracle View source, PostgreSQL View source, SAP DB View source, Sybase ASE View source

The last step is to provide implementations for the methods that vary from database to database. Below is an example of one such method demonstrating how the syntax of the required query differs between databases. For DB2 we simply use a stored procedure that takes care of all the details and returns the required values.

   public String getInsertBlogAndFetchGeneratedValues(
   ) throws OSSException
   {
      return  "call INSERT_BF_BLOG (?, ?, ?, ?, ?, ?)";
   }
               

PostgreSQL uses a stored procedure as well, but the syntax of the call is different.

   public String getInsertBlogAndFetchGeneratedValues(
   ) throws OSSException
   {
      return  "select INTGR, TMSTP from INSERT_BF_BLOG (?, ?, ?, ?)";
   }
               

HSQLDB does not support stored procedures and therefore the method returns just a simple query. It relies on Open Core to determine and fetch the values that need to be returned as a result of this operation.

   public String getInsertBlogAndFetchGeneratedValues(
   ) throws OSSException
   {
      return getInsertBlog();
   }
               
   public String getInsertBlog(
   ) throws OSSException
   {
      StringBuffer buffer = new StringBuffer();
      
      // HSQLDB requires for IDENTITY columns to specify ID and pass null as a value
      buffer.append("INSERT INTO BF_BLOG(");
      buffer.append(BLOG_COLUMNS);
      buffer.append(") VALUES (null, ?, ?, ?, ?, ");
      buffer.append(DatabaseImpl.getInstance().getCurrentTimestampFunctionCall());
      buffer.append(", ");
      buffer.append(DatabaseImpl.getInstance().getCurrentTimestampFunctionCall());
      buffer.append(")");

      return buffer.toString();
   }
               

In case you wonder how these queries are used: Open Core database operations execute them using the method insertAndFetchGeneratedValues defined in the Database interface. Open Core provides an efficient implementation of all the methods in this interface for each database.
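One way such an implementation could distinguish the two cases shown above is by the form of the query the schema produces: a stored procedure call returns the generated values itself, while a plain insert requires a follow-up fetch (for example via JDBC generated keys). The following sketch of that dispatch is an assumption for illustration, not Open Core's actual logic.

```java
// Sketch: decide how generated values should be fetched, based on the
// form of the query returned by getInsertBlogAndFetchGeneratedValues().
public final class GeneratedValuesSketch
{
   /**
    * Returns true when the query itself produces the generated values
    * (a "call ..." or "select ... from PROC(...)" form), false when a
    * plain insert needs a separate fetch of the generated values.
    */
   public static boolean returnsValuesDirectly(String strQuery)
   {
      String strNormalized = strQuery.trim().toLowerCase();
      return strNormalized.startsWith("call ")
             || strNormalized.startsWith("select ");
   }
}
```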

The persistence tier is now completely implemented and we can focus on the business logic of our application.

Next: Developing the business logic
Previous: Designing the data model