Why do ETL jobs fail after a TeamForge upgrade?

ETL jobs can fail for reasons such as an incompatibility between the database and JDBC driver versions, or because the ETL jobs cannot connect to the Datamart. Try the following solutions.

Pentaho, which TeamForge uses for data integration and transformation jobs, recommends using the JDBC driver that is compatible with your specific database version. See Pentaho's JDBC Drivers Reference for more information.

If ETL jobs fail after a TeamForge upgrade due to an incompatibility between the database and JDBC driver versions:
  1. Refer to Pentaho's JDBC Drivers Reference page.
  2. Click the JDBC driver reference URL that corresponds to your database (Oracle or PostgreSQL).
  3. Identify and download the compatible JDBC driver for your database.
  4. Replace the JDBC driver in the following directories with the one you downloaded, as illustrated in the sketch after the note below. (The TeamForge ETL process uses the JDBC driver found in these directories.)
    • /opt/collabnet/teamforge/dist/tomcat/commonlib/
    • /opt/collabnet/teamforge/runtime/tomcat_etl/webapps/etl/WEB-INF/lib
Note: You can also refer to this page for more information about Pentaho-specific database issues and their resolutions.
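
The driver replacement in step 4 is just a matter of removing (or backing up) the old driver jar in each directory and copying in the one you downloaded. The following Python sketch illustrates this under stated assumptions: the downloaded driver path (/tmp/postgresql-42.2.5.jar) and the postgresql-*.jar filename pattern are placeholders for your actual driver, and the script backs up the old jar rather than deleting it.

    #!/usr/bin/env python3
    # Sketch: copy a newly downloaded JDBC driver into the directories
    # that the TeamForge ETL process reads its driver from.
    import glob
    import os
    import shutil

    # Compatible driver you downloaded (hypothetical filename).
    NEW_DRIVER = "/tmp/postgresql-42.2.5.jar"

    # Directories the TeamForge ETL process reads the JDBC driver from.
    TARGET_DIRS = [
        "/opt/collabnet/teamforge/dist/tomcat/commonlib/",
        "/opt/collabnet/teamforge/runtime/tomcat_etl/webapps/etl/WEB-INF/lib/",
    ]

    for target in TARGET_DIRS:
        # Back up any existing driver jar (the glob pattern is an assumption).
        for old_jar in glob.glob(os.path.join(target, "postgresql-*.jar")):
            shutil.move(old_jar, old_jar + ".bak")
            print("Backed up", old_jar)
        # Drop in the new, compatible driver.
        shutil.copy2(NEW_DRIVER, target)
        print("Copied", os.path.basename(NEW_DRIVER), "to", target)

Run the script as a user with write access to the TeamForge directories; you will typically need to restart the ETL service for the new driver to take effect.
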
If ETL jobs fail due to unavailable connections to the PostgreSQL Datamart:
  • Check whether the following error message appears in etl.log (a sketch for scanning the log follows the note below):
    Invalid JNDI connection java:comp/env/jdbc/ReportsDS : FATAL: remaining connection slots are reserved for non-replication superuser connections
If yes, restart the ETL service, then manually restart the failed ETL jobs using the ./etl-client.py script in the /opt/collabnet/teamforge/runtime/scripts/ directory. The ETL jobs should be able to connect to the PostgreSQL Datamart after the restart.
Note: If the problem persists even after restarting, contact CollabNet Support.
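
The Datamart connection check above amounts to searching etl.log for the connection-slot error. A minimal sketch, assuming the log lives at /opt/collabnet/teamforge/log/etl/etl.log (adjust the path to match your installation):

    #!/usr/bin/env python3
    # Sketch: scan etl.log for the "remaining connection slots" error
    # that indicates the ETL jobs could not connect to the Datamart.
    ETL_LOG = "/opt/collabnet/teamforge/log/etl/etl.log"  # assumed location
    ERROR_SNIPPET = (
        "FATAL: remaining connection slots are reserved for "
        "non-replication superuser connections"
    )

    with open(ETL_LOG, errors="replace") as log:
        matches = [line.rstrip() for line in log if ERROR_SNIPPET in line]

    if matches:
        print("Found %d occurrence(s) of the connection-slot error." % len(matches))
        print("Restart the ETL service, then rerun the failed jobs with the")
        print("etl-client.py script in /opt/collabnet/teamforge/runtime/scripts/.")
    else:
        print("Error not found; the failure likely has a different cause.")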