Sabrix

Development of new modules and maintenance of batch components.


Execution

The Sales Systems had to communicate with the MTS through online and batch processes. I was responsible for developing the online communication and for maintaining the batch process. For the online part I designed a component to be implemented in C++ on the mainframe platform. This component would be linked to PL/1 code that accesses the Sales Systems. I implemented methods to receive the invoice data and transform it into an XML format. The code then opens a low-level socket connection to a specific MTS servlet hosted on a Linux on System z platform, compresses the content using gzip libraries, and sends the HTTP request. I created the code to parse the multiple HTTP response packets and merge them so that a gzip library could expand the payload. The final content was then parsed by an XML library, and methods were made available to the PL/1 code to retrieve the taxes. In order to develop this component I had to study the C++ APIs available on the mainframe platform and research how I could reproduce this environment on a Windows platform to improve productivity. I found that I could reproduce the environment with the gcc compiler available through Cygwin. I also had to research gzip and XML libraries in C++ that could be reused.

The batch communication was already implemented as a Java multi-threaded stand-alone component using MQ Series. It receives invoices from the Sales Systems, applies transformation rules, and then forwards them to the MTS. I had to apply changes to the transformation rules of this component.
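
The delivered component was written in C++ against the mainframe socket and gzip APIs; the Java sketch below only illustrates the same compress-send-expand flow, with the servlet URL as a hypothetical placeholder.

```java
import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class MtsHttpClient {

    // Hypothetical MTS servlet address; the real endpoint is not part of this sketch.
    private static final String MTS_SERVLET_URL = "http://mts.example.com/mts/TaxServlet";

    /** Sends gzip-compressed invoice XML to the MTS servlet and returns the expanded XML response. */
    public static String sendInvoice(String invoiceXml) throws IOException {
        // Compress the request body with gzip before transmission.
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(compressed)) {
            gzip.write(invoiceXml.getBytes(StandardCharsets.UTF_8));
        }

        HttpURLConnection conn = (HttpURLConnection) new URL(MTS_SERVLET_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml");
        conn.setRequestProperty("Content-Encoding", "gzip");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(compressed.toByteArray());
        }

        // Read the full response and expand it if the servlet answered with a gzip body.
        try (InputStream raw = conn.getInputStream()) {
            InputStream body = "gzip".equalsIgnoreCase(conn.getHeaderField("Content-Encoding"))
                    ? new GZIPInputStream(raw)
                    : raw;
            ByteArrayOutputStream response = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int read;
            while ((read = body.read(buffer)) != -1) {
                response.write(buffer, 0, read);
            }
            return response.toString(StandardCharsets.UTF_8.name());
        }
    }
}
```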

The MTS is installed in two countries: Denmark and the USA. The MTS installation in Denmark receives automatic tax updates from Thomson Reuters. After the database has been updated with the new taxes, an operator accesses the MTS console and dispatches a memory refresh to update the servers' cache. This process is called cache synchronization. Although the MTS is a J2EE application that can run in a cluster, the internal cache system can only be refreshed between servers on the same subnet. Therefore, the refresh could not be propagated from the Denmark environment to the USA environment. The DB2 instances were configured to propagate any changes from the Denmark database to the USA database, but the need for a local operator to dispatch a cache synchronization in the USA had to be eliminated. To solve this problem, I had to implement a new application called Cache Synchronization (CS). It identifies the tax updates on the master MTS installation in Denmark and dispatches a cache synchronization command on the slave MTS console in the USA only after the database has been synchronized. The synchronization command had to be dispatched by simulating a user accessing the MTS console through the web, so the component had to automatically navigate through the menus and issue a cache synchronization command.

I proved that the solution proposed by the architects would not work. The specified product, Rational Functional Tester (RFT), could not simulate a user navigating the MTS console: a graphical user interface like KDE or GNOME would have to be available on the Linux on System z environment, and the session would have to remain unlocked during the entire process; otherwise RFT could not lock onto the buttons on the MTS console page. I negotiated a time extension to analyze possible replacements for RFT, investigated many products and libraries, and finally implemented a proof of concept showing that these steps could be performed with the HTTPUnit library (a sketch follows the list of work products below). I tailored the company methodology and defined the sequence of work products that I was going to deliver. I created the:

  • Implementation Schedule,

  • Business Rules Catalog,

  • Macro Design,

  • Business Rules for Micro Design,

  • Component Model,

  • Physical Components Diagrams,

  • Class Diagram,

  • State Chart for the Master CS,

  • State Chart for the Slave CS,

  • Logical Data Model,

  • Physical Database Design,

  • User Support Materials, and

  • System Test Results.

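The proof of concept drove the MTS console with HTTPUnit's programmatic browser instead of a desktop GUI. The sketch below illustrates the approach; the console URL, form, link, and button names are hypothetical stand-ins for the real MTS console pages.

```java
import com.meterware.httpunit.SubmitButton;
import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebForm;
import com.meterware.httpunit.WebResponse;

public class ConsoleSynchronizer {

    /** Logs in to the MTS console and presses the cache synchronization button without a real browser. */
    public static void dispatchCacheSynchronization(String consoleUrl, String user, String password)
            throws Exception {
        WebConversation browser = new WebConversation();

        // Open the console login page and submit the credentials.
        WebResponse page = browser.getResponse(consoleUrl);          // hypothetical console URL
        WebForm loginForm = page.getFormWithName("loginForm");       // hypothetical form name
        loginForm.setParameter("username", user);
        loginForm.setParameter("password", password);
        page = loginForm.submit();

        // Navigate to the cache administration page through its menu link.
        page = page.getLinkWith("Cache Administration").click();     // hypothetical menu label

        // Locate the synchronization form and press its submit button.
        WebForm cacheForm = page.getFormWithName("cacheForm");       // hypothetical form name
        SubmitButton refresh = cacheForm.getSubmitButton("refresh"); // hypothetical button name
        page = cacheForm.submit(refresh);

        if (page.getResponseCode() != 200) {
            throw new IllegalStateException("Cache synchronization failed: " + page.getResponseCode());
        }
    }
}
```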

I designed and implemented four components with minimal interfaces to reduce dependencies. I named them Event Engine, Persistence Control, Communication Manager, and Notification Component; a sketch of their interfaces follows the configuration list below. The Event Engine implements a state machine to control the events' life cycle. The events controlled by this component have a well-defined sequence of possible statuses depending on the type of the installation. One installation is selected to be the Master and there can be many Slave installations, although only one Slave was required at the time; this makes the solution easily scalable. The Persistence Control hides from the Event Engine how a status is persisted. The medium used to store the information can be changed from a database to a flat file, an XML file, or any other format without having to change the Event Engine. The Communication Manager handles all communication between the Master CS and the Slave CS instances. The Event Engine is unaware of how the messages are transmitted from the Master CS to the Slave CS instances and vice versa, because the technology or product used to perform the message transmission is hidden inside the Communication Manager and can be changed without affecting the other components of CS. I implemented it with the MQ Series product. The Notification Component handles how the support team gets notified when an exception event is generated by the CS. Currently it implements notification by email, but this could be changed to pager, mobile messages, or any other technology, affecting only the Notification Component.

The configuration file for CS had to be a single XML file. Because of its size and complexity, I created a Swing application to handle it. This configuration file had to store many different kinds of information, such as:

  • Contact information for the support team, including a test feature to validate it during configuration,

  • General configuration, like the frequency of event polling and the number of days before the log files are automatically cleaned,

  • A configuration for the Master installation, like the DB2 information for the CS instance and the MTS database to be monitored,

  • A configuration for the Master states, with the state name, the timeout, the next state on success, the next state on failure, and the class that implements the state, with a button to test the class instantiation,

  • A configuration for the Slave states, in the same way as for the Master instance,

  • The addition of Slave instances, with the database used by the CS component, the MTS database being monitored, and the user and password for the MTS console (the passwords are encrypted in the XML file using 128-bit AES),

  • The list of monitored MTS tables, with the table name and the name of the column that stores the last-updated timestamp, and finally

  • A feature to read the latest event logs directly in the configuration tool.
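
The following Java sketch shows how the four CS components could be wired together; the interface and method names are illustrative assumptions rather than the actual contracts defined in the design work products. The point is that the Event Engine depends only on the three interfaces, so the persistence, transport, and notification technologies can each be replaced independently.

```java
/** Hides how event statuses are persisted (database, flat file, XML, ...). */
interface PersistenceControl {
    void saveStatus(String eventId, String status);
    String loadStatus(String eventId);
}

/** Hides the transport (MQ Series in the delivered solution) used between Master and Slave CS. */
interface CommunicationManager {
    void send(String targetInstance, String message);
    String receive();
}

/** Hides how the support team is notified (e-mail today, pager or mobile messages tomorrow). */
interface NotificationComponent {
    void notifyException(String eventId, String description);
}

/** Drives the event life cycle; it only knows the three interfaces above. */
class EventEngine {
    private final PersistenceControl persistence;
    private final CommunicationManager communication;
    private final NotificationComponent notification;

    EventEngine(PersistenceControl persistence,
                CommunicationManager communication,
                NotificationComponent notification) {
        this.persistence = persistence;
        this.communication = communication;
        this.notification = notification;
    }

    /** Moves an event to its next status and propagates it to the Slave installations. */
    void advance(String eventId, String nextStatus) {
        try {
            persistence.saveStatus(eventId, nextStatus);
            communication.send("slave-usa", eventId + ":" + nextStatus); // illustrative instance name
        } catch (RuntimeException e) {
            notification.notifyException(eventId, e.getMessage());
        }
    }
}
```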

Access to the MTS console is granted only if the user is cataloged in the MTS internal database, so custom code was developed to first authenticate the user against the company authentication site. I was responsible for maintaining this component by applying fixes to the code, such as correcting user identifications that were being rejected because they did not match the internal regular expression for valid IDs. I fixed the regular expressions.
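
The defect and its fix were of the kind illustrated below; the patterns themselves are hypothetical, since the company's real ID rules are not reproduced here.

```java
import java.util.regex.Pattern;

public class UserIdValidator {

    // Hypothetical original pattern: it only accepted letters, so valid IDs containing
    // digits or dots were rejected by the authentication component.
    private static final Pattern OLD_VALID_ID = Pattern.compile("^[a-z]+@[a-z]+\\.com$");

    // Hypothetical corrected pattern: letters, digits, and dots are also allowed.
    private static final Pattern NEW_VALID_ID = Pattern.compile("^[a-z0-9.]+@[a-z0-9.]+\\.com$");

    public static boolean isValid(String userId) {
        return NEW_VALID_ID.matcher(userId).matches();
    }

    public static void main(String[] args) {
        String userId = "john.doe2@example.com";
        System.out.println("old pattern: " + OLD_VALID_ID.matcher(userId).matches()); // false (rejected)
        System.out.println("new pattern: " + isValid(userId));                        // true (accepted)
    }
}
```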

An initial application called Tax Inquire was created to make the MTS accessible through the Intranet. The Tax Inquire application consists of two components. The first receives the quote information from the user through an Intranet page complying with company standards and generates a Web Service request. The second component is a Web Service that receives the quote request and transforms it into a plain HTTP request to an MTS servlet that calculates the taxes. I was responsible for many fixes and enhancements on both components. I had to make the code comply with the Intranet Design and Accessibility Standards. I used Web King to validate the HTML generated by Tax Inquire and fixed many instances of non-compliant code. I used the JAWS screen reader to simulate how people with disabilities use the application; I had to guarantee accessibility for blind users, and I fixed many errors that prevented JAWS from reading the pages correctly. I also had to add new business logic to the code and improve maintainability by removing many unnecessary property files.

For the Web Service component I had to execute many batch tests to validate the Web Service responses. I installed Rational Performance Tester (RPT) and its Web Services feature pack. I configured RPT to issue secure requests using SSL and a digital certificate. I transformed a spreadsheet with test data into a datapool and created the mapping between the datapool and the fields specified in the WSDL file. I executed the tests with a full report to capture the SOAP requests and responses, and I forwarded the report to a finance contact who would analyze the correctness of the data and validations.
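
A hedged JAX-WS sketch of the second component's role follows: it exposes a Web Service operation and forwards the quote as a plain HTTP request to the MTS servlet. The class, operation, parameter names, and servlet URL are illustrative assumptions; the real WSDL defines the actual contract.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

import javax.jws.WebMethod;
import javax.jws.WebService;

/** Receives quote requests as a Web Service and forwards them to the MTS servlet as plain HTTP. */
@WebService
public class TaxQuoteService {

    // Hypothetical servlet address; the real endpoint comes from the deployment configuration.
    private static final String MTS_SERVLET_URL = "http://mts.example.com/mts/TaxServlet";

    @WebMethod
    public String calculateTaxes(String country, String productCode, String amount) throws Exception {
        // Translate the Web Service parameters into a plain form-encoded HTTP request.
        String form = "country=" + URLEncoder.encode(country, "UTF-8")
                + "&product=" + URLEncoder.encode(productCode, "UTF-8")
                + "&amount=" + URLEncoder.encode(amount, "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) new URL(MTS_SERVLET_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(form.getBytes(StandardCharsets.UTF_8));
        }

        // Return the servlet's tax calculation response to the Web Service caller.
        StringBuilder response = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                response.append(line).append('\n');
            }
        }
        return response.toString();
    }
}
```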

To guarantee the performance of the Sabrix platform being delivered, a group called HIPODS was engaged to execute a set of stress tests. They reproduced the MTS environment on the AIX platform, executed the tests, and tuned the WebSphere Application Servers. I assisted the HIPODS group in replicating the entire environment with all the solution components, and I was responsible for implementing feeders for this platform. These feeders simulate the periods of the day when the workload is mainly online, such as users accessing Tax Inquire. The feeders worked well. We reviewed the HIPODS recommendations for optimal product configuration and accepted their final report.
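
A minimal sketch of what such a feeder could look like, assuming a hypothetical Tax Inquire URL and request rate: a pool of scheduled threads issues online-style requests at a fixed pace so the stress-test environment sees a realistic daytime profile.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Simulates the online workload of users accessing Tax Inquire during business hours. */
public class OnlineWorkloadFeeder {

    // Hypothetical values; the real feeders would read them from the test configuration.
    private static final String TAX_INQUIRE_URL = "http://hipods-test.example.com/taxinquire/quote";
    private static final int THREADS = 10;
    private static final long PERIOD_MILLIS = 500; // one request per thread every 500 ms

    public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(THREADS);
        for (int i = 0; i < THREADS; i++) {
            pool.scheduleAtFixedRate(OnlineWorkloadFeeder::sendRequest, 0, PERIOD_MILLIS, TimeUnit.MILLISECONDS);
        }
    }

    /** Issues a single online-style request, drains the response, and logs only failures. */
    private static void sendRequest() {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(TAX_INQUIRE_URL).openConnection();
            conn.setRequestMethod("GET");
            try (InputStream in = conn.getInputStream()) {
                while (in.read() != -1) {
                    // drain the response so the connection can be reused
                }
            }
        } catch (Exception e) {
            System.err.println("request failed: " + e.getMessage());
        }
    }
}
```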

I had to interact with the Denmark hosting team. They were responsible for maintaining the development environment, with only one server, the test environment, with eight servers, and the production environment, also with eight servers. I had to open tickets in their version of the ManageNow application any time code needed to be moved from the development to the test environment.

I had to troubleshoot a problem that appeared when applying WebSphere Application Server Fix Pack 6.1.0.25: the Sabrix tax interface completely stopped working. I contacted WebSphere Subject Matter Experts through the Practitioner Support Network. With their help I was able to narrow the search for the problem down to the gzip compression streams. After some tests using HTTP sniffer tools, I discovered that the request was being compressed but the response was not. I found that WebSphere was automatically decompressing the request, so Sabrix was receiving a decompressed request and therefore also sending a decompressed response. The problem was solved by setting the flag AutoDecompressed to false on the HTTP inbound channel.

The code for all components was stored in a CVS repository. I had to keep the code in CVS up to date and create releases every time a successful delivery was promoted to the test environment.

I had full access to the WebSphere Application Server console in development and, exceptionally, read access to the console of the test environment. I was responsible for deploying the applications and maintaining their configurations in the development environment. I had to troubleshoot problems during promotions to the test environment, such as reproducing the bluepages LDAP configuration and updating JVM parameters and paths to make the CA component available to the MTS application.

I kept the project manager updated by attending weekly meetings to report status, answer questions, and provide plan updates when necessary, using concise and effective verbal communication. I also sent weekly reports via Notes and kept the defect control tool updated with the most recent information.