GIS to MAXIMO Enhancements – Increasing Performance by 70,000% the SSP Way

September 30, 2015 — Colton Frazier

Over the years, SSP has been tasked with countless projects, whether upgrades, integrations, designs, or much more. We are happy to tell you about one of our latest projects, one that has us quite excited.

How It All Started.

A client wanted us to examine their current GIS to MAXIMO integration. There were questions about how some of the components worked, how they should work, and how the implementation was performing. Not long after being contacted with these questions and concerns, SSP was on-site and ready to start digging into the client's GIS to MAXIMO integration.

 

What Did We Find?

After digging into the code of this GIS to MAXIMO integration, SSP found some big issues. Here is what we discovered while running through the implementation.

  • Asset Tracking: An asset tracking table is versioned and responsible for holding the asset tracking records for a session. These asset tracking records are sent to MAXIMO as a session is posted. As each message is sent to MAXIMO, the corresponding record should also be removed from the asset tracking table, indicating that the message reached the MAXIMO SOA table successfully. One problem we found is that at most one record would be removed from the asset tracking table for each successfully posted session. Over time this led to the SDE.Default asset tracking table containing over 300,000 asset tracking records! These records would then be transferred to any child version created from SDE.Default, leading to another problem we identified. Before a session starts sending messages to MAXIMO, the asset tracking table needs to be “consolidated”. The purpose of the consolidation process is to go through the asset tracking table and make sure that only the final record for each edited feature remains. While digging into this process, SSP identified major inefficiencies in the consolidation algorithm, and saw 300,000+ asset records being run through the consolidation process on every post. (A sketch of the overall posting flow follows this list.)
  • Sending Asset Tracking Messages: Asset tracking messages are sent to MAXIMO when a session is successfully posted. When running through this code we noticed a few problems. The first, as stated above, was that the asset tracking table wasn't being updated properly after a session was posted. The second was inefficiency in both generating and sending asset tracking messages to MAXIMO. Finally, we saw inefficiencies in how the asset tracking table was updated once an asset tracking message was successfully sent to the MAXIMO SOA table.
  • MAXIMO Queue Control: The purpose of the MAXIMO queue control in this GIS to MAXIMO implementation is to connect to a SOA table and send the asset tracking messages to it. MAXIMO then grabs and processes the messages from that table. While running through the code for the MAXIMO queue control, we identified some inefficiencies here as well. When a session was posted and the messages were sent, the MAXIMO queue control would be re-instantiated for every message, and each instantiation opened a new connection to the MAXIMO SOA table.
  • GIS Web Services Interface: The purpose of the GIS Web Services is to pick up messages from the GIS SOA table and process them accordingly. One of the tasks the Web Services performs is processing “bounce-back” messages, which are MAXIMO's replies to the messages GIS originally sent. Here we noticed that a new session would be posted to GDBM for every bounce-back message received from MAXIMO. Our concern was that a large volume of bounce-back messages would result in an equal volume of sessions posted to GDBM. This led to a growing state lineage tree, which resulted in slower overall system performance.
  • Logging: Logging is essential for identifying whether a component is working correctly. The way logging is implemented has a huge effect on how long it takes both to identify a problem and to locate where it is occurring. Throughout the process of debugging the GIS to MAXIMO interface, we came across multiple instances where very little to nothing was being logged, whether the component was working or not. Logged items were also being sent to multiple different locations in the file system.
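To make the moving pieces above easier to follow, here is a minimal sketch of the posting flow as we understood it. Every class, method, and table name in this sketch (AssetTrackingRecord, MaximoQueueControl, and so on) is a placeholder of ours, not the client's actual code; it simply illustrates the intended behavior that the record for each successfully sent message is removed from the asset tracking table.

```csharp
using System.Collections.Generic;

// Illustrative sketch only -- hypothetical names, not the client's code.
public static class GisToMaximoPostingFlow
{
    public static void OnSessionPosted(IList<AssetTrackingRecord> sessionRecords)
    {
        // 1. Consolidate: keep only the final asset tracking record for each
        //    edited feature before anything is sent to MAXIMO.
        IList<AssetTrackingRecord> finalRecords = Consolidate(sessionRecords);

        // 2. Send each XML message to the MAXIMO SOA table via the queue
        //    control; MAXIMO later picks the messages up from that table.
        foreach (AssetTrackingRecord record in finalRecords)
        {
            bool sent = MaximoQueueControl.Send(record.MessageXml);

            // 3. On success the record should be removed from the asset
            //    tracking table. The original code removed at most one record
            //    per posted session, which is how SDE.Default accumulated
            //    300,000+ stale rows.
            if (sent)
                DeleteFromAssetTrackingTable(record);
        }
    }

    // Placeholder stubs so the sketch compiles; the real work lives in the
    // integration's own components.
    private static IList<AssetTrackingRecord> Consolidate(IList<AssetTrackingRecord> records) => records;
    private static void DeleteFromAssetTrackingTable(AssetTrackingRecord record) { }
}

public class AssetTrackingRecord
{
    public string FeatureId;
    public string MessageXml;
}

public static class MaximoQueueControl
{
    public static bool Send(string messageXml) => true;   // stand-in for the SOA table insert
}
```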

 

Our Solutions.

After locating the areas where we saw problems and inefficiencies, SSP was ready to implement solutions. Here are the changes SSP made, along with their results.

  • Asset Tracking: The original approach for consolidating the asset tracking table was executing unnecessary tasks, such as deleting/adding a record in the asset tracking table before checking whether the record needed to be consolidated, loading and processing irrelevant asset tracking information, and using unneeded custom objects. Our approach was to grab the asset tracking records from the current session and consolidate on an as-needed basis (a minimal sketch of this per-session consolidation follows the list). After adding logging to this piece of code, we had both a responsive and an informative approach for consolidating the asset tracking table. Our new approach now takes roughly 0.135% of the original time (yes, that is roughly 740 times, or 73,900%, faster than the original).
  • Sending Asset Tracking Messages: The original code for sending asset tracking messages used custom objects instead of .NET tables. It also used a data reader for querying tables and custom objects for creating the XML-formatted asset tracking messages. The number of custom objects in use made the implementation complex and inefficient. Another area that needed improvement was updating the asset tracking table after each message had been sent to the MAXIMO SOA table. Our solution was to use .NET objects and ArcObjects in place of many of the custom objects. We used the more efficient data adapter to query tables and cached the results for subsequent uses (a sketch of this pattern follows the list). In addition to these changes, we added detailed logging around the workflow of this task. The results: querying the tables now takes 33% of the original time (about a 3x speedup), generating asset messages takes 4.16% of the original time (about a 24x speedup), and updating the asset tracking, add, and delete tables now takes 1.16% of the original time (about an 86x speedup).
  • MAXIMO Queue Control: As noted earlier, the MAXIMO queue control is responsible for sending the final XML-formatted messages to the MAXIMO SOA table. We noticed this piece of code was being re-instantiated for every message, and a new connection to the MAXIMO SOA table was being opened for every message sent as well. Our approach was to instantiate the MAXIMO queue control once per session and keep the connection open until all messages were done sending (a sketch follows the list). These were some of our simpler changes, yet they brought this step down to 2.04% of the original time (about a 49x speedup).
  • GIS Web Services Interface: The original GIS Web Services would post a new session to GDBM for every bounce-back message received from MAXIMO. This worried us because a large number of messages would cause the state lineage tree to grow, and the bigger the state lineage tree gets, the more it affects overall system performance. Our approach was to create a child version of SDE.Default that is continually updated as messages are received by the Web Services, while a separate thread sleeps in the background. After a configurable amount of time has passed, the thread posts the current edits in the child version to GDBM (a sketch of this deferred-post pattern follows the list). This resulted in far fewer posts to GDBM from the Web Services.
  • Logging: As noted before, the original logging was set up so that subcomponents would log to different locations on the file system. If a change was made to logging, it would have to be made across each of the 19 logging configurations. We have now made it so that there is a central logging location where detailed information is logged for the processes we refactored, and we separated debug and error logging within the specified file location (a sketch of this layout follows the list). Any changes to logging are now made in just a few simple configurations.
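As referenced in the Asset Tracking item above, here is a minimal sketch of per-session consolidation: only the records belonging to the session being posted are examined, and only the final record per edited feature survives. The record shape, field names, and LINQ approach are our illustration, not the code we actually delivered.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical record shape; the real asset tracking table carries more columns.
public class AssetTrackingRecord
{
    public string FeatureId;
    public DateTime EditTime;
    public string MessageXml;
}

public static class AssetTrackingConsolidator
{
    // Consolidate only the current session's records, keeping the final
    // (most recent) record for each edited feature. The full 300,000-row
    // table never enters the loop.
    public static IList<AssetTrackingRecord> Consolidate(
        IEnumerable<AssetTrackingRecord> currentSessionRecords)
    {
        return currentSessionRecords
            .GroupBy(record => record.FeatureId)
            .Select(group => group.OrderBy(record => record.EditTime).Last())
            .ToList();
    }
}
```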
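For the message-sending rework, this is a sketch of the data adapter plus cache idea: fill a DataTable once per table with a data adapter, then reuse it for every later lookup during the same post. The SqlClient provider, connection string, and table names are placeholders; the client's environment may use a different ADO.NET provider.

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class CachedTableReader
{
    private readonly string _connectionString;
    private readonly Dictionary<string, DataTable> _cache =
        new Dictionary<string, DataTable>();

    public CachedTableReader(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Fill the table once with a data adapter, then serve it from the cache
    // on every subsequent call during the same posting run. Table names come
    // from a fixed internal list, not user input.
    public DataTable GetTable(string tableName)
    {
        DataTable table;
        if (_cache.TryGetValue(tableName, out table))
            return table;

        table = new DataTable(tableName);
        using (var connection = new SqlConnection(_connectionString))
        using (var adapter = new SqlDataAdapter("SELECT * FROM " + tableName, connection))
        {
            adapter.Fill(table);
        }
        _cache[tableName] = table;
        return table;
    }
}
```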
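For the MAXIMO queue control, here is a sketch of the "instantiate once per session, keep the connection open" change. The SOA table name, columns, and connection handling are stand-ins for illustration.

```csharp
using System;
using System.Data.SqlClient;

// Stand-in for the MAXIMO queue control: one instance per posted session,
// one open connection reused for every message written to the SOA table.
public class SessionMaximoQueueControl : IDisposable
{
    private readonly SqlConnection _connection;

    public SessionMaximoQueueControl(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();   // opened once per session, not once per message
    }

    public void Send(string messageXml)
    {
        using (var command = _connection.CreateCommand())
        {
            // Placeholder table and column names for the MAXIMO SOA table.
            command.CommandText = "INSERT INTO MAXIMO_SOA_QUEUE (MESSAGE_XML) VALUES (@xml)";
            command.Parameters.AddWithValue("@xml", messageXml);
            command.ExecuteNonQuery();
        }
    }

    public void Dispose()
    {
        _connection.Dispose();
    }
}

// Usage: one queue control for the whole session's worth of messages.
// using (var queue = new SessionMaximoQueueControl(connectionString))
// {
//     foreach (string xml in sessionMessages)
//         queue.Send(xml);
// }
```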
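For the GIS Web Services change, this sketch shows the deferred-post idea: bounce-back edits accumulate in the child version as they arrive, and a background timer posts the accumulated edits to GDBM once a configurable interval has passed. The actual reconcile/post against the child version is left as a placeholder.

```csharp
using System;
using System.Threading;

// Sketch of the deferred-post pattern: bounce-back edits are applied to a
// child version as they arrive, and a background timer posts the accumulated
// edits to GDBM on a configurable interval instead of once per message.
public class DeferredGdbmPoster : IDisposable
{
    private readonly Timer _timer;
    private readonly object _lock = new object();
    private bool _hasPendingEdits;

    public DeferredGdbmPoster(TimeSpan postInterval)
    {
        _timer = new Timer(_ => PostIfNeeded(), null, postInterval, postInterval);
    }

    // Called by the Web Services for each bounce-back message; the edit goes
    // into the child version, but no post to GDBM happens here.
    public void RecordEditApplied()
    {
        lock (_lock) { _hasPendingEdits = true; }
    }

    private void PostIfNeeded()
    {
        lock (_lock)
        {
            if (!_hasPendingEdits) return;
            _hasPendingEdits = false;
        }
        PostChildVersionToGdbm();
    }

    // Placeholder for posting the child version's current edits to GDBM.
    private void PostChildVersionToGdbm() { }

    public void Dispose() => _timer.Dispose();
}
```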
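Finally, for logging, here is a sketch of a single configured log location with debug and error output split into separate files. The directory, environment variable, file names, and line format are our illustration only.

```csharp
using System;
using System.IO;

// Sketch of a central logger: one configured directory, with debug and error
// entries split into separate files. Paths and format are illustrative.
public static class CentralLog
{
    private static readonly string LogDirectory =
        Environment.GetEnvironmentVariable("GIS_MAXIMO_LOG_DIR") ?? @"C:\Logs\GisMaximo";

    public static void Debug(string component, string message) =>
        Write("debug.log", component, message);

    public static void Error(string component, Exception ex) =>
        Write("error.log", component, ex.ToString());

    private static void Write(string fileName, string component, string message)
    {
        Directory.CreateDirectory(LogDirectory);
        string line = string.Format("{0:u} [{1}] {2}", DateTime.Now, component, message);
        File.AppendAllText(Path.Combine(LogDirectory, fileName), line + Environment.NewLine);
    }
}
```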

 

Have questions on any of our solutions mentioned in this article? Feel free to give us a call!


Colton Frazier

Senior Software Engineer
