SSP Nightly Batch Suite (NBS): A Complete Video Overview

May 10, 2016 — Skye Perry  [17:31]

The SSP Nightly Batch Suite (NBS) is a configurable set of applications intended to keep a geodatabase performing optimally while giving management the information they need to make decisions. The product includes a batch framework which allows for batch applications to be easily configured and scheduled for nightly, weekly, or monthly execution.

There are a number of product applications included with the SSP NBS, and the framework also provides customers with a reusable implementation pattern for adding custom batch applications. These often include custom GIS update applications or systems integration points for batch updates between the GIS and other common utility systems. Watch to learn more!

Transcript

Hi, my name is Skye Perry with SSP Innovations. I'm here to talk about one of our most popular products with our customers; this is probably our most installed product across all of our customers. While it does not have a big user interface, it has some critical and important aspects to it, so I wanted to give a bit of an overview of what the SSP Nightly Batch Suite (NBS) does for your utility. I want to start off by going all the way to the left here, to the framework. We call the NBS a framework because it provides several things out of the box for all of the applications I'm going to be talking about today.

First is scheduling, which can be nightly, weekly, monthly, or annual; the framework can easily handle scheduling on any of those bases. Next is logging. We do extensive logging to text files, and the framework handles that in a nicely formatted way for all of your applications. It also handles all of your licensing, be it Esri or Schneider Electric, and geodatabase access: you don't have to worry about it, you just ask for a connection and get one. Process framework access gets you to your sessions and designs, and SSP WFM, from our work management side, has full access there as well. It also handles all types of notifications, which are typically email notifications.

Finally, framework code. What we mean there is that the framework allows us to plug in custom applications that may not fit within the product. A lot of our customers own only the framework without any of the product applications. The framework allows us to quickly develop and deploy batch applications that need all of these services without having to write them custom every time, so we can focus on the actual value to the customer instead of everything we would otherwise have to set up around it. Some folks own just the framework along with some custom applications; many others own the full suite of product applications. That's what I'll run you through next.
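To make the framework idea concrete, here is a minimal sketch in Python of the kind of plug-in pattern being described. The class and method names are hypothetical; the actual NBS framework API is not shown in the video. The point is simply that the framework supplies logging, connections, and notifications, and a custom application only supplies its own run logic.

```python
import logging
from abc import ABC, abstractmethod


class BatchApplication(ABC):
    """Hypothetical plug-in contract: the framework supplies logging, a
    geodatabase connection, and notifications; the application supplies run()."""

    def __init__(self, name, framework):
        self.name = name
        self.framework = framework            # shared services provided by the suite
        self.log = logging.getLogger(name)    # nicely formatted, per-application log

    @abstractmethod
    def run(self, workspace):
        """Do the nightly work against the supplied geodatabase connection."""

    def execute(self):
        self.log.info("Starting %s", self.name)
        try:
            workspace = self.framework.get_connection()   # licensing handled for you
            self.run(workspace)
            self.log.info("Finished %s", self.name)
        except Exception:
            self.log.exception("%s failed", self.name)
            self.framework.notify(f"{self.name} failed; see log")  # e.g. email


class OrphanVersionCleanup(BatchApplication):
    """Example custom application; the real work would go in run()."""

    def run(self, workspace):
        self.log.info("Cleaning orphaned versions in %s", workspace)
```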

It's all about geodatabase health: the performance, and giving the geodatabase owner or manager (whoever that is in the organization) all the tools and information needed on a daily basis to make educated decisions about the use of the geodatabase. We've got our geodatabase drawn out here, and I've pre-drawn a couple of items. The first is the versioning tree, showing SDE.Default. The second is batch edit versions 1, 2, and 3. The third is our process framework. This is a pretty standard setup for the utilities we've worked with, and it's where we often start. We are running through a nightly type of routine, and we start off with several product applications we usually implement. The first one comes in and does a cleanup on your process framework. So why is this important?

This is the front half of what is essentially an orphaned version cleanup program, but it automates the cleanup for you and does all the logging. We are going to go up and find any sessions and any designs that need to be cleaned out of the system to allow it to operate more efficiently. That could be a deleted record, or one that has already been posted but not cleaned up because of locks. Whatever it may be, we'll get those cleaned out first.

Once we have those out of the system, we can move down to the versions (this is sort of the second half of the orphaned version cleanup). We do a version cleanup here; that is number 2. Now we are getting in and deleting the orphaned versions, meaning the actual SDE versions on the Esri side, out of the geodatabase. That's important because it invalidates some state IDs, which allows our compress to be effective. We've got the process framework cleaned up, and we've got the versions cleaned up. Now we are ready to move forward.
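As a rough illustration of that version-cleanup half, here is a minimal arcpy sketch. This is not the NBS code: the connection file path is made up, and the rule for deciding that a version is orphaned (here, a placeholder set of names still referenced by sessions or designs) is an assumption.

```python
import arcpy

SDE_ADMIN = r"C:\connections\gis_admin.sde"   # hypothetical admin connection file

# Version names still backed by an active session or design; in practice this
# list would be built from the process framework tables.
active_versions = {"SDE.DEFAULT", "GIS.BATCH_EDIT_1"}

for version in arcpy.da.ListVersions(SDE_ADMIN):
    if version.name.upper() in active_versions:
        continue
    # Orphaned: nothing in the process framework refers to it any longer.
    print(f"Deleting orphaned version {version.name}")
    arcpy.DeleteVersion_management(SDE_ADMIN, version.name)
```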

Number 3 of the things we typically do is an automated posting engine. This isn't meant to replace Geodatabase Manager or any other batch reconcile and post functionality; it's for versions such as a batch edit version. That is a static version that stays in our system from day to day. Maybe it's being edited by a web application or an external interface like Maximo. This application lets those edits be posted up on a nightly basis, pushing anything edited that day up into SDE.Default. That is our third application: automated posting. Let's switch colors since it's getting a little busy here, and move on to our fourth. This is one of the most important things; you probably do some form of it today, but again we are automating it here in a way that parses the version tree and does it very effectively.
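A minimal sketch of what that posting step can look like with the standard arcpy Reconcile Versions tool (this is an illustration, not the NBS implementation; the connection file and version names are made up):

```python
import arcpy

SDE_ADMIN = r"C:\connections\gis_admin.sde"                   # hypothetical admin connection
batch_versions = ["WEB.BATCH_EDIT_1", "MAXIMO.BATCH_EDIT_2"]  # hypothetical static versions

# Reconcile each batch edit version against DEFAULT and post its edits up,
# keeping the static version in place for the next day's edits.
arcpy.ReconcileVersions_management(
    SDE_ADMIN,                # geodatabase to operate on
    "ALL_VERSIONS",           # reconcile mode
    "SDE.DEFAULT",            # target version
    batch_versions,           # edit versions to reconcile
    "LOCK_ACQUIRED",
    "NO_ABORT",
    "BY_OBJECT",
    "FAVOR_TARGET_VERSION",
    "POST",                   # push reconciled edits into DEFAULT
    "KEEP_VERSION",           # do not delete the batch edit version
    "posting_log.txt",
)
```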

That is a batch reconcile. We start with SDE.Default, and if you have any intermediate versions, we first determine the optimal reconcile order from the top to the bottom of the tree. Then we do our reconciles on all of these versions within the state tree; that reconcile is number 4. Reconciles, as we've talked about in our other videos, are extremely important because they identify any conflicts and synchronize your versions with SDE.Default, which in turn allows your state IDs to be cleaned out of the database. As we go through that, we aren't sending you emails on every single conflict we find.
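Before turning to the conflict notification, here is a comparable sketch for the tree-wide reconcile itself, again an illustration with arcpy and a made-up connection file rather than the NBS code. Every child version is reconciled against SDE.Default, conflicts are resolved in favor of DEFAULT instead of aborting the run, nothing is posted, and the results go to a log file that the notification step can read.

```python
import arcpy

SDE_ADMIN = r"C:\connections\gis_admin.sde"     # hypothetical admin connection file
LOG_FILE = r"C:\logs\nightly_reconcile.txt"     # hypothetical log location

# Every version except DEFAULT gets reconciled; the tool works out the order.
edit_versions = [v for v in arcpy.ListVersions(SDE_ADMIN) if v.upper() != "SDE.DEFAULT"]

arcpy.ReconcileVersions_management(
    SDE_ADMIN, "ALL_VERSIONS", "SDE.DEFAULT", edit_versions,
    "LOCK_ACQUIRED", "NO_ABORT", "BY_OBJECT", "FAVOR_TARGET_VERSION",
    "NO_POST", "KEEP_VERSION", LOG_FILE,
)
```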

Again, if you have hundreds or thousands of versions, you might have several conflicts every night, and you still want to know about them. At the very end of that reconcile, we send a single conflict notification email; that notification is number 5. We've talked and joked about this email with a lot of customers: it's the one waiting for you when you arrive the next morning, and there's a little bit of apprehension as you open it to find out how many conflicts you have to resolve today. The key point is that because we are doing this daily, you keep a good handle on your conflicts. Many customers have told us how large their conflict list was when they started; as they worked through it, it became much smaller.
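A sketch of that notification piece needs nothing beyond the Python standard library; the log path, mail host, and addresses below are all made-up placeholders. The idea is simply to scan the reconcile log once and send one summary email.

```python
import smtplib
from email.message import EmailMessage

LOG_FILE = r"C:\logs\nightly_reconcile.txt"     # log written by the batch reconcile step

# Keep only the lines that mention conflicts so the email stays short.
with open(LOG_FILE) as log:
    conflict_lines = [line.rstrip() for line in log if "conflict" in line.lower()]

msg = EmailMessage()
msg["Subject"] = f"Nightly reconcile: {len(conflict_lines)} conflict entries"
msg["From"] = "nbs@example.com"                 # hypothetical addresses
msg["To"] = "gis.manager@example.com"
msg.set_content("\n".join(conflict_lines) or "No conflicts found last night.")

with smtplib.SMTP("mail.example.com") as server:    # hypothetical mail host
    server.send_message(msg)
```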

You may only have a handful on a given day. Very importantly, the report shows all the conflicts in the system so that you can address them proactively. Number 6, as you might have guessed, comes once you've gotten the versions cleaned up, reconciled down, posted up, and so on: we perform a compress. This won't be a surprise to anybody, but the compress is what takes the invalid state IDs and moves all the edits from the A and D tables (the add and delete tables) up into the base tables.
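The compress itself is a single geoprocessing call; a minimal sketch (with an assumed connection file) looks like this:

```python
import arcpy

SDE_ADMIN = r"C:\connections\gis_admin.sde"   # hypothetical admin connection file

# Trim invalid states and move edits from the A/D delta tables into the base
# tables; run this only after the tree has been reconciled so the compress
# can collapse as many states as possible.
arcpy.Compress_management(SDE_ADMIN)
```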

With the edits moved to the base tables, we get increased performance across the entire geodatabase. Some folks do this weekly, maybe monthly, but we put it on a nightly basis, where we ensure maximum performance based on all the other things we've done to get to point number 6, the compress. Now that we've got the database compressed, we do some additional things to clean up, add performance, and keep you, the GIS manager, informed. Number 7 is where we come in and do an add and delete report; I'll note that as an A/D report. You might ask, "what is an A and D report?"

We've defined this as the ability to monitor all of your add and delete tables under the versioning tree. Nothing too complicated here, but it's all configurable, so we can go in, parse the entire versioning tree, look at every single A and D table, and apply configurable thresholds. We might set the threshold at a thousand records for a small organization, or ten thousand for a larger one. If we see any A and D tables with more than ten thousand records, the application automatically sends a notification email to the GIS manager. Now you know about these tables, and you know exactly which ones your problems are in. If a delta table is getting big, you get that notification proactively and know about the potential performance impact before your users call. That's not something many folks can say. Now that we've got the database in good shape, let's move on to performance.
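To give a sense of what is involved, here is a hedged sketch of an A/D check against an Oracle geodatabase using cx_Oracle. The credentials, DSN, and threshold are assumptions; the SDE.TABLE_REGISTRY repository table supplies the registration IDs that name each A<id>/D<id> delta table.

```python
import cx_Oracle

THRESHOLD = 10_000                                     # hypothetical per-table threshold
conn = cx_Oracle.connect("sde", "password", "gisdb")   # hypothetical credentials / DSN
cur = conn.cursor()

# Delta tables are named A<registration_id> and D<registration_id> in the
# schema of the owning user.
cur.execute("SELECT registration_id, owner, table_name FROM sde.table_registry")
offenders = []
for reg_id, owner, table_name in cur.fetchall():
    for prefix in ("A", "D"):
        delta = f"{owner}.{prefix}{reg_id}"
        try:
            count = cur.execute(f"SELECT COUNT(*) FROM {delta}").fetchone()[0]
        except cx_Oracle.DatabaseError:
            continue          # table is not versioned, so it has no delta tables
        if count > THRESHOLD:
            offenders.append(f"{delta} ({table_name}): {count} rows")

print("\n".join(offenders) or "All delta tables are under the threshold")
```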

Number 8 up here is geodatabase indexing. This is a combination of a geodatabase task along with your basic Oracle or SQL Server indexing. When we run that compress, we move a lot of data around the geodatabase, from those A and D tables to the base tables, so it's very important to rebuild the indexes so that performance in the system goes up. If we don't rebuild the indexes, you still get some performance boost, but not nearly as much as when we redo your indexing. So we typically do that on a nightly basis.
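A minimal sketch of that index rebuild with the standard Rebuild Indexes geoprocessing tool (connection file and dataset names assumed):

```python
import arcpy

SDE_ADMIN = r"C:\connections\gis_admin.sde"            # hypothetical admin connection file
datasets = ["ELECTRIC.Pole", "ELECTRIC.Conductor"]     # hypothetical versioned datasets

# Rebuild indexes on the geodatabase system tables (states, lineages) and on
# the listed datasets; "ALL" rebuilds every index rather than only the delta
# table indexes.
arcpy.RebuildIndexes_management(SDE_ADMIN, "SYSTEM", datasets, "ALL")
```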

We move on to number 9, which is closely related: geodatabase statistics, which I'll note as stats. Statistics, in a very similar way, update the database's understanding of the size, the volume, and the location of all the data within it. Typically, we do this right after your indexing. We often recommend that this is not a nightly task but a weekly one, maybe every Saturday: after we've done all these steps, compressed, and indexed, we go ahead and rebuild the statistics in the database. Again, this is all about optimizing performance.
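And the matching statistics sketch (same assumed connection file and dataset names), using the Analyze Datasets tool to update statistics on the system tables plus the base, delta, and archive tables:

```python
import arcpy

SDE_ADMIN = r"C:\connections\gis_admin.sde"            # hypothetical admin connection file
datasets = ["ELECTRIC.Pole", "ELECTRIC.Conductor"]     # hypothetical versioned datasets

# Refresh RDBMS statistics for the geodatabase system tables and for the
# base, delta, and archive tables of the listed datasets.
arcpy.AnalyzeDatasets_management(
    SDE_ADMIN, "SYSTEM", datasets,
    "ANALYZE_BASE", "ANALYZE_DELTA", "ANALYZE_ARCHIVE",
)
```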

Let's move forward to our next application. Now we are getting into more of the reports that we do. We move back over here to the process framework. One of the interesting things we can do at the business level (this is more business than geodatabase level) is to parse out your sessions and your designs into what we call an aging report; that will be number 10. The aging report looks at all the sessions and designs, how long they've been outstanding, and gives you an automated list with the information you need most to keep an eye on the system. If you've got sessions that have been sitting there for 30 or 60 days, your eyes get drawn to those very quickly the next morning, so maybe you go check with that editor to see whether they need to post that session up, or at least find out its status.

The same goes for outstanding designs: in work-management-integrated design systems, designs can sit out there for months, maybe even over a year, hopefully not multiple years (we've seen that too). When they sit out there, it's important to be proactive, because every design has a geodatabase version, and every version has A and D edits, which impact your performance. We want to keep a good eye on those designs and maybe chase some of them down as time goes on. Again, this comes full circle to improving performance. The aging report gives you the information. It could be daily, it could be weekly; you can configure it however you want with the framework, but it gives you the information you need to make decisions.
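As a rough idea of what an aging query can look like, here is a sketch that reads a session table and flags anything older than a configured limit. The table name and field names are entirely hypothetical stand-ins for the process framework tables; the real names would come from the NBS configuration.

```python
from datetime import datetime

import arcpy

SDE_ADMIN = r"C:\connections\gis_admin.sde"    # hypothetical admin connection file
SESSION_TABLE = "PROCESS.SESSIONS"             # hypothetical process framework table
AGE_LIMIT_DAYS = 30

aging = []
# Hypothetical fields: session name, owner, and the date it was created.
fields = ["NAME", "OWNER", "CREATE_DATE"]
with arcpy.da.SearchCursor(SDE_ADMIN + "\\" + SESSION_TABLE, fields) as cursor:
    for name, owner, created in cursor:
        age_days = (datetime.now() - created).days
        if age_days >= AGE_LIMIT_DAYS:
            aging.append(f"{name} ({owner}) has been open for {age_days} days")

print("\n".join(aging) or "No sessions older than the limit")
```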

Let's move on to a monthly report. I'm going to move to this side here. This is number 11, our permissions report. In a large-scale system we may have thousands of users with access to the geodatabase, and we need to manage those users, roles, and permissions proactively. If we just let it go, things are guaranteed to get out of sync. We'll look at the three different reports we typically generate here, on a monthly basis. The first one is users in a role. This is an Excel spreadsheet that gets emailed proactively every month, and it shows every role in the system: your MM admin, your MM user, your electric edit, your electric view, your landbase edit. For every role that has been defined, it shows you, in columns, every single user account that has access to that role, which defines what they can do in the system. You should never have to chase down who has access to be an electric editor versus a gas or telecom editor.

You've got that report proactively sitting there on a monthly basis. The next one we do, which is important in a larger organization, is related to the users again, but it's a user delta report. The delta report is similar to users-in-role (it looks at the same data), but now we are capturing it month to month, which calls your eye very quickly to what has changed. It shows who has been added to the electric editor role and who has been removed, so I can determine whether that occurred correctly. If a user has left our organization, I now know whether they've been removed from the geodatabase correctly or whether I need to take further action. As new users come in, I can also validate that they got all the permissions they needed on day one. Let's move down to our third permissions report.

This one is about role permissions. On the role permissions report, we get another spreadsheet that again lists every role in the database across the top. However, now we are not focused on users; we are focused on what each role can do within the geodatabase. We expect, for example, that the electric editor role can edit the pole, the conductor, and the service point. If we see that the electric editor role also has access to edit land parcels, that might be a red flag. How often do you know exactly what each role in the system can do? This puts it into a spreadsheet format, proactively delivered to your inbox, typically monthly. These three reports are important, and the larger your organization, the more important they will be to you.
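For an Oracle geodatabase, the raw data behind the users-in-role and role-permissions reports lives in the database's data dictionary. Here is a hedged sketch that writes both as CSV rather than Excel; the credentials, DSN, and the GIS_ role-name prefix are assumptions.

```python
import csv

import cx_Oracle

conn = cx_Oracle.connect("gis_admin", "password", "gisdb")   # hypothetical credentials / DSN
cur = conn.cursor()

# Report 1: which user accounts are granted each GIS role.
cur.execute("""
    SELECT granted_role, grantee
    FROM dba_role_privs
    WHERE granted_role LIKE 'GIS_%'              -- hypothetical role naming convention
    ORDER BY granted_role, grantee""")
with open("users_in_role.csv", "w", newline="") as f:
    csv.writer(f).writerows([("ROLE", "USER")] + cur.fetchall())

# Report 3: which tables each role may SELECT / INSERT / UPDATE / DELETE.
cur.execute("""
    SELECT grantee, owner || '.' || table_name, privilege
    FROM dba_tab_privs
    WHERE grantee LIKE 'GIS_%'
    ORDER BY grantee, owner, table_name""")
with open("role_permissions.csv", "w", newline="") as f:
    csv.writer(f).writerows([("ROLE", "TABLE", "PRIVILEGE")] + cur.fetchall())
```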

Our final one (I'll draw it here as a dotted line, number 12) is a set of very generic cleanup applications. We do things like cleaning up your file disk. For example, if you deploy our Transformer Manager or work management system, you can use this to clean up the dynamic reports directory, which can grow over time. You might run this weekly; nothing too big there.
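A minimal sketch of that kind of file cleanup needs only the standard library; the directory path, file pattern, and retention period below are assumptions.

```python
import time
from pathlib import Path

REPORTS_DIR = Path(r"C:\arcfm\dynamic_reports")   # hypothetical reports directory
RETENTION_DAYS = 30
cutoff = time.time() - RETENTION_DAYS * 24 * 60 * 60

# Delete report files that have not been modified within the retention window.
for report in REPORTS_DIR.glob("*.pdf"):          # hypothetical report file pattern
    if report.stat().st_mtime < cutoff:
        print(f"Removing {report}")
        report.unlink()
```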

Another one we recently added is an Oracle keyset cleanup. If you use Oracle, you are very familiar with the keyset tables. They get stacked up in the geodatabase per user, table upon table of keyset tables, and you don't need them after the fact, so we created a batch application that cleans them out of the geodatabase. There are a handful of other applications here that we won't go into now, but looking holistically at what the product does, hopefully you now have a better feel for it. It's all about the geodatabase, it's all about performance and efficiency, and it's about giving you, the GIS manager, the information you need to know before you need to know it, so that you can make better decisions.

The final thing I want to talk about is a few of the other product applications over here, just so you know they exist. If you are an SSP Workforce Management (SSP WFM) user, we've got a handful here: the work request status sync between work management and GIS, the GIS metadata extract, the Designer CU sync, and more things that plug in. How do they plug in? The answer is with the framework code. Those are all product applications: you literally flip them on, integrate them in, and they run easily. Our Transformer Manager product has similar applications around the life cycle synchronization between Transformer Manager and the GIS.

Finally, almost everywhere we go, folks use our products, but we've also got a handful of custom applications at most locations. As you might guess, these often center on system integration points where our system needs to talk to an external system, but not in real time. We might be hitting that system on a nightly basis to transfer data. The framework lets us spend our time writing the application code that matters to the integration,

not worrying about all of those other things on the far left. So we've got custom applications there, with tons of other uses, as you might have guessed. Talk to one of our users; they more than likely have the SSP Nightly Batch Suite and can tell you a little more about the custom applications they have, and whatever combination of the product applications they use as well. In the end, though, our goal is to make you smarter, faster, and better regarding your geodatabase, and hopefully this explanation helps you understand how the product accomplishes that.
