
All Edits State 0 Used to Replay Offshore Edits, Part 2

June 24, 2016 — Matthew Stuart

In last month’s post (All Edits State 0 Used to Replay Offshore Edits), we talked about a multi-year partnership between RAMTeCH and SSP to make GIS data improvements for a large utility. That article focused on a single delivery and the steps necessary to process over 30,000 edits. This article focuses on the lessons learned.

Lesson Learned #1: Good QA Takes A Long Time 

Each delivery consists of 20 versions, and each version contains an average of 1,500 created, edited, or deleted features. Before Delivery 1, the QA team decided that one engineer would check one version each day. Four engineers, at five versions each per week, would cover all 20 versions. A snap, right? Not really.

First, we quickly determined that QA really couldn’t be done by a single person. Instead, we had three-person teams: a GIS expert, a linen expert, and a scorekeeper. The RAMTeCH data miners were (and are) capturing data off linens that each contain a significant amount of information — some of it decades old. It takes a trained eye to interpret a linen.

So the linen expert would read the linen, the GIS expert would navigate the GIS, and the scorekeeper would record the findings. This didn’t really speed up the process, but it ensured its integrity. Now three people — not just one — had to agree on an error before it was scored. Instead of one week, we found that QA takes, on average, two full weeks and part of a third.

Lesson Learned #2: Scoring Isn’t Easy

The spreadsheet that RAMTeCH provided with Delivery 1 showed the total number of created and edited features by version. The QA team’s first question: what happens when we need to document an error? How do we do that? SSP came to the rescue by producing a child score sheet for each version. The child spreadsheet showed the details.

If RAMTeCH showed, for example, that Version 7 contained 27 created Gas Mains and 68 edited Gas Mains, the child score sheet would show the Object IDs for each of the individual Gas Mains. This helped the QA team know what to check. It also allowed the QA team to record notes if a particular Gas Main had an issue for RAMTeCH to address.
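To make the mechanics concrete, here is a minimal sketch of how a child score sheet could be generated. This is an illustration only, not SSP’s actual tooling: the row layout, field names, and sample records are assumptions, and in the real process the details would come from the delivered geodatabase.

```python
import csv
from collections import defaultdict

# Hypothetical delivery detail rows: (version, feature class, action, Object ID).
# In practice these would be read from the delivered database rather than
# hard-coded like this.
delivery_rows = [
    (7, "GasMain", "created", 100234),
    (7, "GasMain", "edited", 100781),
    (7, "ServiceLine", "edited", 200455),
]

# Group the rows by version so each version gets its own child score sheet.
by_version = defaultdict(list)
for version, fclass, action, oid in delivery_rows:
    by_version[version].append((fclass, action, oid))

# Write one child score sheet per version, with a Notes column the QA team
# can use to flag a feature that RAMTeCH needs to address.
for version, rows in by_version.items():
    with open(f"child_score_sheet_v{version:02d}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["FeatureClass", "Action", "ObjectID", "Notes"])
        for fclass, action, oid in sorted(rows):
            writer.writerow([fclass, action, oid, ""])
```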

Lesson Learned #3: There Must Be A Gap Between Extract and Replay

For the first couple of deliveries, RAMTeCH would deliver the source database (containing the 30,000 features) on a Friday. SSP would then do the extract and replay all in one weekend. This left little buffer in case something happened, and it left even less time to do any kind of reconciliation between the number of features delivered and the number of features extracted.

So we moved the extract back a day. But that still didn’t leave a lot of time. So beginning with Delivery 4, we started doing the extract on a Tuesday night. This allowed us to generate a “post-extract” report comparing what was delivered vs. what was extracted. Any differences would be investigated and resolved on Wednesday and Thursday.

This led to a much less stressful weekend — especially since everyone knew that any issues with the deliveries were already resolved at that point. We only replay what we extract, so if the extract matches the source exactly, there is added confidence that the replay will be fine too.
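As an illustration of what such a post-extract reconciliation might look like, here is a minimal sketch that compares per-feature-class counts from the delivery against counts from the extract and flags any mismatch. The feature class names and counts are made up for the example; the real report would pull the delivered counts from RAMTeCH’s summary spreadsheet and the extracted counts from the extract itself.

```python
# Hypothetical per-feature-class counts (assumed values for illustration).
delivered = {"GasMain": 95, "ServiceLine": 412, "Valve": 38}
extracted = {"GasMain": 95, "ServiceLine": 410, "Valve": 38}

# Compare the two sets of counts and report each discrepancy so it can be
# investigated and resolved before the weekend replay.
clean = True
for fclass in sorted(set(delivered) | set(extracted)):
    d = delivered.get(fclass, 0)
    e = extracted.get(fclass, 0)
    status = "OK" if d == e else f"MISMATCH ({d - e:+d})"
    if d != e:
        clean = False
    print(f"{fclass:<15} delivered={d:>5} extracted={e:>5} {status}")

print("Post-extract report:", "clean" if clean else "discrepancies found")
```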

Lesson Learned #4: Communication is Key

A big challenge early on was communication. How could SSP, RAMTeCH, and the onsite QA team — all based in different states — communicate effectively throughout the QA week? At first, we just exchanged e-mails. (“Hey, this is a trend one of the QA teams is seeing…”) But simply trading e-mails was overwhelming, especially with the first delivery, because there was so much that was new.

After that, the QA team tried documenting everything in an Excel spreadsheet that was sent to RAMTeCH each night. But that didn’t work well either. For Deliveries 2 and 3, we used SSP’s BugNet site. This allowed for an easy exchange of ideas, screenshots, suggestions, etc. — and it also had a really good audit trail, something we had missed in Delivery 1. BugNet worked well for Deliveries 2 and 3, but by Delivery 4 we really didn’t need it as much.

The new stuff wasn’t really new anymore. If there were things that needed to be changed or updated, we used formal change control processes to document them. But a formal change process can be slow. The best thing we found in the beginning was simply talking: daily calls, weekly meetings, post-QA debriefs, etc. We found that the frequent exchange of ideas — especially early in the process — helped all the teams immensely.

So, like any big project, there were some growing pains in the beginning. Ultimately, however, the lessons learned from the first few deliveries have led to a much smoother process for the last few.

We are now well-positioned for future success!


Matthew Stuart

Matthew Stuart works as a Senior Consultant for the Utility and Telecommunications GIS consulting company SSP Innovations, headquartered in Centennial, Colorado. Matt has 25 years of experience specializing in Project Management, Testing, and Quality Assurance.
