A man (or woman) can deny that gravity exists until they fall on their backside. The cloud is just a modern-day equivalent of the legacy mainframe, and it is somebody else’s maintenance responsibility. The cloud is not some magical forest accessed via a piece of furniture.
The practical mainframe era started in the 1960s. These machines mostly housed the CPU, with a bunch of dependent child terminals hanging off of it. In the 1990s, I was able to see one of the last traditional bipolar mainframes at the USGS in Louisville, Kentucky. It maintained all the stream gauge information, which was served over the internet. Remember dial-up?
The problem was that this room-sized machine was slow and would often overheat when its cooling system failed. When I worked for the Kentucky River Authority, monitoring stream levels was extremely important during droughts.
Today’s “mainframe” is grossly different. Two years ago, I was lucky enough to tour a “mainframe” (a.k.a. a data center) outside Brussels, Belgium. To enter the isolated, secured facility, I had to pass through what looked like a giant vacuum tube at a bank drive-thru (although I didn’t take a deep drop like in “Spies Like Us” after getting a Pepsi at the drive-in).
There was a room devoted to back-up power. There was another room devoted solely to fire suppression. In the event of a fire, chemicals would be released throughout the facility to stifle the flames without killing any unlucky individuals left inside (they would only be rendered unconscious, since just enough oxygen would remain to sustain life). Great.
Figure 2 (below) shows two fiber patch panels I photographed recently while deploying Fiber Manager™. True data centers contain rows and rows of racks, and each rack holds several of these panels. The fact is, data centers are HUGE. Check out the Google Data Center online tour.
There are also fiber optic lines that carry data hundreds of miles, practically instantaneously, from data centers like these to your home or office.
So what is the point?
The point is… what difference does it make whether the physical computer infrastructure exists right next to you (where you must manage all of the computer issues yourself) or a thousand miles away? In some cases, contracting offsite infrastructure with a trusted business makes sense.
Although there are different methods of deployment, SSP’s basic recommended ArcGIS Online implementation (Figure 3) consists of an ArcGIS Server machine located behind the firewall. Secure communications are made with a machine located within the DMZ (outside the firewall).
This DMZ machine is the internet traffic cop that manages the exchange between ArcGIS Online and ArcGIS Server.
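To make the traffic-cop idea a little more concrete, here is a minimal connectivity sketch of the Figure 3 pattern. The host names are hypothetical placeholders: "gisweb.example.com" stands in for the DMZ machine running the Web Adaptor, and "gisserver.internal" for the ArcGIS Server machine behind the firewall.

```python
# Minimal connectivity sketch of the Figure 3 pattern (hypothetical host names).
import requests

# External clients (and ArcGIS Online) only ever see the DMZ machine.
PUBLIC_ENDPOINT = "https://gisweb.example.com/arcgis/rest/services?f=json"

# The ArcGIS Server REST endpoint is reachable only from inside the firewall;
# 6443 is ArcGIS Server's default HTTPS port. (Internal servers often use
# self-signed certificates, which may need extra handling.)
INTERNAL_ENDPOINT = "https://gisserver.internal:6443/arcgis/rest/services?f=json"


def check(url):
    """Return True if the endpoint answers with a valid JSON services directory."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return "folders" in response.json()
    except requests.RequestException as err:
        print(f"{url} unreachable: {err}")
        return False


if __name__ == "__main__":
    # Run from outside the firewall, only the public check should pass;
    # run from the internal network, both should pass.
    print("Web Adaptor (DMZ):", check(PUBLIC_ENDPOINT))
    print("ArcGIS Server (internal):", check(INTERNAL_ENDPOINT))
```

Run from the public internet, only the DMZ endpoint should respond; run from inside the network, both should, which is exactly the separation the DMZ machine is there to enforce.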
In a recent ArcGIS Online deployment, SSP deployed (at the client’s request) ArcGIS Server and the Web Adaptor on a cloud-based computer, as illustrated in Figure 4. This external computer could also be used to deploy the more isolated, intranet-based Portal. Portal has become an increasingly popular option for SSP’s clients (Future Blog Post).
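From a scripting point of view, ArcGIS Online and an intranet Portal are addressed the same way, which is part of why the choice is flexible. The short sketch below uses the ArcGIS API for Python; the organization URL, Portal URL, and credentials are hypothetical placeholders, not values from the deployments described above.

```python
# Sketch: connecting to ArcGIS Online vs. an on-premises Portal with the
# ArcGIS API for Python. All URLs and credentials are hypothetical.
from arcgis.gis import GIS

# Connecting to ArcGIS Online (the Esri-hosted option).
agol = GIS("https://your-org.maps.arcgis.com", "agol_user", "agol_password")

# Connecting to a Portal for ArcGIS instance instead (cloud- or intranet-hosted).
portal = GIS("https://portal.example.com/portal", "portal_user", "portal_password")

# Either connection exposes the same content API.
for item in agol.content.search("owner:agol_user", max_items=5):
    print(item.title, item.type)
```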
The fact is, there are numerous options and scenarios for cloud-based deployments.
Before this post starts to look like an outright recommendation for a cloud-based infrastructure deployment, know that it isn’t all red-eye gravy. After a cloud-based deployment a few months ago, our client learned a few unexpected things. Dealing with an external party produced an additional hurdle.
Initially, our client’s contractor would conduct maintenance (i.e., install patches and/or restart the server) without providing notice. This has since been rectified by improving the dialog so that notice of pending or potential service disruptions is provided.
In conclusion, the cloud-based infrastructure model is a viable option for ArcGIS Online or Portal deployments. If and when you implement this configuration, verify with the infrastructure contractor how and when you will be notified of planned (or potential) service disruptions.
What do you think?