My son decided to change majors from biodesign to GIS. I had a short moment when I almost told him not to bring all this on himself but then thought differently. I could use my years of experience to help him get the perfect degree in GIS and get a great job and still do what he wants.
He’s one semester into the program so he really hasn’t taken too many classes. There has been the typical Esri, SPSS and Google Maps discussion, but nothing getting into the weeds. Plus he’s taking Geography courses as well so he’s got that going for him. Since he’s at Arizona State University, he’s going through the same program as I did, but it’s a bit different. When I was at ASU, Planning was in the Architectural College. Now it’s tied with Geography in a new School of Geographical Sciences & Urban Planning.
I have to be honest, this is smart; I started my GIS career working for a planning department at a large city. The other thing I noticed is that a ton of my professors are still teaching. I mean, how awesome is that? I suddenly don't feel so old anymore.
I’ve stayed out of his classes for the past semester in hopes that he can form his own thoughts on GIS and its applicability. I probably will continue to help him focus on where to spend his electives (more Computer Science and less History of the German Empire 1894-1910). He’s such a smart kid, I know he’s going to do a great job, and he’s the one who spent time at that Esri UC Kids Fair back when I used to go to the User Conference. Now he could be getting paid to use Esri software or whatever tool best accomplishes his goals.
I’ve talked about Natural Language Processing (NLP) before and how it is beginning to change the BIM/GIS space. But NLP is just part of the whole solution to change how analysis is run. I look at this as three parts:
Natural Language Processing
NLP is about understanding ontologies more than anything else. When I ask how “big” something is, what do I mean by that? Let’s abstract this away a bit.
How big is Jupiter?
One could look at this a couple of ways. What is the mass of Jupiter? What is the diameter of Jupiter? What is the volume of Jupiter? Being able to figure out the intent of the question is critical to having everything else work. We all remember Siri and Alexa when they first started. They were pretty good at figuring out the weather, but once you got outside those canned queries all bets were off. It is the same with using NLP with BIM or GIS. How long is something? Easy! Show me all mixed-use commercial zoned space near my project? Hard. Do we know what mixed-use commercial zoning is? Do we know where my project is? That’s because we need to know more about the ontology of our domain. So how do we learn about our domain? We need lots of data to teach the NLP, and then we run it through a Machine Learning (ML) tool such as Amazon Comprehend to figure out the context of the data and structure it in a way the NLP can use to understand our intents.
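A toy sketch of that intent problem, with simple keyword matching standing in for a real trained model like Amazon Comprehend (every name and keyword list here is made up for illustration):

```python
# Toy intent resolver: maps a natural-language "how big" question onto a
# concrete measurable attribute. A real system would use a trained NLP
# model plus a domain ontology instead of a hand-built keyword table.

INTENT_KEYWORDS = {
    "mass": ["mass", "heavy", "weigh"],
    "diameter": ["diameter", "wide", "across"],
    "volume": ["volume", "fit inside"],
}

def resolve_intent(question: str) -> str:
    """Return the first intent whose keywords appear in the question."""
    q = question.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return intent
    return "diameter"  # a vague "how big" falls back to a sensible default

print(resolve_intent("How much does Jupiter weigh?"))  # mass
print(resolve_intent("How wide is Jupiter?"))          # diameter
```

The hard part is exactly what the paragraph above describes: a keyword table only works for canned queries, which is why the ontology and training data matter so much.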
Curated Data

As discussed above, curated data to figure out the ontology is important, but it’s also important to help users run analysis without having to understand what data they need. Imagine using Siri, but you needed to provide your own weather service to find out the current temperature. While I have many friends who would love to do this, most people just don’t care. Keep it simple and tell me how warm it is. Same with this knowledge engine we’re talking about. I want to know zoning for New York City? It should be available and ready to use. Not only that, it should be curated so it is normalized across geographies. Asking a question in New York or Boston (while there are unique rules in every city) shouldn’t be difficult. Having this data isn’t as sexy as the NLP, but it sure as heck makes that NLP so much better and smarter. Plus, who wants to worry about whether they have the latest zoning for a city? It should always be available and on demand.
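What "normalized across geographies" might look like in miniature: a lookup that maps each city's own zoning codes onto a shared vocabulary, so the same question works everywhere. The codes and categories below are purely illustrative, not real curated data:

```python
# Hypothetical normalization table: city-specific zoning codes mapped to a
# common vocabulary, so "mixed-use commercial" means the same thing whether
# you ask about New York or Boston.
NORMALIZED_ZONING = {
    ("new_york", "C6-4"): "mixed_use_commercial",
    ("boston", "MXD"): "mixed_use_commercial",
    ("new_york", "R8"): "residential",
}

def normalize(city: str, code: str) -> str:
    """Translate a local zoning code into the shared vocabulary."""
    return NORMALIZED_ZONING.get((city, code), "unknown")

print(normalize("new_york", "C6-4"))  # mixed_use_commercial
print(normalize("boston", "MXD"))     # mixed_use_commercial
```

The curation work is keeping a table like this complete and current for every city, which is the unsexy part the paragraph is talking about.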
The Analysis Engine

Lastly, once we understand the context of the natural language query and have data to analyze, we need to run the algorithms on the question. This is what we typically think of as GIS. Rather than manually running that buffer and identity, we use AI/ML to figure out the intent of the user using the ontology and grab the data for the analysis from the curated data repository. This used to be something very special; you needed a monolithic tool such as ArcGIS or MapInfo to accomplish the dynamic computation. But today these algorithms are open and available to anyone. Natural language lets us figure out what the user is asking and then run the correct analysis, even if they call it something different from what a GIS person might. The “Alexa-like” natural language demos where the computer talks to users are fun, but much like the AR examples we see these days, not really useful in the context of real-world use. Who wants their computer talking to them in an open office environment? But giving users who don’t know anything about structured GIS analysis the ability to perform complex GIS analysis is the game changer. It isn’t about how many seats of some GIS program are on everyone’s desk but how easily these NLP/AI/ML systems can be integrated into existing workflows or websites. That’s where I see 2019 going: GIS everywhere.
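Stripped of the software, the analysis step itself is small. A minimal sketch of "show me parcels near my project" as a buffer test, done with plain coordinate math instead of a GIS package (coordinates and names are invented):

```python
# "Near my project" reduced to its essence: a distance test against a
# buffer radius. A real engine would use proper geometries and projections;
# this uses bare x/y coordinates to show the underlying operation.
import math

def within_buffer(point, center, distance):
    """True if the point falls inside a circular buffer around center."""
    return math.dist(point, center) <= distance

project = (0.0, 0.0)
parcels = {"A": (3.0, 4.0), "B": (30.0, 40.0)}
nearby = [name for name, pt in parcels.items()
          if within_buffer(pt, project, distance=10.0)]
print(nearby)  # ['A']
```

The point is that the algorithm is trivially available to anyone; the value is in wiring the NLP intent and the curated data to it automatically.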
I think you can usually tell when a GIS professional learned GIS by how they use their keyboard. Those who learned on UNIX command line programs such as ArcInfo or GDAL seem to go out of their way to type commands, either through keystrokes or scripting, while those who learned in the GUI era, either ArcView 3.x or ArcGIS Desktop, prefer to use a mouse. Generalizing is always dangerous, but it highlights something about how GIS analysis is done.
I almost feel like Yakov Smirnoff saying “What a country!” when you realize that most of the complicated scripting commands of the 90s are completed almost perfectly by dropping a couple of GIS layers on a wizard and clicking Next. Esri should be commended for making these tools drop-dead simple to use. But it brings up the question of whether anyone understands what is going on with these tools when they run them. Let’s take a simple example: Intersect.
Esri Intersect Tool
So simple, right? You just take your input features, choose where the output feature class goes, and hit OK. Done. But what about those optional items below? How many people ever actually set those? Not many, of course, and many times you don’t need to set them, but not understanding why they are options means you might not perform your analysis correctly. I’d say you don’t understand how to run a GIS command unless you understand not only what the command does but all of its options.
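It helps to remember what Intersect actually computes under all that dialog chrome. A sketch reduced to axis-aligned rectangles, written as `(xmin, ymin, xmax, ymax)` tuples so the geometry is easy to follow (real features are arbitrary polygons, of course):

```python
# What the Intersect tool computes, reduced to axis-aligned rectangles:
# the overlapping region of two inputs, or nothing if they don't overlap.
def intersect(a, b):
    """Return the overlap of two (xmin, ymin, xmax, ymax) rectangles."""
    xmin, ymin = max(a[0], b[0]), max(a[1], b[1])
    xmax, ymax = min(a[2], b[2]), min(a[3], b[3])
    if xmin >= xmax or ymin >= ymax:
        return None  # no overlap
    return (xmin, ymin, xmax, ymax)

print(intersect((0, 0, 10, 10), (5, 5, 20, 20)))  # (5, 5, 10, 10)
print(intersect((0, 0, 1, 1), (2, 2, 3, 3)))      # None
```

Those “optional” dialog settings (fuzzy tolerance, output type, join attributes) are exactly the knobs this toy version leaves out, which is why skipping them without understanding them is risky.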
You don’t have to learn Python to be a GIS analyst; running ModelBuilder or just the tools from ArcCatalog is good enough. But if you find yourself not even seeing those options at the bottom, let alone understanding what they are and why they are used, you aren’t anything more than a button pusher. And button pushers are easily replaced. The Esri Intersect tool has many options, and using it like below will only give you minimal power and understanding of how GIS works.
Esri Intersect Tool with blinders on.
In the old days of keyboards, you had to type commands out and know what each one did. In fact, many commands wouldn’t run unless you put an option in. Part of it is that when you type the words “fuzzy_tolerance” enough times, you want to know what the heck it is. I think keyboard GIS connected users to the commands and concepts of GIS more than wizards do. Much like working with your hands connects people to woodworking, working with your keyboard connects people to GIS.
I hadn’t really thought of the article in that context; I was just looking for a quick way to turn a CSV into a GeoJSON file. But let’s look at Brian’s point: is desktop GIS heavy?
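For what it's worth, that CSV-to-GeoJSON task needs nothing heavier than the Python standard library. A minimal sketch, assuming columns named `name`, `lon`, and `lat` (adjust to your file):

```python
# Quick CSV-to-GeoJSON conversion with nothing but the standard library.
# Assumes "name", "lon", "lat" columns; real files may need more handling.
import csv
import io
import json

def csv_to_geojson(csv_text: str) -> dict:
    """Build a GeoJSON FeatureCollection of points from CSV text."""
    features = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                "coordinates": [float(row["lon"]), float(row["lat"])],
            },
            "properties": {"name": row["name"]},
        })
    return {"type": "FeatureCollection", "features": features}

data = "name,lon,lat\nTempe,-111.94,33.42\n"
print(json.dumps(csv_to_geojson(data)))
```

Ten lines of stdlib versus launching a desktop suite is exactly the lightweight/heavy contrast in question.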
I’ve maintained that since Esri abandoned ArcInfo Workstation in the early 2000s, GIS has become difficult to use. Not in the clicking sense (any idiot1 can click the Next button), but in the simple sense that users have no idea what they’re doing. To accomplish this, Esri spent tons of R&D to make GIS as simple as dragging a couple of layers to a dialog and clicking Next until you have an output. You don’t even need to understand the settings; they default pretty much out of the box. Setting fuzzy tolerance? Not a problem, it’s labeled as optional. Understanding why you are performing the analysis is no longer required.
Now that isn’t to say Esri is doing something bad. They’re simplifying something that was very scientific and required an understanding of FORTRAN or UNIX into something that almost anyone can do. I think at some level they should be commended for making GIS easier and not limited to a bunch of weirdos with Sun SPARCstation 20 workstations. But in doing so they turned something lightweight into something of a beast. Thus Brian’s “heavy” comment.
But that’s not the end of the story, at least from an Esri perspective. At the same time Esri was throwing wizards in front of every tool in ArcGIS Desktop, it created one of the most powerful GIS libraries ever, ArcPy. It’s everything we wanted ArcInfo Workstation to become: a modern, non-proprietary scripting language with tons of GIS analysis tools. But for some reason, Esri doesn’t highlight it as it should. Just go to Esri.com and search for ArcPy. Typical Esri results; it’s a mess. Brian is reading this now, nodding: “GIS is heavy”.
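A sketch of what the ArcPy route looks like next to the wizard route: two explicit, repeatable calls instead of clicking through two dialogs. `Buffer` and `Intersect` are real geoprocessing tools under `arcpy.analysis`, but the dataset paths are made up, and passing `arcpy` in as a parameter is just my convention so the workflow can be exercised without an ArcGIS license:

```python
# Hypothetical ArcPy workflow: buffer parcels, then intersect with zoning.
# The arcpy module is injected so the function can be tested with a stub;
# in production you would simply `import arcpy` and call it directly.
def buffer_and_intersect(arcpy, parcels, zones, workspace):
    """Run Buffer then Intersect, returning the output feature class path."""
    buffered = f"{workspace}/parcels_buffered"
    arcpy.analysis.Buffer(parcels, buffered, "100 Feet")
    result = f"{workspace}/parcels_zoning"
    arcpy.analysis.Intersect([buffered, zones], result)
    return result
```

Scripted this way, the analysis is documented, re-runnable, and diffable, which is the whole argument for a proper Scripting section on Esri's site.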
Heavy GIS is starting up ArcMap, starting up ArcCatalog, dragging and dropping into a wizard, and fighting through the Next screens. The process is similar in QGIS, which seems to be adopting some of the same wizard dialogs as ArcGIS. They’re heavy because that’s what they need to be. Scott Morehouse told me years ago that ArcGIS was complicated because it is “scientific software”. At the time I laughed, but I do get it. It’s the long tail of long tails in GIS: solving GIS analysis for so many edge cases that it gets bloated.
Esri should3 have a section of their website devoted to Python scripting, showing how much easier (and faster) it is to do your analysis with ArcPy than with the ArcGIS Toolbox4. There are pieces all over their website about Python and ArcGIS, but no “Scripting” section. That would go a long way toward making desktop GIS not heavy. Searching Google for “Esri Scripting” gives you a dead end at ArcScripts. That should change.
If there is one thing I’ve learned over the years, it is that workflows are critical to creating a repeatable, defensible process. The thing with GIS is that we’ve got to use so many different file formats and systems1. I’ve been working on a relatively simple workflow, one that must be automated. The whole process is stuck on a proprietary format from a vendor who makes Esri look like an open book. Workflows are generally very easy to automate because so much of what we do in GIS is based upon APIs. Heck, we were using APIs before we knew that what we were working with was an API2. But too much of what we do depends on needing a license to export a binary format into an open one.
We can talk all we want about open data formats, LAS battles, and every other GIS format war we want to argue about, but in the end we are usually up against a format that can’t be cracked, can’t be avoided, or is contractually required. The binary format industrial complex is strong, but I refuse to be backed into these corners anymore. Time to pivot into taking this one down.
I was actually going to type “silos” there, but I felt dirty. Honestly, though, that is what we do ↩
I have been working with and teaching undergrad and grad students GIS for 4+ years now and have compiled a list of the 10 most frequent problems that they encounter. In my current position, I spend about 15 hours per week holding office hours in the main GIS Lab on campus, where students, staff, and faculty can visit for GIS assistance, and rarely do I have a free moment. (Well, perhaps during intercession.) I often find myself explaining the same concepts and pointing out the same resources over and over again, so I wanted to pull together this list. ESRI ArcGIS is the main software application used on campus and so many of the examples below refer to this application.
I’ve noticed all these while using slave labor (sometimes called interns) on projects. The inability to manage project work just kills them, as does making assumptions. I always tell them: ask as many questions as you like, because we’ve got many years of GIS experience here in our little shop and there’s no sense reinventing the wheel.
Modern mapmaking now starts with GIS data from state or local government that includes way more information than you really want. The task nowadays is to remove the unwanted data from the map to reduce clutter and focus on the desired information. One of the tasks these Illustrator users have is to create linked networks of nodes to build the streets and highways we see on a map.
The idea that Google/MSN/Yahoo is bad news for ESRI is based solely on the whiz-bang flash of the masses newly awakening to the fact that things can be put on a map. Anyone who thinks that Google is going to extend Google Earth to the point of enabling a city to manage its parcel base is delirious. Apart from the fact that it’s very difficult, there is no benefit to them. While Google does have a staff of geniuses, that does not mean they can simply whip up a full-fledged professional GIS system. As for ESRI, I think they can only benefit from the increased attention paid to mapping in general, and GIS in particular. Once the public really starts to “get” maps, ESRI will be well positioned to facilitate “doing” something with the map, besides just plotting a point location.
Dave Bouwman has just written a great article on the relationship between Google/Microsoft, ESRI and GIS as a whole. Dave hits it right on the head with ArcGIS vs Google Earth (or similar “consumer GIS” programs). Some have said that ArcGIS is the world’s largest software application built with Microsoft’s COM and while that may or may not be true, the plain fact is that ArcGIS has so many tools at the ready and these tools have decades of development behind them, that Google/Microsoft would be very hard pressed to compete. Now at the consumer end, that is a different story and it may be that GE and MapPoint eventually close the gap toward being a low end GIS tool, but even then you have to wonder about the quality of analysis that these tools may give the user given the lack of experience with GIS.
Time will tell, but as Dave points out so well in his post, Google Earth and ArcGIS are aimed at two very different markets and there is almost no overlap between them.
It is interesting to read and learn about all the things that Google is up to with respect to maps. Like I said a while ago here, it has opened the door to increased mapping awareness, especially among, though not limited to, younger folks (move over, MTV).
Jeff pretty much says what I’ve been saying and what ESRI’s focus should be on. Tools to create GIS data are extremely important. Google isn’t a creator of this information; they are a consumer. Our GIS workflows are integrated into ESRI’s tools, so for us to be successful, we need ESRI to continue to push the envelope. Our biggest problem, though, has been on the reader side of things. Jeff says ArcGlobe is ESRI’s “Google Earth”, but that requires you to have at least an ArcView license. Jeff does say why Google Earth has been successful, and it is plainly the simplicity of the application. If you’ve ever used ArcGlobe, you know you have total control over just about everything, but zooming and panning can get out of control (well, maybe it is just that I can’t work in a totally 3D world).
The key point of Jeff’s post is that we can’t get to the future without improved spatial tools, and he is right on. But the problem today is that consumers are beginning to want to consume GIS services, the tools to do so are limited, and into that void falls Google Earth.