Nothing Ever Ends

So remember when I said GIS was dead to me? Well, nothing is ever set in stone; I’m back in the GIS world. More Monday…

Oh and this blog is back.

March 23, 2019 blog Thoughts






17 Years of QGIS

Gary Sherman tweeted this morning:

Quite the journey, from basically a viewer application to now a full-fledged GIS application. I looked back, and the first time I mentioned QGIS was for the release of QGIS 0.7 back in 2005. I mentioned back then that I was impressed by how far QGIS had come, but almost 15 years later it has clearly come so much farther. I probably wished back in 2005 that QGIS could eventually become my preferred GIS application, but today I know I can use it without any problems to accomplish any task I need to with GIS. In fact, I’m downloading 3.6 right now.

February 25, 2019 qgis Thoughts






Natural Language Processing is All Talk

I’ve talked about Natural Language Processing (NLP) before and how it is beginning to change the BIM/GIS space. But NLP is just part of the whole solution to change how analysis is run. I look at this as three parts:

  1. Natural Language Processing
  2. Curated Datasets
  3. Dynamic Computation

NLP is understanding ontologies more than anything else. When I ask how “big” something is, what do I mean by that? Let’s abstract this away a bit.

How big is Jupiter?

One could look at this a couple of ways. What is the mass of Jupiter? What is the diameter of Jupiter? What is the volume of Jupiter? Being able to figure out the intent of the question is critical to having everything else work. We all remember Siri and Alexa when they first started. They were pretty good at figuring out the weather, but once you got out of those canned queries all bets were off. It is the same with using NLP with BIM or GIS. How long is something? Easy! Show me all mixed-use commercial zoned space near my project? Hard. Do we know what mixed-use commercial zoning is? Do we know where my project is? That’s because we need to know more about the ontology of our domain. So how do we learn about our domain? We need lots of data to teach the NLP, and then we run it through a Machine Learning (ML) tool such as Amazon Comprehend to figure out the context of the data and structure it in a way the NLP can use to understand our intents.
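
To make this concrete, here is a minimal sketch of intent extraction using Amazon Comprehend through boto3. The entity detection call is real; the INTENT_KEYWORDS mapping and extract_intent function are hypothetical stand-ins for what an ML-trained domain ontology would provide:

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# Hypothetical mapping from vague words to measurement intents. In a real
# system these associations would be learned from lots of domain data.
INTENT_KEYWORDS = {
    "big": ["mass", "diameter", "volume"],  # ambiguous without context
    "long": ["length"],
    "warm": ["temperature"],
}

def extract_intent(question):
    """Pull named entities from the question, then guess the intent(s)."""
    result = comprehend.detect_entities(Text=question, LanguageCode="en")
    subjects = [e["Text"] for e in result["Entities"]]
    intents = []
    for word, candidates in INTENT_KEYWORDS.items():
        if word in question.lower():
            intents.extend(candidates)
    return subjects, intents

subjects, intents = extract_intent("How big is Jupiter?")
# subjects -> ['Jupiter']; intents -> ['mass', 'diameter', 'volume']
# Still ambiguous -- which is exactly why the domain ontology matters.
```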

As discussed above, curated data to figure out ontology is important, but it’s also important to help users run analysis without understanding what they need. Imagine using Siri, but you needed to provide your own weather service to find out the current temperature. While I have many friends who would love to do this, most people just don’t care. Keep it simple and tell me how warm it is. Same with this knowledge engine we’re talking about. I want to know zoning for New York City? It should be available and ready to use. Not only that, it should be curated so it is normalized across geographies. Asking a question in New York or Boston (while there are unique rules in every city) shouldn’t be difficult. Having this data isn’t as sexy as the NLP, but it sure as heck makes that NLP so much better and smarter. Plus, who wants to worry about whether they have the latest zoning for a city? It should always be available and on demand.
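
As a sketch of what “curated and on demand” looks like from the user’s side, imagine a service that serves normalized zoning for any covered city. The zoning.example.com endpoint and its schema are entirely hypothetical; the point is that the caller never hunts for shapefiles:

```python
import requests

def get_zoning(city, category):
    """Fetch current zoning polygons, already normalized across cities."""
    resp = requests.get(
        "https://zoning.example.com/v1/zones",  # hypothetical curated service
        params={"city": city, "category": category},
    )
    resp.raise_for_status()
    return resp.json()["features"]  # GeoJSON features, same schema everywhere

# The same call works for any covered city -- no per-city data wrangling:
nyc = get_zoning("new-york", "mixed-use-commercial")
bos = get_zoning("boston", "mixed-use-commercial")
```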

Lastly, once we understand the context of the natural language query and have data to analyze, we need to run the algorithms to answer the question. This is what we typically think of as GIS. Rather than manually running that buffer and identity, we use AI/ML to figure out the intent of the user using the ontology and grab the data for the analysis from the curated data repository. This used to be something very special; you needed some monolithic tool such as ArcGIS or MapInfo to accomplish the dynamic computation. But today these algorithms are open and available to anyone. Natural language lets us figure out what the user is asking and then run the correct analysis, even if they call it something different from what a GIS person might.
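
That buffer-and-overlay the user never sees could be sketched with the open GeoPandas stack like this. The GeoDataFrames and the 500-meter search distance are assumptions for illustration:

```python
import geopandas as gpd

def zones_near_project(project_site, zoning, distance_m=500):
    """'Show me zoned space near my project' as a buffer + overlay."""
    # Assumes both layers share a projected CRS, so buffers are in meters.
    search_area = project_site.copy()
    search_area["geometry"] = search_area.geometry.buffer(distance_m)
    # Clip zoning polygons to the buffered search area.
    return gpd.overlay(zoning, search_area, how="intersection")
```

A natural-language layer on top would pick the distance and the overlay type from the parsed intent rather than hard-coding them.
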
The “Alexa-like” natural language demos where the computer talks to users are fun, but much like the AR examples we see these days, not really useful in real-world contexts. Who wants their computer talking to them in an open office environment? But giving users who don’t know anything about structured GIS analysis the ability to perform complex GIS analysis is the game changer. It isn’t about how many seats of some GIS program are on everyone’s desk but how easily these NLP/AI/ML systems can be integrated into existing workflows or websites. That’s where I see 2019 going: GIS everywhere.

January 16, 2019 bim GIS nlp Thoughts






Underground Digital Twins

We have all used 3D maps. From Google Earth to Google and Apple Maps to Esri, Mapbox, and others, we are very used to seeing 3D buildings rendered on our devices. But think of the iceberg analogy…

Below is a bigger deal than above…

Icebergs are so much bigger than they appear. This is the case with the built environment. Look out your window and you see a complex city, but what you don’t see is what is below. We know that these underground assets are hit on average every 60 seconds in the United States, costing over $1 billion in losses. What we can’t see is costing cities and developers money that could be better spent on making those cities sustainable.

But getting a handle on this issue is not easy. The ownership of these assets is often private, and those companies do not wish to share anything about what is underground, for business or security reasons. Plus, even if sharing were something that interested people, there isn’t a good unified underground model to place those assets in (we have many of these available for above-ground assets). But there seems to be some progress in this area. Geoff Zeiss writes:

At the December Open Geospatial Consortium (OGC) Energy Summit at EPRI in Charlotte, Josh Lieberman of the OGC presented an overview of the progress of OGC’s underground information initiative, with the appropriate acronym MUDDI, which is intended to provide an open standards-based way to share information about the below ground.

The part that gets my attention is that the MUDDI model is intended to build on, and be compatible with, many existing reference models. This is a big deal because many of the stakeholders in underground assets have already invested time and money into supporting these. As Geoff writes:

MUDDI is not an attempt to replace existing standards, but to build on and augment existing standards to create a unified model supporting multiple perspectives.

I’m totally on board with this. Creating a brand-new model that only handles these edge cases would result in a model nobody wants. As we work toward integrating underground models into Digital Twin platforms, MUDDI will be a huge deal. It’s not ready yet by any means, but because it supports existing standards, everyone can get involved immediately and start working on creating underground digital twins.

January 10, 2019 bim Thoughts underground