Author: James

  • Latest Podcast Episode: PostGIS

    Bill Dollins and I have our latest podcast out today. It is a day late because of some work-related stuff, but it’s the best news this week for me. This was a lot of fun: we dive into PostGIS with some suggestions on how to get started and the tools we use to get the best value out of everything we do in PostGIS. Please enjoy, and rate us on iTunes or Google Play if you have time.

  • The Scooter Infestation

    As Spatial Networks is in St. Petersburg, I’m spending my first week at the company here for orientation and meeting the team. If you’ve never been to St. Pete, it’s a great small city that isn’t that complicated. The one thing I’ve noticed walking around the city is the lack of electric scooters. There are no Bird, Lime, Razor, Uber, or whatever the latest company throwing scooters around is. In Tempe, there are so many brands that I can’t even keep track of them anymore. But not here in St. Pete. It makes the sidewalks less crowded, you don’t see scooters knocked over on every street corner, and honestly, seeing everyone actually walking around feels relaxing. You just don’t notice the clutter until it’s gone. I miss those days…

  • Spatial Networks

    Well, today is a big day for me. I’ve joined Spatial Networks as VP of Professional Services. This is a great opportunity for me because the team that Tony has put together is second to none. I was going to list all the people at SNI that I’m excited to work with but realized it was basically the whole team. On the latest podcast (which isn’t out yet), Bill and I talked about how I’ve worked with so many of these great people but never at the same company. Well, that’s no longer the case.

    I’ll be heading up the Professional Services team at SNI, so this is a great opportunity to work with many of the companies I’ve worked with in the past, implementing some of the best technology and data sources out there. I’m in St. Petersburg for much of this week and then back again in two weeks for the whole week. That should give me some great one-on-one time with most of the staff and a chance to catch up with those whom I respect greatly.

    As I alluded to the other day, this blog is back, and I’ll be blogging about technology and what gets me excited these days. Some GIS, some programming, some hardware, and probably some photoshops. It should be a great time; I’m very excited!

  • Reading Workflow

    I never had an Amazon Kindle; either I read a physical copy or I just used my iPad with the Apple Books app. This worked great for years, but lately I’ve found myself being distracted more by life, and reading has become difficult on the iPad. There are so many distractions with notifications, alerts, and tweets that concentration becomes difficult. I’ve noticed that I listen to books more than read them, not because I have a long commute, but because when you listen to a book, you’re not looking at anything.

    I’ve wanted to get back into reading because I find it very therapeutic, and I finally bit the bullet and bought an Amazon Kindle Paperwhite. I looked at a Kobo, but the 2018 Paperwhite was just so compelling. Since this is my first Kindle, I was taken aback a bit by the E-ink technology, given I was used to the high-fidelity iPad displays, but after reading for a couple of minutes, it was clear that my eyes preferred the E-ink to Apple’s True Tone displays. I can’t compare it to previous Kindles, but it feels high quality, and given I paid only $100 for it on sale, it feels like a steal.

    So after the past 2 weeks of reading with the Kindle, what have I noticed?

    • I read much more than before and for much longer. I don’t get distracted.
    • The Kindle Store is so slick; buying a book is so easy and smooth that I dare call it enjoyable.
    • I can use the Libby app to check out books from my local library. I find myself using the library much more than I did before.
    • Selection is so much better than in Apple’s book store. I no longer have to compromise on what I read because of Apple’s lack of focus.

    I feel so late to this party, but I’m OK with that. I’m always nervous about Amazon’s ecosystem knowing more about me, but so far it feels good. Libby is working great for me, so the Tempe Library seems to be my biggest use so far. 2019 looks to be the year reading gets back on top of my to-do list.

  • Nothing Ever Ends

    So remember when I said GIS was dead to me? Well, nothing is ever set in stone: I’m back in the GIS world. More Monday…

    Oh and this blog is back.

  • 17 Years of QGIS

    Gary Sherman tweeted this morning:

    Quite the journey, from basically a viewer application to now a full-fledged GIS application. I looked back, and the first time I mentioned QGIS was for the release of QGIS 0.7 back in 2005. I mentioned back then that I was impressed by how far QGIS had come, but almost 15 years later it has clearly come so much farther. I probably wished back in 2005 that QGIS could eventually become my preferred GIS application; today I know I can use it without any problems to accomplish any task I need to with GIS. In fact, I’m downloading 3.6 right now.

  • Natural Language Processing is All Talk

    I’ve talked about Natural Language Processing (NLP) before and how it is beginning to change the BIM/GIS space. But NLP is just part of the whole solution to change how analysis is run. I look at this as three parts:

    1. Natural Language Processing
    2. Curated Datasets
    3. Dynamic Computation

    NLP is understanding ontologies more than anything else. When I ask how “big” something is, what do I mean by that? Let’s abstract this away a bit.

    How big is Jupiter?

    One could look at this a couple of ways. What is the mass of Jupiter? What is the diameter of Jupiter? What is the volume of Jupiter? Being able to figure out the intent of the question is critical to having everything else work. We all remember Siri and Alexa when they first started. They were pretty good at figuring out the weather, but once you got out of those canned queries, all bets were off. It is the same with using NLP with BIM or GIS. How long is something? Easy! Show me all mixed-use commercial zoned space near my project? Hard. Do we know what mixed-use commercial zoning is? Do we know where my project is? That’s because we need to know more about the ontology of our domain. So how do we learn about our domain? We need lots of data to teach the NLP, and then we run it through a Machine Learning (ML) tool such as Amazon Comprehend to figure out the context of the data and structure it in a way the NLP can use to understand our intents.
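
    To ground that a bit, here is a toy sketch in Python of what intent resolution means: a tiny hand-built ontology mapping vague words to the measurable properties they might refer to. A real system would learn these mappings from data with a tool like Comprehend; everything below is invented purely for illustration.

    ```python
    # Toy ontology: which measurable properties could a vague word mean?
    ONTOLOGY = {
        "big": ["mass", "diameter", "volume"],  # ambiguous, needs context
        "long": ["length"],                     # unambiguous
        "warm": ["temperature"],
    }

    def resolve_intent(question):
        """Return the candidate measurements implied by a question."""
        words = question.lower().rstrip("?").split()
        for word in words:
            if word in ONTOLOGY:
                return ONTOLOGY[word]
        return []  # outside our canned domain: all bets are off

    print(resolve_intent("How big is Jupiter?"))    # ['mass', 'diameter', 'volume']
    print(resolve_intent("How long is the pipe?"))  # ['length']
    ```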

    As discussed above, curated data is important for figuring out ontology, but it’s also important for helping users run analysis without having to source the data themselves. Imagine using Siri, but needing to provide your own weather service to find out the current temperature. While I have many friends who would love to do this, most people just don’t care. Keep it simple and tell me how warm it is. Same with this knowledge engine we’re talking about. I want to know zoning for New York City? It should be available and ready to use. Not only that, it should be curated so it is normalized across geographies. Asking a question in New York or Boston (while there are unique rules in every city) shouldn’t be difficult. Having this data isn’t as sexy as the NLP, but it sure as heck makes that NLP so much better and smarter. Plus, who wants to worry about whether they have the latest zoning for a city? It should always be available and on demand.
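
    As a hypothetical sketch of what “normalized across geographies” could look like, imagine the curated layer translating each city’s own zoning codes into one shared vocabulary. The codes and categories below are made up for illustration.

    ```python
    # Hypothetical crosswalk: city-specific zoning codes -> shared vocabulary.
    ZONING_CROSSWALK = {
        "new_york": {"C6-2": "mixed_use_commercial", "R8": "residential"},
        "boston":   {"B-2": "mixed_use_commercial", "LI": "light_industrial"},
    }

    def normalize_zoning(city, code):
        """Translate a city-specific zoning code into the shared vocabulary."""
        return ZONING_CROSSWALK.get(city, {}).get(code, "unknown")

    # The same question asked in two cities resolves to the same category.
    print(normalize_zoning("new_york", "C6-2"))  # mixed_use_commercial
    print(normalize_zoning("boston", "B-2"))     # mixed_use_commercial
    ```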

    Lastly, once we understand the context of the natural language query and have data to analyze, we need to run the algorithms on the question. This is what we typically think of as GIS. Rather than manually running that buffer and identity, we use AI/ML to figure out the intent of the user using the ontology and grab the data for the analysis from the curated data repository. This used to be something very special: you needed a monolithic tool such as ArcGIS or MapInfo to accomplish the dynamic computation. But today these algorithms are open and available to anyone. Natural language lets us figure out what the user is asking and then run the correct analysis, even if they call it something different from what a GIS person might.
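
    As an example of how open those building blocks are now, here is a minimal sketch of that buffer-and-intersect step using Shapely, an open source Python geometry library, rather than a monolithic GIS. The coordinates and parcel are made up.

    ```python
    # Minimal "near my project" analysis with open libraries, no monolithic GIS.
    from shapely.geometry import Point, Polygon

    project_site = Point(0, 0)
    search_area = project_site.buffer(1.0)  # "near" becomes a buffer distance

    # A stand-in for one parcel returned from the curated zoning data.
    parcel = Polygon([(0.5, 0.5), (2, 0.5), (2, 2), (0.5, 2)])

    # "Show me mixed-use commercial space near my project" reduces to a
    # spatial predicate the user never has to phrase in GIS terms.
    print(search_area.intersects(parcel))  # True
    ```
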
    The “Alexa-like” natural language demos where the computer talks to users are fun, but much like the AR examples we see these days, not really useful in the context of real-world use. Who wants their computer talking to them in an open office environment? But giving users who don’t know anything about structured GIS analysis the ability to perform complex GIS analysis is the game changer. It isn’t about how many seats of some GIS program are on everyone’s desk but how easily these NLP/AI/ML systems can be integrated into existing workflows or websites. That’s where I see 2019 going: GIS everywhere.

  • Underground Digital Twins

    We’ve all used 3D maps. From Google Earth, to Google and Apple Maps, to Esri, Mapbox, and others, we are very used to seeing 3D buildings rendered on our devices. But think of the iceberg analogy…

    Below is a bigger deal than above…

    Icebergs are so much bigger than they appear, and that is the case with the built environment. Look out your window and you see a complex city. But what you don’t see is what is below. We know that these underground assets are hit on average every 60 seconds in the United States, costing over $1B in losses. What we can’t see is costing cities and developers money that could be better spent on making those cities sustainable.

    But getting a handle on this issue is not easy. Ownership of these assets is often private, and those companies do not wish to share anything about what is underground for business or security reasons. Plus, even if sharing were something that interested people, there isn’t a good unified underground model to place the assets in (we have many of these available for above-ground assets). But there seems to be some progress in this area. Geoff Zeiss writes:

    At the December Open Geospatial Consortium (OGC) Energy Summit at EPRI in Charlotte, Josh Lieberman of the OGC presented an overview of the progress of OGC’s underground information initiative, with the appropriate acronym MUDDI, which is intended to provide an open standards-based way to share information about the below ground.

    The part that gets my attention is that the MUDDI model is intended to build on and be compatible with many existing reference models. This is a big deal because many of the stakeholders in underground assets have already invested time and money in supporting them. As Geoff writes:

    MUDDI is not an attempt to replace existing standards, but to build on and augment existing standards to create a unified model supporting multiple perspectives.

    I’m totally on board with this. Creating a new model from scratch to handle all these edge cases will only result in a model nobody wants. As we work toward integrating underground models into Digital Twin platforms, MUDDI will be a huge deal. It’s not ready by any means yet, but because it supports existing standards, everyone can get involved immediately and start working toward creating underground digital twins.

  • BIM vs. Digital Twin

    The thing with BIM is that BIM models are VERY complicated. That’s just the nature of BIM. People talk about digital twins all the time, and BIM (as an extension of CAD) is probably one of the first representations of a digital twin. BIM, though, by its nature isn’t an “as-built.” It is just a picture of what the real-world object should be, whereas a digital twin is a digital copy of an existing asset. The best way to start a digital twin is to import a BIM model, but there are some areas you need to be aware of before doing so.

    1. A BIM model might not be an as-built. As I said above, BIM is what something should be, not what it ends up being. During construction, changes are always made to the building, and in doing so, the BIM model ceases to be a digital twin. Just importing a BIM model without field verification can result in your digital twin not genuinely being a digital twin.

    2. What detail do you need in your digital twin? A BIM model might have millions of entities making up even a simple asset, such as a window frame that is unique and requires high accuracy. This is very important in the construction phase, where even a millimeter off can cause problems, but for a digital twin, that detail is not needed. This is where BIM and digital twins diverge; the BIM model is the engineering representation of something, while a digital twin is just the digital replica. There is no reason why you couldn’t import such an elaborate window frame of course, but throughout a whole building or even a city, these extra details get lost in the LOD. The key here is knowing what your LOD is and how you want to view it. There is much going on in the 3D space where you can use LOD to display the elaborate window frame above, yet still be performant where needed (see the sketch after this list).

    3. Aftermarket features are generally part of a digital twin. BIM models are idealized in that they only show what was spec’d out. Digital twins need to show all those modifications that were made after the building was turned over to the owner. Doors removed, walls put up, windows boarded over. These things all need to be reflected in your digital twin. Just importing a BIM model that doesn’t address these changes means that when you go to link up your digital twin to IoT or other services, there is no one-to-one relationship. Preparation work on that BIM model before ingestion into a digital twin helps immeasurably.
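
    As a minimal sketch of that preparation work, here is the kind of filter you might run on a BIM model before ingestion. The Entity class, LOD numbers, and as-built flag here are hypothetical, purely to illustrate trimming detail down to the twin’s target LOD.

    ```python
    # Hypothetical pre-ingestion filter: keep only field-verified entities
    # at or below the level of detail the digital twin actually needs.
    from dataclasses import dataclass

    @dataclass
    class Entity:
        name: str
        lod: int        # level of detail the entity is modeled at
        as_built: bool  # has this been field-verified against the asset?

    bim_model = [
        Entity("window_frame_bolt", lod=500, as_built=True),
        Entity("window_frame", lod=300, as_built=True),
        Entity("exterior_wall", lod=200, as_built=False),
    ]

    TARGET_LOD = 300
    twin_ready = [e for e in bim_model if e.lod <= TARGET_LOD and e.as_built]
    print([e.name for e in twin_ready])  # ['window_frame']
    ```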

    It is easy to want to jump into creating digital twins of your buildings, but it is critical to make sure that before you do so, you’ve reviewed your files to ensure that they are as-built and a twin of the real-world asset.

  • The End of GIS

    So yeah, link-bait title, sue me. But I felt like it needed to be there. Before I go into what that all means: I’m going to continue blogging over on Medium, but with a new focus. The RSS feed, email blasts, and Twitter account will cease to produce original content.

    So why is there an end to anything? I’ve been working toward this end for some time; the focus has been to move away from proprietary stacks and toward open source. But there is a more significant theme to this: pivoting away from specialized software that is good at one thing, toward libraries that get things done. Regardless, I’m now working at a company that specializes in aggregating, analyzing, and visualizing 3D data. GIS has been useful for many things, but 3D was never one of them.

    The start of Spatially Adjusted happened over the course of a family vacation to my in-laws at the time in rural Texas. I can’t recall exactly what made me start, but it was the intersection of Esri and open source. This was pre-OSGeo; things in my life were still ArcGIS and mostly ArcIMS. There was a ton about my excitement over EDN when it first arrived and about unboxing ArcGIS 9.1. But I was getting into open source. In fact, the first time I blogged about PostGIS, Sean Gillies was quick to put me in my place, because of course I was a big Esri supporter, and all he saw was someone complaining about the quality of the software.

    My blog has a big story arc in it. I go from “Esri blogger” to “Esri hater.” Early on, Esri used to pass me info to get the word out. The reality was that there was no Twitter or Facebook yet, so the only place people could be open was in the comments on my blog posts. But over the years I grew bitter about the software. I grew tired of competing against Esri on contracts. I became angry at software being half-baked and having to rewrite things every few years. Look, there was a ton to like about the Esri Web ADF… No wait, there wasn’t. I’m sure people worked very hard on it, and they probably take it personally when I call it a POS, but it was. Engineers aren’t at fault for creating the Web ADF; Esri marketing is at fault for choosing to push it.

    I could write pages on why I dislike things about Esri, but I won’t. I’m honestly over it. I look back at ArcObjects, MapObjects, the Web ADF, and the rest, and I feel like it was a different person. I cannot picture myself doing that work anymore, and that’s OK; we all grow up and grow into what we enjoy. That’s the big picture through this journey: being open to change. The “threat” of Google Earth, the “threat” of open source, the “threat” of the ELA. All irrelevant in the end. The most prominent part of our professional lives is our ability to handle change. Don’t assume anything, just look for ways to improve your workflows, provide better service to others, and be proud of your career.

    Throughout this journey, there have been a couple of people who have effected great change in me. Early on I can only think of two: Howard Butler and Sean Gillies. Both forced me to look at how I perceived open tools such as GDAL, UMN MapServer, and PostGIS. Sean more than anyone called out my proprietary bullshit, and while I didn’t agree with everything he said, it did open my eyes. Later on, blogging brought me into contact with more developers, people such as Bill Dollins, Dave Bouwman, and Brian Flood. The work they were doing, even in the Esri ecosystem, really helped me grow. Even inside Esri, the creation of EDN and the DevSummit introduced me to Brian Goldin, Steve Pousty, and Rob Elkins, who basically made the first DevSummit my Woodstock.

    I also can’t stress enough how many people I’ve met over the years because of this blog. Not a conference goes by without someone introducing themselves and telling me they follow me. That means a ton, as personal networks are what drive us all. It has been those who introduce me to the fantastic stuff they are working on who inspire my passion. And that is why I think my story arc went from “Esri blogger” to the intersection of 3D BIM and GIS.

    I really can’t think of anyone I’ve met over these years I don’t have a ton of respect for. From Art Haddad pushing ArcGIS Server to be something more than a hacked-together project, to Jim Barry always making sure I could find the right documentation or developer help, I’ve always been lucky enough to find the right person to help out. I really could go on, but everyone should know what a great asset you are and still will be.

    So what now for me? At Cityzenith I’m focused on building the platform that the real estate and AEC industries can use to make a better world. This blog has been on so many different platforms over the years. Best I can recall, the progression went: Blogger -> MovableType -> WordPress -> Octopress -> WordPress -> GitHub Pages -> WordPress, and rather than port it over to yet another platform, I think it has earned the right to relax. Just like PlanetGS.com got to retire with dignity, so will Spatially Adjusted.

    So follow me over at Medium, where I’ll be talking about Elastic, Unity, Mapbox, Turf.js, Tippecanoe, Safe FME, 3D formats, AWS (including Lex, Lambda, and Comprehend), and using Unity inside web browsers and on mobile devices. Should be a blast!

    So I think I’ll just leave this here because it is how I feel.

    GIS has been won!