Tag: iot

  • Book Review: Hands-On Azure Digital Twins: A practical guide to building distributed IoT solutions

    Finished reading: Hands-On Azure Digital Twins: A practical guide to building distributed IoT solutions by Alexander Meijers

    I’ve been working with Digital Twins for almost 10 years, and as simple as the concept is, ontologies get in the way of implementations. The big question is implementation: how does one actually create a digital twin and deploy it to users?

    Let’s be honest though, the Azure Digital Twins service is complex and requires a ton of work. It isn’t a matter of uploading a CAD drawing and connecting some data sources. Meijers does a great job of walking through how to get started, but it isn’t for beginners: you’ll need previous experience with Azure cloud services and Microsoft Visual Studio, and the ability to debug code. If you have even a general understanding of these, though, the walkthroughs are detailed enough to learn the idiosyncrasies of the Azure Digital Twins process.

    The book does take you through the process of understanding what an Azure Digital Twins model is, how to upload models, how to develop relationships between them, and how to query them. After you have an understanding of this, Meijers dives into connecting the model to services, updating the Azure Digital Twins models, and then connecting to Microsoft Azure Maps to view the model on maps. Finally, he showcases how these Digital Twins can become smart buildings, which is the hoped-for outcome of all the work.

    The book has a lot of code examples, all downloadable from a GitHub repository. Knowledge of JSON and JavaScript, Python, and .NET or Java is probably required. BUT, even if you don’t know how to code, this book is a good introduction to Azure Digital Twins. While there are pages of code examples, Meijers does a good job of explaining how and why you would use Azure Digital Twins. If you’re interested in how you can use a Digital Twin service hosted and managed by a cloud provider, this is a great resource.

    I felt like I knew Azure Digital Twins before reading this book, but it taught me a lot about how and why Microsoft did what they did with the service. Many aspects that caused me to scratch my head became clearer to me and I felt like this book gave me additional background that I didn’t have before. This book requires an understanding of programming but after finishing it I felt like Meijers’ ability to describe the process outside of code makes the book well worth it to anyone who wants to understand the concept and architecture of Azure Digital Twins.

    Thoroughly enjoyed the book.

  • Moving Towards a Digital Twin Ecosystem

    Smart Cities really start to become valuable when they integrate with Digital Twins. Smart Cities do really well with transportation networks, adjusting when things happen. Take, for example, construction on an important Interstate highway that connects the city core with the suburbs: it causes backups, and a smart city can adjust traffic lights, rail, and other modes of transportation to help alleviate the problems. This works really well because the transportation systems talk to each other, and decisions can be made to refocus commutes toward other modes of transportation or other routes. But unfortunately, Digital Twins don’t do a great job talking to Smart Cities.


    A few months ago I talked about Digital Twins and messaging. The idea that:

    Digital twins require connectivity to work. A digital twin without messaging is just a hollow shell, it might as well be a PDF or a JPG. But connecting all the infrastructure of the real world up to a digital twin replicates the real world in a virtual environment. Networks collect data and store it in databases all over the place; sometimes these are SQL-based such as Postgres or Oracle, and other times they are as simple as SQLite or flat-file text files. But data should be treated as messages back and forth between clients.

    This was in the context of a Digital Twin talking to services that might not be hardware-based, but the idea stands up for how and why a Digital Twin should be messaging the Smart City at large. Whatever benefits a Digital Twin gains from an ecosystem that collects and analyzes data for decision-making, those benefits multiply when those systems connect to other Digital Twins. And think beyond a group of Digital Twins to the benefit to the Smart City when all these buildings are talking to each other and to the city, making better decisions about energy use, transportation, and other shared infrastructure across the city or even the region (where multiple Smart Cities talk to each other).

    When all these buildings talk to each other, they can help a city plan, grow and evolve into a clean city.

    What we don’t have is a common data environment (CDE) that cities can use. We have seen data sharing on a small scale in developments, but not on a city-wide or regional scale. To do this we need to agree on model standards that allow Digital Twins to talk to each other (something open like Bentley’s iTwin.js) and share ontologies. Then we need that Smart City CDE where data is shared, stored, and analyzed at a large scale.

    One great outcome of this CDE is that all this data can be combined with city ordinances to give tools like Delve from Sidewalk Labs even more data to create their generative design options. Buildings are not a bubble in a city; their impacts extend beyond the boundaries of the parcel they are built on. That’s what’s so exciting about this opportunity: manage assets in a Digital Twin on a micro scale, but share generalized data about those decisions with the city at large, which can then share it with other Digital Twins.


    And lastly, individual Smart Cities aren’t bubbles either. They have huge impacts on the region or even the country they are in. If we can figure out how to create a national CDE, one that covers a country as diverse as the United States, we can have something that benefits the world at large. Clean cities are the future, and thinking about them on a small scale will only result in the gentrification of affluent areas and leave less well-off areas behind. I don’t want my children to grow up in a world like that, and we have the processes in place to ensure that they have a better place than us to grow up in.

  • COVID-19 is Showing How Smart Cities Protect Citizens

    I feel like there is a before COVID and an after COVID in citizens’ feelings toward Smart City technology. There is an election tomorrow in the United States that will probably dictate how this all moves forward, and after 2016, I’ve learned not to predict anything when it comes to the current president. But outside that huge elephant in the background, Smart City concepts have been thrust into the spotlight.


    Most cities have sent their non-essential workers home, so IoT and other feeds to their work dashboards have become critical to their success. The data collection and analysis of the pulse of a city is now so important that traditional field collection tools have become outdated.

    Even how cities engage with their citizens has changed. Before COVID, here in Scottsdale, you needed to head to a library to get a library card in person. But since COVID restrictions, the city has allowed library card applications online, which is a huge change. The core structure of city digital infrastructure has to change to manage this new need: not only engaging citizens more deeply with technology, but ensuring those who don’t have access to the internet or even a computer are represented. I’ve seen much better smartphone support on city websites over the summer, and this will continue.

    Even moving city council meetings from a public space to a digital space has implications. The physical presence of citizens before their elected leaders is a check on their power; being one small Zoom box in a monitor of Zoom boxes puts citizens in a corner. Much will have to be developed so that those who don’t wish to attend in person are represented as well as those who do.

    COVID has also broken down barriers to sharing data. The imagined dashboard where Police, Fire, Parks & Rec, City Council, and other stakeholders share data has come to fruition. The single pane of glass where decision-makers can get together to run the city remotely is only going to improve now that the value has been shown.

    Lastly, ignoring the possible election tomorrow, contact tracing and other methods of monitoring citizens as they move around the city have mostly changed how people feel. Before COVID, the idea that a city could track them, even anonymously, scared the daylights out of people. But today we are starting to see the value in anonymous tracking, not only to see who has been in contact with whom but to see how people interact in a city with social distancing restrictions.

    Future planning of cities is changing and accelerated because of COVID. The outcome of this pandemic will result in cities that are more resilient, better managed, planned for social distancing, and are working toward carbon neutral environments. In the despair of this unprecedented pandemic, we see humanity coming together to create a better future for our cities and our planet.

  • The iPhone 12 Pro LiDAR Scanner is the Gateway to AR, But Not in the Way You Think

    I’m sure everyone knows about it by now: the iPhone 12 Pro has a LiDAR scanner. Apple touts it as helping you take better pictures in low light and do some rudimentary AR on the iPhone. But what this scanner does today isn’t where the power will be tomorrow.

    Apple cares a ton about photo quality, so a LiDAR scanner helps immensely with taking these pictures. If there is one reason today to have that scanner, it is for pictures. But the real power of the scanner is for AR. And AR isn’t ready today, no matter how many demos you see in Apple’s event. Holding up an iPhone to see how big a couch looks in your room is interesting, but only about as interesting as using your phone to find the nearest Starbucks.

    Apple has spent a lot of time working on interior spaces in Apple Maps. They’ve also spent a ton of time working on sensors in the phone for positioning inside buildings. This is all building to an AR navigation space inside public buildings and private buildings in which owners share their 3D plans. But what if hundreds of millions of mobile devices could create these 3D worlds automatically as they go about their business helping users find that Starbucks?

    The future is so bright with this scanner, though. It helps Apple and developers get familiar with what LiDAR can do for AR applications. This is critically important on the hardware side because Apple Glass, no matter how little is known about it, is the future for AR. The same goes for Google Glass: the eventual consumer product (ignoring the junk that the first Google Glass was) of these wearable AR devices will change the world, not so much by showing you an arrow as you navigate to the Starbucks, but by giving you insight into smart buildings and all the IoT devices around you.

    The inevitable outcome is in the maintenance of smart buildings

    Digital Twins are valuable when they link data feeds to a 3D world that can be interrogated. But the real value comes when those 3D worlds can be leveraged through Augmented Reality to give owners, maintenance workers, planners, engineers, and tenants the information they need to service their buildings and improve the quality of building maintenance. The best-built LEED building is only as good as the ongoing maintenance put into it.

    The iPhone 12 Pro and the iPad Pro that Apple released this year both have LiDAR to improve photography and rudimentary AR, but the experience gained from seeing consumer LiDAR used in millions of devices will bring great strides toward making these Apple/Google Glass devices truly usable in the real world. I’m still waiting to get my iPhone 12, but my wife’s arrived today. I’m looking forward to seeing what the LiDAR can do.

  • Developing a Method to Discover Assets Inside Digital Twins

    On Monday I had a bit of a tweetstorm to get some thoughts on paper.

    In there I laid out what I thought addressing inside a building should look like. A couple of responses came back with “why” or “this isn’t an issue,” but the important thing here is that smart buildings need to be able to route people: not only visitors to offices for business, but workers to IoT devices to act on issues that might occur (like a water valve leaking in a utility closet). Sure, one could just pull out an as-built drawing and navigate, or in the case of visiting a company, ask the guard at the front door, but if things such as Apple Glass and Google Glass start becoming real, we’ll need a true addressing system to get people where they need to be.

    Apple and Google are working this out themselves inside their ecosystems but there needs to be an open standard that people can use inside their applications to share data. I mentioned Placekey as a good starting point with their what@where.

    The what is an address and POI encoding, and the where is based on Uber’s H3 system. As great as all this is, it doesn’t help us figure out where the leaky valve is in the utility closet. It is much better than other systems and a great way to get close. I’ve not seen any way to create extensions to Placekey to do this, but we’ll punt the linking problem for now.
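
    A Placekey is just a string with those two parts separated by an @, so pulling them apart is trivial. A minimal sketch (the sample key below is illustrative rather than a registered Placekey, and this is not the official placekey library):

    ```python
    # Sketch: splitting a Placekey into its What (address-POI) and Where
    # (H3-derived) parts. Sample key is illustrative, not a real Placekey.

    def split_placekey(placekey: str):
        """Return (what, where) for a 'what@where' Placekey string.

        Keys with no What part are written as '@where'; in that case
        what is returned as None.
        """
        if "@" not in placekey:
            raise ValueError("not a what@where Placekey")
        what, where = placekey.split("@", 1)
        return (what or None, where)

    print(split_placekey("223-227@5vg-82n-kzz"))  # ('223-227', '5vg-82n-kzz')
    ```

    The Where part alone gets you to a hexagonal cell on the map; it is the What part that distinguishes one POI from another inside that cell.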

    The other problem with addressing inside a building is that the digital twin might not be in any projection that our maps understand. So we’ll need to create a custom grid to figure out where the IoT devices and other interesting features are located. But there seems to be a standard being created that solves just this problem: UBID.

    UBID builds on the open-source grid reference system and is essentially the north axis-aligned “bounding box” of the building’s footprint represented as a centroid along with four cardinal extents.

    I really like this. It might even compete with Placekey, but that’s not my battle; I’m more concerned with buildings in this use case. There is a lot to UBID to digest, and I encourage you to read the GitHub repository to learn more.
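
    The centroid-plus-extents shape UBID describes is easy to picture in code. A simplified sketch, with the centroid as plain latitude/longitude and the four cardinal extents as degrees rather than the grid cells a real UBID encodes:

    ```python
    # Sketch: recovering a north-axis-aligned bounding box from a centroid
    # plus four cardinal extents, the shape UBID describes. Real UBID codes
    # express the centroid as a grid reference and the extents in grid
    # cells; plain degrees are used here only for illustration.

    def bounding_box(lat, lon, north, east, south, west):
        """Return (min_lat, min_lon, max_lat, max_lon) around a centroid."""
        return (lat - south, lon - west, lat + north, lon + east)

    # A footprint centered in Scottsdale, extending ~0.001 deg each way.
    print(bounding_box(33.49, -111.92, 0.001, 0.002, 0.001, 0.002))
    ```

    The appeal is that the whole footprint compresses into one short string, rather than a full polygon that has to be shipped around with every record.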

    But if we can link these grids of buildings with a Placekey, we have a superb method of navigating to a building POI and then drilling down to that location using all the great work that companies like Pixel8 are doing. But all that navigation stuff is not my battle; I just want the location of an IoT sensor in a digital twin that may or may not be in a projection we can use.

    Working toward that link, a unique grid of a digital twin tied to a Placekey would solve the problem of figuring out where an asset inside a building is and what is going on at that location. The ontologies to link this could open up whole new methods of interrogating IoT devices and so much more. E911 and similar systems could greatly benefit from this as well.
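
    As a thought experiment, such a link might look like a Placekey joined to a local grid cell inside the twin. The separator and path scheme below are entirely made up; nothing like this exists in Placekey today:

    ```python
    # Sketch: a hypothetical combined identifier joining a building's
    # Placekey with a local grid position inside the digital twin. The
    # '#floor-N/cell-X' scheme is invented for illustration only.

    def asset_address(placekey: str, floor: int, cell: str) -> str:
        """Join a building Placekey with a local floor/cell reference."""
        return f"{placekey}#floor-{floor}/cell-{cell}"

    # The leaky valve in the third-floor utility closet might resolve to:
    print(asset_address("223-227@5vg-82n-kzz", 3, "B7"))
    # 223-227@5vg-82n-kzz#floor-3/cell-B7
    ```

    The Placekey half gets a responder to the right building; everything after the separator is meaningful only to the building’s own twin, which keeps the local grid free of any map projection.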

  • IoT is not About Hardware

    When you think about IoT, you think about little devices everywhere doing their own thing. From Nest thermostats and Ring doorbells to Honeywell environmental controls and Thales biometrics, you imagine hardware. Sure, there is the “I” part of IoT that conveys some sort of digital aspect, but we imagine the “things” part. The simple truth of IoT is that the hardware is a commodity; the true power of IoT is in the “I” part, the messaging.

    IoT messages can inundate us but they are the true power of these sensors

    IoT messages usually travel over HTTP, WebSockets, or MQTT, or some derivative of them. MQTT is the one I’m always most interested in, but anything works, which is what is so great about IoT as a service. MQTT is leveraged heavily by AWS IoT and Azure IoT, and both services work so well at messaging that you can use either in place of something like RabbitMQ (which my daughter loves because of the rabbit icon). I could write a whole post on MQTT, but we’ll leave that for another day.

    IoT itself is built upon this messaging. Individual hardware devices have UIDs (unique identifiers) that by their very nature make them uniquely addressable. The packets of information sent back and forth between the device and the host are short messages that require no human interaction.
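
    A message like that can be tiny: a device UID, a timestamp, and a small payload. A sketch of such an envelope in JSON (the field names are illustrative, not any particular platform’s schema):

    ```python
    # Sketch: the kind of short, self-describing message an IoT device
    # might publish. Field names are illustrative, not a standard.
    import json
    import time
    import uuid

    def make_message(device_id: str, kind: str, payload: dict) -> str:
        """Wrap a reading in a small JSON envelope keyed by the device UID."""
        return json.dumps({
            "device_id": device_id,   # the device's unique identifier
            "kind": kind,             # e.g. "telemetry" or "alert"
            "ts": int(time.time()),   # when the reading was taken
            "payload": payload,       # the actual reading
        })

    msg = make_message(str(uuid.uuid4()), "alert", {"valve_state": "leaking"})
    print(msg)
    ```

    Nothing in that envelope assumes the sender is hardware, which is exactly the point: an email gateway or a data query can emit the same shape of message as a valve sensor.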

    The best part of this is that you don’t need hardware for IoT. Everything that you want to interact with should be an IoT message, no matter if it is an email, a data query, or a text message. Looking at IoT as more than just hardware opens connectivity opportunities that were much harder in the past.

    Digital twins require connectivity to work. A digital twin without messaging is just a hollow shell, it might as well be a PDF or a JPG. But connecting all the infrastructure of the real world up to a digital twin replicates the real world in a virtual environment. Networks collect data and store it in databases all over the place; sometimes these are SQL-based such as Postgres or Oracle, and other times they are as simple as SQLite or flat-file text files. But data should be treated as messages back and forth between clients.

    All I see is IoT messages

    When I look at the world, I see messaging opportunities: ways to connect devices to each other. Seeing the world this way opens new ways to bring data into Digital Twins (think of GIS services as IoT devices) and much easier ways to get more out of your digital investments.

  • Cityzenith Smart World Professional IoT

    I know, I used the buzzword IoT in my title above.  Stay with me though!  We think about IoT as a link between a physical device (your Nest thermostat, for example) and the digital world (your Nest app on your iPhone), but it is so much more.  While we have been working with many IoT providers such as Current by GE, we’ve also fundamentally changed how our backend APIs work to embrace this messaging and communication platform.

    Using AWS IoT services, everything that happens in our backend API can alert our front-end apps to its status.  This ties very nicely into our Unity front-end Smart World Professional application because it can tell you exactly what is happening to your data.  Uploading a detailed Revit model?  The conversion to glTF occurs in the background, but you know exactly where the process is and exactly what is going on.  Those throbber graphics web apps display while they wait for a response from the API are worthless.  Is the conversion process two-thirds of the way through or just 10%?  Makes a big difference, don’t you think?
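
    The pattern is simple: each step of the conversion publishes a small status message instead of leaving the client guessing. A sketch with a stand-in publish() that collects messages in a list where a real app would hand them to an MQTT or AWS IoT client (the topic layout and field names are my own, not an AWS convention):

    ```python
    # Sketch: reporting job progress as discrete messages instead of a
    # throbber. publish() is a stand-in for a real MQTT/AWS IoT publish.
    import json

    def publish(topic: str, message: str, sink: list) -> None:
        # Stand-in: a real app would hand this to an MQTT client instead.
        sink.append((topic, message))

    def report_progress(job_id: str, step: int, total: int, sink: list) -> None:
        """Publish the percent complete for one step of a job."""
        percent = round(100 * step / total)
        publish(f"jobs/{job_id}/status",
                json.dumps({"job": job_id, "percent": percent}),
                sink)

    updates = []
    for step in range(1, 4):
        report_progress("revit-to-gltf", step, 3, updates)
    print(updates[-1])  # the final message reports 100 percent
    ```

    Because the progress lives in messages on a topic rather than in one app’s UI, any subscriber, ours or someone else’s, can render the same progress bar.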

    Where this really starts to matter is our analytics engine, Mapalyze.  If I’m running a line-of-sight analysis for a project in downtown Chicago, there is a ton going on, from the 3D models of all the buildings to the trees, cars, and everything else that can affect what you can and can’t see.  Or detailed climate analysis, where there are so many variables, from the sun to weather (wind, temperature, rain) to human impacts, that these models can take a very long time to run.  By building the AWS IoT platform into our backend, we can provide updates on the status of any app, not just ours.  So if you want to call Smart World Professional Mapalyze from within Grasshopper or QGIS, you won’t get a black box.

    In the end, what this means is that Smart World Professional is “just another IoT device” that you will be able to bring into your own workflows.  That’s really how this is all supposed to work, isn’t it?  For those who want to get deeper into how we’re doing this, read up on MQTT; there is a standard under here that everyone can work with even if you’re not on the AWS platform.