Rhize Up

Rhize Up w/ David Schultz: Advancing the UNS with the Uber Broker (feat. Rick Bullotta)

David Schultz Season 1 Episode 4

David: All right, let’s go ahead and get this fired off. Good morning, good afternoon, good evening. Welcome to the Rhize Up podcast. I am David Schultz, and today, we’re going to continue our conversation around the Unified Namespace.

Today, we’re going to be joined by a person who really needs no introduction: Rick Bullotta. We want to talk a little bit about what he has termed the “Uber Broker.” So, without further ado, Rick, please make an introduction for people in the audience who might not be familiar with you.

Rick: Happy to. And, good to be with you. Yeah, I’ll give you the express version. My background has pretty much been all around every facet of manufacturing, whether working in operations, systems integration, engineering support to operations, or software companies in the space.

So, that’s the world I live in. Within the industrial sector, I started a couple of companies: Lighthammer, which became part of SAP, and ThingWorx, which became part of PTC. I’ve also worked at Microsoft and at Wonderware/AVEVA, in sales, marketing and technical roles.

It’s given me an interesting perspective on what works and what doesn’t in the space. As you know, I’ve never been one to be short on opinions. Hopefully, we’ll get to the bottom of what UNS is today.

David: Yeah, absolutely. And I think it’s defining that UNS, as well as what an Uber Broker brings to that equation.

At some point, we talk about there always being nodes within that ecosystem, and those nodes are going to have various capabilities. So, where do you move things around? At some point, we’re going to have to do something with the data. We’re going to have to have solutions. What do we want that to look like?

And it’s purely going to be a function of the problem that we’re trying to solve. There are a lot of different ways to go about skinning the cat. What’s important to me is that this is not your first barbecue with manufacturing and manufacturing data, trying to solve some very large challenges, especially at the time.

You mentioned a couple of products that you’ve been involved in, the first one being Lighthammer. Of course, the second one is ThingWorx at PTC. So, let’s start with Lighthammer. Can you tell us a little bit about what went into that product? What was the problem you were trying to solve? Let’s really get into the weeds of some of the design considerations that you made with Lighthammer.

THE INFLUENCE OF LIGHTHAMMER

Rick: For sure. So, let’s put a time frame on it. This was around 1998 when we launched Lighthammer; I had just finished my second stint at Wonderware, in sales and product management.

And if you look at SCADA and HMI systems at the time, they were abstracting away all the devices, controllers and things like that to provide a unified visualization platform. You could have some logic in there, event-driven at the time. Right? Even most of the SCADA systems were event-driven then.

But the problem that we saw was, all right, this is great for abstracting away all the devices. And then you look around at a plant, and you’ve got multiple HMI/SCADA systems, historians, SQL databases, and systems being used for MES, lab applications, quality, ERP, you name it. It’s a problem as old as manufacturing technology.

It didn’t seem like anyone was addressing the need to bring that all together. So, the first epiphany was the importance of abstraction. If I’m dealing with a stream of time series data coming out of OSIsoft PI, a bunch of timestamped values, or events coming out of a database or whatever, why should they be different? Why should I, as a consumer, have to deal with those in a proprietary way?

The first step was the idea of connectors that could talk to these things and provide that data in a unified format. At the time, it was XML-based. The next step was to provide a common API to let you query, consume, and interact with that data.

The first goal was something we called Lighthammer Illuminator. Illuminator was all about visualization. At the time, if you wanted to see some historian data along with some LIMS data or some other operational data, you know, it’s the swivel chair approach, right? You go over to one app and another app.

I remember one of the first demos. It was a display with data from about 16 different sources on the same webpage. Coincidentally, this was about the same time that web technologies were starting to get some traction. So, a browser was now an accepted way to interact with your operational data. So we were building, you know, REST APIs and XML stuff.

At the time, the only real option for rich visualization that would run in many different places was Java applets. It sounds so dated now, but it’s really all we had then. I joke all the time that one fundamental flaw of the web at that point was that you couldn’t draw a circle. You could do squares and, you know, text and everything.

To represent data, we needed a rich and interactive user experience. So, we built a bunch of Java applets and a back end that let you interact with it through web APIs, and that’s what really got a lot of traction right out of the gate just for remote visualization, for the ability to see data from multiple sources. 

Then, we kind of had the next light bulb moment. All right, we already have APIs that our visualization clients are using. The logical next step is to make that data available to other systems. That’s when we started getting into ERP integration and those kinds of application areas. We built a visual flowchart environment, almost Node-RED-like, called Xacute: a drag-and-drop environment for building sequential logic, data transformations, and all that kind of stuff.

With the combination of those two, customers did some absolutely amazing stuff with integrating into adaptive planning. Hey, based on material availability, energy, and data from our historians, we’re going to readjust the schedule. Just some interesting things. We got on SAP’s radar because some of our largest customers were their largest customers, and in 2005, we became part of SAP, and that became SAP MII, Manufacturing Integration and Intelligence. Interestingly, that product’s going to be sunsetted, I think, in about 2 or 3 years. So, it’s had a good run. The big takeaways were unified access, consistent data representation, web APIs and extensibility.

You had toolkits, and you could build your own connectors. That was the approach that we went for.

David: Yeah. That’s amazing, because it sounds like the problem that you were trying to solve in 1998 continues to be a problem that we’re trying to solve now. Maybe the names are a little bit different, but those same problems still persist.

The challenge is still there. How do we do that using modern architecture? Certainly, from the mid-2000s to now, the technology has been revolutionary, and it continues.

Rick: One other comment, David. The other reality that we had there was when you have data distributed in lots of different systems, sometimes it’s not acceptable or even optimal to bring it all into something like a data lake or a common database. So, the other light bulb moment was looking at a lot of dashboard solutions, and they’re very one-way, right?

It tends to be very difficult to drill down between data sources. That’s something where we found customers doing some super interesting stuff around quality. We have these events occur over here. I want to see context from what was running at the time, what materials I was using and the process conditions at that time.

That was extremely hard to do at that time with the tools we had for visualization. So the idea of the user experience being much more interactive, being able to drill down across data sources, was another key capability that came out of that.

David: Yeah. I mean, this is what’s amazing about how ahead of its time Lighthammer happened to be. It seems like, “Oh, wow, this is amazing. Look at what we can do.” And it’s like, “Yeah, we were actually doing that a long time ago.” The technologies were a little bit different, but it still persists. I was always fascinated by it. It’s just like fashion: what’s old is new again.

Within digital transformation, a lot of the work that I do involves trying to connect to all these different disparate assets. Lighthammer was certainly connecting all these different data sources, but now we’re trying to get more into the machines. That’s where a lot of my time is focused: how to get data out of these things. And my understanding of ThingWorx is that it was designed to do something similar.

Talk a little bit about ThingWorx now. Why was it created, and what was the problem that it was trying to solve? How is it used? And maybe some of the highlights or key takeaway moments from that experience?

THE INFLUENCE OF THINGWORX

Rick: Sure. This was five years later, really ten years from when the Lighthammer products were first started. So we had a clean sheet of paper and some better technologies to work with. But that’s actually secondary to the fact that, at the time, the term Internet of Things was starting to get a lot of buzz.

You had companies like Cisco and IBM advertising on national TV, for the Super Bowl, right? Talking about a smarter planet and the “Internet of Everything” and so on. The reality was that most of those companies weren’t actually doing much or delivering much product, but they did create a lot of great marketing pull for what’s possible in connected products, connected services, and connected spaces. I also did some research work at SAP on what we called real-world awareness: what happens when you connect the physical world to manufacturing, public safety, retail, healthcare, a little bit of everything?

And once again, the takeaway was that it’s just too hard. Everything was a custom one-off, so we had a perhaps too-ambitious goal of building a platform. I tried to get that going within SAP, but timing-wise it just wasn’t going to happen. So, with ThingWorx, we tried to serve two masters. There’s the Internet of Things: a connected fleet of homogeneous devices, typically electric meters or blood analyzers or whatever, where you’re managing those devices remotely. Then, there’s the industrial IoT use case, which is more of the bread and butter we knew from the Lighthammer days.

In hindsight, we probably should have forked that at some point and had two variants, with 80-90% of the code remaining the same. Nevertheless, that was the goal: to serve those two communities.

Along the way, I learned a lot about our shortcomings in ThingWorx regarding things like device management and device provisioning for that IoT use case. 

We also built a much, much richer user experience at the time. Better development environment, better visualization tools. And then we said, “Well, we’re going to need a scripting language. We’ve already abstracted away all the data. We already have a rich twin model,” a kind of concept of types of things and instances of those things.

Then we said, “There’s a new generation of folks coming into this world where something like JavaScript and HTML is very natural and native to them.” So, we built our scripting engine around JavaScript and exposed all the twin models and all that stuff into it. I think that was rocket fuel for its adoption, both on the IIoT side and the general IoT side.

Once again, it’s this whole concept that we want to normalize and abstract away data. I think the big stretch from Lighthammer was introducing digital twin models. More than just data, they had data events. They had services that they could expose. It was actually a very sophisticated object model. It had a graph database under the hood to let you model the relationships between things. Integrated search was a novel capability at the time. It was inherently event-driven because you would attach scripts to events just like you do with any kind of web stuff. It actually did have an external Pub/Sub at that point. It’s since been taken out of the product.

I think another thing that customers found, more so in the IIoT space, was that it provided a common place for access control. If you’ve got these 20 different systems, it gave you a control plane for accessing those systems in a relatively secure way. Who’s allowed to get at what information, particularly when it starts to stretch beyond just people in your company? Your customers want to see some quality data. Your machine providers want to manage or optimize the performance of their filling machines or whatever. So, we put a lot of time into that aspect of it as well. And we also learned that things that are easy in a prototype need to be rethought dramatically if you’re going to deploy them at scale. Scale being thousands, tens of thousands, millions of things. Everything changes: the user experience, how you do stuff, the technical approaches. It was a really good learning exercise for a lot of those issues as well.

I think the big takeaway was trying to serve those two different distinct markets and bringing a coherent object model or twin model, and also considering the implications of both scale and distributed systems.

David: Yeah, when I first looked at ThingWorx, this would have been 2016 or 2017. A lot of the information I found was that it was more of an IoT than an IIoT platform. What I mean by that is that smart cities were prominent at that time, and that’s where I at least saw ThingWorx being used more. Certainly, there have been some acquisitions and some relationships there that I’m seeing more ThingWorx deployments within what I would call an IIoT, so within that industrial space. 

I would still characterize ThingWorx as an IIoT platform. And the intent is that we’re connecting to disparate systems. We’re bringing in the context and providing the visualization. It’s very powerful, and it does a lot of amazing things. But that also tends to make it a little complex. That said, you can solve some amazing problems with it. 

Rick: In retrospect, we introduced too many weird terms, right? Thing Shapes and Thing Templates. But for anyone who’s been a programmer, the concepts were very natural. It was types and classes, but there was a steep learning curve. Once you got it, it was an incredibly powerful twin model that you could program against, both event-driven and externally through APIs.

THE IMPORTANCE OF EXTENSIBILITY

Another thing that both Lighthammer and ThingWorx taught me, and something I emphasize whenever we talk Uber Broker or I talk to any vendor doing anything, is the importance of extensibility. You think you know everything your customers need. And that’s, of course, absolutely wrong. So think about how you can let your customers, your integration partners, your VAR partners, or OEMs add their own connectors, logic, and visualizations.

When I look at product architectures and approaches, there’s always an emphasis on open APIs for everything. Anything you can do from a configuration environment, you should be able to do through an API. And then APIs to get at the data and functionality, and then extensibility for what you can talk to and how you can visualize that. To me, that’s a must-have for any platform.

David: Yeah, absolutely. I tell people that I can solve a problem, but can I scale that solution, and more importantly, can I extend that solution? This is a perfect segue into the concept of the Unified Namespace, because that’s the challenge it tries to solve: having a mechanism where we can both scale and extend what we’re trying to do.

As I mentioned before we got going on the call, we’re trying to socialize a definition of the Unified Namespace. The one we’re using: an approach to event-driven architecture that uses Pub/Sub technology, with semantic data models published and subscribed through a defined topic namespace.

While that sounds very nebulous, it’s because it’s more of a concept or an idea. It’s an approach, not so much an architecture, but at some point you will build that Unified Namespace architecture using tools like ThingWorx or HighByte or Ignition. Given the definition I’ve just offered, how do you define the Unified Namespace?

DISCUSSING THE UNIFIED NAMESPACE DEFINITION

Rick: First of all, let’s give Walker and his team an enormous amount of credit for bringing this to the forefront. When you have the analysts, the bigger analysts, using the word, you know there’s some traction, right? For me, it’s a concept that was natural. I was like, “Of course, that’s how you do stuff,” because that’s kind of how I’d always approached it: the importance of discovery. What good is access to information if you can’t find out what information you have access to? I think perhaps the expansive view that I have goes beyond the accepted view. A lot of people right now think of it as the current state of the business, a snapshot of the now.

I personally think that if you have that model, many of those same topics, data points, and events also have a historical exhaust. I think it should be the common consumption layer for both current and historical. It’s not just a Unified Namespace for the data, it’s also the Unified Namespace for the metadata. 

The other unified aspect that’s very important to me is that I should be able to know that each of the nodes in that topic namespace has a consistent data format. Yes, it’s fine if you have a bunch of weird data formats; you can live with that. But it’s a lot of work, a lot of unnecessary translation effort. We have to get to where the other U is a unified data format, with unified APIs to consume it beyond Pub/Sub, and then unified access control.
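
To make the unified-data-format point concrete, here is a minimal sketch of what a shared payload envelope might look like. The field names are illustrative assumptions, not a published standard:

```python
import json
import time

# Hypothetical unified envelope: every publisher in the namespace emits the
# same JSON shape, so consumers never need per-source translation.
def make_payload(value, units=None, quality="GOOD"):
    return json.dumps({
        "value": value,
        "timestamp_ms": int(time.time() * 1000),
        "quality": quality,   # e.g. GOOD / BAD / STALE
        "units": units,       # e.g. "degC"
    })

print(make_payload(72.4, units="degC"))
```

With an envelope like this, a consumer can subscribe anywhere in the namespace and parse every message the same way.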

I know that’s a bit of a stretch, but I had a discussion thread on LinkedIn the other day where we were talking about how standards can help with that. My vibe was that standards are helping a bit. However, some companies and users are also using the standard as a bounding box, not a starting point. To me, it should be the other way around: standards give us a common base of interoperability we can build upon, rather than a limit on our functionality.

When we get to the Uber Broker discussion, that’s the idea. Let’s go beyond what the foundational UNS definition is.

David: Yeah. I’m not familiar with the conversation, but my experience has been that the standards are often misunderstood. There’s a lack of understanding of what it’s saying in there. And we get pigeonholed into this idea of, “Well, no, according to that standard, we have to do this.” And the standard doesn’t say something like that. ISA-95 is one of them that I think is very misunderstood by practitioners within the market. But, you know, that’s a topic for another time.

Rick: I think that’s an important point because how that gets communicated to the end user base and the systems integrators and the implementers is pretty important, right?

I mean, I’ve been involved in both S88 and S95 as a committee member, and we introduced what we call weasel words. Weasel words were for when you couldn’t get everybody to agree on something: you know, you use “shall” instead of “must.” You leave enough flexibility that it kills interoperability. It’s important because we have to set the expectation with the customer community that it’s not magic, right? Think of them as best practices. That’s the way I look at it. It doesn’t just work.

David: Yeah. To me it’s take what works and leave the rest. So if there’s something that doesn’t fit, don’t die on the hill of, “No. We have to conform to it.” Do what makes sense. There’s always going to be a unique way you’re going about doing things. So like I said, don’t die on that hill. 

Rick: And don’t stop at the border of the standard. Press on. 

David: Absolutely. Totally agree with that.

So, going back to this UNS, we’ve talked about Pub/Sub. One of the most common brokering technologies, or transport/application layers, is MQTT, and of course its companion, Sparkplug B. I know you’ve been critical of MQTT and Sparkplug, and I think it would be important, from the Pub/Sub part of the UNS, to talk a little bit about that. So, tell us about some of the MQTT and Sparkplug B concerns that you have.

A CRITICAL LOOK AT MQTT AND SPARKPLUG

Rick: Yeah. And let’s preface it with this: I use MQTT on a daily basis; my lab is all connected up through MQTT. So, I’m a fan. What Arlen and Andy and the team implemented is fantastic. I think where things fall apart is when we try and make a technology do things that it really wasn’t intended to do.

First of all, at its core, it’s an extremely powerful way to implement a Pub/Sub environment. There are others, of course; there’s AMQP and many more. But for our domain, what’s the T? Telemetry, right? It’s designed for this kind of data connectivity. I think where something got lost, in terms of the way we applied it to UNS, is, number one, discoverability. That’s probably the single biggest problem right now. There is no intrinsic way to connect to a broker and ask, what’s your topic structure? So, that needs to live somewhere else. You need some other API, some other connection. To me, that’s the foundational piece.
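
Since MQTT offers no intrinsic “list your topics” call, one common workaround is to subscribe to a wildcard and build a catalog passively. A minimal sketch using the paho-mqtt client; the broker address is an assumption, and only topics that actually publish (or have retained messages) will ever show up:

```python
import paho.mqtt.client as mqtt

# Passive discovery: MQTT has no "list topics" API, so we build a catalog
# by observing traffic on a wildcard subscription.
topic_catalog = set()

def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe("#")  # observe everything; scope this down in production

def on_message(client, userdata, msg):
    if msg.topic not in topic_catalog:
        topic_catalog.add(msg.topic)
        print(f"discovered: {msg.topic}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker address
client.loop_forever()
```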

They’re opaque payloads, right? There’s no accepted representation of data in there. There have been efforts to fix that, whether it’s OPC UA over MQTT, Sparkplug, generic JSON, countless others, but that’s a problem that needs to be solved. So we need to be able to discover the namespace and find out what its formats are going to be. We need to be able to decorate those topics with metadata; a bare topic name is like a tag name in a SCADA system, it can only tell you so much, right? Why can’t we have description, range, and entity type? Today we’ve got things in the MQTT protocol that let us send extra stuff along with messages, but there’s no place to easily persist that without topic bloat and all that kind of stuff.

I’d personally love to see the ability to attach durable metadata to MQTT topics, as well as APIs to let you query the topic namespace. There are also things that could be done better that Sparkplug tried to address, like publishing multiple values in a single message.
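
MQTT 5 user properties are one protocol feature for sending that “extra stuff” along with messages. A sketch of attaching per-message metadata with paho-mqtt; the property names and topic are illustrative, and note that, as Rick says, the broker will not persist or index them:

```python
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

# MQTT 5 user properties ride along with each message; nothing broker-side
# stores them, which is exactly the persistence gap discussed above.
props = Properties(PacketTypes.PUBLISH)
props.UserProperty = [
    ("description", "Line 3 filler barrel temperature"),
    ("units", "degC"),
    ("range", "0..250"),
]

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, protocol=mqtt.MQTTv5)
client.connect("localhost", 1883)  # assumed broker address
client.publish("site/line3/filler/temp", "72.4", qos=1, properties=props)
```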

I’d love to see the MQTT committee consider that capability. For the most part it’s good at what it does. I think we’re trying to push it a bit too far. Then the question becomes, where does that go? We’ll talk about that on the Uber Broker idea. 

On the Sparkplug side again, a very well-intended effort focused initially on SCADA to device integration and solving that problem and being efficient in how we move data back and forth – binary formats that are very compressed and efficient, being able to publish metadata and interrogate metadata, which is super powerful.

Again, somewhere along the way, I think there are improvements to the implementation that could have been made and that need to be made. So, I’m trying to provide as much input as I can into that. I encourage the user community to let their needs be known as well. The team that’s working on defining the next-gen Sparkplug is very receptive. They’re great people. We just need to have an open dialog about what are the most important things that will bring the most value to the collective community.

My two big ones there: I think we have to get to a point where JSON is an accepted format for Sparkplug. And then one, going back to ThingWorx, is the idea that services or commands or functions or methods, whatever we’re going to call them, are another thing that needs to live in a Unified Namespace.

An example: a service that lets me query the history for a machine, or run a calibration sequence and get some results back. There are some minimal command capabilities in Sparkplug, but I believe we need to build that out to support request-response with fully typed metadata. And I think there’s a lot of value, at the MQTT level or above it, in supporting a request-response mechanism as well. Let’s solve that problem, whether it’s over MQTT or some sideband alongside it.
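
MQTT 5 already carries the primitives for a basic request-response pattern: a response topic and correlation data. A minimal requester-side sketch; the topic names and the calibration service are assumptions, and the responder is expected to echo the correlation data back on the reply topic:

```python
import uuid
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

# MQTT 5 request/response: the requester names a reply topic and a
# correlation id; the responder echoes the id on that topic.
reply_topic = f"reply/{uuid.uuid4()}"

def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe(reply_topic)
    props = Properties(PacketTypes.PUBLISH)
    props.ResponseTopic = reply_topic
    props.CorrelationData = uuid.uuid4().bytes
    client.publish("site/line3/filler/cmd/calibrate", "start",
                   qos=1, properties=props)

def on_message(client, userdata, msg):
    corr = getattr(msg.properties, "CorrelationData", b"")
    print(f"response {corr.hex()}: {msg.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, protocol=mqtt.MQTTv5)
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker address
client.loop_forever()
```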

I will say again, the Sparkplug team has been super receptive. I think there are just a couple of little things to push it over the top, and if they make it into the next gen of Sparkplug, that will be extremely helpful: the metadata discovery, the efficiency of that, how the birth process works. There’s great work being done to advance that. So overall, skeptically optimistic is the word I’d use.

MQTT moves at a different pace. It’s a more traditional standards body, and I don’t think there’s enough end-user involvement; there doesn’t appear to be enough feedback going to that team. But I also think it doesn’t all need to happen at that layer. We can solve those problems in a layer above MQTT. And this is where the idea of not being constrained by what the standard does comes in: let’s build standardized layers, or standardized broker capabilities that vendors can implement, to backfill those deficiencies. I think there’s a very real opportunity to do that fairly easily.

David: Yeah, absolutely. I’ve been working with Sparkplug for 4 or 5 years now, and I really contain it, if you will, within the overall architecture. It’s very good for plant-floor data. Look at the companies that are supporting it: it came out of Inductive Automation with Ignition, and Canary. It’s really good for telemetry and SCADA data, because it does SCADA-related activities. As you get into larger areas, flat MQTT tends to work better. So, that’s where I am. I mean, we’re just trying to get data to move around.

Rick: And as a great example, there’s another critical feature I hope gets in there. Sparkplug is very opaque. You’ve got a bunch of metrics; it could be one to M, and M could be very big. But if I’m interested in subscribing to a machine stoppage, or a temperature alert, or the speed of the machine, I just can’t do that today. To me, that’s a fundamental flaw, and it’s very easily addressed with a standardized approach: a Sparkplug-compliant broker should optionally support metric expansion, where it takes those messages in and blows the metrics out into a pattern-based namespace. I contributed code to HiveMQ as an extension to do exactly that. It’s very doable. And maybe that’s the source of my frustration overall: I know these things are so doable. That’s the frustration. They’re so doable. Let’s just get it done.
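
A sketch of the metric-expansion idea as a standalone bridge, rather than the broker extension Rick describes: decode each Sparkplug DDATA payload and republish every metric on its own plain-MQTT topic. It assumes the Eclipse Tahu protobuf bindings, and the output namespace pattern is illustrative:

```python
import paho.mqtt.client as mqtt
import sparkplug_b_pb2  # Eclipse Tahu bindings; import path depends on your install

# Metric expansion: decode Sparkplug DDATA and republish each metric on its
# own topic so consumers can subscribe to single values.
def on_message(client, userdata, msg):
    payload = sparkplug_b_pb2.Payload()
    payload.ParseFromString(msg.payload)
    parts = msg.topic.split("/")  # spBv1.0/<group>/DDATA/<edge_node>[/<device>]
    base = "/".join(["uns", parts[1], parts[3]] + parts[4:5])
    for metric in payload.metrics:
        which = metric.WhichOneof("value")
        if which is None or not metric.name:
            continue  # this sketch skips alias-only or valueless metrics
        client.publish(f"{base}/{metric.name}", str(getattr(metric, which)),
                       retain=True)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker address
client.subscribe("spBv1.0/+/DDATA/#")
client.loop_forever()
```

A broker-native implementation, like the HiveMQ extension mentioned above, would do the same expansion without the extra network hop.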

David: Yeah. My observation around this has been that we could certainly build it into the standard, but at what point should it be something unique to, say, HiveMQ or EMQX or even Cirrus Link? They have Chariot SCADA, their broker. It’s more of a feature that’s not specific to the standard; it’s something you do to expose some of this information.

I think a lot of what started this whole UNS conversation is the first question that people almost invariably ask: Where do I store my data? How do I access historical data? And I always said, well, maybe that’s just a function of the broker, or a function of something sitting very close to it.

Let’s fire off the Uber Broker conversation, unless you have any final comments on MQTT before we get there.

Rick: Sure. I mean, look at it: Litmus now has an embedded MQTT broker. NATS. HighByte has one embedded. That’s table stakes now, right? It’s what we do with it, and what we do around it. To your point, all these companies are doing some very innovative things around that.

It would be great if we could agree on the APIs, the approach, or the data formats so there’s interoperability at those levels. If it doesn’t happen, that’s an opportunity loss. But let’s deliver customer value—let’s deliver the stuff people need.

David: Absolutely. As we’ve talked about the Uber Broker, it gets to some of the issues of wanting a common endpoint. The UNS is not just a broker; it’s an event-driven architecture, so there are going to be other things there. If we think of it purely as an MQTT broker that’s 3.1.1- and 5.0-compliant and supports Sparkplug, the UNS is more than that, and conventional brokering technology is limited in some of the things you’re going to need to do. It sounds like that’s what the Uber Broker is designed to solve.

Let’s just get right into it. What’s the Uber Broker? I think we have a good understanding of the problem we’re trying to solve. But let’s level-set, here’s the problem I saw and here’s how I’d go about fixing it.

WHAT IS AN “UBER BROKER?”

Rick: Yeah, let’s step up maybe one level. What do the Venn diagrams look like? What’s a UNS? What’s an IoT platform? What’s an Uber Broker? You know, an Uber Broker is just a term I came up with. And I did it specifically to say, let’s build on top of the brokers. Let’s not throw away the broker as a key piece of that. Quite frankly, it offers the opportunity for a new generation of players to become IoT platforms. 

For me, sometimes it’s easier to speak in code. It’s easier for me to build a working prototype and show the concepts. So, that’s what I did. I whipped up something that intercepted MQTT and Sparkplug messages and expanded them. You could automatically historize into a time series database. It provided a query API for the topics and their metadata, and a query mechanism for the historical data behind each of those values. It’s not intended to be production level, or even a product, but it’s an approach that helps me communicate ideas.
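
A stripped-down version of that historize-and-query loop might look like the following; the SQLite schema and the catch-all subscription are illustrative choices, not how the original prototype was built:

```python
import sqlite3
import time
import paho.mqtt.client as mqtt

# Historize everything flowing through the broker and offer a read-back query.
db = sqlite3.connect("uns_history.db")
db.execute("CREATE TABLE IF NOT EXISTS history (topic TEXT, ts_ms INTEGER, value TEXT)")

def on_message(client, userdata, msg):
    db.execute("INSERT INTO history VALUES (?, ?, ?)",
               (msg.topic, int(time.time() * 1000),
                msg.payload.decode(errors="replace")))
    db.commit()

def query_history(topic, since_ms=0):
    """Warm/cold read-back for one node of the namespace."""
    return db.execute(
        "SELECT ts_ms, value FROM history WHERE topic = ? AND ts_ms >= ? ORDER BY ts_ms",
        (topic, since_ms)).fetchall()

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker address
client.subscribe("#")
client.loop_forever()
```

A query API layer (REST or otherwise) would then just wrap query_history.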

And ultimately, I would love to see the broker vendors, or anybody that wants to build a next-generation IoT company like Rhize, take it. I don’t care how it’s done; functionally, these are the capabilities these platforms need. That’s really the genesis. What other capabilities can we deliver to address the shortcomings we talked about, generically, in IIoT platforms, in Pub/Sub architectures, and in the things we’ve referred to as UNS?

David: Yeah. At the hub of our system, we use NATS, and it does have the ability to speak MQTT. But we also expose an API. We’ve talked a lot about needing access to functions or methods, certain things I can do to access that data. The UNS is the current state, so there’s a broker that data is now moving through; that’s the Pub/Sub transport mechanism. And there’s also this API with functions, methods, and the ability to query what’s referred to as the warm data and the cold data, so I can start retrieving things back. But when you build that into a full UNS, there’s also a standard ontology: this is what the data is going to look like. If I query this, I’m always going to get this type of information back.

Rick: A mantra I’ve always had is to make the easy stuff easy and the hard stuff possible. So, if I want to historize new values as they come in, why should I have to write a script for that? That’s insane. That should just be a checkbox, right? A declarative part of your object models and twin model. Just make it easy. A lot of what I’ve tried to do in those prototypes is exactly that: make the simple stuff declarative rather than code, but give you the headroom to do whatever you want. In that Uber Broker, you have a REST API for publishing, and a REST API for reading the most recent values and all the historical values.

Another debate is what topic namespaces should look like. Certainly, for a lot of folks, an ISA-95-based approach makes sense. For others, other models make sense. And (this is really geeking out) think of the Unix file system. You have symlinks, right? I can have the same file in multiple places, but it’s one file, one piece of data. Why can’t we have that too? Why can’t I have one hierarchy, or namespace, that fits well for my maintenance team, another one for operations, another one for the quality folks? We should be flexible in that as well.
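
A sketch of the symlink idea at the application layer: republish each canonical topic under the alias hierarchies each team prefers. The alias table is illustrative; a broker-native implementation would alias topics without duplicating messages:

```python
import paho.mqtt.client as mqtt

# Symlink-style namespaces: one canonical data point republished under
# per-team hierarchies (maintenance, quality, ...).
ALIASES = {
    "site/line3/filler/temp": [
        "maintenance/assets/filler-07/temp",
        "quality/line3/process-conditions/temp",
    ],
}

def on_message(client, userdata, msg):
    for alias in ALIASES.get(msg.topic, []):
        client.publish(alias, msg.payload, retain=msg.retain)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker address
for canonical in ALIASES:
    client.subscribe(canonical)
client.loop_forever()
```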

David: Yeah. I like symlinks in Linux because I make fewer typing mistakes. Sometimes you’ll have these paths where somebody must have gotten paid by the number of characters, so you just create a little symlink, and now I can get at it. It’s for dummies like me to be able to do certain things.

Let’s just go ahead and wrap it up. Any final thoughts? We’ve talked about a lot. It seems like, going all the way back to the beginning of ThingWorx, we’re still trying to solve a lot of the same types of problems: creating the semantic data, and being able to easily access data through this structure.

Going forward, what do you foresee on the horizon? We’re really getting deep into Industry 4.0, and companies are going through digital transformation. Where do you see this finally landing, or what do you think it’s going to be? Any ideas on what that might look like?

THE FUTURE OF INDUSTRIAL TECHNOLOGY

Rick: If I knew, all my investments would have done well, but I don’t know. I guess on the positive side, every indication, although there are border skirmishes, is that everyone wants to get to approximately the same place, which is super encouraging.

I just hope the customers don’t get stuck in the middle of a lack of progress. Standards, by the way, can be the most awful outcome for customers, because sometimes you end up with the lowest common denominator. The other thing is that standards take a while to be realized, and in the 3 to 5 years that slow-moving process takes to get something done, the underlying requirements have changed dramatically. The technologies at our disposal have changed dramatically. So, I’m naturally the poster child of “let’s go fast.” I would say that’s my hope. My call to action for everybody in the community is: let’s just move things forward, right? I kind of feel like we’ve been spinning in circles for a while. Ultimately, the number one voice is the end user. The people who write checks have the ultimate voice, right?

Other things I see, and I’m excited to see this: a new generation of folks coming into industrial technology and into manufacturing who have a bit more of a DIY comfort. They’re comfortable with a new set of technologies and tools. But there’s a cautionary tale; we also need to be thoughtful about what’s old is new again. The amount of reinventing the wheel that happens every single day is just mind-boggling to me. Let’s use well-established stuff where it exists.

I think I mentioned this in the Industry 4.0 Discord. Geoffrey Moore has this concept called “core and context.” How your company processes a purchase order is not a strategic advantage versus how someone else does, but the way you manage your supply chain may very well be. Generically, the idea is: for the stuff that’s generic, let’s have that built into the platforms, let vendors innovate around it, and then let customers, integrators, and OEMs really go nuts on top of that. And we have a generation of folks coming into the space who are well equipped to do that kind of stuff.

David: Yeah, I wholeheartedly agree. So, with that, Rick, thank you so much for your time. It’s always great talking with you, because for a lot of the things I think about, the challenges, I don’t want to reinvent the wheel. You’ve looked at and analyzed them before, and the guidance and input are very much appreciated.

So, thank you for spending some time with us here today. With that, thank you, everybody, and we’ll see you on the next episode of Rhize Up. Talk to you soon.
