My name is Kunal Katke, and I'm here with my coworker Nate today.
We'll discuss how to use IoT data in healthcare. To begin, I'll go over the quick session plan, which will focus on IoT in healthcare. We'll consider some of the issues you can encounter when you use IoMT, then we'll go through what we've done so far on the health cloud and data team to address those issues, and then we'll go over a few examples of how you can use some of the technologies we've created to turn IoMT data into a full-fledged solution.
To begin, what is IoT in healthcare, or IoMT as we refer to it? IoMT stands for Internet of Medical Things, so think of this as the subset of IoT devices that deal with medical data, patient information to be precise.
Many of you are familiar with gadgets like Fitbit and Apple Watch, wearables that come in a variety of shapes and sizes, that somebody might wear all the time and that are continuously monitoring them.
There's also an ambient device category, which includes sensors found in places like a hospital bed or a monitoring room.
They can collect medical readings or vital signs, or they report on the behavior of patients.
And there is a class of devices that may be ingested or implanted, such as ingestible pills and other similar devices.
So, what are a few of the most important scenarios?
We think of them as three pillars, the first of which is patient monitoring.
This is where sensors come into play, along with PROs, which stands for patient-reported outcomes: forms a patient fills out to contribute subjective input alongside the sensor data. This is useful in a lot of remote care scenarios, such as telehealth and chronic illness management.
Another pillar we've noticed is in research and life sciences, particularly in the area of clinical trials.
There's lab data, there's analytics, and so on.
Finally, there is a category of smart hospitals. We're not going to focus on that in this presentation, but I simply wanted to let folks know that it exists.
Another point worth making is that the use of IoMT devices in healthcare is continually increasing. The market is expected to reach 340 billion dollars by the end of 2025, making it an attractive and rapidly expanding space.
So, now that we've covered the basics, what are some of the obstacles to deploying an IoT solution?
So, if you're planning to build your own IoMT solution, there are a few things you should keep in mind.
The first is high-frequency data intake.
Not all devices are high frequency; some may only take a reading once a day, while many others can generate data every second or fraction of a second. So, if you wish to interface with that type of device, you'll need to account for it. You're also looking for a low-latency experience: you need to be able to consume the data and make it accessible as quickly as possible.
To make the data from the device relevant, you must additionally link it to a patient record.
There are also a lot of devices in the ecosystem, and there isn't a lot of standardization out there; each company reports data in its own way.
Interoperability is another important factor to consider, particularly in the field of healthcare data. We're aiming for FHIR here, which is an open healthcare standard that is gaining traction in the community and allows us to share data between hospital systems.
You must also consider privacy and security concerns.
How are you going to handle those, and what should you do if the data coming from the device has to be updated?
So, now that we've gone over the main scenarios, what are some of the difficulties you'll face, particularly if you're developing your own solution?
Late-arriving data is one of the most important. This often comes up when a wearable device relies on a gateway with an internet connection, such as a smartphone: the device might gather data for hours or even days at a time, and then all of that data is transferred in one burst once internet access is established, so you must be able to manage that burst of data and then associate it correctly with the appropriate time period.
Similarly, there is no standard for the order in which data is sent from the device. If it's been offline for a while, it could send the most recent data first and then go backwards in time, or it might start with the oldest data. There's no guaranteed sequence, so any solution you come up with has to take that into consideration.
There's also the issue of duplication to contend with. The device may opt to resend data because the connection broke while it was transmitting, or because it never received an acknowledgment for a piece of data that was in fact received successfully. So you need to be able to deal with duplicates, and with tying the data to patients and devices.
There is also a tension between latency and system load. As I said, your goal is to have the data available as soon as possible; however, if you go with near-real-time or real-time processing, that puts more load on the back end, which brings certain difficulties.
And, as I said earlier, different devices deliver or represent data streams in a variety of ways; there isn't much in the way of standardization.
And the last one is dealing with the explosion of resources brought on by FHIR. In FHIR, resources can be thought of as entities; when dealing with health measurements specific to a patient, they are called observations. So if I have a device that sends heart rate data every second and I'm creating a separate observation for each reading, that can cause issues when enumerating or retrieving the data: reviewing a single day's worth of data for a patient means paging through thousands of records.
So we've built numerous prototypes with partners we've worked with over the years, and I want to go over some of the things we've learnt so far and the concepts we used as a guide when we designed our solution.
One of the points I made on the previous slide was that single-value observations are insufficient; we don't simply want to save every reading on its own, we want the option of saving that heart rate data in one place. Thankfully, FHIR has SampledData, a time series format that lets us bucket collections of measurements. One of the things we do here is define an observation that may span an hour; then every single heart rate measurement taken during that hour is reflected inside that one observation.
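To make that concrete, here is a rough sketch of what such a bucketed FHIR Observation could look like; the identifiers, timestamps, and sample values are purely illustrative, and the exact fields the connector emits may differ:

    {
      "resourceType": "Observation",
      "status": "final",
      "code": {
        "coding": [ { "system": "http://loinc.org", "code": "8867-4", "display": "Heart rate" } ]
      },
      "subject": { "reference": "Patient/example-patient" },
      "device": { "reference": "Device/example-device" },
      "effectivePeriod": {
        "start": "2021-06-01T14:00:00Z",
        "end": "2021-06-01T15:00:00Z"
      },
      "valueSampledData": {
        "origin": { "value": 0, "unit": "count/min" },
        "period": 1000,
        "dimensions": 1,
        "data": "62 63 63 65 E 66 64"
      }
    }

The period is the gap between samples in milliseconds, and the data string holds one value per sample across the hour (the FHIR spec also allows markers such as E for samples that could not be read), so thousands of per-second measurements collapse into a single resource.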
One of the first places we looked when we were prototyping and thinking about how to approach this challenge was what we could put on the device itself: how do we standardize and have some fundamental piece of code running on the device to handle recording measurements and transmitting them to the cloud? That caused a number of issues, so we decided to make the solution device agnostic and, in essence, accept the data however it arrives.
Devices vary widely in capability: some have a lot of processing power and some have very little, and doing everything on the device introduces latency. Whoever builds the device or writes the gateway software may add a buffering interval on the gateway, and that's fine if they want to do it that way, but we didn't want a solution with built-in delay. Then there's code portability: each device runs its own OS or firmware version, which makes a one-size-fits-all solution on the device problematic. Another thing to consider is that FHIR is a constantly evolving standard.
If we were doing the conversion and collection into FHIR on the device itself, then whenever we wanted to upgrade the FHIR version we would have to ensure that all of the devices had been updated. If we do it in the cloud, we can decouple that a little bit.
Another interesting point: when we get into these SampledData time series values, the FHIR payloads themselves can get very large, and because some of these devices are so small, we wanted to keep the footprint on the device to a minimum.
So that's how we arrived at the two solutions we offer. One is the open-source IoMT FHIR Connector for Azure; the other is a PaaS option, the Azure IoT Connector for FHIR, which is now in public preview on Azure. With the IoMT FHIR Connector for Azure you have complete control over the code, so if you want to take the fundamental building blocks and modify or extend them to fit your unique case, you can. However, if you'd prefer a managed, one-click experience, I recommend checking out the Azure IoT Connector for FHIR.
So, what exactly does the connector do? We provide a high-frequency IoMT data processing endpoint. We handle mixed device payloads by first normalizing the data, which means taking data in a non-FHIR format and mapping it into a common intermediate format. The data is then grouped based on several attributes. After that we do the transform stage, which converts the grouped data into FHIR observations that are then stored in the target FHIR server.
So let's have a look at the steps one by one.
I'll go through everything in greater depth to illustrate what's going on.
So, in the normalization stage, we take these arbitrary device payloads; heart rate is abbreviated as HR here. Consider these as three separate devices that each send their own unique payload to the cloud. As the data comes in, it is normalized into a common model; in this basic example, the output is heart rate readings such as 59 beats per minute and 88 beats per minute.
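The exact shape of that intermediate model is internal to the connector, but as a rough sketch (the field names here are my own illustration, not the connector's exact schema), a normalized heart rate reading might look something like this:

    {
      "type": "heartrate",
      "deviceId": "device-123",
      "occurrenceTimeUtc": "2021-06-01T14:00:01Z",
      "properties": [
        { "name": "hr", "value": "59" }
      ]
    }

Whatever shape the three devices send, they all land in this one common form before grouping and FHIR conversion.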
Another crucial thing normalization can do is projection. Devices frequently send essentially one message with properties for multiple vital signs. That's great for the device, because you're generally sampling them at the same frequency and can transmit one message, but it's not so great for processing and storage; you generally don't want, say, your heart rate and your step count ending up in the same FHIR observation. The normalization step supports projection, so you can configure it to separate the values out, and then they can be saved as independent observations in FHIR when we get to that point.
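To illustrate the projection idea (the payload shape is invented for this sketch), a single combined message can be split into two normalized messages:

    Incoming device message:
    { "deviceId": "device-123", "ts": "2021-06-01T14:00:01Z", "hr": 78, "steps": 12 }

    After projection, two normalized messages:
    { "type": "heartrate", "deviceId": "device-123", "occurrenceTimeUtc": "2021-06-01T14:00:01Z", "properties": [ { "name": "hr", "value": "78" } ] }
    { "type": "stepcount", "deviceId": "device-123", "occurrenceTimeUtc": "2021-06-01T14:00:01Z", "properties": [ { "name": "steps", "value": "12" } ] }

Each of those can then be grouped and converted into its own FHIR observation.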
Next is the grouping stage. During the buffering step we sort the data according to type. Think of it this way: in the configuration someone defines something like a semantic type, such as heart rate, step count, or blood pressure, and the system groups the incoming data by that type.
This buffering window effectively controls your end-to-end latency. Because it's open source, you can alter it; in the public preview it is fixed at 15 minutes, although it will eventually be adjustable there as well. It determines how frequently data is egressed from the connector into FHIR. If you reduce it, you'll get data into FHIR faster, but you'll also potentially increase the load on your FHIR server. So, depending on your use case, you might want to keep it small if you're doing near-real-time analytics or real-time processing, or choose a larger value for analytics scenarios where it makes no difference exactly when the data becomes available on the FHIR server.
Finally, there's the FHIR conversion. We have a variety of options for you to configure here. If you're using the time series structure I mentioned, you define the period; for example, you may say that this heart rate observation should be bucketed by hour, so that as data is streamed in or uploaded, the connector can figure out which hourly time period the data belongs to and merge it into that observation. This is also where you get support for different codes: you define the LOINC, SNOMED, or other coding systems for annotating your data. The value type can also be configured. You may specify the SampledData type for the time series I described, or, if the measurements are infrequent, there are options for string and other value types, and we handle translating the data into the appropriate field for you.
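As a hedged sketch of that configuration, based on my reading of the open-source IoMT FHIR Connector template format (the exact property names may differ from the released schema), an hourly-bucketed heart rate mapping could look like this:

    {
      "templateType": "CodeValueFhir",
      "template": {
        "typeName": "heartrate",
        "periodInterval": 60,
        "value": {
          "valueType": "SampledData",
          "valueName": "hr",
          "defaultPeriod": 1000,
          "unit": "count/min"
        },
        "codes": [
          { "code": "8867-4", "system": "http://loinc.org", "display": "Heart rate" }
        ]
      }
    }

Here periodInterval 60 asks for hour-long observation buckets, the SampledData value type produces the time series format shown earlier, and the codes section is where the LOINC or SNOMED annotation is attached.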
We create observations keyed by the time they were observed as well as the time they were issued, using a deterministic identifier. That way, as data comes in, if data arrives later for an observation we've already stored, we can spot it and update the existing observation rather than create a duplicate. We also link the observations we create to both the patient and the device.
And, for those who are familiar with FHIR, we also support the concept of components, not just single-value observations. For example, you can have a blood pressure observation with components for diastolic and systolic blood pressure.
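In FHIR terms, that means one observation whose component array carries both readings; a minimal fragment (values invented) looks like this:

    "component": [
      {
        "code": { "coding": [ { "system": "http://loinc.org", "code": "8480-6", "display": "Systolic blood pressure" } ] },
        "valueQuantity": { "value": 120, "unit": "mmHg", "system": "http://unitsofmeasure.org", "code": "mm[Hg]" }
      },
      {
        "code": { "coding": [ { "system": "http://loinc.org", "code": "8462-4", "display": "Diastolic blood pressure" } ] },
        "valueQuantity": { "value": 80, "unit": "mmHg", "system": "http://unitsofmeasure.org", "code": "mm[Hg]" }
      }
    ]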
Here's a simple illustration of a FHIR observation that the connector would create. We include things like the internal ID and the resource type; in this case we have the device reference as well as the patient reference. Next comes the coding: for heart rate we use a LOINC code. Then there is the effective period, and the value here is simply a value quantity.
I'm going to walk you through some of the configuration steps now. We have the notion of a device mapping, which drives the normalization step. So, here's how it would look: I've got a sample payload from a device we're working with. A number of distinct signals are being captured, along with other attributes such as the date and time the measurement was recorded and the device ID.
When you use the system, it's your responsibility to configure a template so that the system can recognize and match this payload and map the crucial pieces. Here we have a template intended to map those pieces: the semantic type is heart rate, and we use this type match expression to identify it. In other words, if the expression evaluates to true, the message has been identified as a heart rate message, and after extraction you get a normalized value as a result.
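As a rough sketch of such a device mapping, in the style of the open-source IoMT FHIR Connector's JsonPathContent templates (the JSONPath expressions and property names are illustrative, matched to an assumed payload shape rather than the slide's exact content):

    {
      "templateType": "JsonPathContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@.heartRate)]",
        "deviceIdExpression": "$.deviceId",
        "timestampExpression": "$.measurementTime",
        "values": [
          { "valueName": "hr", "valueExpression": "$.heartRate", "required": true }
        ]
      }
    }

If the type match expression finds a heartRate property in the incoming message, the template applies and the device ID, timestamp, and value are pulled into the normalized model.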
On the FHIR mapping side, the input is a collection of normalized data like this. We combine it with a template that we match on the semantic type defined in the configuration, and then use some further attributes to express how the observation should be built. In this case we want period interval 0, meaning a single instance, so we just construct the observation as is, along with the code we wish to associate with it and how the values are extracted and represented in FHIR. And this is what you'd get as a result.
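A matching FHIR mapping, again sketched against the open-source template format rather than copied from the slide, might look like this:

    {
      "templateType": "CodeValueFhir",
      "template": {
        "typeName": "heartrate",
        "periodInterval": 0,
        "value": {
          "valueType": "Quantity",
          "valueName": "hr",
          "unit": "count/min"
        },
        "codes": [
          { "code": "8867-4", "system": "http://loinc.org", "display": "Heart rate" }
        ]
      }
    }

The typeName ties it back to the device mapping, periodInterval 0 means each reading becomes its own single-instance observation, and the value and codes sections control how the FHIR value and coding get written.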
So, you might be wondering how the IoT connector is actually used. Here's an example of how we envision the connector being used as part of a bigger system.
We have ingestion on the left-hand side, where data is coming from devices. We may use a device gateway, one of our Azure IoT solutions like IoT Central or IoT Hub, or go through a phone gateway and connect directly to the IoT connector.
The data is then funneled into our managed Azure API for FHIR, which is the FHIR server you can use to retrieve the data as it comes in. You can also feed data into the Azure API for FHIR from EHRs and third parties, and once it's there you can use it in a variety of apps. Similarly, if you want to perform analytics, once the data is in the Azure API for FHIR you can export it, anonymize it, and then use it in multiple Azure services.
With that, I'll turn it over to Nate to begin the demonstration.
Let me get my screen up here. Can you see my computer screen? Yes, I can see it. Great, let me move over to this window. So there you have it.
This is a demonstration of remote monitoring. Good, I'm on the correct screen now. Thanks, Kunal, and thank you to everyone who came.
So here's the demonstration I'm going to show you today. It was really simple to build an end-to-end remote monitoring scenario using the Azure IoT Connector for FHIR; it didn't even require writing any additional code, all it took was configuring the various components.
So, first and foremost, I'm going to take a minute to walk through some of the elements in this demo. We use the Azure API for FHIR as the persistence layer, so we deployed an instance of it. The Azure API for FHIR complies with HIPAA regulations, so protected health information is stored in a secure manner.
Inside the Azure API for FHIR control plane, I enabled an Azure IoT Connector for FHIR and set up the mappings to process the device data that we'll be sending. Kunal talked about the device mappings that are used to transform an arbitrary JSON payload into FHIR; I'll get into the details of how this conversion works using data taken from a real device.
So I've got a device here; I'll hold it up to the camera. It's an iHealth blood oxygen saturation monitor. It's just a store-bought device; I believe it was around $40. I'm also projecting my iPhone on the right side of my screen, and you can see that I installed the iHealth app, the orange icon labeled iHealth. When I use this device to take a measurement, the iHealth app records the blood oxygen saturation measurement into HealthKit.
So, I'd like to take a moment to discuss HealthKit. On the iPhone, you have HealthKit; it was created by Apple and is integrated into the operating system, and it can be used by apps and devices to store health data, which can then be safely shared with third-party apps. Our team took advantage of this and created HealthKit on FHIR, an open-source Swift library. When a user grants permission to share data, HealthKit on FHIR automatically exports data from HealthKit to the IoT connector. The library and its documentation are available on GitHub; a quick-start tutorial is included, as well as a sample app that can be installed on an iPhone for testing and evaluation.
You can see that I deployed the sample app for this demonstration; it's the white icon for the IoT FHIR demo app. I configured it with the endpoint of the Azure IoT Connector for FHIR that I created, and I also created a patient in the Azure API for FHIR to represent myself. Now, as soon as the iHealth application saves the blood oxygen saturation measurement to HealthKit, it is exported to the IoT connector, processed, and saved in the Azure API for FHIR as an observation resource.
So, after the data is collected, we can put it to good use and create a report with the Power BI FHIR connector; here's what that looks like. I made a simple dashboard that simulates a scenario in which a physician is monitoring blood oxygen saturation for a group of patients. The measurements taken today are at the top of the page, and I'm at the bottom; my row has a gold highlight, which indicates that the measurement was not taken today or that it is below 95 percent, as is the case with Grace Owens.
I'd like to point out that, with the exception of myself, the patients on this list are fake; all of their data is made up.
So, let's take a quick measurement and see what happens. The first thing I need to do is open the iHealth app, so we'll go ahead and start that, and then I'll turn on the device, which measures blood oxygen saturation. It connects over Bluetooth, and when I take the device off my finger, the phone finishes processing the data and creates a single blood oxygen saturation measurement, which gets written to HealthKit.
And there it is: the reading is 98 percent, and my heart rate is 128 beats per minute, which is a little high. HealthKit on FHIR notices the new data right away and uploads it to the connector, which kicks off the normalization process. Normalization extracts the identity of the device I'm measuring with, the time the measurement occurred, and the measurement data itself.
At this point, let's have a look under the hood to see what's going on. Here's what the payload from HealthKit on FHIR looks like: the measurement is at the top, along with the timestamp, and the device ID can be found here. Kunal discussed the two mapping files that drive the transformation of an arbitrary JSON payload into FHIR. The first mapping, shown on the right, is used to normalize the JSON data into a format that the IoT connector can understand.
We use the JSONPath type match expression to decide whether this mapping should be applied to the incoming payload. Once we've found a match, we can start extracting the additional data that will be used to construct the FHIR observation. We use the timestamp expression to extract the timestamp, so we can determine the date and time of the measurement, and the device ID expression to retrieve the device ID, so we can reference the device in the observation resource.
Finally, the value expression is used to obtain the value of the measurement. Once all of the values have been retrieved, we generate a normalized model, which the IoT connector uses internally. This is how we can accept JSON payloads from a variety of devices, each of which may have a distinct data format.
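To visualize the end of this stage (with made-up field names, since I don't have the exact demo payload in front of me), the incoming message and the normalized model it produces could look like this:

    Incoming payload (simplified):
    {
      "device": "iHealth-pulse-ox",
      "date": "2021-06-01T14:05:00Z",
      "oxygenSaturation": 98
    }

    Normalized model:
    {
      "type": "oxygensaturation",
      "deviceId": "iHealth-pulse-ox",
      "occurrenceTimeUtc": "2021-06-01T14:05:00Z",
      "properties": [ { "name": "o2sat", "value": "98" } ]
    }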
Now that the data has been normalized, we can group it if necessary. We don't need to group in this case because it's a single measurement, but grouping is beneficial when data is expected to stream in frequently because, as mentioned earlier, it limits the number of observations generated.
Now we'll generate a FHIR observation from the normalized data, so let me jump right in and show you how that's done. This is the second mapping file, the FHIR mapping, which identifies the data it applies to and describes how to build the observation. The first step is to check whether this FHIR mapping should be applied to the normalized data produced during the normalization stage: this FHIR mapping applies to the oxygen saturation type, and we can see that the normalized data is of the oxygen saturation type.
If the data type matches, we can begin building the observation. First, we determine the type of value that will be included in the observation. As you can see, this observation will have a value quantity; value quantity can be used for single measurements like the one we're taking now. There are, however, other value types, such as SampledData, which can be used for streaming data, as well as string values and codeable types, as I believe Kunal mentioned.
So we support a variety of value types. We find the value in the normalized data by name, in this example the oxygen saturation value, and add it to the observation resource. Because the observation value is a quantity, the normalized value is converted from a string to a number; we write the number exactly as it is. The system, unit, and code are then copied from the FHIR mapping to the observation, ensuring that the value is appropriately coded.
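For the 98 percent reading, the resulting value element would be along these lines (the UCUM system and code are my assumption of how the demo mapping is configured):

    "valueQuantity": {
      "value": 98,
      "unit": "%",
      "system": "http://unitsofmeasure.org",
      "code": "%"
    }

The numeric value comes from the normalized data, converted from string to number, while the unit, system, and code are copied straight from the FHIR mapping.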