8 Mind-blowing Uses of Wearable Technology (Seriously...)

Some of the most interesting applications of wearable technology will come from the fusion of technologies such as artificial intelligence and big data.

What would your colleagues say if you asked them what wearable technology was, or what it can do?

You’ll quickly hear about smart watches like Samsung’s Galaxy Gear, fitness trackers like those produced by Fitbit and smart glasses projects like Google Glass.

As for what these products can be used for, after a bit of thought most people would come up with a few sensible use cases – like taking pictures using voice commands, keeping track of your daily calorie expenditure or receiving inbound messages.

Now try asking what wearable technology might be used for in the future – as in 10 years in the future?

It’s at this point that you’ll probably start getting blank looks.

In this research paper we describe 8 remarkable use cases for wearable technology that we believe will become possible within 10 years.

These 8 use cases are several full market cycles ahead of those that Google has envisaged for Google Glass.

Some of the most interesting applications of wearable technology will come from the fusion of three different technologies:

Advanced wearable products: The three main wearable technology product categories are body sensors, smart glasses and smart watches. We expect dramatic improvements in all three categories in the coming years - in terms of capabilities, optimization of the user interface, size, battery life and price;
    
Artificial intelligence (AI) platforms: Technologies like Now (Google), Siri (Apple), Cortana (Microsoft) and, most recently, IBM’s decision to provide mobile app developers with access to its supercomputer platform, Watson, point to a future where it will be possible for users to carry out advanced web tasks simply by speaking or, in some cases, just thinking;
    
Big data: Whether it is the location of an individual, data about an individual’s search history or ad preferences, or even information about what that person was looking at 30 days ago, there is no shortage of data. If developers can gain opt-in programmatic access to very granular data about a user’s behavior – a goal that is extremely important to Google – then a whole panorama of new service possibilities will open up.

We think that when all three of these technological ingredients are combined then some truly remarkable and, in some cases, rather unsettling use cases will become possible:

Use Case 1: Healthcare

Sector size: The value of the global healthcare industry in the 20 largest global economies is about USD 3.8 trillion per year.

Today’s range of body sensors can already measure an impressive range of parameters (a simple data model for such readings is sketched after this list):

•    Stride length, distance, step count, cadence and speed;
•    Heart rate, heart rate variability, heart rate recovery, respiration rate, skin temperature, skin moisture levels, breathing rate, breathing volume, activity intensity;
•    Body temperature;
•    Calories burned, distance travelled;
•    Sleep quality, sleep patterns;
•    Wearer’s brainwaves – can be used to control electronic devices/services by thought;
•    Back posture: sitting position, chest and shoulders;
•    Force of impact to the head (used in contact sports);
•    Exposure to the sun (UV measurement);
•    Biomechanical data collected while running (e.g. L/R pressure etc.);
•    Altitude and rate of ascent/descent;
•    Location (3D);
•    Motion parameters including speed and acceleration;
•    Repetitions of specific physical activities (e.g. sit ups, dips, press ups).
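As a rough illustration of the data model behind this list, the sketch below represents each measurement as a timestamped reading; the field names, units and sample values are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorReading:
    """One timestamped measurement from a hypothetical body sensor."""
    metric: str              # e.g. "heart_rate", "skin_temperature", "step_count"
    value: float
    unit: str                # e.g. "bpm", "celsius", "steps"
    recorded_at: datetime

# A small sample stream covering a few of the parameters listed above.
stream = [
    SensorReading("heart_rate", 62.0, "bpm", datetime.now(timezone.utc)),
    SensorReading("skin_temperature", 33.4, "celsius", datetime.now(timezone.utc)),
    SensorReading("step_count", 8421.0, "steps", datetime.now(timezone.utc)),
]

# Simple aggregation: keep the latest value seen for each metric.
latest = {}
for reading in sorted(stream, key=lambda r: r.recorded_at):
    latest[reading.metric] = reading.value
print(latest)
```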

But there is presently no way to measure aspects of blood chemistry, or other parameters that can only be measured by sensors that reside inside the body. 

If such an ‘in-body’ sensor could be developed then this would open up a completely new paradigm for healthcare.

More than just an ‘early warning system’ for serious illnesses, such in-body sensors could have a major impact on the following areas of the global healthcare industry:

Medical research would be transformed: Medical researchers operating in the private and public sectors would have access to a vast database of continually updated, non-personal medical data on hundreds of millions or even billions of individuals. Researchers would be able to correlate trends in measured parameters with other user data, such as age, eating behaviour, location and socio-economic status.
    
Average life expectancy would increase: The ability to ‘listen’ for the warning signs of serious illnesses or diseases would mean that the average lifespan would increase;
    
GDP would increase: If the average lifespan of the workforce were increased by, say, 5 years then this would imply a substantial increase in overall economic activity;
    
Insurance costs would fall: If insurers could gain access to more accurate data then they would be better able to price policies for the needs of individual users. There are, however, obvious issues here including whether someone who had a genetic condition could gain access to affordable healthcare. Government intervention might be required in such situations;
    
Healthcare costs would be reduced: Because the treatment costs for a disease that is identified at an early stage are typically lower than if the disease is identified at a later stage we would expect total healthcare costs to fall;
    
The accuracy of medical diagnoses would be improved: Medical practitioners would have access to a patient’s ‘bio history’ which would help identify the cause of a patient’s symptoms.

Different sensors could perhaps be developed for different users at different age bands – sensors for nutritional analysis, sensors for specific diseases and general purpose sensors.

These devices might contain a mixture of semiconductor electronics and organic electronics and a wireless connection would allow data to be transferred to a device which would reside outside the body, for instance a smartphone. The sensor could be recharged using a wireless charging system with the charger unit residing beside the wearer’s bed or even integrated into the mattress.

As to how the sensor would be inserted into the body, this could be by means of a small operation.

Use Case 2: Diet Management


Sector size: The value of the worldwide diet supplement market will be USD 180 billion in 2014.

This use case would combine an in-body sensor that could measure key nutritional parameters about the user (above) with a cloud-based service that could analyse those parameters to provide feedback to the user about what they should be eating.

If it were possible for an in-body sensor to send a semi-real-time report into the cloud about the user’s diet – because it could measure those key parameters directly – then the user could choose to make this data available to a third-party service provider for analysis.

The service provider would then be able to make recommendations as to what the user should buy when at the supermarket – the service would use the user’s location to determine when the user was in a food store.

Furthermore, the same service could make recommendations for a personal or family-optimized menu at mealtimes. Because the service would know whether the user had any mineral or other deficiencies or excesses, a suitable menu could be recommended. This could be in the form of tablet supplements or just a recommendation like “How about salmon tonight – you should have some in the freezer?”
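A minimal sketch of the kind of rule such a service might apply, assuming the in-body sensor exposes per-nutrient levels; the nutrient names, reference ranges and food suggestions below are invented for illustration.

```python
# Hypothetical nutrient levels reported by an in-body sensor (units are illustrative).
measured = {"iron": 45.0, "vitamin_d": 18.0, "omega_3": 80.0}

# Illustrative reference ranges and foods rich in each nutrient.
reference_range = {"iron": (60.0, 170.0), "vitamin_d": (30.0, 100.0), "omega_3": (50.0, 200.0)}
rich_sources = {"iron": "spinach or lentils", "vitamin_d": "salmon", "omega_3": "salmon or walnuts"}

def recommend(measured, reference_range, rich_sources):
    """Suggest foods for any nutrient that falls below its reference range."""
    suggestions = []
    for nutrient, level in measured.items():
        low, _high = reference_range[nutrient]
        if level < low:
            suggestions.append(f"Low {nutrient}: how about {rich_sources[nutrient]} tonight?")
    return suggestions

print(recommend(measured, reference_range, rich_sources))
```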

Use Case 3: Car Insurance

Sector size: The gross value of the worldwide motor vehicle insurance industry will be about USD 500 billion in 2014.

One of the biggest costs for an insurance company lies in the processing of insurance claims.

If a user were wearing a suitable pair of smart glasses, which might be required for navigation purposes anyway, then those glasses would be able to record the entire journey using a rolling window to minimise data storage requirements. This would mean that any accident could be recorded on video. The same pair of smart glasses could also record exactly what the user was looking at at the time of the accident, as well as other information such as speed.
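The ‘rolling window’ referred to above is essentially a fixed-size buffer of recent frames, with the oldest footage discarded as new footage arrives. A minimal sketch, in which the window length, frame rate and frame contents are placeholders:

```python
from collections import deque

WINDOW_SECONDS = 120          # keep only the last two minutes of footage (illustrative)
FRAMES_PER_SECOND = 30

# A deque with maxlen discards the oldest frame automatically, so storage stays bounded.
rolling_window = deque(maxlen=WINDOW_SECONDS * FRAMES_PER_SECOND)

def on_new_frame(frame_bytes, speed_kmh, gaze_target):
    """Append the newest frame plus driving context; old frames fall out of the window."""
    rolling_window.append({"frame": frame_bytes, "speed_kmh": speed_kmh, "gaze": gaze_target})

def on_accident_detected():
    """Freeze a copy of the window so the preceding footage can be kept as evidence."""
    return list(rolling_window)

# Example usage with dummy data.
on_new_frame(b"...jpeg bytes...", 48.0, "vehicle ahead")
evidence = on_accident_detected()
print(len(evidence), "frames preserved")
```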

Far more interesting would be what happens if an in-body sensor (above) were to be combined with such a pair of smart glasses. In this case, it would be technically possible for the insurance company to determine if the user was driving after a lack of sleep (because the sensor would monitor sleeping patterns) or if the user’s blood contained too much alcohol.

If such technology were to become available then we think that users who were willing to use the technology – which would offer many benefits – would enjoy significantly reduced car insurance premiums.

The flip side of this would be that those who refused to use the technology would face higher premiums and so, in this way, the market would price the technology so it became part of everyday use.
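In pricing terms, that dynamic could be as simple as the sketch below – a base premium with a discount for users who opt in to sharing wearable data and a surcharge for those who do not. Every number here is invented purely to illustrate the mechanism.

```python
def annual_premium(base_premium, opted_in, risk_events=0):
    """Illustrative pricing: discount for sharing wearable data, surcharge otherwise."""
    if opted_in:
        # e.g. a 20% discount, partly clawed back for each recorded risk event
        # (such as driving while sleep-deprived)
        return base_premium * 0.80 * (1 + 0.05 * risk_events)
    return base_premium * 1.10    # those who opt out pay a higher premium

print(annual_premium(600.0, opted_in=True, risk_events=1))   # roughly 504
print(annual_premium(600.0, opted_in=False))                 # roughly 660
```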

Use Case 4: Police and Security

It is interesting to think what might become possible if smart glasses are combined with police databases and facial recognition software.

We are not too far from the point when a police officer could use a pair of smart glasses to automatically obtain information about a person within the officer’s field of view – simply by asking or by setting a default.

This could be possible in real time as a police officer was speaking to a person with the resulting information being projected onto the officer’s field of view.

The smart glasses could take a picture of that person and send it to a cloud-based police service where facial recognition technology would match the picture with an entry in a police database. The police officer might not know who he was talking to, but this technology would be able to alert the officer if the person was a suspect in an ongoing case, had a criminal record or, hopefully in most cases, had no police history at all.
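A highly simplified sketch of that capture-and-lookup flow; the lookup function, the way a face is matched and the status categories are all hypothetical stand-ins, not a real police API.

```python
def cloud_face_lookup(image_bytes, database):
    """Stand-in for a cloud facial-recognition match against a police database."""
    signature = hash(image_bytes)            # placeholder for a real face embedding
    return database.get(signature, {"status": "no_police_history"})

def handle_capture(image_bytes, database):
    """Turn the lookup result into a short alert for the officer's display."""
    record = cloud_face_lookup(image_bytes, database)
    if record["status"] == "suspect_in_ongoing_case":
        return "ALERT: possible suspect in an ongoing case"
    if record["status"] == "criminal_record":
        return "Note: person has a criminal record"
    return "No police history"

# Example with a toy in-memory 'database'.
toy_db = {hash(b"face-A"): {"status": "suspect_in_ongoing_case"}}
print(handle_capture(b"face-A", toy_db))
print(handle_capture(b"face-B", toy_db))
```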

Thinking a stage further ahead, we can foresee the facial recognition camera technology that is already installed at most airport security gates being integrated into smart glasses, so that a police officer walking down a crowded street would automatically be alerted to the presence of a suspect walking towards him.

Use Case 5: Outdoor Pursuits Navigation

One of the major use cases for Google Maps is navigation. But so far most of Google’s navigation services have been focused on cars or pedestrians in urban areas.

We think that an opportunity exists for navigation services to be developed specifically for those who engage in outdoor pursuits.

To take one example, it is very difficult and potentially dangerous to ski in bad visibility. Mist, cloud and snow can combine into a single featureless morass where it is extremely hard – even for an expert skier – to assess speed, angle of slope or what the upcoming terrain is like. If a Google Glass-like technology were to be integrated into a pair of ski goggles then this problem could be alleviated.

Based on official route data that had been recorded by park authorities, or potentially user-uploaded data that had been validated for accuracy, the skier would be able to ski in white-out conditions with the safe route and terrain precisely marked on their field of view.
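A crude sketch of the underlying calculation: given the skier's GPS fix and a list of validated route waypoints, pick the nearest waypoint to highlight in the goggles' display. The flat-earth distance approximation and the waypoint coordinates are purely illustrative, and a real system would also filter for waypoints ahead of the skier.

```python
import math

# Illustrative validated route: (latitude, longitude) waypoints down the piste.
route = [(46.5210, 11.3320), (46.5205, 11.3335), (46.5198, 11.3351)]

def approx_distance_m(p, q):
    """Rough distance in metres between two lat/lon points (fine over short ranges)."""
    dlat = (q[0] - p[0]) * 111_320
    dlon = (q[1] - p[1]) * 111_320 * math.cos(math.radians(p[0]))
    return math.hypot(dlat, dlon)

def nearest_waypoint(position, route):
    """Return the route waypoint closest to the skier's current position."""
    return min(route, key=lambda wp: approx_distance_m(position, wp))

print(nearest_waypoint((46.5208, 11.3328), route))
```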

While by no means a green light to ski normally, such a system could help save lives, especially in off-piste skiing situations where the skier has to avoid cliffs and other serious hazards.

We can see similar applications in mountain walking, sailing and off-road running.

Use Case 6: Personal Trainer

One of the main problems that individuals encounter when embarking on a personal fitness programme is a lack of the knowledge needed to train efficiently – rather than just train.

If a fitness tracking system were to be combined with a wireless earbud system and a cloud-based ‘personal trainer’ service then this problem could be addressed.

Once the user had set up an online account and uploaded a week’s data, the service would have a fairly detailed understanding of the user’s current fitness. This understanding could be enhanced by the user inserting their wireless earbud to receive requests such as ‘perform 10 press ups’ and then ‘perform 10 squats’ etc. The results of this exercise could be used by the service to fine-tune its understanding of the user’s current fitness level.

Next, once the user had chosen their desired fitness objective, the service could provide encouragement and advice during each workout – such as “Mary, you could do with working a bit harder here – I’d like to see your heart rate at 160.”
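A toy version of the feedback rule implied by that example; the target heart-rate zone and the messages are invented.

```python
def workout_feedback(name, heart_rate_bpm, target_low=150, target_high=165):
    """Return a short coaching message based on the current heart rate."""
    if heart_rate_bpm < target_low:
        return f"{name}, you could do with working a bit harder – aim for {target_low}-{target_high} bpm."
    if heart_rate_bpm > target_high:
        return f"{name}, ease off slightly – you're above the target zone."
    return f"{name}, nice work – you're right in the target zone."

print(workout_feedback("Mary", 142))
```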

When combined with a voice recognition system, this could become a complete AI-based two-way personal training system.

Use Case 7: Arranging a Meeting

Consider how wearable technology in the form of an advanced smart watch could transform the common, but oftentimes rather tedious process of arranging a work meeting:

[Table: wearable-chart.png]


This use case implies that the smart watch incorporates a voice-controlled UI, a cloud-based AI engine and a service delivery infrastructure that is accessed in real time over the web.

Because the device would be connected to the web via a wireless connection, it would have real-time access to cloud-based services, data and network intelligence.

Just as the building blocks needed to make a wrist-mounted mobile phone already exist, so the building blocks needed to enable the use case outlined in the right-hand column of the above table also exist.
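To make the 'building blocks' point concrete, the sketch below shows the scheduling step a cloud service behind such a watch would have to perform once the voice command had been interpreted: finding the first slot free for all attendees. The calendars, working hours and 30-minute granularity are assumptions made for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical busy intervals pulled from each attendee's calendar: (start, end) pairs.
busy = {
    "alice": [(datetime(2014, 6, 2, 9, 0), datetime(2014, 6, 2, 11, 0))],
    "bob":   [(datetime(2014, 6, 2, 10, 0), datetime(2014, 6, 2, 12, 0))],
}

def first_free_slot(busy, day_start, day_end, duration=timedelta(hours=1)):
    """Scan the working day in 30-minute steps; return the first slot free for everyone."""
    slot = day_start
    while slot + duration <= day_end:
        clash = any(start < slot + duration and end > slot
                    for intervals in busy.values() for start, end in intervals)
        if not clash:
            return slot
        slot += timedelta(minutes=30)
    return None

print(first_free_slot(busy, datetime(2014, 6, 2, 9, 0), datetime(2014, 6, 2, 17, 0)))
```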

Use Case 8: Memory Aid

One of our most annoying personal limitations is memory, which is often unable to recall specific details of what has been in our field of view in the past.

With a future generation of smart glasses it might become possible to subscribe to a ‘remember this’ service where specific ‘frames’ or ‘time slots’ are recorded for searching at a later time. This might be the business card of a partner, the menu at a restaurant, a technical specification or a particularly beautiful sunset. Each item could be tagged by the user using voice commands – location, time and date could be added automatically – and then stored in the user’s cloud-based personal folder.

All of these items could then be accessible at a later time – from anywhere.

With a further stage of innovation, optical character recognition technology could be used to transcribe material that the user had recorded, such as a price list. This would allow the user to speak a query such as ‘search personal folder: price list, cisco, router’ – and then the price list that the user had photographed 3 months earlier would immediately be displayed.
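A bare-bones sketch of what that personal-folder search might look like once each recorded frame has been tagged and run through OCR; the stored items, field names and query handling are illustrative only.

```python
# Hypothetical stored items: each recorded frame with user tags and OCR-extracted text.
personal_folder = [
    {"tags": ["price list", "cisco", "router"], "ocr_text": "Cisco router price list ...",
     "captured": "2014-03-01", "location": "Trade show, hall 4"},
    {"tags": ["business card"], "ocr_text": "Jane Doe, Acme Ltd",
     "captured": "2014-05-12", "location": "London office"},
]

def search_folder(folder, query_terms):
    """Return items whose tags or OCR text contain every query term (case-insensitive)."""
    results = []
    for item in folder:
        haystack = " ".join(item["tags"] + [item["ocr_text"]]).lower()
        if all(term.lower() in haystack for term in query_terms):
            results.append(item)
    return results

print(search_folder(personal_folder, ["price list", "cisco", "router"]))
```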

Conclusion

In this paper we have outlined some futuristic use cases which we think are technologically feasible, based on where wearable technology is today and where we believe it is heading in the future.

We are not putting forward any of these options as being particularly ‘good’ or ‘bad’ – the purpose has been to illustrate the sort of use cases that the enabling technologies might spawn in the years ahead.

Depending on your perspective, some of the above use cases might seem abhorrent, or motivational.

Regardless, it is our experience that if something is technically possible then it is likely to enter the marketplace at some point as a commercial product or service...

Clearly, wearable technology is not just about jumping out of hot air balloons while entering a Google Hangout: this new category certainly promises a whole panorama of new, value-added services, but it also raises some very serious privacy and related questions that we think you should be aware of, if you are not already.

The Wearable Technology: 2014 Report is available here: http://www.generatorresearch.com/report/wearable-technology-2014

This article was republished with permission from Generator Research.
 
Image Credit: Shutterstock.com