Insurance has always been one of the most obvious candidates for the use of tracking data. John Hancock, a life insurer, is partnering with Amazon to give customers Halo health and fitness tracker wristbands in exchange for their data, which will be used to adjust insurance premiums (through discounts and rewards).
Exactly. I find it generally hard to convince people to care about their privacy because they feel mass surveillance doesn't affect them (they aren't important enough).
But when I bring up the topic of insurance, they generally start to see how this may impact them. So I'm not surprised that this is happening, but the USA doesn't have significant privacy laws.
I don't know about Europe; I've just read this article. I suppose it could happen based on consent, since this is all explicit. I don't know whether other laws, not related to privacy, could also play a role.
I've seen that argument about not being important enough to be affected by mass surveillance, but yes, this kind of automated assessment, AI algorithms, etc., shows that you don't have to be special to be targeted. It's not as if there's a committee meeting in a dark room, analyzing your file and deciding to go after you, which seems to be what's on people's minds when they use that reasoning.
Anyway, I know it's kind of pointless to say this in this forum, where people do worry about privacy. I also find it hard to convince others that it's an important issue.
I think there are transparency laws that require explainability of results, something that black-box algorithms (like the most common artificial neural nets) can't provide.
Yup. Out of sight, out of mind. There's a nice paper somewhat related to this, about the steps one has to take to start caring about their privacy, called "Why Doesn't Jane Protect Her Privacy?"
About explainability, I think in this case the rules can be very simple: hit some exercise goals and get a discount. I was also wondering about other things, like insurance regulations, but I have no idea.
"Amazon Claims 'Halo' Device Will Monitor User's Voice for 'Emotional Well-Being'

Despite the exceptional privacy risks of biometric data collection and opaque, unproven algorithms, Amazon last week unveiled Halo, a wearable device that purports to measure 'tone' and 'emotional well-being' based on a user's voice. According to Amazon, the device 'uses machine learning to analyze energy and positivity in a customer's voice so they can better understand how they may sound to others[.]' The device also monitors physical activity, assigns a sleep score, and can scan a user's body to estimate body fat percentage and weight. In recent years, Amazon has come under fire for its development of biased and inaccurate facial surveillance tools, its marketing of home surveillance camera Ring, and its controversial partnerships with law enforcement agencies. Last year, EPIC filed a Federal Trade Commission complaint against Hirevue, an AI hiring tool that claims to evaluate 'cognitive ability,' 'psychological traits,' and 'emotional intelligence' based on videos of job candidates. EPIC has long advocated for algorithmic transparency and the adoption of the Universal Guidelines for AI."