Last week, the world was taken aback when FaceApp, an artificial intelligence-powered Russian app that shows people what they will look like as they age, went viral.
Despite its simple name, the app quickly became more talked about than major news headlines and even Facebook’s Libra. Within a few days, FaceApp had garnered over 150 million users, who shared their transformed looks and personas across mainstream social media platforms.
However, it turns out that the Russian-backed app might be a Trojan horse, and many of its users could have opened themselves up to future identity theft, privacy risks and other consequences.
Apart from laying claim to all the pictures and names shared on the app, FaceApp states in its privacy policy that it ‘may share your information as well as information from tools like cookies, log files, and device identifiers and location data, with third-party organizations’.
While it is standard practice for advertising-funded apps to share such data anonymously, the claim of ownership over users’ pictures and other personal data is dodgy.
As a takeaway, there is no doubt that FaceApp’s image augmentation AI is perhaps the best so far, and the innovation should be commended.
However, as far as global privacy standards are concerned, the tactics of FaceApp’s backers are dodgy, to say the least, and should not be emulated by marketers.
It is not good enough simply to harvest user data for marketing purposes; the process, and the explicit permission of the audience, matter just as much for getting the best results.
Just like Facebook’s experience with the Cambridge Analytica scandal, the consequences are often worse than the rewards garnered from such complex, albeit dodgy, data collection processes.