Apple Ramps Up Privacy at WWDC21, and Why It Matters
While watching Apple’s WWDC21 event, with its expected announcements of the latest OS updates and enhanced developer tools, we found one moment particularly poignant from our point of view. Craig Federighi, Apple’s senior vice president of Software Engineering, started his segment on privacy by stating that Apple believes privacy is a fundamental human right. Tim Cook, Apple’s CEO, made the same statement during his keynote speech at a privacy conference in Brussels in 2018. So, their messaging on this issue has been consistent, though controversial.
Apple has been accused of hypocrisy on this stance in the past, since it has not banned apps such as Facebook and WhatsApp from its App Store, for example. Apple’s apparent move against surveillance capitalism, targeting data collectors, data brokers, advertisers and many Software as a Service (SaaS) platforms, might be seen as a pain in the rear by companies that offer “AI-powered” recommendations and personalisation services. Users may also become unhappy that their personalised experiences feel somewhat less intuitive (though this point is debatable).
Nonetheless, regardless of its reasoning, we welcome Apple’s continued pro-privacy move to reshape how technology companies offer their services in a world where our data privacy is not respected.
Protecting Data from Third Parties
During the event, Apple previewed the new privacy protections in the upcoming iOS 15, iPadOS 15, macOS Monterey, and watchOS 8, which help users better control and manage access to their data. From what we have seen of these previews, Apple has significantly expanded its privacy features, including App Tracking Transparency and Privacy Nutrition Labels on the App Store. Furthermore, Apple’s Mail app introduces Mail Privacy Protection, which stops senders from using invisible pixels to collect information about the user. This new feature, Apple says, prevents senders from knowing when a user opens an email and masks their IP address so it can’t be linked to other online activity or used to determine their location. About time!
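To illustrate the mechanism Mail Privacy Protection is designed to defeat, here is a minimal, hypothetical sketch of how an invisible tracking pixel ties an email open to a specific recipient. The function names and tracker URL are our own illustrative inventions, not taken from any real service:

```python
import uuid
from datetime import datetime, timezone

def make_tracking_pixel(base_url: str = "https://tracker.example.com/open") -> str:
    """Sender side (hypothetical): embed a unique, invisible 1x1 image per
    recipient. The unique token links the open event back to that recipient."""
    token = uuid.uuid4().hex
    return f'<img src="{base_url}?t={token}" width="1" height="1" alt="" style="display:none">'

def log_open(token: str, client_ip: str) -> dict:
    """Tracker side (hypothetical): when the mail client fetches the pixel,
    the sender learns the open time and the requester's IP address."""
    return {
        "token": token,
        "opened_at": datetime.now(timezone.utc).isoformat(),
        # Mail Privacy Protection defeats this step by proxying the image
        # fetch, so the sender sees a masked IP rather than the user's own.
        "ip": client_ip,
    }

pixel_html = make_tracking_pixel()
```

Because the pixel is fetched automatically when the message is rendered, the recipient never knows the request happened; Apple’s approach of pre-fetching images through a proxy removes both the timing signal and the real IP address.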
Apple also enhanced Safari’s Intelligent Tracking Prevention by hiding the user’s IP address from trackers. This means trackers can’t use the IP address as a unique identifier to connect a user’s activity across websites and build a profile about them. These are some of the key examples that we think are significant in the fight against data vultures. Apple introduced several other features throughout its ecosystem to help users control and monitor apps’ use of their data, which you can find out about in more detail on the WWDC21 page.
Of course, the key stakeholders who have benefited from sucking up endless amounts of our private data for many years will not simply accept these developments without a fight. For example, earlier this year, German business groups filed an antitrust complaint against Apple’s requirement that app developers who want to collect digital advertising identifiers from iOS devices show a pop-up stating that the app “would like permission to track you across apps and websites owned by other companies.”
More notably, companies such as Facebook are fiercely fighting Apple’s “unfair” move against data collectors while at the same time carrying out suspicious data-collection activities of their own. In 2019, TechCrunch reported that Facebook had been secretly paying users to install a “Facebook Research” VPN that let the company monitor all of those users’ phone and web activity, similar to Facebook’s Onavo Protect app, which Apple banned in June 2018.
Of course, there are many other examples of such activities throughout the technology sector. Not to mention, governments seem to have started behaving like data brokers during the pandemic. It will be interesting to see how Apple and governments deal with this particular situation. The bottom line is that we are not out of the woods yet, but our hope is that Apple’s move will facilitate change in a similar manner to what the company achieved in the past. Remember its controversial ban of Flash in 2010? It eventually worked out well and gave rise to better web experiences (sorry, I admit it, I hated Flash).
Our hope at Nebuli is that the more seriously key players in the tech industry take digital ethics and user privacy, the higher the chances we will witness more imaginative technological solutions that positively impact our lives. We have pledged to build dedicated ecosystems for ethical AI, as we firmly believe that the most intelligent technologies are those which encompass ethics, human behaviours and philosophy, all with the fundamental aim of serving humanity, not replacing it.
Current AI solutions that dismiss our privacy rights or see ethics as obstacles to innovation are what we describe as lazy AI, and lazy AI has failed time and time again. The MIT Sloan School of Management reported in 2020 on this lack of tangible progress in AI innovation and on overhyped investments: according to the report, only 10% of companies obtain significant financial benefits from artificial intelligence. Hence our vision of the People-first approach: a future powered by augmented intelligence principles that empower all of us, rather than further empowering data brokers.