Online targeting needs tighter controls, UK data ethics body suggests

A UK government advisory body on AI and data ethics has recommended tighter controls on how platform giants can use ad targeting and content personalization.

Concerns about the largely unregulated, eyeball-grabbing targeting tactics of online platforms -- whether serving "personalized content" or "microtargeted ads" to individuals or groups of users -- include the risk of fostering addictive behaviors; the exploitation of and/or discrimination against vulnerable groups; the amplification of misinformation; and election interference, to name a few.

In a report published today, the Centre for Data Ethics and Innovation (CDEI) sets out a number of recommendations for platforms that use targeting tools to determine what content or ads are shown to users. It argues these recommendations will help build public trust in digital services, including those delivered by the public sector.

"Most people do not want targeting stopped. But they do want to know that it is being done safely and ethically. And they want more control," writes chair Roger Taylor in an executive summary.


"Our analysis of the regulatory environment demonstrates significant gaps in their regulatory oversight," the report goes on. "Our analysis of public attitudes shows greatest concern and interest about the use of online targeting on large platforms.

"Our research demonstrates that online targeting systems used by social media platforms (like Facebook and Twitter), video sharing platforms (like YouTube, Snapchat, and TikTok), and search engines (like Google and Bing) raise the greatest concerns in these areas."

The advisory body, which was announced by the Conservative-led government in 2017 to help devise policy for regulating the use of AI and data-driven technologies, is calling for online targeting giants to be held to higher standards of accountability over their use of targeting tools.

Current regulations are inadequate to cover online targeting, per its analysis, which also dubs self-regulation and the status quo "unsustainable".

In a survey conducted by the CDEI, 61 percent of respondents favoured giving an independent regulator oversight of the use of online targeting systems, versus just 17 percent who preferred self-regulation to continue.

The UK government set out a plan to regulate a number of online harms in a White Paper published last year, which proposes that a duty of care be placed on platforms to protect users from a range of harms, such as age-inappropriate content or material that encourages damaging behaviors such as self-harm or eating disorders.

The CDEI suggests this proposed framework could help plug some of the regulatory gaps its report is flagging "if online targeting is recognised within the independent regulator’s remit" (while warning that would still leave a number of gaps in the regulation of political advertising).

The report also calls for greater transparency in how online targeting systems operate "so that society can better understand the impacts of these systems and policy responses can be built on robust evidence".

Another key recommendation is for Internet users to be given greater control over the way they are targeted so that personalization can better fit their preferences.

"Online targeting has helped to put a handful of global online platform businesses in positions of enormous power to predict and influence behaviour. However, current mechanisms to hold them to account are inadequate," the CDEI writes. "We have reviewed the powers of the existing regulators and conclude that enforcement of existing legislation and self-regulation cannot be relied on to meet public expectations of greater accountability."

"There is recognition from industry as well as the public that there are limits to self-regulation and the status quo is unsustainable. Now is the time for regulatory action that takes proportionate steps to increase accountability, transparency and user empowerment," it adds.

The CDEI is not proposing any specific restrictions itself -- but rather advocating for a regulatory regime that "promote[s] responsibility and transparency and safeguard[s] human rights by design".

It also recommends that a code of practice be applied to platforms and services that use online targeting systems, requiring that they adopt "standards of risk management, transparency and protection of people who may be vulnerable, so that they can be held to account for the impact of online targeting systems on users".

The future online harms regulator should have a statutory duty to protect and respect freedom of expression and privacy, it also suggests, writing: "Regulation of online targeting should be developed to safeguard freedom of expression and privacy online, and to promote human rights-based international norms."

The regulator will also need information gathering powers in order to assess compliance with the code, per the recommendations -- including the power to require that platforms give independent experts secure access to their data for further compliance testing.

"Online targeting systems may have a negative effect on mental health, for example as a possible factor in 'internet addiction'. They could contribute to societal issues including radicalisation and the polarisation of political views. These are issues of significant public concern, where the risks of harm are poorly understood, but the potential impact too great to ignore," the report warns.

"We recommend that the regulator facilitates independent academic research into issues of significant public interest, and that it has the power to require online platforms to give independent researchers secure access to their data. Without this, the regulator and other policymakers will not be able to develop evidence-based policy and identify best practice."

Another recommendation is that platforms be required to maintain online advertising archives "to provide transparency for types of personalised advertising that pose particular societal risks", such as political ads; ads for employment and similar opportunities where there may be a risk of unlawful discrimination; and ads for age-restricted products.

An ad archive is one of the self-regulatory measures that ad platforms, including Facebook, have developed and implemented in recent years, as scrutiny of their systems has dialed up in the wake of the Cambridge Analytica scandal, in which political ad targeting was carried out using Facebook's ad tools and user data.

Such archives still tend to offer only limited visibility to users, however, and Facebook has been heavily criticized by researchers for failing to provide adequate tools to support academic study of its platform.

On "more meaningful control" for users over how they're targeted, the Centre suggests support for a new market in third party 'data intermediaries' to enable users’ interests to be represented across multiple services and new third party safety apps.

It is also calling for formal coordination mechanisms between the future online harms regulator and the UK's data watchdog (the ICO) and Competition and Markets Authority (CMA). The report also flags related work being carried out by the ICO and CMA, including the ICO's Age Appropriate Design Code and the CMA's market study of online platforms and digital advertising.

The latter raised concerns late last year about the market power of tech giants, floating a range of potential interventions in its interim report, including asking for views on breaking up platform giants.