Reddit to update web standard to block automated website scraping

FILE PHOTO: Reddit IPO at the NYSE in New York

(Reuters) - Social media platform Reddit said on Tuesday it will update a web standard used by the platform to block automated data scraping from its website, following reports that AI startups were bypassing the rule to gather content for their systems.

The move comes at a time when artificial intelligence firms have been accused of plagiarizing content from publishers to create AI-generated summaries without giving credit or asking for permission.

Reddit said that it would update the Robots Exclusion Protocol, or "robots.txt," a widely accepted standard meant to determine which parts of a site are allowed to be crawled.
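The Robots Exclusion Protocol works through simple text directives that compliant crawlers check before fetching pages. A minimal sketch using Python's standard-library parser is below; the rules shown are illustrative examples, not Reddit's actual robots.txt.

```python
# Sketch of how robots.txt directives are interpreted, using Python's
# standard-library parser. The rules are hypothetical, not Reddit's.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# With a blanket Disallow, a compliant crawler may not fetch any path.
print(parser.can_fetch("ExampleBot", "https://example.com/r/news"))  # False
```

Note that robots.txt is purely advisory: a crawler that ignores it, as the AI firms described below are accused of doing, faces no technical barrier from the file itself.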

The company also said it will maintain rate-limiting, a technique used to control the number of requests from one particular entity, and will block unknown bots and crawlers from data scraping - collecting and saving raw information - on its website.
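Rate-limiting of the kind described above is commonly implemented with a token-bucket scheme: each client gets a budget of request "tokens" that refills over time. The sketch below is a generic illustration under assumed limits, not Reddit's implementation.

```python
# A minimal token-bucket sketch of rate limiting: capping requests from
# a single entity. Rates and limits here are illustrative assumptions.
import time


class TokenBucket:
    """Allow about `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # Over the limit: the request would be rejected.


bucket = TokenBucket(rate=1.0, capacity=2)
print([bucket.allow() for _ in range(4)])  # burst of 2 allowed, rest denied
```

In practice a server keeps one such bucket per client (keyed by IP address, API token, or user-agent) and returns an error such as HTTP 429 when `allow()` fails.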


More recently, robots.txt has become a key tool that publishers employ to prevent tech companies from using their content free of charge to train AI algorithms and create summaries in response to some search queries.

Last week, a letter to publishers by the content licensing startup TollBit said that several AI firms were circumventing the web standard to scrape publisher sites.

This follows a Wired investigation which found that AI search startup Perplexity likely bypassed efforts to block its web crawler via robots.txt.

Earlier in June, business media publisher Forbes accused Perplexity of plagiarizing its investigative stories for use in generative AI systems without giving credit.

Reddit said on Tuesday that researchers and organizations such as the Internet Archive will continue to have access to its content for non-commercial use.

(Reporting by Harshita Mary Varghese; Editing by Alan Barona)