With Evals, OpenAI hopes to crowdsource AI model testing

Image Credits: OpenAI

Alongside GPT-4, OpenAI has open-sourced a software framework to evaluate the performance of its AI models. Called Evals, the tooling will let anyone report shortcomings in OpenAI's models to help guide improvements, the company says.

It's a sort of crowdsourcing approach to model testing, OpenAI explains in a blog post.

"We use Evals to guide development of our models (both identifying shortcomings and preventing regressions), and our users can apply it for tracking performance across model versions and evolving product integrations," OpenAI writes. "We are hoping Evals becomes a vehicle to share and crowdsource benchmarks, representing a maximally wide set of failure modes and difficult tasks."

OpenAI created Evals to develop and run benchmarks for evaluating models like GPT-4 while inspecting their performance. With Evals, developers can use datasets to generate prompts, measure the quality of completions provided by an OpenAI model and compare performance across different datasets and models.
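
To picture that workflow, here is a minimal sketch in plain Python of what such an evaluation boils down to, using the standard OpenAI client rather than the Evals framework itself. The two-sample dataset, the substring-match scoring rule and the model names are illustrative assumptions, not part of OpenAI's tooling.

```python
# Conceptual sketch of an eval loop (not the Evals framework itself):
# sample a completion for each prompt and score it against an ideal answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative dataset; Evals typically loads samples like these from JSONL files.
samples = [
    {"input": "What is the capital of France?", "ideal": "Paris"},
    {"input": "What is 2 + 2?", "ideal": "4"},
]

def run_eval(model: str) -> float:
    """Return exact-match accuracy of `model` on the sample set."""
    correct = 0
    for sample in samples:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": sample["input"]}],
            temperature=0,
        )
        completion = response.choices[0].message.content.strip()
        if sample["ideal"].lower() in completion.lower():
            correct += 1
    return correct / len(samples)

# Comparing models is then just a matter of rerunning the same loop.
for model in ("gpt-3.5-turbo", "gpt-4"):
    print(model, run_eval(model))
```

The framework wraps this same loop in reusable eval classes, JSONL datasets and a registry, so results can be tracked consistently across datasets and model versions.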

Evals, which is compatible with several popular AI benchmarks, also supports writing new classes to implement custom evaluation logic. As an example to follow, OpenAI created a logic puzzles evaluation that contains 10 prompts where GPT-4 fails.
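
For custom logic, the repository's pattern is to subclass the framework's base eval class and implement a per-sample check plus a run method that aggregates metrics. The sketch below is a rough illustration of that pattern; the specific names it leans on (evals.Eval, eval_sample, run, evals.record_and_check_match, evals.get_jsonl, evals.metrics.get_accuracy) and the JSONL sample format are assumptions drawn from the repo's documented conventions and should be verified against the current code.

```python
# Rough sketch of a custom eval class, following the subclass pattern used in
# the openai/evals repo; the API names here are assumptions, check the repo docs.
import evals
import evals.metrics


class ArithmeticMatch(evals.Eval):
    def __init__(self, samples_jsonl: str, **kwargs):
        super().__init__(**kwargs)
        self.samples_jsonl = samples_jsonl

    def eval_sample(self, sample, rng):
        # Each sample is assumed to look like {"input": "...", "ideal": "..."}.
        result = self.completion_fn(prompt=sample["input"], max_tokens=32)
        sampled = result.get_completions()[0]
        evals.record_and_check_match(
            prompt=sample["input"],
            sampled=sampled,
            expected=sample["ideal"],
        )

    def run(self, recorder):
        samples = evals.get_jsonl(self.samples_jsonl)
        self.eval_all_samples(recorder, samples)
        # Aggregate the per-sample "match" events into a single accuracy metric.
        return {"accuracy": evals.metrics.get_accuracy(recorder.get_events("match"))}
```

A registry YAML entry would then point a named eval at this class and its samples file, and the eval can be run from the command line (the repo ships an `oaieval` runner for this), so the same custom logic can be applied to different models.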

It's all unpaid work, unfortunately. But to incentivize Evals usage, OpenAI plans to grant GPT-4 access to those who contribute "high-quality" benchmarks.

"We believe that Evals will be an integral part of the process for using and building on top of our models, and we welcome direct contributions, questions, and feedback," the company wrote.

With Evals, OpenAI -- which recently said it would stop using customer data to train its models by default -- is following in the footsteps of others who've turned to crowdsourcing to robustify AI models.

In 2017, the Computational Linguistics and Information Processing Laboratory at the University of Maryland launched a platform dubbed Break It, Build It, which let researchers submit models to users tasked with coming up with examples to defeat them. And Meta maintains a platform called Dynabench that has users "fool" models designed to analyze sentiment, answer questions, detect hate speech and more.