Release notes: 2.0.0

Version released on November 17, 2022

Major updates

  • The new aiXplain release offers improved system performance and stability, along with a brand-new, user-friendly, and intuitive design!
  • Dashboard — The aiXplain Dashboard is the go-to place for all assets from aiXplain tools.
  • Asset drawer — The asset drawer collects assets in one place, where they can be used with any tool on aiXplain.
  • Derivative data — Create derivative data through pipelines or Benchmarking reports and view data history.
  • Billing nested view — This new view provides a simpler, at-a-glance summary of the transactions made on aiXplain.
  • AutoMode — aiXplain’s AutoMode is an ensemble model that routes the input to the most optimal system according to the quality preference it is trained on (see the sketch after this list). The supported functions for AutoMode are Automatic Speech Recognition and Machine Translation.
  • Multi-input/output support — Design now supports connecting and running multiple input and output nodes in a single pipeline.
  • Subtitling node — The Subtitling node packs an entire subtitling system into a single node.
  • Decision node — The decision node allows members to route data in a pipeline based on configured conditions and values.
  • Benchmark for Diacritization — Benchmark now supports Arabic text diacritization.
  • Benchmark for text classification — Benchmark now supports text classification.
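
To make the routing idea behind AutoMode concrete, here is a minimal conceptual sketch: given several candidate systems and a quality predictor, the input is sent to the system with the highest predicted quality for the chosen preference. The system names, the lambdas, and the `predict_quality` scores are hypothetical stand-ins; this is an illustration of the routing concept, not aiXplain’s implementation.

```python
# Conceptual sketch only: the routing idea behind an ensemble router such as
# AutoMode. Systems, scores, and the quality predictor are hypothetical.
from typing import Callable, Dict


def route(
    text: str,
    systems: Dict[str, Callable[[str], str]],
    predict_quality: Callable[[str, str], float],
) -> str:
    """Send the input to the system with the highest predicted quality."""
    best_name = max(systems, key=lambda name: predict_quality(name, text))
    return systems[best_name](text)


# Hypothetical usage with two machine translation systems.
systems = {
    "mt_system_a": lambda text: f"[A] translation of: {text}",
    "mt_system_b": lambda text: f"[B] translation of: {text}",
}

# Stand-in quality predictor; in practice this is learned from training data.
predict_quality = lambda name, text: {"mt_system_a": 0.72, "mt_system_b": 0.81}[name]

print(route("Hello, world!", systems, predict_quality))  # routed to mt_system_b
```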

Minor updates

  • Categories in dataset creation — Members can now specify categories when creating a dataset on aiXplain; these categories will be used when calculating bias in Benchmark jobs.
  • Light/dark mode — aiXplain now supports light and dark modes!
  • Downloading results as CSV — Members who own the dataset license can now download the results of a Benchmark report as a CSV. While a Benchmark job is running, the results computed so far can also be downloaded as a CSV.
  • Zooming in benchmarking report plots — Members can now zoom in on benchmarking report plots.
  • Canceling and rerunning models in benchmarking — Members can now cancel or rerun a single model in benchmarking, which is useful when a model fails during a Benchmark job.
  • Data sampling — Members can now specify how the results of Benchmark jobs are displayed based on the data samples they’d like to use (see the first sketch after this list):
    • All segments — Shows the results of all the segments in a Benchmark job with a penalty score for failed segments.
    • Successful segments — Shows the results only based on the successful segments in a Benchmark job, disregarding failed segments in calculating the scores.
    • Intersecting successful segments — Shows the results only based on the common successful segments between all models.
  • Interquartile range multiplier — This configuration allows members to specify the multiplier that sets the length of the whiskers in box plots (see the second sketch after this list).
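
The three data-sampling modes can be illustrated with a small sketch. The per-segment scores, the penalty value, and the model names below are hypothetical; only the aggregation logic behind the three modes is shown.

```python
# Illustrative sketch of the three data-sampling modes for aggregating
# per-segment scores in a Benchmark job. All values are hypothetical.
from statistics import mean

# score per segment id, or None for a failed segment
results = {
    "model_a": {1: 0.90, 2: 0.80, 3: None, 4: 0.70},
    "model_b": {1: 0.85, 2: None, 3: 0.75, 4: 0.65},
}

PENALTY = 0.0  # assumed penalty score assigned to failed segments


def all_segments(scores):
    # every segment counts; failed segments receive the penalty score
    return mean(PENALTY if s is None else s for s in scores.values())


def successful_segments(scores):
    # failed segments are ignored when computing the score
    return mean(s for s in scores.values() if s is not None)


def intersecting_successful(results):
    # only segments that every model completed successfully are counted
    common = set.intersection(
        *(set(k for k, s in scores.items() if s is not None) for scores in results.values())
    )
    return {name: mean(scores[k] for k in common) for name, scores in results.items()}


print(all_segments(results["model_a"]))         # 0.60
print(successful_segments(results["model_a"]))  # 0.80
print(intersecting_successful(results))         # common segments: {1, 4}
```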
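
In the standard box-plot convention, whiskers extend to the most extreme data points within [Q1 − k·IQR, Q3 + k·IQR], and points outside that range are drawn as outliers. The sketch below assumes that convention and uses hypothetical scores with k = 1.5 (the common default) to show how the multiplier affects whisker length.

```python
# Illustrative sketch of how an interquartile-range (IQR) multiplier k sets
# box-plot whisker bounds. The data and the choice of k are hypothetical.
import numpy as np

scores = np.array([0.42, 0.55, 0.58, 0.61, 0.63, 0.66, 0.70, 0.74, 0.95])
k = 1.5  # the multiplier exposed by the new configuration; 1.5 is the common default

q1, q3 = np.percentile(scores, [25, 75])
iqr = q3 - q1
lower_bound, upper_bound = q1 - k * iqr, q3 + k * iqr

# whiskers stop at the most extreme data points still inside the bounds;
# anything outside is drawn as an outlier
lower_whisker = scores[scores >= lower_bound].min()
upper_whisker = scores[scores <= upper_bound].max()
outliers = scores[(scores < lower_bound) | (scores > upper_bound)]

print(lower_whisker, upper_whisker, outliers)  # 0.42 0.74 [0.95]
```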