Release notes

1.7.1

September 27, 2022

Major updates

  • AutoMode as an individual tool — Previously, AutoMode was accessible only through Discover and Benchmark. To give it more visibility, it is now its own experience on aiXplain, where users can train and deploy their custom AutoMode models.

Minor updates

  • Zooming in Benchmark report plots — You can now zoom in on the plots in a benchmarking report.
  • Downloading intermediate Benchmark results — While a benchmarking report is still running, you can now download the results computed so far as a CSV file.
  • Canceling and rerunning single models in Benchmark — You can now cancel or rerun a single model in Benchmark. This is useful when a model fails during benchmarking.
  • Updates to the performance table — The performance table now shows the completed and failed segments for each supplier.

Bug fixes

  • Allow upload of larger dataset files — Due to infrastructure limitations, dataset uploads would time out after 5 minutes, which caused failures when uploading large datasets to aiXplain. This is now fixed.
  • Fixed the colors of the confusion matrix in classification benchmarking to be more representative of a heatmap.
  • Fixed a scoring bug in the Diacritization Benchmark that occasionally displayed incorrect scores.

1.7.0

August 31, 2022

Major updates

  • Benchmarking for text classification models — We have expanded our benchmarking capabilities to support text classification models. The current supported functions are Sentiment Analysis and Offensive Language Identification.
  • Subtitling node — We added a new node, the subtitling node, to aiXplain’s designer.
  • AutoMode for ASR — Following AutoMode for MT, you can now create your own AutoMode ASR model from Discover.

Minor updates

  • Bring your own model’s output — Users can now upload their model’s output during dataset upload and compute that model’s score without the model being onboarded on aiXplain.
  • Bring your own bias categories — Similar to bringing your own model’s output, during dataset upload you can now identify categorical columns to use for bias analysis and topic classification in benchmarking.