Release notes

2.6.9

June 18, 2024

Major updates

  • Introducing Script node — This node allows you to upload Python scripts and integrate them into your pipelines, making them more versatile (a minimal sketch follows this list).
  • Launching aiXplainKit — Our Swift SDK, aiXplainKit, enables Swift programmers to add AI functions to their software with ease.
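
As an illustration of the kind of script a Script node could run, here is a minimal sketch. The exact entry-point contract expected by the Script node is not documented in these notes, so the `main(input_data)` signature and the dictionary keys below are assumptions for illustration only.

```python
# Minimal sketch of a script that could be uploaded to a Script node.
# NOTE: the entry-point name `main` and the input/output dictionary keys
# are assumptions; check the Script node documentation for the actual
# contract expected by your pipeline.

def main(input_data: dict) -> dict:
    """Uppercase the incoming text and report its length."""
    text = input_data.get("text", "")
    return {
        "text": text.upper(),
        "length": len(text),
    }

if __name__ == "__main__":
    # Local smoke test before uploading the script to a pipeline.
    print(main({"text": "hello aiXplain"}))
```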

Minor updates

  • Fixed ‘if’ logic implementation in Design.
  • Updated the model and pipeline API integration code templates.
  • Added 2-factor authentication announcement to Dashboard.

2.6.8

June 6, 2024

Minor updates

  • Added a dictionary input template to example code blocks in the API integration tab of Pipeline and Model assets (an illustrative sketch follows this list).
  • Added the ability to sign up or sign in using Apple ID.
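
For context, a dictionary-style request to a model endpoint might look like the sketch below. The endpoint URL, header name, and payload keys here are placeholders and assumptions; copy the exact template from the API integration tab of your Model or Pipeline asset.

```python
import requests

# Hypothetical values for illustration only; take the real endpoint URL,
# header name, and payload keys from the asset's API integration tab.
API_KEY = "YOUR_ACCESS_KEY"
URL = "https://<your-aixplain-endpoint>/execute/<model_id>"  # placeholder

payload = {
    "text": "Translate this sentence.",   # main input
    "parameters": {"temperature": 0.7},   # optional model parameters (assumed)
}

response = requests.post(URL, headers={"x-api-key": API_KEY}, json=payload)
response.raise_for_status()
print(response.json())
```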

SDK updates

SDK build 0.2.13

  • Updated asset cost parameters.

2.6.7

May 15, 2024

Major updates

  • Developed a centralized authorization system for aiXplain products and services. This authorization system offers enhanced security and scalability when signing in.

Minor updates

  • Added node force-fitting functionality to Design that auto-snaps nodes into the canvas grid.
  • Added parameter mapping display between the nodes in Design to provide more visibility into the data flow of pipelines.

SDK updates

SDK build 0.2.12

  • Fixed a bug that affected the support of text labels in datasets.

2.6.6

April 17, 2024

Minor updates

  • Added the functionality to view and download scripts in script nodes in Design.
  • Improved Discover filters to allow more flexibility.
  • Added the functionality to directly swap a node in Design.
  • Added “in” and “contains” logic options to decision nodes in Design.
  • Made various UX improvements in Design.

SDK updates

SDK build 0.2.11

  • Added pipeline saving/updating service to the SDK.
  • Added a new structure for the label data type.

2.6.5

March 28, 2024

Minor updates

  • Improved the structure of FineTune logs.
  • Added a suggestion notice to auto-switch teams when a private asset is accessed through a URL.
  • Made various Benchmark report UX improvements.

Bug fixes

  • Fixed metric import issues in Design.
  • Fixed automatic node labeling issues in Design.

2.6.4

March 5, 2024

Minor updates

  • Added smart search and suggestions in Discover that allow you to find the AI assets you’re looking for more easily.
  • Created a standardized input format for all LLMs to ensure seamless swapping between them (see the sketch after this list).
  • Added recommended functions for each metric in Benchmark.
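
To illustrate what a standardized input makes possible, the sketch below sends the same payload to two different LLMs through the aiXplain Python SDK. The `ModelFactory.get(...).run(...)` pattern follows the public SDK, but the model IDs are placeholders and the payload keys are assumptions for illustration.

```python
# Sketch: swapping LLMs without changing the request shape.
# Model IDs are placeholders; payload keys are assumptions based on the
# standardized LLM input format described above.
from aixplain.factories import ModelFactory

payload = {
    "text": "Summarize the plot of Hamlet in two sentences.",
    "max_tokens": 128,
    "temperature": 0.3,
}

for model_id in ["<llm-id-1>", "<llm-id-2>"]:  # placeholder IDs
    model = ModelFactory.get(model_id)
    result = model.run(payload)
    print(model_id, result)
```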

SDK updates

SDK build 0.2.10

  • Enabled API key parameter in data asset creation.
  • Created bounds for FineTune hyperparameters.

2.6.3

February 13, 2024

Minor updates

  • Added impact analysis plots to Benchmark reports that study the effect of certain features on the performance scores of models.
  • Added descriptions under plots for failure rate and bias analysis in Benchmark reports.
  • Updated the activity log section of the dashboard to display deleting and moving assets.
  • Added Solar LLM as one of the models supported for fine-tuning.

SDK updates

SDK build 0.2.9

  • Enabled the onboarding of LLMs directly from HuggingFace.

2.6.2

January 30, 2024

Minor updates

  • Added data characteristic analysis into Benchmark reports.
  • Added the ability to move models between teams the user is an owner of.
  • Added failure rate plot for models in Benchmark reports.
  • Onboarded Groq-hosted LLaMa-2 70B models.

Bug fixes

  • Fixed an issue with Bias Analysis plot failing to display results.

2.6.1

January 17, 2024

Minor updates

  • Added modality filters for AI assets in Discover marketplace.
  • Added topic classification analysis as a new feature in Benchmark.

SDK updates

SDK build 0.2.8

  • Added the ability to assign model version.
  • Added bounds for FineTune hyperparameters.
  • Updated LLM hyperparameters in FineTune.
  • Enabled Parameter-efficient fine-tuning as a default setting.

2.6.0

December 19, 2023

Major updates

  • Publishing Corpora — Added the ability to publish and sell onboarded Corpora on the marketplace.
  • Summary and data analysis in Benchmark — There is a new tab in Benchmark reports that provides a textual summary of the report. In addition, in-depth analyses are provided under each plot in the report.

Minor updates

  • Added a tokenizer to accurately provide price estimates for LLMs.
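
The idea behind token-based price estimation can be sketched as follows. aiXplain’s own tokenizer and rates are not specified here, so this example uses the open-source `tiktoken` tokenizer and a made-up per-token rate purely for illustration.

```python
# Rough sketch of token-based price estimation.
# `tiktoken` and the rate below stand in for aiXplain's actual tokenizer
# and pricing, which are not specified in these notes.
import tiktoken

PRICE_PER_1K_TOKENS = 0.002  # hypothetical rate

def estimate_cost(text: str) -> float:
    encoding = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(encoding.encode(text))
    return n_tokens / 1000 * PRICE_PER_1K_TOKENS

print(estimate_cost("How many tokens does this prompt cost?"))
```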

SDK updates

SDK build 0.2.7

  • Added the ability to delete assets through the SDK.
  • Added the ability to sort models by cost, popularity, and creation date.

2.5.4

November 28, 2023

Minor updates

  • Added token based pricing to text generation models in Discover.

SDK updates

SDK build 0.2.6

  • Added LLM fine-tuning — Prompt formatter, hyperparameter tuning, supervised fine-tuning, and parameter-efficient fine-tuning (see the sketch after this list).
  • Added learning rate scheduler and early stopping to FineTune LLMs.
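
A fine-tuning run with the SDK might look roughly like the sketch below. The `FinetuneFactory` flow shown here follows the public aiXplain Python SDK, but the exact argument names and the asset IDs are placeholders and should be verified against the SDK documentation.

```python
# Sketch of LLM fine-tuning through the SDK. Asset IDs are placeholders,
# and the argument names are assumptions to be verified against the SDK docs.
from aixplain.factories import DatasetFactory, FinetuneFactory, ModelFactory

model = ModelFactory.get("<llm-id>")            # placeholder LLM ID
dataset = DatasetFactory.get("<dataset-id>")    # placeholder dataset ID

finetune = FinetuneFactory.create(
    name="my-finetuned-llm",
    dataset_list=[dataset],
    model=model,
)
finetuned_model = finetune.start()              # launches the fine-tuning job
print(finetuned_model.check_finetune_status())  # poll job status
```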

2.5.3

November 14, 2023

Minor updates

  • Added new supplier filter to Discover.
  • Updated Bel Esprit behavior to populate pipelines with prompted text generation models instead of script nodes.

SDK updates

SDK build 0.2.5

  • Added the ability to upload files through the SDK.
  • Fixed a status display on model failure.

Improvements

  • Improved Bel Esprit’s messaging.

2.5.2

October 31, 2023

Minor updates

  • Updated Bel Esprit UI and added onboarding experience.
  • Added the ability to input prompts and context into LLMs in Design.
  • Added report functionality for assets inside asset card details.
  • Added the ability to create up to 5 teams on aiXplain and invite members to them.
  • Added model API performance information inside model card.
  • Added supported file types in model try out and compare.

2.5.1

October 18, 2023

Minor updates

  • Added the ability to contact a human specialist through Bel Esprit.
  • Added the ability to copy Bel Esprit session ID and export the chat log.
  • Added the ability to rename assets and edit their meta information.
  • Created pipeline drafts listing page with function and visibility filters.

SDK updates

SDK build 0.2.4

  • Added model image upload support to the SDK.

Bug fixes

  • Fixed an issue with the asset drawer where it would clear assets after being used.

2.5.0

September 21, 2023

Major updates

  • Bel Esprit — We are thrilled to introduce Bel Esprit, your personal AI solution architect. Converse with an AI chat agent in natural language to transform your ideas into deployable, production-ready AI solutions.
  • Pipeline drafts in Design — A new state for pipelines in Design has been added which allows the user to save unfinished Design pipelines as drafts. Users can load drafts and pick up their work where they left off. Drafts also work with auto-saving to ensure that no work gets lost.
  • Pipeline templates in Design — We are adding pipeline templates into Design. You can now save your pipelines as templates and reuse them, or load pre-existing templates to speed up your pipeline-building process.

Minor updates

  • Improved the documents page in the side panel and added several new articles.
  • Added the ability for users to sign up or sign in using their Google accounts.
  • Introduced conversation history in “Try it out” for LLMs.
  • Increased the limit for the number of access keys per team to 10.
  • Disabled the zoom button if the zoom level is already at maximum or minimum in Design.

SDK updates

  • Added data validations for the format constraints while uploading datasets.
  • Added the ability to filter models by AI function when listing them.
  • Fixed FineTune and data asset functional tests.
  • Fixed metric example displayed in the documentation.

2.4.2

September 4, 2023

Minor updates

  • Added the ability for the decision node in Design to handle multiple inputs and multiple outputs.
  • Improved the Model comparison feature to handle any function with multi-input support.
  • Added Models and Metrics display in Pipeline specification table.
  • Added the ability to edit owned files and Pipelines through the asset editor.
  • The “Try it out” feature now handles Models with the Search function.
  • Added an aiXplain Credit purchase interaction that’s triggered when the user’s balance is low.
  • Added data format requirements to Dataset and Corpus creation.

SDK updates

  • Made pipeline logs accessible to users via the SDK.

Bug fixes

  • Fixed an issue where the list of models was not updated when a model was unsubscribed.
  • Fixed an issue with handling label inputs in “Try it out”.

2.4.1

August 10, 2023

Major updates

  • FineTune for Search — We are thrilled to introduce the latest feature added to FineTune, FineTune for Search. This feature allows you to customize your own multi-modal search engine using your own data.
  • New Model and Pipeline tryout — We expanded the capabilities of “Try it out!” which is used to run and test models and pipelines on aiXplain. The new update takes a conversational approach and supports models and pipelines with multi-input, multi-output, and multi-modality.

Minor updates

  • Added the ability to download pipeline logs when you try out a pipeline.
  • Added a feedback form which is accessible from the left panel.
  • Added an aiXplain Credit purchase interaction for “Try it out!” when the user’s balance is low.
  • Added the ability to delete assets from asset cards.

SDK updates

  • Refactoring Benchmark on the SDK — We refactored the Benchmark SDK to make it consistent with the previously released FineTune SDK.
  • Onboard Dataset and Corpus from an S3 bucket — We added the ability to onboard datasets and corpora to aiXplain from public S3 buckets.

Bug fixes

  • Fixed an asset drawer compatibility issue that allowed collecting assets that were not yet onboarded.

2.4.0

July 24, 2023

Major updates

  • Metric changes — In this release, we wanted to make the evaluation metrics used in Benchmark more versatile. Metric cards are now more descriptive, with details such as the input, output, pricing, and supplier of each Metric. This also gives more visibility into the pricing of Benchmark jobs.
  • Text normalization setting in Benchmark — Benchmark now supports applying various text normalization settings. This feature comes with minor UI improvements to Benchmark job configuration.
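
For intuition, text normalization before scoring typically means transformations such as lowercasing and punctuation removal applied to hypotheses and references. The snippet below is a generic illustration of that idea, not aiXplain’s actual normalization pipeline.

```python
# Generic illustration of text normalization before metric scoring;
# aiXplain's actual normalization options may differ.
import string

def normalize(text: str, lowercase: bool = True, strip_punct: bool = True) -> str:
    if lowercase:
        text = text.lower()
    if strip_punct:
        text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())  # collapse extra whitespace

print(normalize("Hello, World!"))  # -> "hello world"
```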

Minor updates

  • Added the ability to use public URLs as valid data in Dataset and Corpus upload.
  • Added the ability to filter FineTune-compatible models in Discover.
  • Added tooltips for different types of nodes in Design when you hover over them.
  • Added double-click functionality in Design to re-align zoom and access node details.
  • Improved UI visuals of Design nodes with unified asset node types and dynamic font size.
  • Added autosave functionality for pipelines in Design.
  • Added a Dashboard guide that is visible the first time a member signs in.

SDK updates

  • SDK for FineTune — The SDK now supports fine-tuning models.

Bug fixes

  • Fixed an issue with data persistence that kept experience progress when switching teams.
  • Fixed an issue with handling third-party packages that impacted certain nodes in Design.

2.3.0

June 12, 2023

Major updates

  • Corpus and Dataset changes — In release 2.2.0, we introduced a new asset type, “Corpus”, which allowed structured data to be onboarded on aiXplain without having to allocate a function to it.
    • Added the ability to create Datasets from Corpora, or use them directly with your favorite tools like Benchmark and FineTune.
    • Improved Dataset creation process to allow for more options such as adding multiple references and handling errors.
    • Added Datasets and Corpora creation from files already uploaded to aiXplain instead of only supporting local files.
    • Added Dataset and Corpus logs which are available on the details page of the assets to show the history of these assets.
  • AutoMode training temporarily disabled — In light of the changes to datasets in aiXplain, our team is working hard to address their compatibility with AutoMode. This won’t affect the already trained AutoMode models. Expect an update that will address this soon.

Minor updates

  • Added more information on the Corpus details page that shows the number of columns/features and rows/segments in a Corpus.
  • Added credit expiration notice for Credits granted through phone verification.
  • Updated the colors in SHAP plots inside Benchmark reports.
  • Added more metadata categories in Benchmark reports.

Bug fixes

  • Fixed a bug that showed the “Compare” button in the asset drawer for assets that don’t support this function.
  • Fixed a bug that displayed the wrong labels in nodes inside Design.
  • Fixed an issue with certain models that failed when rerunning a Benchmark job.

2.2.0

April 26, 2023

Major updates

  • Discover changes — In preparation for bigger upcoming changes to some of our AI asset types, we updated the design of Discover to make finding the assets that you are looking for easier and more intuitive! The changes will also be reflected on your assets that are accessible from your Dashboard.
  • Introducing a new asset type: Corpus — In this release we added a new asset type that you can find, onboard, and use in aiXplain. A corpus is structured data that is not tied to a single AI function. What you can do with corpora is limited for the time being, but our next release will allow extracting multiple datasets for multiple AI functions from a single corpus. For example, you can onboard a corpus with columns for speech, transcript, translation, speaker age, speaker gender, and speaker ID, and use it to create datasets for Speech Recognition, Speaker Diarization, Speech Classification, and Translation.
  • Updates to API access keys — We previously added the ability for teams to generate access keys from the team settings. In this release, we removed the auto-generated default access key, which was specific to a particular model or pipeline for each team. This change allows creating and using different access keys for different endpoints, with the added security of letting members delete and recreate access keys as they see fit.

Minor updates

  • Translation metrics added in Design — We added all of the Translation metrics found in Benchmark to the metric node inside Design.
  • Improved Benchmark — We improved the way that scoring in Benchmark takes place. These changes will have a positive impact on Benchmark performance and stability.
  • Enhanced network security — We added improved encryption to the tokens sent out over the network.
  • User guide in Dashboard — Added a user guide in the Dashboard to help the members access the documents page and use the platform.

Bug fixes

  • Fixed a bug that made Benchmark reports appear stuck at 90% when they were complete.
  • Fixed a bug where members couldn’t find the “switch team” option after a team invitation was accepted.
  • Fixed a bug that prevented the phone number verification from appearing on the dashboard.
  • Fixed a bug in displaying metrics inside the metric node in Design that collapsed the scroll.
  • Fixed a bug where some function names were displayed incorrectly.

2.1.1

February 8, 2023

Minor updates

  • Phone verification — Go to your account settings and verify your phone number to get access to your free credits!
  • Renaming pipelines — Added the ability to rename pipelines after creating them.

Bug fixes

  • Fixed a bug where clicking on the canvas didn’t close dropdown menus in Design.
  • Fixed a bug where the back button wasn’t working as intended in some areas of the app.
  • Fixed a bug where you were redirected to the wrong URL after the payment method was updated.

2.1.0

January 12, 2023

Major updates

  • FineTune tool — We have launched the ability to fine-tune select models for MT and ASR. You can also view the model’s logs on its details page.
  • Segmentor node and constructor node — We added new nodes into Design! Using the segmentor node and constructor node, members now have the ability to customize the segmentation of the data flowing through their pipelines and reconstruct them back. This optimizes the use of Design for models where the input size matters.
  • Metric node — We added another new node in Design! Members can add benchmarking to their pipelines as a node which gives them the ability to evaluate the performance of their models while a pipeline is running.
  • Activity logs — We added a log for all your activities on aiXplain which allows you to filter them by product and by status. This will give members more control and insight over what activities are being done on their teams such as model fine-tuning and Benchmark reports.

Minor updates

  • Imputation of failed inputs — Now in Benchmark, members have the option to impute the scores of some failed segments to get a more complete representation of their data.
  • Added the number of segments to be displayed onto the asset card for datasets.
  • Added the ability to create API keys on aiXplain: In the top menu under team settings, members will be able to generate API keys which can then be used as a universal key to run all of their assets.
  • Removed phone number from registration.
  • Added icons next to AutoMode and FineTune models.
  • Added multiple notifications in platform for Benchmark, FineTune, and AutoMode.

Bug fixes

  • Fixed an issue that prevented saving pipelines unless decision nodes were configured.
  • Fixed a bug that showed a redundant message when not having enough credits on the platform.
  • Fixed a bug that allowed users to add multiple datasets in benchmarking when they shouldn’t.
  • Fixed a bug where submetrics showed a redundant metric name.
  • Fixed an issue with the IQR filter displaying inside the segments tab in a benchmark report.
  • Fixed a bug where the label of the decision node couldn’t be updated.
  • Improved error handling for model and pipeline tryout.

2.0.0

November 17, 2022

Major updates

  • The new aiXplain release offers improved system performance and stability with a brand new user-friendly and intuitive design!
  • Dashboard — The aiXplain Dashboard is the go-to place for all assets from aiXplain tools.
  • Asset drawer — The asset drawer allows the collection of assets in one place where they can be used with any tool on aiXplain.
  • Derivative data — Create derivative data through pipelines or Benchmarking reports and view data history.
  • Billing nested view — This new view allows for a simpler report of the transactions made on aiXplain at a glance.
  • AutoMode — aiXplain’s AutoMode is an ensemble model that routes the input to the most optimal system according to the quality preference it is trained on. The supported functions for AutoMode are Automatic Speech Recognition and Machine Translation.
  • Multi-input/output support — Design now supports connecting and running multiple input and output nodes in a single pipeline.
  • Subtitling node — The Subtitling node allows having an entire subtitling system in one node.
  • Decision node — The decision node allows members to set routing of the data in a pipeline based on set conditions and values.
  • Benchmark for Diacritization — Benchmark now supports Arabic text diacritization.
  • Benchmark for text classification — Benchmark now supports text classification.

Minor updates

  • Categories in dataset creation — Members can now specify categories when they are creating a dataset on aiXplain; these categories will be used when calculating bias in Benchmark jobs.
  • Light/dark mode — aiXplain now supports light and dark modes!
  • Downloading results as CSV — If a dataset license is owned, members can now download the results of a Benchmark report as a CSV. While a Benchmark job is running, you can download the results computed so far as a CSV.
  • Zooming in benchmarking report plots — Members can now zoom in on the benchmarking report plots.
  • Canceling and rerunning models in benchmarking — You can now cancel and/or rerun a single model in benchmarking. This is useful when a model fails during benchmarking.
  • Data sampling — Members can now specify how the results for Benchmark jobs can be displayed based on the data samples they’d like to use.
    • All segments — Shows the results of all the segments in a Benchmark job with a penalty score for failed segments.
    • Successful segments — Shows the results only based on the successful segments in a Benchmark job, disregarding failed segments in calculating the scores.
    • Intersecting successful segments — Shows the results only based on the common successful segments between all models.
  • Interquartile range multiplier — This configuration allows members to specify the multiplier that determines the length of the whiskers in box plots.
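
As a quick refresher on how the multiplier works: with IQR = Q3 − Q1, the whiskers extend at most multiplier × IQR beyond the quartiles, and 1.5 is the common default. The snippet below is a generic illustration of that rule, not aiXplain’s internal plotting code.

```python
# Illustration of the interquartile-range (IQR) whisker rule used in box plots.
import numpy as np

scores = np.array([12, 15, 14, 10, 18, 22, 19, 55])  # 55 is an outlier
q1, q3 = np.percentile(scores, [25, 75])
iqr = q3 - q1
multiplier = 1.5  # the configurable IQR multiplier

lower, upper = q1 - multiplier * iqr, q3 + multiplier * iqr
print(f"whisker bounds: [{lower:.1f}, {upper:.1f}]")
print("outliers:", scores[(scores < lower) | (scores > upper)])
```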

1.7.1

September 27, 2022

Major updates

  • AutoMode as an individual tool — Previously, AutoMode was accessible through Discover and Benchmark. To shed more light on the product, it is now its own experience on aiXplain where users can train and deploy their custom AutoMode models!

Minor updates

  • Zooming in Benchmark report plots — You can now zoom in on the Benchmark report plots.
  • Downloading intermediate Benchmark results — While a benchmarking report is running, you can now download the results computed so far as a CSV.
  • Canceling and rerunning single models in Benchmark — You can now cancel and/or rerun a single model in Benchmark. This is useful when a model fails during benchmarking.
  • Updates to performance table — The performance table now shows the completed and failed segments for each supplier.

Bug fixes

  • Allow dataset upload of larger files — Due to infrastructure limitations, dataset uploads would time out after 5 minutes, which caused issues when uploading large datasets to aiXplain. This is now fixed.
  • Fixed the colors of the confusion matrix in classification benchmarking to be more representative of a heatmap.
  • Fixed a bug with scoring for Diacritization Benchmark which displayed incorrect scores at times.

1.7.0

August 31, 2022

Major updates

  • Benchmarking for text classification models — We have expanded our benchmarking capabilities to support text classification models. The current supported functions are Sentiment Analysis and Offensive Language Identification.
  • Subtitling node — We added a new node in aiXplain’s designer, the subtitling node.
  • AutoMode for ASR — Following AutoMode for MT, you can now create your own AutoMode ASR model from Discover.

Minor updates

  • Bring your own model’s output — We have added the ability for users to upload their model’s output during dataset upload and compute that model’s score without the model being onboarded on aiXplain.
  • Bring your own bias categories — Similar to bringing your own model’s output, during dataset upload you can identify categorical columns that you can use for bias analysis and topic classification in benchmarking.