Aim v2.7.1 is out! We are on a journey to democratize AI dev tools. Thanks to the community for continuous support and feedback!

Check out the updated Aim demo at play.aimstack.io.

Below are the two main highlights of Aim v2.7.1.

Aim Table CSV export

Now you can export both tables, from Explore and Dashboard, to a CSV file. You can then import the exported CSV into a spreadsheet for reports, or load it in a notebook and explore it further (e.g. with Pandas).

Exporting CSV from the Explore table
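As a quick sketch of the notebook workflow (the column names and values below are hypothetical stand-ins, not Aim's actual export schema):

```python
import io

import pandas as pd

# A tiny stand-in for a CSV exported from the Explore table
# (columns and values here are made up for illustration).
csv_text = """run,learning_rate,val_accuracy
run-1,0.01,0.81
run-2,0.01,0.84
run-3,0.001,0.89
"""

df = pd.read_csv(io.StringIO(csv_text))

# Typical follow-up: compare the best metric per hyperparameter value.
best = df.groupby("learning_rate")["val_accuracy"].max()
print(best)
```

With a real export you would pass the file path to `pd.read_csv` instead of the inline string.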

Bookmark/Save the Explore State [experimental]

This new feature lets you save/bookmark the Explore state for future access. The bookmarking ability…


Aim 2.6.0 is out! We are on a journey to democratize AI dev tools and are thankful to the community for continuous feedback.

Check out the updated Aim demo at play.aimstack.io.

For this release, there are two major updates, both highly requested by the Aim community. Special thanks to jaekyeom, siddk and vopani for raising and pushing for these improvements. See the full list of changes here and here.

Docker requirement removed

Back when we were designing the first version of Aim, we were excited about packaging Aim in a Docker container and had spent a considerable amount of time building a…


Aim 2.4.0 is out! Thanks to the community for feedback and support on our journey towards democratizing AI dev tools.

Check out the updated Aim at play.aimstack.io.

For this release, there have been lots of performance updates and small tweaks to the UI. Below are the two highlight features of Aim 2.4.0. Please see the full list of changes here.

XGBoost Integration

aim.callback is now available: it exports AimCallback, which can be passed to xgb.train as a callback to log your experiments.

Check out this blogpost for more details on how to integrate Aim into your XGBoost code.
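The general shape of the callback pattern that xgb.train uses can be sketched without either library: a callback object is invoked after every boosting round and records the current metric, much as AimCallback tracks it to an Aim run. Everything below (MetricLogger, the toy train loop) is a hypothetical mimic, not Aim's or XGBoost's real API:

```python
class MetricLogger:
    """Hypothetical mimic of AimCallback: records (iteration, metric) pairs."""
    def __init__(self):
        self.history = []

    def after_iteration(self, iteration, metric):
        self.history.append((iteration, metric))


def train(num_rounds, callbacks):
    """Toy stand-in for xgb.train that fires callbacks after each round."""
    loss = 1.0
    for i in range(num_rounds):
        loss *= 0.5  # pretend the model improves every round
        for cb in callbacks:
            cb.after_iteration(i, loss)
    return loss


logger = MetricLogger()
final_loss = train(num_rounds=3, callbacks=[logger])
```

With the real libraries, the equivalent call is passing the Aim callback in xgb.train's callbacks list, as described in the blogpost.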

Confidence Interval as the aggregation method

Now you can use…
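As a sketch of what aggregating by a confidence interval means: given the same metric tracked across several runs, the aggregate line is the per-step mean with a band of mean ± z · standard error. A 95% normal-approximation interval and made-up metric values are assumed here:

```python
import statistics

# Hypothetical values of one metric at a single training step, across five runs.
values = [0.78, 0.82, 0.80, 0.84, 0.76]

mean = statistics.mean(values)
sem = statistics.stdev(values) / len(values) ** 0.5  # standard error of the mean
z = 1.96                                             # ~95% normal interval
lower, upper = mean - z * sem, mean + z * sem
print(f"{mean:.3f} [{lower:.3f}, {upper:.3f}]")
```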


Aim 2.3.0 is out! Thanks to the community for feedback and support on our journey towards democratizing AI dev tools.

Check out the updated Aim at play.aimstack.io.

Below are the highlight features of the update. Find the full list of changes here.

System Resource Usage

Now you get automatic tracking of your system resources (GPU, CPU, memory, etc.). To disable system tracking, just initialize the session with system_tracking_interval=0.
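The interval-based design can be sketched in plain Python: a sampler polls a stats function every `interval` seconds, and an interval of 0 disables tracking entirely, just like system_tracking_interval=0. This is a hypothetical mimic, not Aim's implementation:

```python
class SystemTracker:
    """Hypothetical mimic of interval-based resource tracking."""
    def __init__(self, interval, read_stats):
        self.interval = interval        # 0 means tracking is disabled
        self.read_stats = read_stats    # callable returning current stats
        self.samples = []
        self._last = None

    def tick(self, now):
        """Called periodically from the training loop; samples only when due."""
        if self.interval == 0:
            return  # tracking disabled
        if self._last is None or now - self._last >= self.interval:
            self.samples.append(self.read_stats())
            self._last = now


def fake_stats():
    return {"cpu": 12.5}  # stand-in for a real CPU/GPU/memory reading


on_tracker = SystemTracker(interval=10, read_stats=fake_stats)
off_tracker = SystemTracker(interval=0, read_stats=fake_stats)
for t in range(0, 35, 5):  # simulated clock: 0, 5, ..., 30 seconds
    on_tracker.tick(t)
    off_tracker.tick(t)
```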

Once you run the training, you will see the following. Here is a demo:

Aim automatic system resource tracking demo

Of course you can also query all those new metrics in the Explore page and compare, group, aggregate…


Aim 2.2.0 is out! Thanks to the incredible Aim community for the feedback and support on building democratized open-source AI dev tools.

Thanks to siddk, TommasoBendinelli and Khazhak for their contribution to this release.

Note on Aim versioning: the previous two release posts, Aim 1.3.5 and Aim 1.3.8, used the AimUI version number, as those releases contained only UI changes. From now on we will stick to the Aim version regardless of the type of changes, to avoid any confusion. Check out the Aim CHANGELOG.

We have also added milestones for each version. …


What is a random seed and how is it important?

The random seed is a number used to initialize the pseudorandom number generator, and it can have a huge impact on training results. Pseudorandom numbers are used in ML in several different ways. Here are a few examples:

  • Initial weights of the model. When a model is not fully pre-trained, the most common approach is to initialize its weights randomly.
  • Dropout. Dropout is a common regularization technique that randomly disables parts of the model during training and restores them during evaluation.
  • Augmentation. Augmentation is a well-known technique, especially for semi-supervised problems…
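The effect of the seed is easy to see with Python's built-in generator: the same seed reproduces the same "random" draws, which is exactly what makes seeded experiments repeatable, while a different seed (almost surely) yields a different sequence:

```python
import random


def draw(seed, n=3):
    """Draw n pseudorandom integers from a generator initialized with seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]


same_a = draw(seed=42)
same_b = draw(seed=42)    # identical seed -> identical sequence
different = draw(seed=7)  # different seed -> (almost surely) different draws
```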


Aim 1.3.8 is now available. Thanks to the incredible Aim community for the feedback and support on building democratized open-source AI dev tools.

Check out the new features at play.aimstack.io.

Here are the more notable changes:

Enhanced Context Table

The context table did not use the screen real estate effectively: an empty line per grouping and inefficient column management (imagine dealing with 200 columns).

So we have made three changes to tackle this problem:

New table groups view

Below is the new modified look of the table. Here is what’s changed:

  • The empty line per group is removed
  • In addition to the group details popover…


Researchers divide datasets into three subsets — train, validation and test so they can test their model performance at different levels.

The model is trained on the train subset, and metrics such as loss and accuracy are collected to evaluate how well the training is going.

The validation and test sets are used to test the model on additional, unseen data to verify how well it generalises.

Models are usually run on the validation subset after each epoch.

Once the training is done, models are tested on the test subset to verify the final performance and generalisation.
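A common way to produce the three subsets is a seeded shuffle followed by slicing; the 80/10/10 split below is illustrative, not a prescription:

```python
import random


def split_dataset(items, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle deterministically, then slice into train/val/test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)  # seeded, so the split is reproducible
    n = len(items)
    n_val = int(n * val_frac)
    n_test = int(n * test_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test


train, val, test = split_dataset(range(100))
```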

There…


Aim v1.3.5 is now available. Thanks to the incredible Aim community for the feedback and support on building democratized open source AI dev tools.

Check out the new features at play.aimstack.io.

Here are the highlights of the features:

Activity View on the Dashboard

The Activity View shows the daily experiment runs. With one click you can search each day’s runs and explore them straight away.

The Statistics view displays the overall counts of Experiments and Runs.

The new dashboard in action!

X-axis alignment by epoch

X-axis alignment adds another layer of superpower for metric comparison. If you have tracked metrics at different time frequencies (e.g. …


In this article I discuss Aim — an open source AI experiment comparison tool that can deal with 100s of training runs easily.

Disclaimer: I am a co-creator of Aim.

I see metrics, metrics everywhere! (image by author)

AI researchers are taking on increasingly ambitious problems. As a result, demand for AI compute and data multiplies month over month.

With more compute resources and more data available, AI engineers run not only longer experiments but also many more of them than they used to.

Usually, AI research starts with setting up the data pipeline (or several versions of them) followed by an initial group of experiments to test the pipeline, architecture and detect basic bugs. Once that’s established, the rounds of experiments begin!

Then folks play with different moving pieces (datasets, pipeline, hyperparameters, architecture…

Gev Sogomonian

Aim co-creator. Co-founder and CEO of AimHub. Previously Altocloud (acquired by Genesys). Runner, swimmer.
