Aim 2.3.0 is out! Thanks to the community for feedback and support on our journey towards democratizing AI dev tools.

Check out the updated Aim at play.aimstack.io.

Below are the highlight features of the update. Find the full list of changes here.

System Resource Usage

Aim now automatically tracks your system resources (GPU, CPU, memory, etc.). To disable system tracking, initialize the session with system_tracking_interval=0.
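A minimal sketch of how the session could be configured, assuming the aim.Session API from the Aim SDK (the experiment name here is hypothetical):

```python
import aim

# Hypothetical experiment name; system_tracking_interval=0 disables
# automatic GPU/CPU/memory tracking for this session.
session = aim.Session(experiment='my-experiment',
                      system_tracking_interval=0)
```

Leaving the parameter at its default keeps automatic system resource tracking on.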

Once you run the training, the system metrics will appear automatically. Here is a demo:

Of course, you can also query all these new metrics on the Explore page and compare, group, aggregate…


Aim 2.2.0 is out! Thanks to the incredible Aim community for the feedback and support on building democratized open-source AI dev tools.

Thanks to siddk, TommasoBendinelli and Khazhak for their contribution to this release.

A note on Aim versioning: the previous two release posts, Aim 1.3.5 and Aim 1.3.8, used the version number of AimUI, as those releases contained only UI changes. From now on we will stick to Aim versions regardless of the type of changes, to avoid confusion. Check out the Aim CHANGELOG.

We have also added milestones for each version. …


What is a random seed and how is it important?

The random seed is a number that’s used to initialize the pseudorandom number generator. It can have a huge impact on the training results. There are different ways that the pseudorandom number generator can be used in ML. Here are a few examples:

  • Initial weights of the model. When a model is not fully pre-trained, the most common approach is to initialize the remaining weights randomly.
  • Dropout. Dropout is a common regularization technique that randomly disables a subset of units during training and uses the full network during evaluation.
  • Augmentation. Augmentation is a well-known technique, especially for semi-supervised problems…
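The effect of the seed can be sketched with plain Python: the helper below is hypothetical, but it shows how fixing the seed makes randomly drawn "initial weights" reproducible, while a different seed yields different values.

```python
import random

def sample_weights(seed, n=5):
    # A dedicated generator, seeded explicitly, so results are reproducible.
    rng = random.Random(seed)
    return [rng.uniform(-0.1, 0.1) for _ in range(n)]

# Same seed -> identical draws; different seed -> different draws.
a = sample_weights(42)
b = sample_weights(42)
c = sample_weights(7)
```

The same principle applies to dropout masks and augmentation choices: fix the seed and the whole sequence of random decisions becomes repeatable.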


Aim 1.3.8 is now available. Thanks to the incredible Aim community for the feedback and support on building democratized open-source AI dev tools.

Check out the new features at play.aimstack.io.

Here are the more notable changes:

Enhanced Context Table

The context table didn't use screen real estate effectively: an empty line per grouping, inefficient column management (imagine dealing with 200 columns).

So we have made three changes to tackle this problem.

New table groups view

Below is the new modified look of the table. Here is what’s changed:

  • The empty line per group is removed
  • In addition to the group details popover…


Researchers divide datasets into three subsets — train, validation and test so they can test their model performance at different levels.

The model is trained on the train subset and subsequent metrics are collected to evaluate how well the training is going. Loss, accuracy and other metrics are computed.

The validation and test sets are used to test the model on additional unseen data to verify how well it generalises.

Models are usually run on the validation subset after each epoch.

Once the training is done, models are tested on the test subset to verify the final performance and generalisation.
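The split described above can be sketched in plain Python; the function name and the 80/10/10 proportions are illustrative assumptions, not a fixed rule.

```python
import random

def split_dataset(data, seed=0, train=0.8, val=0.1):
    # Shuffle a copy so the split is random but reproducible via the seed.
    items = list(data)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],                  # train: fit the model
            items[n_train:n_train + n_val],   # validation: check after each epoch
            items[n_train + n_val:])          # test: final evaluation only

train_set, val_set, test_set = split_dataset(range(100))
```

Keeping the test subset untouched until the very end is what makes the final generalisation estimate trustworthy.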

There…


Aim v1.3.5 is now available. Thanks to the incredible Aim community for the feedback and support on building democratized open source AI dev tools.

Check out the new features at play.aimstack.io

Here are the highlights of the features:

Activity View on the Dashboard

The Activity View shows the daily experiment runs. With one click you can search each day’s runs and explore them straight away.

Statistics displays the overall count of Experiments and Runs.

X-axis alignment by epoch

X-axis alignment adds another layer of superpower for metric comparison. If you have tracked metrics in different time-frequencies (e.g. …


In this article I discuss Aim, an open source AI experiment comparison tool that can easily handle hundreds of training runs.

Disclaimer: I am co-creator of Aim.

AI researchers take on increasingly ambitious problems. As a result, demand for AI compute and data multiplies month over month.

With more compute resources and more data available, AI engineers run not only longer experiments but also a lot more of them than they used to.

Usually, AI research starts with setting up the data pipeline (or several versions of them) followed by an initial group of experiments to test the pipeline, architecture and detect basic bugs. Once that’s established, the rounds of experiments begin!

Then folks play with different moving pieces (datasets, pipeline, hyperparameters, architecture…


Can tools for AI be as good as they are for the rest of software engineering?

AI is eating software.

Back in 2015, AI grabbed the attention of builders, investors and companies. Deep learning is now mainstream because it is a super efficient way to solve many types of problems where we used to hand-code rules and features.

But AI tools are broken.

For the rest of software, we engineers have always built awesome tools for ourselves, like IDEs, version control, packaging, containers and monitoring. Launching a production-strength website, lib or app has never been easier.

But for AI the development cycle is very long — a preprocessing bug may…


Most ideas cannot be shared effectively straight away. It takes time and mental processing before the first contact with other minds is made. Even the simplest one-sentence ideas get filtered (subconsciously). It doesn't get easier when we think about multi-dimensional, multi-layered concepts. How do we transfer complex ideas effectively?

Our thoughts and ideas do seem complex, but there are hidden regularities waiting to be discovered if we pay enough attention. The stream of consciousness is less continuous and more inherently modular than it appears, so each module/component can be processed, considered and communicated separately. This approach is used extensively…


Every company is a data company, but not many treat data as a culture. Data collection in modern companies (and the modern world) is not institutionalised enough. In the episode "How to Love Criticism" of the "WorkLife" podcast, one of the subjects, asked why he likes transparent criticism, answers:

“It’s just data. It’s just data, objective data about what I’m like. I would rather know how bad the bad is and how good the good is so I can do something with it”.

This is a great demonstration of a culture that systematically collects data — in…

Gev Sogomonian

Aim co-creator. Co-founder and CEO of AimHub. Previously Altocloud (acquired by Genesys). Runner, swimmer.
