
Outputs Of AI Models Can Help Improve Other AI Models: Paper

All these years, researchers have been feeding their AI models human-generated data to improve their performance. It now turns out these models might not need human-generated data at all.

The outputs of AI models are helping improve other AI models, a new paper suggests. Titled “Synthetic Data from Diffusion Models Improves ImageNet Classification”, the paper is presented by researchers from the Google Brain team. “Deep generative models are becoming increasingly powerful, now generating diverse high fidelity photo-realistic samples given text prompts. Have they reached the point where models of natural images can be used for generative data augmentation, helping to improve challenging discriminative tasks?” the paper asks.

And it discovers that they have. “We show that large-scale text-to-image diffusion models can be fine-tuned to produce class conditional models with SOTA FID (1.76 at 256×256 resolution) and Inception Score (239 at 256×256). The model also yields a new SOTA in Classification Accuracy Scores (64.96 for 256×256 generative samples, improving to 69.24 for 1024×1024 samples). Augmenting the ImageNet training set with samples from the resulting models yields significant improvements in ImageNet classification accuracy over strong ResNet and Vision Transformer baselines,” the paper says.
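
The model the paper fine-tunes is not publicly available, but the basic idea of sampling class-conditional images from a text-to-image diffusion model can be sketched with an off-the-shelf pipeline. The snippet below is a rough illustration rather than the paper's method: the checkpoint name, class list, and output paths are assumptions made for the example.

```python
import os
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available text-to-image diffusion pipeline.
# (The paper fine-tunes its own large-scale model; this checkpoint is
# only a stand-in for illustration.)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A few ImageNet class names, used here as simple text prompts.
class_names = ["goldfish", "tabby cat", "school bus"]

for label in class_names:
    out_dir = os.path.join("synthetic", label.replace(" ", "_"))
    os.makedirs(out_dir, exist_ok=True)
    # Generate a handful of samples per class; the paper samples far more,
    # from a model fine-tuned to be class-conditional, at 256x256 and 1024x1024.
    images = pipe(f"a photo of a {label}", num_images_per_prompt=4).images
    for i, img in enumerate(images):
        img.save(os.path.join(out_dir, f"{i}.png"))
```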

What this means in simple terms is that images produced by an AI model, such as Stable Diffusion, can help improve the performance of other AI models. AI models are usually trained on large amounts of human-generated data: models like GPT-3 and Stable Diffusion are trained on human-created text and images respectively. But the researchers found that AI-generated images can also be mixed into the training data of image classifiers to improve their performance.
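
To make that concrete, here is a minimal sketch of what such generative data augmentation might look like in practice: a classifier trained on a pool of real and synthetic images. It assumes the images are laid out in class-named folders ("real/<class>/..." and "synthetic/<class>/...", with matching folder names); the paths, model choice, and hyperparameters are illustrative and not the paper's actual setup.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

# Basic preprocessing shared by real and synthetic images.
tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Both folders must contain the same class subdirectories so that
# ImageFolder assigns matching label indices.
real = datasets.ImageFolder("real", transform=tfm)            # human-collected images
synthetic = datasets.ImageFolder("synthetic", transform=tfm)  # model-generated images

# The augmentation itself: real and synthetic samples are simply pooled
# into one training set.
train_loader = DataLoader(ConcatDataset([real, synthetic]),
                          batch_size=64, shuffle=True)

model = models.resnet50(num_classes=len(real.classes)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:
    logits = model(images.cuda())
    loss = criterion(logits, labels.cuda())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```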

Now this might not sound like much, but the implications of this discovery are profound. If AI models can create useful training data for other AI models, those models could improve very rapidly: models today are often constrained by the availability of human-generated training data, but computers can generate virtually unlimited amounts of synthetic data, which could in principle be used to keep refining models over and over.

More interestingly, training models on data generated by other models could take humans out of the equation entirely: models could simply use the outputs of other models to improve themselves. These are still early days, but this discovery opens up a whole new paradigm for how the AI models of the future could evolve and improve.