Sunday, November 29, 2020

EfficientNet: New Methods for Scaling Convolutional Neural Networks

Kamal Saini
Kamal S. has been a journalist and writer for Business, Hardware and Gadgets at Revyuh.com since 2018. He covers B2B, funding, blockchain, law, IT security, privacy, surveillance, digital self-defense and network policy. As part of his studies of political science, sociology and law, he researched the impact of technology on human coexistence. Email: kamal (at) revyuh (dot) com

Starting from an initially simple convolutional neural network (CNN), the accuracy and efficiency of a model can usually be increased step by step by arbitrarily scaling network dimensions such as width, depth, and resolution.

Increasing the number of layers or training the models on higher-resolution images, however, usually involves considerable manual effort. With EfficientNet, researchers on the Google Brain AI team now present a new scaling approach based on a fixed set of scaling coefficients and on advances in AutoML.

EfficientNet comprises a family of new models that, according to Google, promise high accuracy with optimized efficiency (smaller and faster). The base model, EfficientNet-B0, was newly developed with the AutoML MNAS framework and uses the mobile inverted bottleneck convolution (MBConv) architecture, comparable to MobileNetV2 and MnasNet. The simple structure of this network is what makes generalized scaling according to the new approach possible.
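As a rough illustration of the MBConv building block mentioned above, the following sketch lists the layer sequence of one block (expand, depthwise convolution, project), as described in the MobileNetV2 paper. The function and parameter names are illustrative assumptions, not Google's actual implementation:

```python
# Structural sketch of a mobile inverted bottleneck (MBConv) block as
# used in MobileNetV2 / EfficientNet-B0. Illustrative only: the helper
# returns (layer name, channel shape) pairs rather than real layers.

def mbconv_block(in_channels, out_channels, expansion=6, kernel_size=3):
    """Return the layer sequence of one MBConv block as (name, shape) pairs."""
    expanded = in_channels * expansion
    return [
        # 1x1 "expand" convolution widens the channel dimension.
        ("conv1x1_expand", (in_channels, expanded)),
        # Depthwise convolution filters each channel separately,
        # which keeps the wide middle of the block cheap.
        ("depthwise_conv", (expanded, expanded, kernel_size)),
        # 1x1 "project" convolution narrows back to the output width.
        ("conv1x1_project", (expanded, out_channels)),
    ]

# Example: a block taking 16 channels to 24, expanded 6x in the middle.
layers = mbconv_block(16, 24)
```

The inverted-bottleneck name comes from this shape: the block is narrow at its input and output and wide in the middle, the opposite of a classic ResNet bottleneck.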

Instead of optimizing individual network dimensions independently of each other, the Google researchers look for a balanced scaling across all network dimensions. The optimization starts with a grid search that determines the dependencies between the different dimensions under a fixed resource constraint, such as a doubling of the FLOPS. The scaling coefficients obtained in this way are then applied to the network dimensions to scale the base network up to the desired target model size or compute budget.
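This compound scaling can be sketched in a few lines: a single coefficient phi scales depth, width, and resolution together. The alpha, beta, and gamma values below are the ones reported in the EfficientNet paper for the B0 base network; the function itself is a minimal sketch, not Google's code:

```python
# Compound scaling sketch: depth, width, and resolution are scaled
# jointly by a single compound coefficient phi. The base multipliers
# are the values reported in the EfficientNet paper, found by the
# grid search under the constraint alpha * beta**2 * gamma**2 ~ 2,
# so each unit of phi roughly doubles FLOPS.

ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution multipliers

def compound_scale(phi, base_depth=1.0, base_width=1.0, base_resolution=224):
    """Scale network dimensions by the compound coefficient phi."""
    depth = base_depth * ALPHA ** phi          # layer-count multiplier
    width = base_width * BETA ** phi           # channel-count multiplier
    resolution = round(base_resolution * GAMMA ** phi)  # input image size
    return depth, width, resolution

# Example: phi = 1 roughly doubles FLOPS relative to the base network.
depth, width, resolution = compound_scale(phi=1)
```

With alpha = 1.2, beta = 1.1 and gamma = 1.15, the FLOPS factor per unit of phi is 1.2 * 1.1**2 * 1.15**2, or about 1.92, i.e. close to the targeted doubling.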

A detailed description of the new scaling approach behind EfficientNet can be found in the Google AI blog post and in the ICML 2019 paper “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”. Google provides the source code, including the TPU training scripts, as open source on the project's GitHub page.

