Google’s latest smartphone, the Pixel 4, leans heavily into science, and it offers a preview of tools that experts from different fields may be using in the very near future. Here are some examples.
The Soli Project essentially brings radar sensors to a phone. It is the first step toward eliminating the need for touch screens, and the starting gun for our body to become the only interface required. How does it work? Thanks to these sensors, the Pixel 4 “feels” when we lift it and immediately activates the camera for the facial recognition that unlocks it. A hand movement over the screen is enough to turn off the alarm, a “pinch” of the fingers to zoom, or a pass of the back of the hand to dismiss a call. And this is only the first step. The idea is to build these sensors into screens first, and later into any device. Scientists could then “write” in the air and have the text appear directly on the screen, rehearse surgical procedures remotely, build molecules digitally, and so on. Then there is Soli’s inclusive side: gestures can replace the on-screen keyboard entirely; it is enough to program movements for different tasks, something that will greatly ease life for people with visual impairments.
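The idea of programming movements for different tasks amounts to a mapping from recognized gestures to device actions. A minimal sketch of that dispatch logic, assuming hypothetical gesture and action names (Google’s actual Motion Sense interface is not public):

```python
# Hypothetical sketch: dispatching Soli-style gesture events to actions.
# Gesture and action names here are illustrative assumptions, not Google's API.

GESTURE_ACTIONS = {
    "reach": "wake_screen",         # phone "feels" the approaching hand
    "swipe": "dismiss_alarm",       # wave over the screen to silence an alarm
    "pinch": "zoom",                # pinch fingers in the air to zoom
    "backhand_pass": "decline_call" # pass the back of the hand to dismiss a call
}

def handle_gesture(gesture: str) -> str:
    """Return the action mapped to a recognized gesture, or do nothing."""
    return GESTURE_ACTIONS.get(gesture, "no_op")

print(handle_gesture("pinch"))  # zoom
print(handle_gesture("tap"))    # no_op (unmapped gestures are ignored)
```

Accessibility support would be a matter of editing this table: any task can be bound to any gesture, with no keyboard involved.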
In astronomy, the Pixel 4 also brings very interesting innovations. Night mode, improved over the previous version, lets you capture clearer images of the night sky despite being a phone. Phones in general struggle to capture stars and other dimly lit objects, all the more so when they are this far away: good exposures require a lot of light and long exposure times.
The process for taking these shots is quite simple, according to Google. First, make sure you are away from any illuminated area. Use a tripod (or some firm support) and set up a four-minute exposure. During this period the phone automatically takes 15 exposures, taking the movement of the stars into account. From here the science begins. The Pixel 4 uses machine learning for white balance (preventing dark areas from being too dark and bright areas from being too bright; this allows it, for example, to recognize snow whether it is lit by the red of a sunset or the yellow of midday) and something called “semantic segmentation,” which can selectively darken the sky in Night Sight. All this produces much more detailed long-exposure images in which the celestial objects do not appear smeared. However, for now, it cannot handle the contrast between the full moon and a moonlit landscape in the same shot. The full moon is about half a million times brighter than the landscape, and no consumer camera can photograph such an extreme brightness contrast in one shot … yet.
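The core trick behind those 15 exposures is align-and-average: short frames are shifted to undo the stars’ drift and then averaged, so stars stay sharp while sensor noise cancels out. A toy sketch on 1-D “images” with known integer shifts (the real pipeline estimates shifts itself and adds the learned white balance and segmentation steps described above):

```python
# Minimal sketch of exposure stacking, the idea behind Night Sight's
# astrophotography mode: align many short exposures, then average them.
# Assumes the per-frame drift (shift) is already known; real alignment
# must estimate it from the images.

def stack_exposures(frames, shifts):
    """Undo each frame's drift by a circular shift, then average pixel-wise."""
    aligned = []
    for frame, shift in zip(frames, shifts):
        aligned.append(frame[shift:] + frame[:shift])  # shift the star back
    n = len(aligned)
    return [sum(px) / n for px in zip(*aligned)]

# A bright "star" at index 2 drifts one pixel per frame (noise-free demo).
frames = [
    [0, 0, 9, 0, 0],
    [0, 0, 0, 9, 0],
    [0, 0, 0, 0, 9],
]
print(stack_exposures(frames, [0, 1, 2]))  # [0.0, 0.0, 9.0, 0.0, 0.0]
```

With real noisy frames, the star would stay at full brightness while the random noise in each frame averages toward zero, which is why many short exposures beat one long, smeared one.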
Another interesting feature is the recorder. It is an application, simply called Recorder, that can transcribe recordings. What sets it apart from similar apps like Otter is that it does so in real time and without an internet connection. This is possible thanks to a new language-processing system that combines AI and speech recognition and even identifies sounds such as applause or music. For now, the application is only available in English. Why is this interesting for science? Google offers Translate in its search engine. In short, what is a recorder today will also become a “sender” of content that can be read, transcribed and translated at the same time, and offline. This will greatly facilitate the exchange of knowledge in the scientific community.
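Real-time, offline transcription implies a streaming pipeline: audio arrives in small chunks and each chunk is fed to an on-device model immediately, so text appears while the recording continues. A hedged sketch of that loop, where `recognize()` is a placeholder stub standing in for an on-device speech model (the Recorder app’s internals are not public):

```python
# Sketch of a streaming, offline transcription loop (an assumption about
# how such a pipeline is structured, not the Recorder app's actual code).

def recognize(chunk):
    """Placeholder for an on-device speech model (hypothetical stub)."""
    return chunk["words"]

def transcribe_stream(audio_chunks):
    """Yield a growing partial transcript as each chunk is processed."""
    transcript = []
    for chunk in audio_chunks:
        transcript.extend(recognize(chunk))  # no network round-trip needed
        yield " ".join(transcript)           # partial text, in real time

chunks = [{"words": ["hello"]}, {"words": ["offline", "world"]}]
for partial in transcribe_stream(chunks):
    print(partial)
```

Because nothing leaves the device, the same loop keeps working with no connectivity; a translation step could later consume the finished transcript, as the paragraph above suggests.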