In Crawford’s own words:

Most people know that humans can’t see really small things. That’s why we invented the microscope. On the flip side, humans also struggle to see really big things. If you want to view the whole Earth, for example, you have to look from space and you lose a lot of detail. Every pixel your eye sees is 10,000 square miles. Using artificial intelligence and cloud computing, we can simultaneously see the whole Earth and see the detail. We can look at trillions of features within millions of satellite images all at once. We call this a macroscope.

Our machine-vision algorithms allow a computer to identify an object as a car or a truck, a house or a building, and so on. Once we have that data, we try to pull meaning from it, whether that’s a prediction of crop yields or how much oil will soon enter the marketplace. The question then becomes, what does this enable us to do that was previously impossible?

One cool example is a project we’re doing with the World Resources Institute. WRI already uses satellite imagery to spot deforestation. But what you want to know is which forest will be cut down next, because then you can do something about it. We will be able to detect the road building, the initial thinning, and the other preparations that go on in advance of major deforestation events.

Governments could also benefit from our technology. There’s evidence, for instance, that the Arab Spring was partially triggered by a doubling in the price of wheat in the Middle East, due to droughts in Ukraine and other countries that weren’t being well-tracked. Imagine if we could track food security in real time.

I just think back to the microscope. It led to a revolution in biology and changed our understanding of the world. The macroscope, we believe, could lead to a revolution of its own.

This article was originally published in the January/February 2016 issue of Popular Science, as part of our Big Ideas Of 2016 feature.
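To make the pipeline Crawford describes concrete, here is a minimal sketch of the tile-and-classify idea: split a satellite scene into small tiles, label every tile, then aggregate the labels into scene-level counts that downstream analyses can consume. The function names `classify_tile` and `summarize_scene` and the brightness-threshold heuristic are hypothetical stand-ins chosen for illustration; the article does not describe the actual machine-vision models or infrastructure involved.

```python
from collections import Counter

import numpy as np


def classify_tile(tile: np.ndarray) -> str:
    """Hypothetical stand-in classifier: label a tile by mean brightness.

    A real system would use a trained machine-vision model here.
    """
    brightness = tile.mean()
    if brightness > 0.66:
        return "building"    # bright built-up surfaces
    if brightness > 0.33:
        return "road"        # mid-tone surfaces
    return "vegetation"      # dark canopy or water


def summarize_scene(scene: np.ndarray, tile_size: int = 64) -> Counter:
    """Tile a single-band scene and count the label assigned to each tile."""
    counts: Counter = Counter()
    rows, cols = scene.shape
    for r in range(0, rows - tile_size + 1, tile_size):
        for c in range(0, cols - tile_size + 1, tile_size):
            counts[classify_tile(scene[r:r + tile_size, c:c + tile_size])] += 1
    return counts


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((1024, 1024))   # stand-in for one satellite image
    # Uniform noise averages ~0.5 per tile, so all 256 tiles label as "road".
    print(summarize_scene(scene))      # Counter({'road': 256})
```

In a production setting, a per-scene summary like this would presumably run in parallel over millions of images on cloud infrastructure, as the quote suggests; the aggregated counts are the kind of raw material behind the crop-yield and oil-supply predictions Crawford mentions.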
Most people know that humans can’t see really small things. That’s why we invented the microscope. On the flip side, humans also struggle to see really big things. If you want to view the whole Earth, for example, you have to look from space and you lose a lot of detail. Every pixel your eye sees is 10,000 square miles. Using artificial intelligence and cloud computing, we can simultaneously see the whole Earth and see the detail. We can look at trillions of features within millions of satellite images all at once. We call this a macroscope. Our machine-vision algorithms allow a computer to identify an object as a car or a truck, a house or a building, and so on. Once we have that data, we try to pull meaning from it, whether that’s a prediction of crop yields or how much oil will soon enter the marketplace. The question then becomes, what does this enable us to do that was previously impossible? One cool example is a project we’re doing with the World Resources Institute. WRI already uses satellite imagery to spot deforestation. But what you want to know is which forest will be cut down next, because then you can do something about it. We will be able to detect the road building, the initial thinning, and the other preparations that go on in advance of major deforestation events. Governments could also benefit from our technology. There’s evidence, for instance, that the Arab Spring was partially triggered by a doubling in the price of wheat in the Middle East, due to droughts in Ukraine and other countries that weren’t being well-tracked. Imagine if we could track food security in real time. I just think back to the microscope. It led to a revolution in biology and changed our understanding of the world. The macroscope, we believe, could lead to a revolution of its own. This article was originally published in the January/February 2016 issue of Popular Science, as part of our Big Ideas Of 2016 feature.