What science photos do that AI-generated images can't


What science photography does that artificial intelligence can't. Plus, the impact of breast cancer on low- and middle-income countries.

People in low- and middle-income countries are hardest hit by breast cancer, which is on the rise.

Low- and middle-income countries have higher death rates from breast cancer than do wealthier nations, because of a lack of screening and treatment options. For example, people aged under 50 in low-income countries are four times more likely to die from breast cancer than are those in high-income countries, on the basis of the most recent available data, from 2022. Cases and deaths are expected to rise over the next 25 years, owing to increasing life expectancy, the greater prevalence of risk factors and reduced rates of breastfeeding.

Through actions such as firing of US federal scientists en masse, blocking clean-energy incentives and abandoning international climate commitments, the administration of US president Donald Trump is hobbling the country’s efforts to reduce its contribution to global warming. As courts start to assess the legality of some of Trump’s policies, the uncertainty is hampering climate- and energy-related programmes and businesses. “There’s been a viciousness that I hadn’t anticipated in tearing everything down without a coherent plan for what to build back up,” says atmospheric scientist Daniel Cohan.

A group of nonprofits, archives and researchers is working to keep the federal environmental data it relies on available to the public. Alejandro Paz and Eric Nost, both of the Public Environmental Data Partners network, have written an overview of how to find and save US government data.

Source: Daily briefing: What science photos do that AI-generated images can’t

Do you know your reef knot from your thief knot, or your granny knot from your grief knot? And how good are we at judging which knot will hold?

Do you know your reef from your thief, and your granny from your grief? The knots look almost identical, but the reef knot is strong. "The grief knot, aptly named, is so weak you could sneeze on it and it would fall apart," notes brain scientist Sholei Croom. Yet people aren't good at guessing which knot is stronger just by looking, even when they show a good understanding of the underlying structure. Researchers say this blind spot in physical reasoning sheds light on how our brains perceive the world.

The results of my AI experimentation are often cartoonish images that can hardly pass as reality, let alone documentation, but a time will come when they can. Researchers agree that there should be clear standards for what is and isn't allowed: an AI-generated visual should never be passed off as documentation.

Communication is a powerful tool for driving climate action, but it's often overlooked by researchers, says ecologist Harini Nagendra. Engaging stories about nature, and the joy it brings, can make climate science more accessible, and these stories need to be heard in as many languages as possible. We also need to share the stage with those affected by climate change, to understand how it feels.

How to photograph a nanocrystal

Researchers in Spain eavesdropped on close-knit families of crows, collecting hundreds of thousands of distinct vocalizations. Small microphones recorded calls far quieter than the birds' familiar caws, and the team analysed the recordings in the hope of understanding how crows communicate with one another.

In their book Discarded, Sarah Gabbott and Jan Zalasiewicz argue that synthetic clothing will persist in the fossil record as a 'technofossil' thousands of years after clothes made from natural materials have degraded. (The Guardian | 7 min read)

One of the privileges of being at the Massachusetts Institute of Technology (MIT) in Cambridge is glimpsing the future, in advances such as quantum computing and sustainable energy production. Do I understand it all deeply? No, but I manage to wrap my head around much of the research when I'm asked to create an image to document it.

First, let’s remind ourselves of the differences between a photograph, in which each pixel corresponds to real-world photons, and a genAI visual, created with a diffusion model — a complex computational process that generates something that seems real but might never have existed.

In 1997, Moungi Bawendi, a chemist at MIT, asked me to take a picture of his nanocrystals (quantum dots). When excited with ultraviolet light, these crystals fluoresce at different wavelengths depending on their size. Bawendi, who later shared a Nobel prize for this work, did not like the first image (see 'Three views'), in which I had placed the vials flat on the lab bench and photographed them from above. You can tell that was how I had placed them, because you can see the air bubbles in the tubes. I had included the bubbles intentionally, thinking they made the image more interesting.

The second iteration was used on the November 1997 cover of the Journal of Physical Chemistry B (see ‘Three views’). The photograph shows the importance of collaborating with the scientist, an essential part of my process.

People might find the image the program produced attractive (see 'Three views'), but it is not even close to the reality captured in the original photograph. DALL-E added bead-like dots that were nowhere in the prompt; the term 'quantum dots' was probably associated with 'nanocrystals' in the data set of the model that underlies it, and substituted accordingly.

More troubling is the fact that, in each vial, there are dots with different colours, implying that the samples contain a mix of materials that fluoresce at a range of wavelengths — this is inaccurate. Furthermore, some of the dots are shown lying on the surface of the table. Was that an aesthetic decision made by the model? I find the resulting visual fascinating (see Supplementary Information).

Humans have long generated enhanced images without necessarily labelling them as such: the colours in images of the Universe, for example, are often digitally enhanced to convey more of a sense of reality. But there is a difference between enhancing a photograph with software and creating a 'reality' from trained data sets.

As a science photographer, I am acutely aware of the difference between an illustration and a documentary photograph, but I am less confident that AI programs can make this distinction. An illustration is a subjective visual representation of something, described using various colours, shapes and notations. A documentary photograph, whether made optically or with a scanning or transmission electron microscope, is a representation of a thing, even if it is not the thing itself. Intent is what differentiates the two.

Publishers already have software in place to identify certain kinds of manipulation in existing images, but people will eventually be able to use artificial intelligence to circumvent these fail-safes. Efforts are under way to make a photograph's origin traceable. For example, the forensic-photography community, through the global Coalition for Content Provenance and Authenticity, provides technical information to camera manufacturers on tracing a photograph's provenance by keeping a record in the camera of any manipulation. It remains to be seen how many manufacturers will get on board.

Two articles have raised an important issue by pointing out possible privacy and copyright violations in the use of diffusion models (preprint at arXiv; see also go.nature.com). Giving credit is feasible only in a closed system, for which the training data are known and fully documented, and diffusion models are not closed systems. For example, Springer Nature, which publishes Nature (Nature is editorially independent of its publisher), has recently introduced an exception to its policy for Google DeepMind's AlphaFold program to cover this sort of use (for models trained on a specific set of scientific data). It is worth bearing in mind, however, that AlphaFold is not a genAI tool that creates images.

Efforts are being made to address privacy issues. Creators can now use a kind of ‘tamper-evident’ metadata called Content Credentials to, as Adobe explains in its manual, “obtain proper recognition and promote transparency in the content creation process” (see go.nature.com/3wx92ng).

For example, I recall an experience with an engineer who had altered a photograph that I had made of their research and wanted to publish it alongside the submitted article (see Supplementary Information). The researcher did not consider that altering the image was, in effect, the same as altering their data, because they had never been taught the basic ethics of image manipulation and visual communication.