As far back as Leonardo da Vinci, artists refused to limit themselves to canvas, paint, and sculpture alone. A true researcher, da Vinci studied the surrounding world in all its complexity. His anatomical studies proved particularly significant: this scientific interest not only enriched his knowledge but also laid the foundation of the master's unique style. For him, art was a method of perceiving physical reality.
In the mid-1960s, at the turning point between the industrial era and an emerging technological imagination, this approach evolved into the Experiments in Art and Technology (E.A.T.) project. It offered art not a new tool, but a new logic of existence. The project’s initiators—artist Robert Rauschenberg and Bell Labs engineer Billy Klüver—sought to create a system where art and engineering operated as equals. Their 1966 manifesto, the performance series 9 Evenings: Theatre and Engineering, and the legendary Pepsi Pavilion at Expo ’70 in Osaka proved that technology in art can be complex, invisible, and even “uncomfortable,” provided it opens new ways of thinking. The primary lesson of E.A.T. was that art is a process, a system, and an environment, rather than a mere static image.
Today, this “laboratory of perception” has migrated into the space of data and neural networks. AI has ceased to be a simple tool—it has become a co-author, a kind of exoskeleton for human imagination.
Refik Anadol: Architect of Digital Hallucinations
A Turkish-American media artist and UCLA alumnus, Refik Anadol has transformed massive datasets into a new currency of perception. He works at the intersection of architecture and machine learning, collaborating with neuroscientists and engineers from Google and NASA.
How it works: Anadol utilizes the concept of Latent Space. He feeds millions of images—ranging from archival city photos to deep-space captures—into neural networks. By employing fluid dynamics algorithms, he compels pixels to behave like flows of water or smoke, visualizing the machine’s process of “remembering” data. In the Machine Hallucinations project, the algorithm literally dreams while awake, whereas in WDCH Dreams, Anadol prompted the Walt Disney Concert Hall building to visualize its own musical history directly onto its façade.
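Anadol's production pipelines are proprietary, but the core idea of navigating a latent space can be suggested in a few lines. The sketch below is a minimal illustration, not his code: it spherically interpolates (slerp) between two hypothetical 512-dimensional latent vectors, tracing the path a decoder network would turn into a smooth morph between, say, an archival city photo and a deep-space capture.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.

    Follows the great-circle arc rather than the straight line, which
    tends to keep intermediate points in the dense region of the latent
    space that generative models sample from.
    """
    cos_omega = np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):          # vectors nearly parallel: fall back to lerp
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_city = rng.standard_normal(512)       # stand-in latent code for a city image
z_space = rng.standard_normal(512)      # stand-in latent code for a deep-space image

# Eight waypoints along the arc; decoding each one would yield one frame of a morph.
path = [slerp(z_city, z_space, t) for t in np.linspace(0, 1, 8)]
```

The fluid-like motion of Anadol's pieces comes from layering dynamics simulations on top of walks like this one; the interpolation itself is the standard mechanism for making a machine's "memory" of data flow continuously.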
Anadol aims to demonstrate that data is not a collection of dry graphs, but a living, visceral environment. He builds a bridge between science and emotion, allowing us to experience the invisible processes of the world at an intuitive level.
Ian Cheng: Creator of Digital Ecosystems
American artist Ian Cheng, who studied cognitive science, directs technology toward the modeling of the future. His works are not videos but autonomous worlds, which have been exhibited at MoMA and the Venice Biennale.
How it works: Cheng creates Live Simulations based on video game engines (Unity/Unreal). These worlds are inhabited by virtual creatures endowed with “agent-based modeling”—each possessing its own goals and learning algorithms. The systems are launched in real-time: creatures mutate, conflict, and adapt without the author’s intervention.
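Cheng's worlds run on commercial game engines, but the agent-based core can be suggested with a toy Python sketch. All names and numbers below are illustrative assumptions, not his implementation: each agent carries its own state and a behavioural trait that mutates between ticks, and the world is stepped forward with no central script deciding the outcome.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    energy: float
    aggression: float  # behavioural trait in [0, 1] that drifts over time

def step(agents, rng):
    """One simulation tick: agents forage, pay for conflict, mutate, or die."""
    survivors = []
    for a in agents:
        # Foraging gains are random; aggressive behaviour carries a cost.
        a.energy += rng.uniform(0.0, 1.0) - 0.5 * a.aggression
        if a.energy > 0:
            # Survivors' traits drift slightly: mutation without authorial intent.
            a.aggression = min(1.0, max(0.0, a.aggression + rng.gauss(0, 0.05)))
            survivors.append(a)
    return survivors

rng = random.Random(42)
world = [Agent(energy=1.0, aggression=rng.random()) for _ in range(50)]
for _ in range(100):        # the loop runs open-endedly, without a script
    world = step(world, rng)
```

Run long enough, a loop like this produces population dynamics the author never wrote down, which is precisely the "probability of failure" Cheng puts on display.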
Cheng explores chaos and uncertainty. He presents the world as a complex system without a central script, where consciousness must constantly adapt. For him, AI serves as a model of reality where the emphasis lies not on form, but on dynamics and the probability of failure.

Mario Klingemann: Explorer of the Machine Unconscious
German artist Mario Klingemann, a pioneer in the use of GANs (Generative Adversarial Networks), views AI not as an imitator, but as a distorted mirror of humanity. His works are frequently described as “Neural Baroque.”
How it works: His systems utilize two neural networks (the Generator and the Discriminator) locked in a continuous duel. Klingemann trains them on classical paintings and compels the algorithm to infinitely generate new faces. He deliberately intervenes in the process, forcing the machine to “err” beautifully, creating strange, haunting deformations in real-time.
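Klingemann works with full-scale GANs trained on paintings; as a drastically simplified stand-in, the toy NumPy sketch below pits a two-parameter generator against a logistic discriminator on scalar "images". It is a minimal sketch of the adversarial duel, not his system: real data is drawn from a distribution centred at 4, the generator starts centred at 0, and each side's hand-derived gradient update pushes against the other.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Real "paintings" are scalars from N(4, 1); the generator starts at N(0, 1).
mu, log_sigma = 0.0, 0.0            # generator parameters: fake = mu + exp(log_sigma) * z
w, b = 0.1, 0.0                     # discriminator: D(x) = sigmoid(w * x + b)
lr_d, lr_g, batch = 0.05, 0.05, 64

for _ in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = mu + np.exp(log_sigma) * z

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr_d * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr_d * np.mean((1 - d_real) - d_fake)

    # Generator ascent: push D(fake) toward 1, i.e. learn to fool the critic.
    d_fake = sigmoid(w * fake + b)
    mu += lr_g * np.mean((1 - d_fake) * w)
    log_sigma += lr_g * np.mean((1 - d_fake) * w * np.exp(log_sigma) * z)
```

The "beautiful errors" Klingemann cultivates live in exactly this tension: neither network ever fully wins, and the generator's distribution keeps oscillating around the real one instead of settling.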
Klingemann is interested in the machine imagination as an analogue to the human unconscious. He demonstrates that AI forms its own logic of imagery, helping us better understand the nature of our own creativity.

Sofia Crespo: Biology of the “Other”
Working at the nexus of art and biology, Sofia Crespo creates entities that might have emerged through an “alternative evolution.”
How it works: Crespo utilizes neural image synthesis, “cross-breeding” the visual genes of different species—such as the texture of coral with the anatomy of insects. The algorithm analyzes biological patterns to synthesize new organisms that appear plausible yet do not exist in nature.
She explores whether AI can become a “new creator of life.” Her works blur the line between the natural and the artificial, expanding our imagination beyond the confines of the terrestrial biosphere we recognize.

Photos by Sam McCormick, NXT Museum.
Holly Herndon and Mat Dryhurst: Digital Twins and Voice
Holly Herndon, who earned her doctorate at Stanford, and her partner Mat Dryhurst are radically reshaping the concepts of authorship and identity in the era of digital clones.
How it works: They created Holly+—a deep neural network model of Herndon’s voice. This is not a mere filter but a trained replica that understands the nuances of her breathing and intonation. Through this tool, anyone can “sing” with her voice, while the rights and ethics of its use are governed via decentralized protocols.
Their goal is to reimagine personhood. They demonstrate that a digital twin can be an active cultural agent and advocate for individual sovereignty over one’s “digital doubles.”

Emi Kusano: Nostalgia for the Future
Tokyo-based artist Emi Kusano merges the 1980s anime aesthetic with cutting-edge image generation tools.
How it works: Kusano uses Stable Diffusion and LoRA models trained on pop-culture archives: fashion, advertisements, and music. The algorithm generates characters that resemble lost frames from vintage cartoons that never existed. This is a synthesis of retro-design and artificial intelligence.
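LoRA itself rests on a simple idea: keep the pretrained weight matrix frozen and learn a cheap low-rank correction on top of it. The NumPy sketch below shows the mechanics with made-up layer sizes; Kusano's actual models and training archives are not public, so this is a generic illustration of the technique, not her setup.

```python
import numpy as np

rng = np.random.default_rng(7)
d_out, d_in, r, alpha = 64, 64, 4, 8    # illustrative sizes; real layers are far larger

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight (never updated)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                # B starts at zero, so training begins exactly at W

def lora_forward(x):
    """y = W x + (alpha / r) * B (A x): the base model plus a cheap learned delta."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)
# Before any fine-tuning the adapter is inert: the output equals the frozen model's.
```

Training only A and B on a small pop-culture archive is what lets a lone artist bend a giant model toward a 1980s anime palette without retraining it from scratch.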
Kusano explores collective memory. She shows how technology rewrites cultural history, creating new mythologies where our past is transformed into a malleable data set.

A New Ontology: What Awaits Us Tomorrow?
In 2026, the union of art and science has definitively moved beyond experimentation. We have transitioned from the creation of static objects to the management of processes. The artist of today is not the one who holds the brush, but the one who defines the parameters of the system.
"A man climbs a mountain because it is there. A man creates a work of art because it is not there," observed Carl Andre.
So, what should we expect from the synergy of humanity and technology? It is likely that the art object will become a dialogue between your psyche and the code. Thanks to AI, installations will shift their form in real-time, adjusting to the viewer’s pulse and eye movements.
We will witness the emergence of autonomous algorithm-artists living on the blockchain that evolve independently, sell their own works, and even hire humans to physically manifest their ideas. With the advancement of robotics, AI will transcend screens. Sculptures will "grow" and pulsate, mimicking biological life and transforming galleries into living digital gardens.
The art of the future is not a struggle against technology, but its total dissolution into the creative act. We are moving toward a moment where technology and consciousness will intertwine inextricably, creating new forms of being that Da Vinci could only have dreamed of.