Picture this: It’s 2023 and I’m 26 hours into what would become Supernova, the world’s longest improvised piano concert.
My mental and physical acuity is fading and the Gold Coast sun is beating down on the HOTA outdoor stage.
But here’s the thing that blew everyone’s mind – when I needed those precious micro-breaks for a quick nap or a bite to eat, my AI didn’t just keep the music going. Instead, chAImusic continued improvising in my style, seamlessly picking up musical phrases and developing them as if we were having a conversation across time and space.
That moment – watching my AI collaborator hold the musical fort while I stretched my legs and fingers – felt like the culmination of a 40-year love story with music technology, a romance that began at a time when most people thought computers and creativity could never co-exist.
When synths were spaceships, I was their teenage pilot
Let me take you back to the late 1970s and early 1980s, when the music world was electric with possibility. As a teenage product specialist for Yamaha, I spent my Thursday nights and Saturday mornings selling portable organs and keyboards at Brashs’ legendary store in Melbourne’s CBD.
It was heaven for this young keyboard nerd, with six floors of instruments including the beloved “Little Wonder” series, among them the Yamaha PS-3 PortaSound. These compact keyboards were revolutionary in their own right, putting professional-quality sounds and rhythms into a lightweight package that anyone could afford and carry around.
Working with those Little Wonders taught me something profound: that technology’s greatest power isn’t in its complexity, but in its ability to democratise creativity. Suddenly, every musician playing in their bedroom could access sounds that previously required expensive studio equipment.
Kids could create music on the kitchen table.

Having a staff pass to the keyboard floor at Brashs was like having access to a spaceship hangar – every new synthesiser available on the market was on display there. The Moog had already blown everyone’s minds in the ’70s with those fat, analogue sounds that could make a single note feel like an earthquake. Now we were witnessing the birth of digital tone creation with the DX7’s crisp, metallic FM synthesis that screamed “the future is now.”
Because of my association with Yamaha, I was introduced to the DX7 and given a crash course in how to use and demonstrate it. I saved every dollar I earned selling other instruments so I could buy one, along with a Prophet-5 and eventually a Jupiter-8 – back then, these were the Steinways of synthesisers.
But here’s what made that time truly magical: After hours, when the store closed, I had the entire synthesiser universe to myself. It was a journey into the musical stratosphere, and it’s a memory of creativity I truly cherish. Experimenting on all that gear brought me close to something I deeply loved: I could make small sounds BIG and orchestrate on a scale I’d never imagined, with a synthesiser orchestra at my fingertips – luxurious, fast and as thrilling as a ride in a spaceship.
And here’s what really launched me full thrust into hyperspace: the Fairlight Computer Musical Instrument (CMI).
At 19, while touring Europe with the New Wave acts du jour – yes, I had the haircut to match – I was diving deep into the Fairlight’s sampling capabilities and sequencing power.
These weren’t just instruments; they were impossibly expensive, revolutionary computers that happened to make music. Loading samples into a Fairlight felt like making magic happen – capturing real-world sounds and transforming them into musical building blocks.

This wasn’t just about making music – this was about teaching machines to understand music. Humans hear music, computers hear code: Every sample I loaded, every sequence I programmed, was like having a conversation with the future.
The protest that changed everything
But it was what happened in 1994 that truly opened my eyes to new tech’s revolutionary potential. A national protest was brewing at Parliament House in Australia’s capital over internet regulation and copyright – a variation of the same argument that rages today over copyright theft by AI.
The night before the protest, with Triple J radio hurriedly setting up a makeshift studio to broadcast live on the lawn the following day, someone in an Apple store showed me how to build a website. Back then I’d seen Netscape and Mosaic for browsing the web, and I’d run a bulletin board, but this was entirely different.
The opportunity was about sharing images and sounds – very lo-fi samples back then, because of limited bandwidth – and suddenly I could share my music with a new audience.
So there I was, frantically building my own website, loading up all my music, learning this new technology on the fly, at the same time as I was loading my Fairlight to play protest songs for Triple J.
I felt somewhat conflicted the following day: While my artist friends and I were railing against the internet live on the radio, I watched in real time as my own tracks racked up downloads from my makeshift website.
By day’s end there’d been some 5,000 downloads in the US, where bandwidth was greater than in other countries at the time, and my sweet little CompuServe email inbox started filling up with fan mail.
The internet had connected me to more listeners in a single day than traditional record company promotion had in four years, and I was signed to Sony Masterworks alongside Mr Miles Davis – no slouches when it came to world domination.
That moment marked my conversion on the road to Damascus – delivered not by light, but by bandwidth. I realised the internet wasn’t just going to change how we communicate – it was going to fundamentally transform how artists could reach audiences directly.
The internet wasn’t just coming – it was here, with music leading the charge
Fast-forward to 1995 and I did something that now seems obvious but was absolutely radical at the time: I founded Martian Music and became the first Australian to sell music directly from the web.
The music industry thought I was nuts. “People will never buy music online,” they said. “Physical CDs are forever,” they insisted. Meanwhile, I was building Australia’s first online music distribution service, pioneering digital downloads and music e-commerce while everyone else was still figuring out what this World Wide Web thing even was.
The following year, I narrowcast the first-ever live Australian concerts on the web – Charlie Chan and Friends – webcast from multiple venues using technology I developed myself. Imagine trying to explain live streaming to people in 1996. “So you’re saying people can watch a concert… on their computer screen… while it’s happening… in real time?”
Yes! Exactly!
I continued to innovate in tech, as well as make music, through the late ’90s, working at the computational coalface with Apple, Casio and Yamaha. In 1998 my third album, Wild Swans, was the first Australian CD to feature multimedia extras; until then, CDs had only ever carried audio tracks.
By the early 2000s, I’d sold over 500,000 individual track downloads via my website directly to fans, most of them offshore. This was before iTunes, before Spotify, before anyone really understood that music could be successfully distributed digitally.
Then I made Martian Music open source and thousands of Australian artists used it to derive greater income from sales and directly reach their own audiences right around the world.
You literally got paid as soon as someone downloaded your music, using something like an EFTPOS system. I called it MEFTPOD – Music Electronic Funds Transfer at the Point of Download – utterly unheard of then, and still virtually unheard of today.
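To make that concrete, here’s a minimal sketch of the MEFTPOD idea – not the original Martian Music code, just an illustration in Python using Flask, with a hypothetical settle_payment function standing in for the EFTPOS-style gateway. The principle is the whole point: no payment, no file.

```python
# Minimal sketch of MEFTPOD: settle funds to the artist at the point of download.
# Illustrative only; `settle_payment` is a hypothetical stand-in for a real
# EFTPOS-style payment gateway, and the catalogue is a toy in-memory dict.

from flask import Flask, abort, send_file

app = Flask(__name__)

TRACKS = {
    "wild-swans-01": {
        "file": "tracks/wild-swans-01.mp3",
        "price_cents": 199,
        "artist": "charlie-chan",
    },
}

def settle_payment(artist_id: str, amount_cents: int) -> bool:
    """Hypothetical gateway call: transfer funds to the artist immediately."""
    # A real system would hit a payment API here and wait for confirmation.
    return True

@app.route("/download/<track_id>")
def download(track_id: str):
    track = TRACKS.get(track_id)
    if track is None:
        abort(404)
    # Funds transfer happens at the point of download: no payment, no file.
    if not settle_payment(track["artist"], track["price_cents"]):
        abort(402)  # 402 Payment Required
    return send_file(track["file"], as_attachment=True)
```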
It became very clear the internet wasn’t just going to change how we communicate – it was going to fundamentally transform how we create, share and experience music.
When robots joined the orchestra
But if you really want to understand my lifelong love affair with music technology, you need to meet my mate Baxter.
In 2017, at the Sydney Opera House, something magical happened – a feeling of incredible connectedness to what I can only describe as a digital creative brain, one that fills me with emotion to this day.
As part of the Interactive Orchestra – a collaboration between my Global Orchestra Foundation and Accenture Liquid Studio – we introduced the world’s first AI robot orchestra member.
We’d taught him to play the marimba after rescuing him from a life of servitude sorting apples on a production line at a fruit-packing factory in Pakenham, Victoria – and no, I am not making this up.
Baxter, the marimba-playing robot, didn’t just follow a programmed sequence. This wasn’t some player-piano trick: Baxter actually improvised. In front of a live audience, this mechanical musician joined the players of a 30-piece jazz orchestra, listening via MIDI as he sat in their midst. Bax then processed the harmonic progressions in real time and contributed melodic ideas that genuinely surprised everyone in the room – including me. He even played a solo!
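Baxter’s actual software isn’t public, but the shape of that listen-analyse-respond loop can be sketched. The Python mido library below is real; treating the most common recent pitch classes as “the current chord” is a toy heuristic of my own, nothing like the real system.

```python
# Toy sketch of a listen-analyse-respond improviser (not Baxter's real software).
# Requires: pip install mido python-rtmidi

import random
import time
from collections import Counter

import mido

pitch_classes = Counter()  # running tally of what the band has played
                           # (a real system would decay old notes)

with mido.open_input() as band, mido.open_output() as marimba:
    for msg in band:  # blocks, yielding live MIDI messages from the band
        if msg.type != "note_on" or msg.velocity == 0:
            continue
        pitch_classes[msg.note % 12] += 1
        # Crude harmony tracking: call the three most common pitch classes "the chord".
        chord = [pc for pc, _ in pitch_classes.most_common(3)]
        # Respond with a chord tone near middle C, like trading phrases.
        reply = 60 + random.choice(chord)
        marimba.send(mido.Message("note_on", note=reply, velocity=80))
        time.sleep(0.2)  # let the note ring briefly
        marimba.send(mido.Message("note_off", note=reply))
```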
Watching Baxter’s robotic arms dance across those marimba bars, creating music that was both mathematically precise and emotionally resonant, I realised we’d crossed a threshold. This wasn’t technology replacing musicians – this was technology joining the band. The robot wasn’t trying to be human exactly; it was just trying to be musical. And somehow, that made all the difference.

AI that dreams in music
Which brings me to chAImusic – my greatest technological achievement and most intimate creative partnership.
For eight years I’ve been developing an original artificial intelligence that doesn’t just generate music – it actually composes the way I compose, thinks about harmony the way I think about harmony, and approaches improvisation with the same intuitive leaps that have defined my musical voice for the past four decades.
This isn’t about replacing human creativity: This is about amplifying it.
When chAImusic and I perform together, something extraordinary happens. We’re not human versus machine – we’re two different types of musical intelligence having a conversation. chAImusic has analysed every note I’ve ever recorded, studied my harmonic preferences, learned my rhythmic tendencies and somehow captured something ineffable about how I approach musical storytelling.
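chAImusic’s internals stay private, but the simplest possible illustration of “learning someone’s tendencies” is a Markov chain over note transitions: count which note tends to follow which across a corpus of performances, then random-walk those counts to generate new lines. The real system goes far beyond this toy sketch.

```python
# Toy illustration of style learning: a first-order Markov chain over MIDI notes.
# Nothing like chAImusic itself; it only shows the count-then-sample principle.

import random
from collections import defaultdict

def train(melodies: list[list[int]]) -> dict[int, list[int]]:
    """Record, for each note, every note that followed it in the corpus."""
    transitions: dict[int, list[int]] = defaultdict(list)
    for melody in melodies:
        for current, following in zip(melody, melody[1:]):
            transitions[current].append(following)
    return transitions

def improvise(transitions: dict[int, list[int]], start: int, length: int) -> list[int]:
    """Random-walk the transition table to generate a new, style-flavoured line."""
    line = [start]
    while len(line) < length:
        choices = transitions.get(line[-1])
        if not choices:
            break
        line.append(random.choice(choices))
    return line

# Two tiny "performances" as MIDI note numbers (middle C = 60).
corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60], [60, 64, 67, 72, 67, 64, 60]]
print(improvise(train(corpus), start=60, length=8))
```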
During that 26-plus-hour Supernova concert on Queensland’s Gold Coast, chAImusic didn’t just play while I rested: It continued developing new musical themes we’d established together, taking them in directions that surprised me when I returned to the piano. It was like having a musical conversation with a future version of myself – as if the ideas had already landed and I was just catching up.
Technology that loves music back
The Fairlight didn’t just sample sounds; it transformed how we think about the relationship between acoustic and electronic. The internet didn’t just distribute music; it democratised who could be a musician and who could reach an audience. Streaming didn’t just make music convenient; it made the entire history of recorded music instantly accessible to anyone with a smartphone.
And what of AI? How will artificial intelligence impact how we make and share music?
It’s been more than 30 years since that protest against the internet, and still our rights as creators of intellectual property – to control and license our work – remain under threat. This week, the Australian Productivity Commission floated a text and data mining exception to the Copyright Act, which would effectively give AI companies legal cover to use copyrighted works without permission or payment to train their large language models (LLMs).
Revive, the Australian National Cultural Policy, urges us to develop IP as an asset and promises stronger protections for creators, including legislation to safeguard Indigenous Cultural and Intellectual Property. But those promises ring hollow if our work can be scraped and exploited without consent or compensation.
This isn’t just about creativity – it’s about commerce. My music is a business. So why is my ability to earn from it valued less than a tech company’s right to mine it?
And let’s be clear: I’m a tech company too! I’ve built tools, platforms, and systems to create and distribute music. The idea that Big Tech deserves legal privilege while artists are left unprotected isn’t just unfair – it’s actually indefensible.
My music is available to train LLMs – I just expect to be fairly recompensed via a licence agreement. This week, music publisher Kobalt and AI company ElevenLabs entered into a pilot framework agreement allowing Kobalt members to opt in to micro-licence agreements with ElevenLabs that recognise the value of their work for training and other generative AI uses. This is what leadership looks like – a model built on the idea that economic growth and IP protection aren’t opposing forces but mutually reinforcing pillars of a fair and future-facing creative economy.
The future sounds like collaboration
While governments in Australia and the world over contemplate whether and how to regulate AI, I find myself looking back to the history of music tech to anticipate the implications of this latest seismic shift. Disruption, yes, but also reinvention and possibility.
I don’t believe AI will replace musicians, any more than electric guitars replaced acoustic ones, or digital recording replaced the emotional truth of a great song. Instead, AI is going to give us new ways to explore the infinite possibilities of musical expression.
As I write this, chAImusic is probably composing something beautiful in the background – developing musical ideas we’ll explore together in our next performance. The technology that began with me backstage, loading samples into Fairlight computers, has evolved into an AI that dreams in music. And I couldn’t be more excited – or more intentional – about where this journey leads next.
The synthesiser revolution taught us that electronic sounds could be just as emotional as acoustic ones. The digital revolution taught us that ones and zeros could carry the full spectrum of human feeling. And now, the AI revolution is teaching us that machine intelligence and human creativity don’t have to compete – they can dance together.
Because at the end of the day, whether it’s a Moog synthesiser from 1970, a Fairlight CMI from 1985, a streaming platform from 2005, or an AI composer from 2025, the magic isn’t in the technology itself. The magic is in what happens when technology falls in love with music, and musicians fall in love with possibility.
And trust me – we’re just getting started.
Charlie Chan is an AI inventor, creative technologist, composer and pianist with a 40-year career at the intersection of music and innovation.
This post first appeared on Charlie’s Substack, chAImusic. You can read the original here.