Even the Tone-Deaf Can Sing Like a K-Pop Star
What if anyone, no matter their singing ability, could sound like a polished K-pop idol—like G-Dragon or IU?
That vision is edging closer to reality thanks to Supertone, a South Korean AI startup making rapid strides in voice transformation.
With its technology, an artist no longer needs a naturally flawless voice to make compelling songs: the software can rework almost any recording into a polished K-pop vocal.
At the recent Fortune Brainstorm AI conference in Singapore, Supertone’s founder, Kyogu Lee, demonstrated how his AI tools transform plain vocal recordings into rich, melodic performances with emotional depth — even for those without singing experience.
Kyogu Lee (left) speaking about Supertone on stage at the Fortune Brainstorm AI conference.
How Supertone Creates Unique Voices from Any Recording
Supertone’s approach breaks down a voice into four key elements: pitch, loudness, timbre, and linguistic content.
Lee describes timbre as the “vocal identity” unique to each person.
By isolating timbre and adjusting the other features, Supertone can create entirely new singing voices that still retain an individual’s distinctive sound.
This means anyone can experiment with singing styles they never thought possible.
Lee showed this in action by turning a monotone delivery into a lively K-pop-style track, highlighting the tool's ability to add style and emotion.
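Supertone's own encoders are proprietary, but the idea of pulling a recording apart into separately adjustable components can be sketched with open-source tools. The snippet below is a minimal illustration, not Supertone's pipeline: it uses librosa to estimate a pitch contour and a loudness curve, and takes averaged MFCCs as a loose stand-in for timbre; the input file name is hypothetical, and the linguistic-content step is left as a placeholder.

```python
# Illustrative sketch only: decomposing a vocal recording into pitch,
# loudness, and a rough timbre proxy with librosa. A production system
# like Supertone's would use trained neural encoders instead.
import librosa
import numpy as np

def decompose_voice(path: str, sr: int = 22050):
    y, sr = librosa.load(path, sr=sr)

    # Pitch contour (fundamental frequency) via probabilistic YIN.
    f0, voiced_flag, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"),
        sr=sr,
    )

    # Loudness curve approximated by frame-wise RMS energy.
    loudness = librosa.feature.rms(y=y)[0]

    # Very coarse stand-in for timbre (the "vocal identity"): MFCCs
    # averaged over time. A real system would use a learned speaker embedding.
    timbre = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

    # Linguistic content (phonemes/words) would come from a speech
    # recognition or content encoder; omitted in this sketch.
    return {"f0": f0, "voiced": voiced_flag, "loudness": loudness, "timbre": timbre}

if __name__ == "__main__":
    # "vocal_take.wav" is a hypothetical input file.
    features = decompose_voice("vocal_take.wav")
    print({name: np.asarray(arr).shape for name, arr in features.items()})
```

Presumably, a full system would then recombine an edited pitch, loudness, and content track with the original singer's timbre embedding in a neural decoder, which is what lets the output sound like a new performance while keeping the person's distinctive voice.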
The Impact of AI on Music Production
Before this technology, producers had to hunt for singers who naturally fit a specific sound.
Now, Supertone enables the design of unique voices from scratch, reducing reliance on human performers.
Lee, however, stresses that the goal is collaboration, not replacement: "We see creators and artists as co-creators."
The company works closely with artists to refine the technology, helping them explore new genres or styles beyond their natural vocal range.
Backed by HYBE and Powering AI Idols
Supertone’s rapid growth ties closely to its partnership with HYBE, the entertainment company behind global phenomenon BTS.
HYBE invested $3.6 million in 2021 and later fully acquired Supertone for $32 million in 2023.
This integration has led to projects like MIDNATT, HYBE’s AI-powered artist, which uses Supertone’s tech to produce multilingual songs featuring singer Lee Hyun’s voice.
The acquisition signifies a serious bet on AI’s role in the future of music.
Real-Time Voice Transformation and Its Wider Reach
In the same on-stage demonstration, Lee showed how Supertone's voice synthesis engine can turn a volunteer's flat recitation into a harmonious, expressive vocal track in real time.
Such real-time processing not only excites producers but could make music creation more accessible worldwide.
The ability to produce seamless multilingual songs could reshape global markets, allowing artists to reach audiences without language barriers.
Concerns and Ethical Questions Around Synthetic Voices
The rise of synthetic singing voices raises questions about authenticity and the future role of human vocalists.
Some fans and industry insiders worry about AI displacing singers, while others embrace the creative freedom these tools offer.
Lee acknowledges these concerns and stresses ethical use.
In a recent interview, he emphasised consent and transparency as core principles.
Will AI Change the Soul of Music?
As Supertone continues to develop new AI tools for full music composition, the line between human and machine creativity blurs further.
Backed by major entertainment firms and expanding into markets beyond Korea, the company is poised to influence how music is made and consumed worldwide.
Yet, as synthetic voices become more common, the industry faces questions about artistic integrity and the definition of talent.
Is the future of music one where AI enhances human creativity, or one where it risks replacing the soul that makes music truly human?