Have you ever wondered what your digital life will be like in the future?
The movie "Ready Player One" shows one of these scenarios, where people use "avatars" to explore various possibilities in a world that combines virtuality and reality.
In the movie, although the heroine Artemis is unremarkable in reality, she is a hero in the "Oasis", and that transformation is inseparable from one important technology: the virtual human.
Today, Wei Xi will talk with you about virtual human technology and the future it represents.

Virtual Reality on the Same Screen: Starting from a Special Gala
During this year's May Day holiday, CCTV's May Fourth Gala was particularly different.
One of its important highlights: three virtual humans from Tencent Interactive Entertainment, Tong Heguang, Xingtong, and Jili, stepped onto the stage together with many young performers.
Accompanied by the familiar melody of "New Youth", the three virtual humans shared the screen with young students on the campuses of Wuhan University and Renmin University.
This was a new attempt by CCTV to introduce cutting-edge digital technologies such as virtual humans, creating China's first immersive, interactive virtual program experience.
In fact, as a national media platform, CCTV has always spared no effort in applying new technologies to create a better viewing experience.
CCTV's galas have long been an important showcase for cutting-edge technology: AR compositing, holographic projection, 8K live broadcast, VR live broadcast, drone formations, quadruped robots, and many other hard technologies have reached ordinary people's lives through CCTV's galas.
In a sense, CCTV's choice of new technologies also serves as a weathervane for cutting-edge technology in the cultural industry.
The three virtual humans' gala performances conveyed an important signal to the audience very intuitively: as a composite technology with great potential, virtual humans can be integrated into cultural and entertainment scenes naturally and seamlessly.
It is worth noting that all three virtual humans participating in the gala come from the game industry.
In a sense this is no accident; there are deeper reasons behind it:
First, the game industry has deeper accumulation in virtual human technology and R&D, and can use that technical strength to create natural, lifelike, and highly interactive virtual human images.
Second, the game industry has long been committed to building digital worlds and exploring digital-real integration, and has a deeper, more forward-looking understanding of virtual humans' application scenarios and development trends.
The performances of Tong Heguang, Xingtong, and Jili at the May Fourth Gala are a microcosm of virtual human technology gradually entering ordinary users' lives, and an important milestone on the way to digital-real interaction becoming the norm.
Behind virtual human technology lie broader values and meanings worth discussing and analyzing for practitioners in the technology industry.

Standing at the Crossroads of Technology and Humanities: The Past and Present of Virtual Humans
First, let's briefly get to know virtual humans.
A virtual human is not a single technology but a collection of multiple technologies.
By common definition, a virtual human is a virtual character with a digital appearance, created through computer graphics, rendering, motion capture, deep learning, speech synthesis, and other technologies.
"Hatsune Miku", highly sought after by fans, is a typical virtual human: she has a striking character design and a recognizable voice, can interact with fans, and even holds live concerts.
In 1984, Max Headroom in the United Kingdom became the first virtual human to appear on film; virtual humans can be said to have sprouted from this point, and these early virtual humans were still mainly hand-drawn.
Later, with advances in deep learning and computer vision, and the maturation of related technologies such as CG, speech recognition, image recognition, motion capture, and real-time rendering, virtual humans moved increasingly toward immersive, "fully realistic and interactive" experiences.
From "Hatsune Miku" to "Luo Tianyi", from the digital Huang Renxun to the virtual astronaut reporter Xiao Zheng, from monster catcher Liu Yexi to fashionista "Xingtong", from holographic concerts to virtual human variety shows, today's virtual humans have become a booming industry and play an important role in many fields.
According to a report by Qubit, China's virtual human market is expected to reach 270.3 billion yuan by 2030, with market demand continuing to expand.
Today's virtual human industry stands at the crossroads of technology and humanities. On the one hand, its image creation and interactive experience depend heavily on the technology behind it; on the other hand, its IP and persona depend on a deep infusion of humanity and emotion.

What Lies Beneath the Iceberg? The Technology Behind Virtual Humans, with Xingtong as an Example
If the virtual human is an iceberg, then what we see is the part above the waterline, and the invisible part below is the underlying technology that supports it.
Next, let's take Xingtong as an example to look at the technology behind the virtual human.
Xingtong, the virtual human who appeared on the CCTV stage, was born in a game and grew up on game technology, but her significance goes beyond games.
According to public information, Xingtong was born in 2018 as one of the NPCs in the game QQ Xuanwu. In 2019, Tencent Interactive Entertainment upgraded her modeling and 3D image, and Xingtong began to reach a broader audience.
Today, Xingtong is not only the virtual spokesperson of QQ Xuanwu but also a virtual fashion blogger with more than 260,000 followers on Bilibili.
Trendy styling, excellent taste in dress, blue eyes, a cute face, skillful dancing, graceful interactive conversation: Xingtong has caught the eye of many fans.
Behind all this is the unstinting attention and support, in resources and technology, of Tencent Interactive Entertainment.
This brings us to the technology behind Xingtong. Producing a virtual human mainly involves three major technologies: modeling, driving, and rendering.
First, 3D modeling is the basis for constructing the virtual human's image, with the focus on finely restoring details.
Second, driving the virtual human model with captured motion is currently the main way to generate 3D virtual human movement; the core technology is motion capture.
Finally, rendering restores the virtual human's fidelity and creates the look and performance of the environment she inhabits.
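The three-stage pipeline described above (model, drive, render) can be sketched in a few lines of code. This is only a toy illustration; every class and function name below is hypothetical and not part of any real engine's API.

```python
# Toy sketch of the virtual-human pipeline: modeling -> driving -> rendering.
# All names are hypothetical illustrations, not a real engine's API.
from dataclasses import dataclass, field


@dataclass
class Mesh:
    """Stage 1: modeling. A simplified 3D model as a list of named vertices."""
    vertices: list = field(default_factory=list)


@dataclass
class Pose:
    """One frame of captured motion: joint name -> rotation in degrees."""
    joints: dict = field(default_factory=dict)


def drive(mesh: Mesh, pose: Pose) -> Mesh:
    """Stage 2: driving. Deform the model with captured motion.
    A real system would apply skeletal skinning; here we just pair
    each vertex with the captured joint rotations."""
    return Mesh(vertices=[(v, pose.joints) for v in mesh.vertices])


def render(mesh: Mesh, lighting: str = "studio") -> str:
    """Stage 3: rendering. Turn the driven model into a final frame.
    Returns a placeholder description instead of actual pixels."""
    return f"frame: {len(mesh.vertices)} vertices under {lighting} lighting"


# One frame through the whole pipeline.
model = Mesh(vertices=["head", "hand_l", "hand_r"])
captured = Pose(joints={"neck": 15.0, "wrist_l": -30.0})
print(render(drive(model, captured)))  # frame: 3 vertices under studio lighting
```

The point of the sketch is the data flow: a static model only becomes a living character once captured motion drives it and a renderer restores its fidelity every frame.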
Xingtong's image is so vivid largely because of her team's investment in virtual human live-streaming technology:
Real-time 3D virtual human live streaming is a collection of technologies from multiple fields. Its complexity lies in integrating real-time motion-capture driving, real-time engine rendering, and real-time synchronization of the live audio and video.
This requires precisely connecting multiple coupled links: motion capture equipment, facial driving, material simulation, lighting, the rendering engine, and the live audio and video stream. A small error in any link ruins the picture.
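The real-time constraint behind this can be made concrete with a toy frame loop: every coupled link must finish within a single frame budget, or the stream visibly stutters. The stage names and costs below are invented for illustration, not measurements of any actual pipeline.

```python
# Toy model of the real-time constraint in virtual-human live streaming:
# per frame, capture, driving, simulation, rendering, and audio muxing must
# all fit within the frame budget, or the frame is effectively dropped.
# Stage names and millisecond costs are hypothetical illustrations.

FPS = 30
FRAME_BUDGET_MS = 1000 / FPS  # ~33.3 ms per frame at 30 fps


def process_frame(stage_costs_ms: dict) -> tuple[float, bool]:
    """Sum per-stage costs for one frame and check them against the budget."""
    total = sum(stage_costs_ms.values())
    return total, total <= FRAME_BUDGET_MS


# A frame where every link is fast enough...
ok_frame = {"mocap": 5.0, "face_drive": 4.0, "cloth_sim": 6.0,
            "render": 12.0, "audio_mux": 3.0}
# ...and one where a single slow link (cloth simulation) blows the budget.
slow_frame = dict(ok_frame, cloth_sim=15.0)

for name, costs in [("ok", ok_frame), ("slow", slow_frame)]:
    total, in_time = process_frame(costs)
    print(f"{name}: {total:.1f} ms -> {'on time' if in_time else 'dropped frame'}")
```

This is why the article calls the links "coupled": a delay in any one stage consumes budget needed by all the others, so the whole chain must be engineered together rather than optimized piecemeal.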
To crack the "hard bone" of real-time 3D virtual human live streaming, the Tencent Interactive Entertainment Content Ecosystem Department coordinated the development resources of its internal technical teams and built a real-time driving technology pipeline based on Unreal Engine 4 from scratch; at the same time, through strategic cooperation with companies such as Epic and Facegood, it achieved ever more difficult live-streaming effects.
After several months of repeated debugging and testing, the team achieved end-to-end real-time capture of the face, hands, and body; through real-time bone computation, the Clothing Tool, and other tools, hair, clothing, accessories, and other materials gained real-time physical motion.
To make the live stream more realistic and natural, the Xingtong technical team also used fine lighting processing and the engine's ray-tracing effects to render lighting in real time across multiple angles and movements.
The heavy R&D investment has proven worthwhile: today, Xingtong's visual expressiveness places her in the first tier of domestic virtual idols.
Of course, for virtual humans to truly play a greater role, a "good-looking skin" needs an "interesting soul"; after all, the ultimate point of the virtual human is not the "virtual" but the "human".
"Human" means personality, emotions, soul, and stories, and these must be conveyed, on top of the "good-looking skin", through vivid imagery and warm content.
No interactive format expresses a virtual human's personality, affinity, and appeal more vividly than live streaming.
Since she began live streaming on Bilibili in June last year, Xingtong has interacted closely with fans almost every week.
On October 23, she held a 24-hour uninterrupted birthday live stream, connecting and interacting live with other well-known Bilibili virtual streamers, including Ivan_iiivan, Bingtang IO, and Qihai Nana7mi, in a very warm atmosphere.
In every live stream, Xingtong's cute personality, outstanding dancing, and good conversation make this virtual fashion girl more three-dimensional and well-rounded.
Even from the Bilibili bullet comments, attentive fans could feel Xingtong's subtle transformation from slight nervousness in her early streams to later ease.
At the same time, to open up more possibilities for Xingtong, her operations team has also let this fashionable girl break through the dimensional wall.
Xingtong has successively become the virtual spokesperson of Li-Ning and Levi's, MAKE UP FOR EVER's artistic trendy makeup ambassador, Changsha's first intangible-heritage cultural tourism promotion ambassador, a promotional inheritor of the peacock dance, and a disciple of figure skating champions Pang Qing and Tong Jian.
These labels make Xingtong, as a new-generation virtual idol, a vivid case of empowering multiple industries, and further broaden the application scenarios of virtual human technology.
Xingtong originated in games but has already gone beyond them.
Along the way, she is a microcosm of the virtual human as an emerging industry promoting digital-real integration.

Virtual Human Technology Was Not Achieved Overnight
If you think Xingtong became an industry benchmark overnight on Tencent's first foray into virtual idols, you would be wrong.
In fact, as one of the first technology companies in China to invest in virtual humans, Tencent's accumulation and breakthroughs in virtual human technology did not happen overnight:
As early as 2018, Siren, a high-fidelity virtual human co-created by NExT Studios, was driven by real-time expressions and movements and caused a sensation in the industry.
Matt, born a year later, could already drive facial expressions from voice through AI technology, closely linking speech with vivid facial expressions and emotions.
In 2021, based on real-time high-fidelity virtual human technology, Tencent NExT Studios and Xinhua News Agency jointly launched Xiao Zheng, China's first digital reporter and the world's first digital astronaut, who has completed many aerospace reports.
This time, Tong Heguang and Jili, who appeared on the CCTV stage alongside Xingtong, also have their own characteristics:
Jili uses MetaHuman, Unreal Engine's next-generation realistic character creation system, together with cutting-edge Facegood facial capture technology, making her body and expressions more vivid and natural.
Behind Tong Heguang are NExT Studios' self-developed xFaceBuilder® digital character production pipeline, the xMoCap® full-process motion capture animation pipeline, and technical support from the xLab photogrammetry and motion capture laboratory; he is also a representative male virtual idol.
Clearly, the accumulation of and breakthroughs in virtual human technology are inseparable from game companies' forward-looking layout and R&D investment. In essence, game technology has developed to the point where it can "step out of the circle" and empower upstream and downstream industries.
Virtual human technology, born of games, has grown into a "new species" spanning computer graphics, artificial intelligence, virtual reality, and other frontier Internet disciplines, with extremely wide application in many fields.

Applications of Virtual Humans: Blooming and Bearing Fruit in Multiple Industries
The virtual human industry has not stopped at the niche of virtual idols; it has already moved toward much broader application scenarios.
In general, avatars can be divided into two types: identity type and function type:
Identity-type avatars are mostly virtual IPs and idols, with obvious personality and IP attributes, while functional avatars are mainly used to replace real-person services and are virtual representatives of service-oriented roles in the real world.
Today these two types, as the two routes of the virtual human industry, are each developing rapidly along their own lines:
First, look at identity-type virtual idols. One important reason for their growth is that, compared with real idols, virtual idols never carry the risk of scandal ("house collapse" in fan slang), so major entertainment companies are actively entering the virtual idol market.
As for functional virtual humans, their development stems from practical value: in the simplest scenario, a museum guide, a virtual human can save the museum labor costs.
As technology has advanced, the production cost of virtual humans has gradually fallen, so new formats such as digital employees and virtual anchors have emerged one after another, and applications in entertainment, e-commerce, education, cultural tourism, and many other industries have begun to land.
For example, Hunan Satellite TV introduced the virtual host Xiaoyang in "Hello, Saturday", which premiered on January 1, 2022.
State support at the policy level is also an important driver of the vigorous development of functional virtual humans.
In October 2022, the "14th Five-Year Plan for Scientific and Technological Development of Radio, Television and Internet Audio-Visual" called for the wide use of virtual anchors and animated sign language in news broadcasts, weather forecasts, variety shows, science and education, and other programs.
This points the way for functional virtual humans at the policy level.
Clearly, with demand and supply both growing, identity-type and functional virtual humans have each shown considerable potential in their respective fields.

The Game Technology Behind Virtual Humans: A "Dream Workshop" for Building a Bigger World
Virtual human technology was born out of game technology, and it has continuously made breakthroughs with the advancement of game technology.
We can feel this intuitively through one picture: the evolution of Lara Croft's image in "Tomb Raider".
It is plain to see that with advances in game technology, deep learning, real-time rendering, CG, motion capture, and machine vision, the virtual Lara's image has gone from blurry to sharp, from clumsy to agile, from cartoonish to realistic.
The vividness and fullness of the virtual image in turn further enhance the game experience.
In the more macroscopic "super digital scene" of the next-generation Internet, game technology and the virtual human technology derived from it will in a sense become the "meta-technology" of the future digital-real fusion industry, playing the infrastructure role of a "dream workshop".
The core logic is this: whether based on the blueprint of the metaverse or the definition of the next-generation Quanzhen (full-reality) Internet, the next computing platform will certainly blur the boundary between "reality" and "virtuality".
In this process, "identity" is an extremely important part.
What does this mean?
In today's online world, what defines our identity is a simple avatar on WeChat, a cartoon hero in Honor of Kings, or a legless bust in Facebook's VR social scene: all very monotonous, hardly "alive".
The essential reason is that the input bandwidth of these avatars is too small; they cannot intuitively reflect our "spirit", including our actions, bearing, and emotions.
In the future, this input bandwidth will certainly increase. By then, we humans in the real world will be able to truly go "online", and that, broadly speaking, also belongs to virtual human technology.
However the next-generation computing platform evolves, two factors are critical: "people" and "the world".
The people include us and our digital avatars, other people and their digital avatars, and fully digital "pure virtual humans".
The world includes the real world, the virtual world, and the world where digital and real are integrated.
To realize the mapping and extension of the real world, virtual human technology that has "spilled over" from games can create virtual humans and real-person doubles indistinguishable from real people, while game technology itself can efficiently build, from scratch, high-quality digital scenes with form, sound, and color.
Do not think the future I describe is far away. In fact, we can already see game technology "spilling over" not only into the virtual human industry but into many more industries and scenarios:
For example, game technology is widely used in film and television production, where it can effectively improve the flexibility and expressiveness of creation while reducing shooting costs.
During the filming of the popular American series "The Mandalorian" in 2019, the production team used Unreal Engine 4 to create detailed Star Wars scenes and carry out digital virtual shooting.
The automotive industry has also benefited from technologies derived from games. Game engines, as real-time 3D creation platforms, can be applied to automotive design, simulation training, and autonomous driving.
In autonomous driving, for example, Tencent TAD Sim incorporates Unreal Engine 4 and uses its physics engine to simulate weather and traffic under real driving conditions, achieving environmental simulation.
Digital twin systems driven by game engines can also help smart city planning; Hong Kong Airport, for example, has built an airport operations and maintenance system on a digital twin.
When John Carmack, the father of the game engine, developed the DOOM engine and open-sourced it in 1997, he could never have imagined that the technology he created would, decades later, transcend games and be so widely used in film, television, automotive, automation, and other industries.
Director Yang Dechang (Edward Yang) said that after the invention of film, human life was extended at least threefold.
In a sense, after the invention of games, human life has been extended at least threefold again, on top of film.
What connects the "game world" and the "real world" is the technology behind games, the ninth art, continually "turning the rotten into the miraculous".
By then, games will no longer be just games.