
Author: Su Yang, Hao Boyang; Source: Tencent Technology
As the "shovel seller" in the AI era, Huang Renxun and his NVIDIA always believe that computing power never sleeps.
Huang Renxun said in his GTC speech that inference has increased the demand for computing power by 100 times
At today's GTC conference, Huang Renxun unveiled the new Blackwell Ultra GPU, along with server SKUs derived from it for inference and Agent workloads, as well as an RTX "family bucket" of products based on the Blackwell architecture. All of this revolves around computing power, but the more important question that follows is how to consume this endless computing power sensibly and effectively.
In Huang Renxun's eyes, the path to AGI requires computing power, embodied intelligent robots require computing power, and building the Omniverse and world models requires even more computing power. As for how much computing power is ultimately needed for humans to build a virtual "parallel universe", Nvidia has given an answer - 100 times as much as in the past.
To support his point, Huang Renxun showed a set of data on the GTC stage: in 2024, the top four US cloud providers purchased a total of 1.3 million Hopper-architecture chips, and in 2025 that figure is set to soar to 3.6 million Blackwell GPUs.
The following are some key points of NVIDIA's GTC 2025 conference compiled by Tencent Technology:
NVIDIA released the Blackwell architecture at last year's GTC and launched the GB200 chip. This year's official name is slightly adjusted: it is not called GB300 as previously rumored, but simply Blackwell Ultra.
From a hardware perspective, however, it is last year's chip fitted with new HBM memory. In a word, Blackwell Ultra = Blackwell, large-memory edition.
Blackwell Ultra packages two Blackwell-architecture dies built on TSMC's N4P (5nm) process together with a Grace CPU, paired with more advanced 12-layer stacked HBM3e memory. Video memory is raised to 288GB, and like the previous generation it supports fifth-generation NVLink, achieving 1.8TB/s of chip-to-chip interconnect bandwidth.
NVLink performance parameters of previous generations
On top of the memory upgrade, the FP4 computing power of a Blackwell Ultra GPU reaches 15 PetaFLOPS, and inference based on the Attention Acceleration mechanism is 2.5 times faster than on Hopper-architecture chips.
Official picture of Blackwell Ultra NVL72
Like the GB200 NVL72, NVIDIA this year launched a corresponding cabinet product, the Blackwell Ultra NVL72. It consists of 18 compute trays, each containing 4 Blackwell Ultra GPUs + 2 Grace CPUs, for a total of 72 Blackwell Ultra GPUs + 36 Grace CPUs, with 20TB of video memory and 576TB/s of total memory bandwidth, plus 9 NVLink switch trays (18 NVLink switch chips) delivering 130TB/s of NVLink bandwidth between nodes.
The cabinet has 72 built-in CX-8 network cards, providing 14.4TB/s bandwidth, and the Quantum-X800 InfiniBand and Spectrum-X 800G Ethernet cards can reduce latency and jitter and support large-scale AI clusters. In addition, the rack also integrates 18 BlueField-3 DPUs for enhanced multi-tenant networking, security and data acceleration.
NVIDIA said that this product is specially customized "for the era of AI reasoning", and its application scenarios include reasoning AI, Agent, and physical AI (data simulation and synthesis for robot and intelligent driving training). Compared with the previous generation product GB200 NVL72, the AI performance has been improved by 1.5 times, and compared with the DGX cabinet products with the same positioning of the Hopper architecture, it can provide data centers with a 50-fold revenue increase opportunity.
According to official figures, inference on the 671-billion-parameter DeepSeek-R1 reaches 100 tokens per second on H100-based products, while the Blackwell Ultra NVL72 solution reaches 1,000 tokens per second.
Converted to time, the same reasoning task takes 1.5 minutes for H100, while Blackwell Ultra NVL72 can finish it in 15 seconds.
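As a back-of-envelope sanity check, the relation is simply latency = tokens / throughput. The token count below is an assumed workload for illustration, not an official figure; NVIDIA's quoted 1.5-minute and 15-second times likely reflect different batch and concurrency settings:

```python
# Simple latency model: time = tokens / throughput.
# The 9,000-token reasoning trace is an assumed workload, not an NVIDIA figure.
def latency_seconds(tokens: int, tokens_per_second: float) -> float:
    return tokens / tokens_per_second

print(latency_seconds(9_000, 100))    # 90.0 -> H100 at 100 tokens/s
print(latency_seconds(9_000, 1_000))  # 9.0  -> Blackwell Ultra NVL72 at 1,000 tokens/s
```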
Blackwell Ultra NVL72 and GB200 NVL72 hardware parameters
According to information provided by NVIDIA, Blackwell NVL72-related products are expected to ship in the second half of 2025. Customers include server manufacturers, cloud providers, and computing power leasing providers:
Server manufacturers
15 manufacturers including Cisco/Dell/HPE/Lenovo/Supermicro
Cloud providers
Mainstream platforms such as AWS/Google Cloud/Azure/Oracle Cloud
Computing power rental service providers
CoreWeave/Lambda/Yotta, etc.
According to NVIDIA's roadmap, Blackwell Ultra is the headliner of GTC 2025.
However, Huang Renxun also took this opportunity to preview the next-generation GPU based on the Rubin architecture that will be launched in 2026, as well as the more powerful cabinet Vera Rubin NVL144 - 72 Vera CPUs + 144 Rubin GPUs, using HBM4 chips with 288GB of video memory, 13TB/s of video memory bandwidth, and equipped with the sixth-generation NVLink and CX9 network cards.
How powerful is this product? Its FP4-precision inference computing power reaches 3.6 ExaFLOPS, and its FP8-precision training computing power reaches 1.2 ExaFLOPS, which is 3.3 times the performance of Blackwell Ultra NVL72.
If that is still not enough, no problem: in 2027 there will be an even more powerful Rubin Ultra NVL576 cabinet, with FP4-precision inference and FP8-precision training computing power of 15 ExaFLOPS and 5 ExaFLOPS respectively, 14 times that of Blackwell Ultra NVL72.
Parameters of Vera Rubin NVL144 and Rubin Ultra NVL576 officially provided by NVIDIA
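The quoted multiples can be checked against the per-GPU figure given earlier (15 PetaFLOPS FP4 per Blackwell Ultra GPU). A quick sketch, treating the cabinet as a simple sum of its GPUs, which is an idealizing assumption:

```python
# All inputs are figures quoted in this article; the cabinet total is a
# naive per-GPU sum, ignoring interconnect and utilization effects.
gpu_fp4_pflops = 15                          # Blackwell Ultra GPU, FP4
nvl72_fp4_ef = 72 * gpu_fp4_pflops / 1000    # 1.08 ExaFLOPS per NVL72 cabinet

vera_rubin_nvl144_fp4_ef = 3.6
rubin_ultra_nvl576_fp4_ef = 15

print(round(vera_rubin_nvl144_fp4_ef / nvl72_fp4_ef, 1))   # 3.3, matching "3.3 times"
print(round(rubin_ultra_nvl576_fp4_ef / nvl72_fp4_ef, 1))  # 13.9, close to "14 times"
```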
For customers whose needs a single Blackwell Ultra NVL72 cannot meet at this stage, but who do not need to build an ultra-large-scale AI cluster, NVIDIA's solution is the plug-and-play DGX SuperPOD, an AI supercomputing factory based on Blackwell Ultra.
As a plug-and-play AI supercomputing factory, the DGX SuperPOD is designed mainly for AI scenarios such as generative AI, AI Agents and physical simulation, covering computing power expansion needs across the full pipeline from pre-training and post-training to production. Equinix, as the first service provider, supplies the liquid-cooling/air-cooling infrastructure.
DGX SuperPod built by Blackwell Ultra
The DGX SuperPOD customized around Blackwell Ultra comes in two versions:
The DGX SuperPOD with built-in DGX GB300 (Grace CPU ×1 + Blackwell Ultra GPU ×2) has a total of 288 Grace CPUs + 576 Blackwell Ultra GPUs, providing 300TB of fast memory and a computing power of 11.5ExaFLOPS at FP4 precision
DGX SuperPOD with built-in DGX B300. This version does not contain the Grace CPU chip, has further expansion space, and uses an air-cooling system. Its main application scenario is ordinary enterprise-level data centers.
In January this year, NVIDIA showed a conceptual US$3,000 AI PC at CES under the name Project DIGITS; it now has the official name DGX Spark.
In terms of specifications, it carries a GB10 chip delivering 1 PetaFLOPS of computing power at FP4 precision, with 128GB of built-in LPDDR5X memory, a CX-7 network card and 4TB of NVMe storage. It runs the Linux-based DGX OS, supports frameworks such as PyTorch, and comes pre-installed with NVIDIA's basic AI software development tools. A single unit, roughly the size of a Mac mini, can run 200-billion-parameter models, and two interconnected DGX Sparks can run models with more than 400 billion parameters.
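A rough capacity check makes the 200B/400B claims plausible: at FP4, each weight occupies about half a byte. The 80% headroom factor below is an assumption to account for runtime overhead, not an NVIDIA figure:

```python
# At FP4 each parameter takes ~0.5 bytes; KV cache, activations and runtime
# overhead are folded into an assumed 20% headroom.
BYTES_PER_FP4_PARAM = 0.5

def max_params_billion(memory_gb: float, headroom: float = 0.8) -> float:
    """Approximate largest model size (billions of parameters) fitting in memory_gb."""
    return memory_gb * 1e9 * headroom / BYTES_PER_FP4_PARAM / 1e9

print(round(max_params_billion(128), 1))      # 204.8 -> consistent with "200B" on one unit
print(round(max_params_billion(2 * 128), 1))  # 409.6 -> consistent with ">400B" on two
```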
Although we say it is an AI PC, it is essentially still in the category of supercomputing, so it is placed in the DGX product series, rather than in consumer-grade products such as RTX.
Some, however, have complained about this product: the advertised FP4 performance is of limited practical use, and converted to FP16 precision it can only compete with the RTX 5070, or even the $250 Arc B580, so its cost-performance is extremely poor.
DGX Spark computer and DGX Station workstation
Besides the officially named DGX Spark, NVIDIA also launched an AI workstation based on Blackwell Ultra. It has a built-in Grace CPU and a Blackwell Ultra GPU, 784GB of unified memory and a CX-8 network card, and provides 20 PetaFLOPS of AI computing power (precision not officially stated, presumably also FP4).
The SKUs introduced above are all enterprise-level products based on Grace CPUs and Blackwell Ultra GPUs. Considering how many people are interested in creative uses of cards like the RTX 4090 for AI inference, NVIDIA at this GTC further tightened the integration of the Blackwell and RTX series, launching a large batch of AI-PC-oriented GPUs with GDDR7 memory, covering laptops, desktops and even data centers.
Desktop GPUs: RTX PRO 6000 Blackwell Workstation Edition, RTX PRO 6000 Blackwell Max-Q Workstation Edition, RTX PRO 5000 Blackwell, RTX PRO 4500 Blackwell and RTX PRO 4000 Blackwell
Laptop GPUs: RTX PRO 5000 Blackwell, RTX PRO 4000 Blackwell, RTX PRO 3000 Blackwell, RTX PRO 2000 Blackwell, RTX PRO 1000 Blackwell and RTX PRO 500 Blackwell
Data Center GPUs: NVIDIA RTX PRO 6000 Blackwell Server Edition
NVIDIA's AI "family bucket" for enterprise-level computing
The above are only some of the SKUs customized for different scenarios based on the Blackwell Ultra chip, ranging from workstations to data center clusters. NVIDIA itself calls it the "Blackwell Family", and the Chinese translation "Blackwell Family Bucket" is more appropriate.
The idea of co-packaged optics (CPO) is to package the switch chip and the optical module together, converting optical signals to electrical signals inside the package and making full use of the transmission performance of optical signals.
Before this, the industry had long discussed NVIDIA's CPO network switch, which had repeatedly been delayed. Huang Renxun explained why on the spot: because data centers use fiber-optic connections so extensively, optical networking consumes the equivalent of 10% of computing resources, and the cost of optical connections directly affects the Scale-Out network of compute nodes and improvements in AI performance density.
Parameters of the two co-packaged silicon photonics chips, Quantum-X and Spectrum-X, displayed at GTC
At this year's GTC, NVIDIA launched the Quantum-X and Spectrum-X co-packaged silicon photonics chips and three switch products derived from them: Quantum 3450-LD, Spectrum SN6810 and Spectrum SN6800.
Quantum 3450-LD: 144 ports at 800Gb/s, 115TB/s backplane bandwidth, liquid cooling
Spectrum SN6810: 128 ports at 800Gb/s, 102.4TB/s backplane bandwidth, liquid cooling
Spectrum SN6800: 512 ports at 800Gb/s, 409.6TB/s backplane bandwidth, liquid cooling
These products are grouped together as "NVIDIA Photonics". NVIDIA says the platform was co-developed with its CPO partner ecosystem; for example, its micro-ring modulator (MRM) is optimized on the basis of TSMC's optical engine, supports high-power, high-efficiency laser modulation, and uses detachable fiber-optic connectors.
More interesting still, according to earlier industry reports, TSMC's micro-ring modulator (MRM) was built together with Broadcom on a 3nm process using advanced packaging technologies such as CoWoS.
According to the data provided by NVIDIA, the Photonics switch with integrated optical modules has a 3.5-fold performance improvement over traditional switches, a 1.3-fold improvement in deployment efficiency, and more than 10 times the expansion flexibility.
Huang Renxun sketches the grand vision of AI infra on stage
Huang Renxun spent only about half an hour of this two-hour GTC keynote on software and embodied intelligence, so many of the details below are supplemented from official documents rather than taken entirely from the stage.
Nvidia Dynamo is without question the software blockbuster of this event.
It is open-source software built to accelerate inference and training across the entire data center. Dynamo's performance numbers are striking: on the existing Hopper architecture, Dynamo can double the performance of standard Llama models, and for specialized reasoning models such as DeepSeek, its intelligent inference optimizations can increase the number of tokens generated per GPU by more than 30 times.
Huang Renxun demonstrated that Blackwell with Dynamo can exceed Hopper by 25 times
Dynamo's gains come mainly from distribution. It splits the different computing stages of an LLM (understanding the user query and generating the best response) across different GPUs, allowing each stage to be optimized independently, which raises throughput and speeds up responses.
Dynamo's system architecture
For example, in the input-processing stage, i.e. prefill, Dynamo can allocate GPU resources efficiently to process user input. The system uses several groups of GPUs to handle the query in parallel, so the processing is more spread out and faster. In FP4 mode, Dynamo calls multiple GPUs in parallel to "read" and "understand" the user's question: one group processes background knowledge of "World War II", another the historical data related to its "causes", and a third the timeline and events of its "course". This stage is like several research assistants looking up large amounts of material at the same time.
When generating output tokens, i.e. the decode stage, the GPU needs to be more focused and coherent. Rather than GPU count, this stage demands more bandwidth to absorb the "thinking" from the previous stage, and therefore more cache reads. Dynamo optimizes inter-GPU communication and resource allocation to ensure coherent, efficient generation. On the one hand, it fully exploits the high-bandwidth NVLink of the NVL72 architecture to maximize token-generation efficiency. On the other hand, its "Smart Router" directs each request to the GPU that already caches the relevant KV (key-value) entries, avoiding repeated computation and greatly improving processing speed. Because recomputation is avoided, some GPU resources are freed up, and Dynamo can dynamically allocate these idle resources to new incoming requests.
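The KV-cache-aware routing idea can be sketched in a few lines. This is an illustrative toy, not Dynamo's actual API; the worker names and the prefix-matching scheme are assumptions:

```python
# Toy KV-cache-aware router: prefer the worker that already caches the longest
# prefix of the prompt (a cache hit skips recomputing prefill); otherwise fall
# back to the least-loaded worker. Not Dynamo's real interface.
from collections import defaultdict

class SmartRouter:
    def __init__(self, workers):
        self.workers = list(workers)
        self.kv_prefixes = defaultdict(set)   # worker -> cached prompt prefixes
        self.load = defaultdict(int)          # worker -> active request count

    def route(self, prompt: str) -> str:
        best, best_len = None, 0
        for w in self.workers:
            for p in self.kv_prefixes[w]:
                if prompt.startswith(p) and len(p) > best_len:
                    best, best_len = w, len(p)
        if best is None:                      # no cache hit: balance by load
            best = min(self.workers, key=lambda w: self.load[w])
        self.load[best] += 1
        self.kv_prefixes[best].add(prompt)    # this prompt's KV is now cached there
        return best

router = SmartRouter(["gpu-0", "gpu-1"])
first = router.route("Analyze the causes of World War II")
second = router.route("Analyze the causes of World War II in detail")
# The follow-up lands on the same worker, reusing the cached prefix.
```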
This architecture is very similar to Kimi's Mooncake, but NVIDIA provides more support in the underlying infrastructure. Mooncake can improve throughput by about 5 times, while Dynamo's gains in inference are even more pronounced.
Among Dynamo's key innovations, the "GPU Planner" dynamically adjusts GPU allocation according to load, the "Low-Latency Communication Library" optimizes data transfer between GPUs, and the "Memory Manager" intelligently moves inference data between storage tiers of different cost, further reducing operating costs. The Smart Router, an LLM-aware routing system, directs requests to the most appropriate GPU to reduce recomputation. Together these capabilities optimize GPU load.
Using this set of software reasoning systems, it is possible to efficiently expand to large GPU clusters, allowing a single AI query to seamlessly expand to up to 1,000 GPUs to fully utilize data center resources.
For GPU operators, this improvement has significantly reduced the cost per million tokens and greatly increased production capacity. At the same time, a single user can obtain more tokens per second, respond faster, and improve user experience.
Dynamo lets servers hit the sweet spot between throughput and response speed
Unlike CUDA as the underlying foundation of GPU programming, Dynamo is a higher-level system that focuses on the intelligent allocation and management of large-scale inference loads. It is responsible for the distributed scheduling layer of inference optimization and is located between the application and the underlying computing infrastructure. But just as CUDA completely changed the GPU computing landscape more than ten years ago, Dynamo may also successfully create a new paradigm for inference software and hardware efficiency.
Dynamo is fully open source and supports all mainstream frameworks from PyTorch to TensorRT. Open source, yet still a moat: like CUDA, it works only on NVIDIA GPUs and is part of NVIDIA's AI inference software stack.
With this software upgrade, NVIDIA has built its own defense against dedicated inference ASIC chips such as Groq's. Only a combination of software and hardware can dominate the inference infrastructure.
But while Dynamo is genuinely impressive on server utilization, NVIDIA still trails the real experts when it comes to training models.
At this GTC, NVIDIA introduced a new model, Llama Nemotron, focused on efficiency and accuracy. It is derived from the Llama series and specially fine-tuned by NVIDIA: pruned to be more lightweight than the base Llama, at only 49B parameters, yet with reasoning capabilities comparable to o1. Like Claude 3.7 and Grok 3, Llama Nemotron has a built-in reasoning switch that users can choose to turn on or off. The series comes in three tiers: entry-level Nano, mid-range Super and flagship Ultra, each aimed at enterprises of different sizes.
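The reasoning switch is exposed at the prompt level. A minimal sketch of the pattern, assuming the "detailed thinking on/off" system-prompt strings from NVIDIA's published usage notes; treat the exact strings as an assumption to verify against the model card:

```python
# Build a chat request with the reasoning switch expressed via the system
# prompt. The "detailed thinking on/off" strings follow NVIDIA's published
# usage for Llama Nemotron, but should be verified against the model card.
def build_messages(question: str, reasoning: bool) -> list[dict]:
    system = "detailed thinking on" if reasoning else "detailed thinking off"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

msgs = build_messages("Summarize this contract clause.", reasoning=False)
print(msgs[0]["content"])  # detailed thinking off
```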
Specific data of Llama Nemotron
On efficiency, the model's fine-tuning dataset consists entirely of synthetic data generated by NVIDIA itself, about 60B tokens in all. Whereas DeepSeek V3 took 1.3 million H100 hours to train from scratch, this model, with only 1/15 of DeepSeek V3's parameters, took 360,000 H100 hours just to fine-tune. Its training efficiency sits roughly an order of magnitude below DeepSeek's.
On inference efficiency, the Llama Nemotron Super 49B model does perform far better than its predecessor: its token throughput reaches 5 times that of Llama 3 70B, processing more than 3,000 tokens per second on a single data-center GPU. Yet in the figures released on the final day of DeepSeek's Open Source Week, each H800 node averaged about 73.7k input tokens/s during prefill (including cache hits) and about 14.8k output tokens/s during decode. The gap between the two remains obvious.
On raw performance, the 49B Llama Nemotron Super beats the DeepSeek-R1-distilled Llama 70B on every metric. But given the recent stream of small, high-performing models such as Qwen's QwQ 32B, Llama Nemotron Super is unlikely to stand out among models that can go toe-to-toe with R1 itself.
Most damning of all, this model effectively confirms that DeepSeek may understand how to tune GPUs during training better than NVIDIA does.
Why is NVIDIA developing a reasoning model at all? Mainly to prepare for the next AI explosion Huang has his eye on: AI Agents. Now that OpenAI, Claude and other major players have gradually laid the foundations for Agents through Deep Research and MCP, NVIDIA clearly also believes the Agent era has arrived.
The NVIDIA AIQ project is NVIDIA's own attempt. It directly provides planners with a ready-made AI Agent workflow built around the Llama Nemotron reasoning model. The project sits at NVIDIA's Blueprint level, meaning a set of pre-configured reference workflows and templates that help developers integrate NVIDIA's technologies and libraries more easily; AIQ is the Agent template NVIDIA provides.
NVIDIA AIQ architecture
Like Manus, it integrates external tools such as web search engines and other specialized AI agents, which lets the Agent itself search and use a variety of tools. Through the planning, reflection and optimization of the Llama Nemotron reasoning model, it completes the user's tasks. It also supports building multi-agent workflow architectures.
The ServiceNow system built on this template
One step beyond Manus, it includes a sophisticated RAG system for enterprise files: a pipeline running from extraction, embedding and vector storage through reranking to final processing by the LLM, ensuring that enterprise data is usable by Agents.
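The pipeline stages named above (extract, embed, vector store, rerank, LLM) can be sketched end to end. This is a toy sketch with a bag-of-words stand-in for the embedding model so it runs without external dependencies; a real AIQ deployment would use NVIDIA's retrieval components:

```python
# Minimal RAG sketch: embed documents, store vectors, rank by cosine
# similarity to the query (the "rerank" step collapses into this sort in a
# toy setting), and hand the top hits to the LLM as grounding context.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: bag of lowercase word counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, store: list, k: int = 2) -> list:
    q = embed(query)
    ranked = sorted(store, key=lambda doc: cosine(q, doc[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

docs = [
    "Q3 revenue grew 12 percent",
    "The office moves in May",
    "Revenue fell in Q1",
]
store = [(d, embed(d)) for d in docs]
context = retrieve("What happened to revenue?", store)
# `context` would be inserted into the LLM prompt as grounding.
```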
On top of this, NVIDIA has launched an AI data platform that connects AI reasoning models to enterprise data systems, forming a Deep Research over enterprise data. Storage technology thus takes a major step: the storage system is no longer just a data warehouse but an intelligent platform with active reasoning and analysis capabilities.
The composition of the AI Data Platform
In addition, AIQ places great emphasis on observability and transparency mechanisms. This is very important for security and subsequent improvements. The development team can monitor the activities of the Agent in real time and continuously optimize the system based on performance data.
Overall, NVIDIA AIQ is a standard Agent workflow template providing a range of Agent capabilities: a more foolproof, Dify-style Agent construction tool evolved for the reasoning era.
If the focus on Agents is still an investment in the present, NVIDIA's layout in embodied intelligence amounts to integrating the future.
NVIDIA has lined up all three elements: model, data, and computing power.
Let's start with the model. This GTC released an upgraded version of the embodied intelligence basic model Cosmos announced in January this year.
Cosmos is a model that can predict future images through current images. It can input data from text/images, generate detailed videos, and predict the evolution of scenes by combining its current state (image/video) with actions (prompts/control signals). Because this requires an understanding of the physical causal laws of the world, NVIDIA calls Cosmos a world foundation model (WFM).
The basic architecture of Cosmos
For embodied intelligence, predicting the impact of the machine's behavior on the external world is the most core ability. Only in this way can the model plan behavior based on predictions, so the world model becomes the basic model of embodied intelligence. With this basic behavior/time-physical world change prediction model, through fine-tuning of specific data sets such as autonomous driving and robotic tasks, this model can meet the actual landing needs of various embodied intelligence with physical forms.
The entire model comprises three capabilities. The first, Cosmos Transfer, converts structured video-text input into controllable, photorealistic video output, generating large-scale synthetic data out of thin air from text. This addresses the biggest bottleneck of current embodied intelligence: insufficient data. And the generation is "controllable": users can specify parameters (such as weather conditions or object properties) and the model adjusts its output accordingly, making data generation more controllable and targeted. The whole process can also be combined with Omniverse and Cosmos.
Cosmos builds reality simulation on top of Omniverse
The second part, Cosmos Predict, can generate virtual world states from multimodal inputs, supporting multi-frame generation and motion trajectory prediction. This means that given the start and end states, the model can generate a reasonable intermediate process. This is the core physical world cognition and construction capability.
The third part is Cosmos Reason, an open and fully customizable model with spatiotemporal perception. It understands video data and predicts interaction outcomes through chain-of-thought reasoning, the capability that underpins planning behavior and predicting its results.
With these three capabilities gradually added together, Cosmos can achieve a complete behavioral chain from real image token + text command prompt token input to machine action token output.
This foundation model does appear to be effective: only two months after launch, the three leading companies 1X, Agility Robotics and Figure AI have all started using it. NVIDIA's large language models may not lead the field, but in embodied intelligence it sits squarely in the first tier.
With Cosmos, NVIDIA naturally used this framework to fine-tune and train the basic model Isaac GR00T N1 dedicated to humanoid robots.
Isaac GR00T N1's dual-system architecture
It uses a dual-system architecture, with a fast-response "system 1" and a deep-reasoning "system 2". Its comprehensive fine-tuning enables it to handle general tasks such as grasping, moving, and dual-arm manipulation. And it can be fully customized according to specific robots, and robot developers can use real or synthetic data for post-training. This allows this model to be deployed in a variety of robots of different shapes.
For example, NVIDIA, which is cooperating with Google DeepMind and Disney to develop the Newton physics engine, used Isaac GR00T N1 as a base to drive a very unusual little Disney BDX robot, showing how adaptable the model is. Newton is a very fine-grained physics engine, precise enough to build a physical reward system for training embodied intelligence in virtual environments.
Huang Renxun and BDX robot "passionately" interact on stage
NVIDIA combined Omniverse with the Cosmos Transfer world foundation model mentioned above to create the Isaac GR00T Blueprint, which can generate large amounts of synthetic motion data from a small number of human demonstrations for robot manipulation training. With the first batch of Blueprint components, NVIDIA generated 780,000 synthetic trajectories in just 11 hours, equivalent to 6,500 hours (about 9 months) of human demonstration data. A considerable share of Isaac GR00T N1's training data comes from this source, which improved GR00T N1's performance by 40% over using real data alone.
Twin Simulation System
For any model, NVIDIA can supply large volumes of high-quality data through Omniverse, a purely virtual system, and Cosmos Transfer, a real-world image generation system. With that, NVIDIA covers the second element, data, as well.
Since last year, Huang has emphasized the concept of "three computers" at GTC: DGX, the large GPU server used to train AI, including embodied intelligence; AGX, NVIDIA's embedded computing platform for edge computing and autonomous systems, used to deploy AI on the device side, for example as the core chip of an autonomous car or robot; and the data-generation computer, Omniverse + Cosmos.
Three major computing systems of embodied intelligence
Huang brought this system up again at this GTC, specifically noting that with this computing stack billions of robots can be born. From training to deployment, the computing power is NVIDIA's. This loop, too, is closed.
Judged simply against the previous-generation Blackwell chip, Blackwell Ultra hardly lives up to earlier epithets like "nuclear bomb" or "king bomb"; on hardware it even has a whiff of squeezing the toothpaste tube.
But from the perspective of roadmap planning, these are all in Huang Renxun's layout. The Rubin architecture next year and the year after will have significant improvements in chip technology, transistors, rack integration, GPU interconnection, and cabinet interconnection. As the Chinese say, "the best is yet to come."
Compared with the incremental story on the hardware side, NVIDIA's progress on the software side over the past two years has been rapid.
Surveying NVIDIA's entire software ecosystem, the three service tiers of NeMo, NIM and Blueprint cover full-stack solutions from model optimization and model packaging to application construction. The niches of cloud service companies and NVIDIA's AI offerings fully overlap, and with Agents and AI infra newly added, NVIDIA intends to eat every part of the stack except the foundation model itself.
In terms of software, Huang's appetite is as big as Nvidia's stock price.
In the robotics market, Nvidia's ambitions are even greater. Models, data, and computing power are all in hand. Although it didn't catch up with the top spot in basic language models, it has made up for it with basic embodied intelligence. A monopoly giant of embodied intelligence has emerged on the horizon.
Here, every link and every product corresponds to a potential market worth hundreds of billions. Huang Renxun, the lucky gambler who bet everything in his early years, has opened a bigger gamble with the money earned from the GPU monopoly.
If in this gamble, any aspect of the software or robotics market is taken, then Nvidia is the Google of the AI era, the top monopolist in the food chain.
However, looking at the profit margin of Nvidia GPU, we still hope that such a future will not come.
For Huang, this is a bigger gamble than any he has ever made in his life, and the outcome is unpredictable.