
Interview: Yan
On April 7, 2025, Vitalik and Xiao Wei appeared together at the Pop-X HK Research House event co-organized by DappLearning, ETHDimsum, Panta Rhei and UETH.
During the event, Yan, the initiator of the DappLearning community, interviewed Vitalik on topics including ETH PoS, Layer 2, cryptography and AI. The interview was conducted in Chinese, in which Vitalik is fluent.
The following is the content of the interview (the original content has been reorganized for easy reading and understanding):
Yan:
Hello Vitalik, I am Yan from the DappLearning community. It is a great honor to interview you here.
I started following Ethereum in 2017. I remember that in 2018 and 2019 everyone had very heated discussions about PoW versus PoS; perhaps that topic will continue to be discussed.
As of today, ETH PoS has been running stably for more than four years, with millions of validators in the consensus network. At the same time, the ETH/BTC exchange rate has kept falling, which brings both positives and challenges.
Standing at this point in time, how do you view Ethereum's PoS upgrade?
Vitalik:
I think the prices of BTC and ETH have little to do with PoW or PoS.
There will be many different voices in the BTC and ETH communities, and the two communities do completely different things, and everyone's way of thinking is also completely different.
Regarding the price of ETH, I do think there is a problem. ETH has many possible futures, and it is conceivable that in some of them there are many successful applications on Ethereum, yet those applications do not bring enough value back to ETH.
Many people in the community worry about this, but the problem is actually quite normal. Take Google: the company makes many, many products and does many interesting things, yet more than 90% of its revenue still comes from its search business.
The relationship between Ethereum ecosystem applications and the price of ETH is similar: some applications pay a lot of transaction fees and burn a lot of ETH, while many others may be more successful as products yet bring less success to ETH than they should.
So this is a problem we need to think about and keep optimizing. We need to support more applications that create long-term value for Ethereum holders and for ETH.
That is where I think ETH's future success will come from. I don't think it has much to do with improvements to the consensus algorithm.
Yan:
Yes, the prosperity of the ETH ecosystem is also an important reason developers like us are drawn to build on it.
OK. What do you think of Ethereum's PBS (Proposer-Builder Separation) architecture? It seems like a good direction: in the future everyone could use a mobile phone as a light node to verify (ZK) proofs, and anyone could stake 1 ether to become a validator.
But the builder may become more centralized. It has to handle both MEV resistance and ZK proof generation, and if Based Rollups are adopted the builder may have to do even more, such as acting as a sequencer.
Won't the builder be too centralized in that case? Even if the validators are decentralized enough, the system is a chain: if one link in the middle fails, the operation of the whole system is affected. How do you solve the censorship-resistance problem here?
Vitalik:
Yes, I think this is a very important philosophical question.
In the early days of Bitcoin and Ethereum, there was an assumption that could be called subconscious:
building a block and verifying a block are the same operation.
Suppose you are building a block containing 100 transactions: you need to execute that much gas on your own node. When you broadcast the block to the world, every node in the world must also do that much work, consuming the same gas. So if we set the gas limit so that every laptop, or a server of a certain size, can build blocks, then comparably sized node servers are needed to verify those blocks.
This is the previous technology. Now we have ZK, DAS, many new technologies, and Statelessness.
Before these technologies were used, building blocks and verifying blocks needed to be symmetric, but now they can be asymmetric. So the difficulty of building a block may become very high, but the difficulty of verifying a block may become very low.
Take stateless clients as an example: if we use stateless technology and raise the gas limit tenfold, the computing power needed to build a block becomes huge; an ordinary computer may not manage it, and you might need a very high-performance Mac Studio or an even more powerful server.
But the cost of verification becomes lower, because verification no longer needs any storage at all and relies only on bandwidth and CPU. Add ZK and even the CPU cost of verification can be removed; add DAS and the cost of verification becomes very, very low. So the cost of building a block goes up, while the cost of verifying one becomes very low.
So, is that better than the current situation?
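The asymmetry described here can be sketched with a toy cost model (all numbers and function names invented, purely illustrative): build cost scales with gas used, legacy verification re-executes everything, while succinct-proof verification stays near-constant regardless of block size.

```python
# Toy model of the build/verify asymmetry (all numbers invented).

def build_cost(gas: int) -> int:
    # The builder must execute every transaction: cost grows with gas.
    return gas

def verify_cost_reexec(gas: int) -> int:
    # Legacy verification re-executes the whole block: same cost as building.
    return gas

def verify_cost_succinct(gas: int) -> int:
    # Checking a ZK proof (plus DAS sampling) is near-constant,
    # independent of how much gas the block consumed.
    return 50_000  # illustrative fixed cost

for gas_limit in (30_000_000, 300_000_000):  # today vs. a 10x limit
    print(gas_limit, build_cost(gas_limit), verify_cost_succinct(gas_limit))
```

Under this model, raising the gas limit 10x makes building 10x harder but leaves verification unchanged, which is the trade-off the rest of the answer explores.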
This question is more complicated. I would think like this, that is, if there are some super nodes in the Ethereum network, that is, some nodes will have higher computing power, and we need them to do high-performance computing.
Then how can we prevent them from doing evil? For example, there are several attacks:
First: create a 51% attack.
Second: Censorship attack. If they do not accept transactions from some users, how can we reduce such risks?
Third: anti-MEV related operations, how can we reduce these risks?
Regarding 51% attacks: since the verification process is done by attesters, and attester nodes verify via DAS, ZK proofs and stateless clients, the cost of verification will be very low, so the threshold for running a consensus node remains relatively low.
For example, suppose some super nodes build the blocks, and 90% of them are run by you, 5% by one other party, and 5% by everyone else. Even if you refuse to accept any transactions at all, it is not a particularly bad outcome. Why? Because you have no way to interfere with the consensus process itself.
Since you cannot mount a 51% attack, the only thing you can do is exclude certain users' transactions.
Those users may only need to wait ten or twenty blocks for someone else to include their transaction in a block. That is the first point.
The second point is that we have the concept of FOCIL (fork-choice enforced inclusion lists). What does FOCIL do?
FOCIL separates the role of selecting transactions from the role of executing them. In this way, the role of choosing which transactions go into the next block can be made much more decentralized: small nodes gain the ability to independently force transactions into the next block, while a large node, despite its resources, actually ends up with very little power [1].
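As an illustration only (function and variable names are hypothetical, not the actual FOCIL/EIP-7805 spec), the core validity rule can be sketched like this: a builder may omit a transaction from the inclusion list only if the block genuinely had no room left for it.

```python
# Hypothetical sketch of an inclusion-list validity check:
# lightweight nodes publish a list of transactions; a builder's block
# is only valid if every listed transaction is included, unless the
# block was genuinely too full to fit it.

def validate_block(block_txs: set, inclusion_list: list,
                   block_gas: int, gas_limit: int,
                   tx_gas: dict) -> bool:
    """Reject blocks that omit a listed tx while space remained."""
    for tx in inclusion_list:
        if tx in block_txs:
            continue
        # Omission is only acceptable if the tx could not have fit.
        if block_gas + tx_gas[tx] <= gas_limit:
            return False  # builder censored a listed transaction
    return True

# A builder who skips "b" despite spare gas produces an invalid block:
censored = validate_block({"a"}, ["a", "b"], 700, 1000,
                          {"a": 500, "b": 200})  # False
# If the block was genuinely full, the omission is allowed:
full = validate_block({"a"}, ["a", "b"], 900, 1000,
                      {"a": 500, "b": 200})  # True
```

This is how a resource-heavy builder can be left with "very little power": it builds the block, but small nodes decide what it must contain.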
This model is more complicated than before, when we assumed each node was a personal laptop. But if you look at Bitcoin, it already has a more hybrid architecture, because Bitcoin miners are all mining data centers.
PoS works roughly the same way: some nodes need more computing power and more resources, but the rights of those nodes are limited, while the other nodes can be kept very decentralized, ensuring the security and decentralization of the network. This approach is more complex, though, and that is a challenge for us.
Yan:
Very good idea. Centralization is not necessarily a bad thing, as long as we can limit it from doing evil.
Vitalik:
Yes.
Yan:
Thanks for resolving my confusion of many years. Let's move to the second part. As a witness to Ethereum's journey, Layer 2 has actually been very successful, and the TPS problem has indeed been solved, unlike the ICO era, when the network was congested by the rush to send transactions.
I personally find L2 very usable now. However, many people have proposed various solutions to the problem of L2 liquidity fragmentation. How do you see the relationship between Layer 1 and Layer 2? Is the Ethereum mainnet too hands-off, too decentralized, imposing no constraints on Layer 2? Does Layer 1 need to set rules with Layer 2, create profit-sharing models, or adopt solutions such as Based Rollups? Justin Drake recently proposed this on Bankless, and I agree with it. What do you think? And if a solution already exists, I am curious when it will launch.
Vitalik:
I think our Layer2 has several problems now.
The first is that their progress on security has not been fast enough. So I have been pushing L2s to upgrade to Stage 1, and I hope they upgrade to Stage 2 this year. I keep urging them to do so, and I have been supporting L2BEAT's transparency work in this area.
The second is the issue of L2 interoperability. That is, cross-chain transactions and communications between two L2s. If the two L2s are in the same ecosystem, interoperability needs to be simpler, faster and cheaper than it is now.
Last year we started this work, now called the Open Intents Framework, along with chain-specific addresses; both are mostly UX work.
In fact, I think that 80% of L2's cross-chain problems are actually UX problems.
Although the process of solving UX problems may be painful, as long as the direction is right, complex problems can be made simple. This is also the direction we are working towards.
Some things need to go a step further. For example, the withdrawal time of an Optimistic Rollup is one week: if you hold a token on Optimism or Arbitrum, you must wait a week to move it to L1 or to another L2.
You can have market makers absorb that week of waiting (paying them a fee in return). Ordinary users can move from one L2 to another through the Open Intents Framework (for example, Across Protocol). For small transactions this works, but for large transactions market makers' liquidity is limited, so the fees they require are higher. That is why, in the article I published last week [2], I support a 2-of-3 verification scheme: OP + ZK + TEE.
Because if we do that kind of 2 of 3, we can meet three requirements at the same time.
The first requirement is being completely trustless, with no need for a Security Council. TEE plays only a supporting role, so there is no need to fully trust it.
Second, we can start using ZK technology; it is still relatively early, so we cannot rely on it completely, but in this scheme we don't have to.
Third, we can reduce the withdrawal time from one week to one hour.
You can imagine that if users use the Open Intents Framework, market makers' liquidity costs will drop by a factor of 168, because the time they must wait (to rebalance) shrinks from one week (168 hours) to one hour. In the long term, we plan to reduce the withdrawal time from 1 hour to 12 seconds (the current block time), and with SSF it could drop to 4 seconds.
We will also use techniques such as zk-SNARK aggregation to parallelize proof generation and reduce latency. Of course, users who rely on ZK directly don't need to go through intents at all; but going through intents keeps the cost very low. All of this is part of interoperability.
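A minimal sketch of the 2-of-3 scheme discussed above (all names hypothetical, not a real client API), together with the 168x arithmetic: a withdrawal finalizes once any two of the three independent systems agree, so a valid exit need not wait out the optimistic challenge window.

```python
# Hypothetical 2-of-3 finalization: optimistic fraud proof, ZK
# validity proof, and TEE attestation each return a verdict; a
# withdrawal is accepted once at least two of the three approve.

def finalize_withdrawal(verdicts: dict) -> bool:
    """Accept when at least 2 of the 3 proof systems approve."""
    return sum(1 for ok in verdicts.values() if ok) >= 2

# ZK and TEE approve while the 1-week optimistic window is still
# open: 2 of 3 suffices, so funds can move in about an hour.
fast_exit = finalize_withdrawal({"optimistic": False, "zk": True, "tee": True})

# Market-maker capital efficiency: rebalancing time drops from one
# week (7 * 24 = 168 hours) to one hour, hence the 168x figure.
hours_per_week = 7 * 24
```

One verdict alone (for instance a compromised TEE) is never enough, which is why the TEE only "supports" and never needs to be fully trusted.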
Regarding the role of L1: in the early days of the L2 roadmap, many people thought we could simply copy Bitcoin's roadmap, with L1 having very few uses, only verifying proofs (and a small amount of other work), and L2 doing everything else.
But we found that if L1 plays no role at all, that is dangerous for ETH.
We talked about this before: one of our biggest concerns is that the success of Ethereum applications fails to become the success of ETH.
If ETH does not succeed, our community will have no money and no way to support the next round of applications. And if L1 plays no role, user experience and the entire architecture will be controlled by L2s and a few applications, and no one will represent ETH. So if we can give L1 a bigger role in some applications, that is better for ETH.
The next question we need to answer is what will L1 do? What will L2 do?
I published an article in February [3]: even in an L2-centric world, there are many important things that must be done by L1. For example, L2s need to submit proofs to L1; if an L2 fails, users need to exit through L1 to another L2; keystore wallets can also store oracle data on L1, and so on. Many such mechanisms need to rely on L1.
There are also high-value applications, such as DeFi, that are actually better suited to L1. One important reason is that their time horizon (investment period) is very long: one, two, or three years.
This is especially obvious in prediction markets, which sometimes ask questions like: what will happen in 2028?
There is a problem here. If an L2's governance fails, then in theory all of its users can exit, moving to L1 or to another L2. But if an application on that L2 has its assets locked in a long-term smart contract, its users have no way to exit. So many DeFi applications that are secure in theory are not very secure in practice.
For these reasons, some applications should still be done on L1, so we have begun to pay more attention to the scalability of L1.
We now have a roadmap, and by 2026, there are about four or five ways to improve the scalability of L1.
The first is Delayed Execution (separating block verification from execution): in each slot we only verify the block and let it actually execute in the next slot. The advantage is that the maximum acceptable execution time can increase from roughly 200 milliseconds to 3 or 6 seconds, leaving much more processing time [4].
The second is Block-Level Access Lists: each block declares up front which account state and related storage it will read. It is a bit like statelessness without witnesses. One advantage is that EVM execution and IO can then be processed in parallel, which is a relatively simple way to implement parallelism.
The third is Multidimensional Gas Pricing [5], which can cap the maximum capacity of a block per resource. This is very important for security.
Another is historical data handling (EIP-4444): not every node needs to permanently store all history. Each node might store only 1%, using a p2p approach where your node keeps one part and someone else's keeps another, so the data is stored in a more decentralized way.
If we can combine these four approaches, we now believe we can raise the L1 gas limit by 10x. All our applications would then have the opportunity to rely more on L1 and do more there, which is good for L1 and for ETH.
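As a hedged illustration of the multidimensional gas pricing idea (the resource names and caps below are invented, not the EIP's actual parameters), the per-resource validity check might look like this:

```python
# Sketch of multidimensional gas: instead of one scalar gas limit,
# each block is bounded per resource, so no single resource (e.g.
# calldata) can be maxed out using headroom "borrowed" from another.

LIMITS = {"compute": 30_000_000, "state_io": 5_000_000, "data": 1_000_000}

def block_within_limits(usage: dict) -> bool:
    """A block is valid only if every resource stays under its cap."""
    return all(usage.get(res, 0) <= cap for res, cap in LIMITS.items())

# A data-heavy block is rejected by the data cap even though its
# total usage would easily fit under a single aggregate limit:
ok = block_within_limits({"compute": 10_000_000, "data": 900_000})
bad = block_within_limits({"compute": 1_000, "data": 2_000_000})
```

Capping each dimension separately is what makes this "very important for security": the worst-case block for any one resource is bounded even when the others are idle.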
Yan:
Okay, next question: will the Pectra upgrade happen this month?
Vitalik:
Actually, we hope to do two things: the Pectra upgrade around the end of this month, and then the Fusaka upgrade in Q3 or Q4.
Yan:
Wow, so fast?
Vitalik:
I hope so.
Yan:
My next question is related to this. As someone who has watched Ethereum grow, we know that to ensure security, Ethereum has five or six clients (consensus clients and execution clients) being developed in parallel, with a lot of coordination work in between, which leads to long development cycles.
This has advantages and disadvantages. Compared with other L1s it may indeed be slow, but it is also safer.
But is there a way for us not to have to wait a year and a half for each upgrade? I have seen that you proposed some solutions; can you introduce them in detail?
Vitalik:
Yes, there is a plan. We can improve coordination efficiency. We are now starting to have more people who move between different teams to ensure more efficient communication; if a client team has a problem, they can raise it and let the research team know. In fact, this is one advantage of Tomasz becoming one of our new EDs: he is in a client team and now also in the EF, so he can do this coordination. That is the first point.
The second point is that we can be stricter with the client teams. Our current approach is that if there are five teams, all five must be fully ready before we announce the next hard fork (network upgrade). We are now thinking of starting the upgrade once four teams are ready, so that we do not have to wait for the slowest one, and it also motivates everyone.
Yan:
So there should be healthy competition. That's good. I really look forward to every upgrade, just don't make everyone wait too long.
Next, I would like to ask some more open-ended questions related to cryptography.
When our community was founded in 2021, it gathered developers from major domestic exchanges and venture researchers to discuss DeFi. 2021 really was a stage when everyone participated in understanding, learning and designing DeFi; a wave of mass participation.
Looking at future development, for ZK, whether for the general public or for developers, the proof systems (Groth16, Plonk, Halo2) advance so fast that latecomers find it hard to catch up.
In addition, ZKVMs are developing very quickly, which has made the ZKEVM direction less popular than before. As ZKVMs gradually mature, developers no longer need to pay much attention to the underlying ZK.
What suggestions and opinions do you have for this?
Vitalik:
I think the best direction for the ZK ecosystem is for most ZK developers to work in a high-level language (HLL). They can write their application code in the HLL, while proof-system researchers keep modifying and optimizing the underlying algorithms. Development needs to be layered; developers don't need to know what happens in the layer below.
One problem now is that the Circom and Groth16 ecosystem is very well developed, but this puts a fairly large limitation on ZK applications, because Groth16 has many shortcomings, such as every application needing its own trusted setup, and its efficiency is not very high. So we are also thinking that we need to put more resources here and help more modern HLLs succeed.
The ZK RISC-V route is also very good, because RISC-V can serve as a common compilation target: many applications, including the EVM and others, can be written in high-level languages and compiled to RISC-V [6].
Yan:
Okay, so developers only need to learn Rust. I attended Devcon Bangkok last year and heard about the progress of applied cryptography, which was eye-opening for me.
In terms of applied cryptography, what do you think about the combination of ZKP, MPC, and FHE, and what suggestions do you have for developers?
Vitalik:
Yes, this is very interesting. I think FHE has good prospects now, but there is a concern: MPC and FHE always need a committee, that is, seven or more selected nodes. If perhaps 51% or 33% of those nodes are attacked, the system has problems. It is as if the system has a Security Council, except it is actually more serious than a Security Council, because for a Stage 1 L2, 75% of the Security Council's nodes must be attacked before problems arise [7]. That is the first point.
The second point is that if a Security Council is run reliably, most of its keys are kept in cold wallets, mostly offline. But in most MPC and FHE systems, the committee must stay online all the time to keep the system running, so its nodes may be deployed on a VPS or some other server, which makes them easier to attack. This worries me a little. I still think many applications can be built this way; they have advantages, but they are not perfect.
Yan:
Finally, a relatively easy question. I see you have also been paying attention to AI recently, so let me list some opinions.
For example, Elon Musk has said that humans may just be a boot loader for silicon-based civilization.
Then there is a view in "The Network State" that centralized countries may prefer AI, while democratic countries prefer blockchain.
And from our experience in crypto, decentralization actually presupposes that everyone follows the rules, checks and balances one another, and knows how to take risks, which tends to end in elite politics. What do you think of these opinions? Feel free to just share your views.
Vitalik:
Yes, I am thinking about where to start answering.
Because the field of AI is very complex. For example, five years ago no one would have predicted that the United States would have the world's best closed-source AI and China the best open-source AI. AI can amplify everyone's abilities, and sometimes it also amplifies the power of centralized (state) actors.
But AI can also have a democratizing effect. When I use AI myself, I find that in fields where I am already in the world's top 1,000, such as some areas of ZK development, AI helps me less, and I still write most of the code myself. But in fields where I am relatively new it helps a lot; for example, Android app development, which I had never really done. Ten years ago I made an app using a framework, written in JavaScript and converted into an app; apart from that, I had never written a native Android app.
At the beginning of this year I ran an experiment: I tried to write an app with GPT, and it was finished within an hour. You can see that AI has greatly narrowed the gap between experts and novices, and it also creates many new opportunities.
Yan:
One more thing: thank you for giving me a new perspective. I used to think that with AI, experienced programmers would learn even faster, and that it would be unfriendly to novice programmers. But in some ways it genuinely improves novices' abilities. Perhaps it is an equalizer rather than a differentiator, right?
Vitalik:
Yes, but now a very important question that needs to be considered is what effect the combination of some of the technologies we have developed, including blockchain, AI, cryptography, and some other technologies, will have (on society).
Yan:
So you still hope that humans will not be ruled by just an elite, right? You also hope to achieve the Pareto optimality of the entire society. Ordinary people can become super individuals through the empowerment of AI and blockchain.
Vitalik:
Yes, super individuals, super communities, super humans.
Yan:
OK, let's move on to the last question. What are your expectations and messages for the developer community? Do you have anything to say to the developers in the Ethereum community?
Vitalik:
For developers of Ethereum applications, there is a lot to think about.
There are now many opportunities to build applications on Ethereum. Many things that could not be done before can be done now.
There are many reasons for this. For example:
First: L1's TPS used to be insufficient, but that problem is now gone;
Second: There was no way to solve the privacy problem before, but now there is;
Third: because of AI, the difficulty of developing anything has dropped. Even though the complexity of the Ethereum ecosystem has increased, with AI everyone can still understand Ethereum better.
So I think many things that failed before, including ten years ago or five years ago, may be successful now.
In the current blockchain application ecosystem, I think the biggest problem is that we have two types of applications.
The first type can be described as very open, decentralized, secure, and particularly idealistic, but it has only 42 users. The second type is the casino. The problem is that both extremes are unhealthy. So we hope to build applications that, first, users actually like to use, meaning they have real value and are better for the world; and second, have a real business model, an economics that can run continuously without relying on limited funding from foundations or other organizations. That is also a challenge. But I think everyone now has more resources than before, so if you can find a good idea and execute it well, your chances of success are very high.
Yan:
Looking back, I think Ethereum has actually been quite successful. It has kept leading the industry and working hard to solve the industry's problems while remaining decentralized.
Another thing I feel deeply: our community has always been non-profit. Through Gitcoin Grants in the Ethereum ecosystem, Optimism's retroactive rewards, and airdrops from other projects, we have found that builders can get a lot of support in the Ethereum community, and we keep thinking about how to keep the community running sustainably.
Building on Ethereum is truly exciting. We also hope to see the world computer truly realized as soon as possible. Thank you for your valuable time.
Interview at Mount Davis, Hong Kong
April 07, 2025