
Solana is a high-throughput, low-latency blockchain network that has raised the bar for decentralized system performance. As the network has scaled, protocol-level challenges have emerged, each representing an opportunity to push performance even higher and serving as a rallying point for core contributors to build solutions. Today, despite all the progress achieved, we still feel the same Solana underdog chip on our shoulder that’s been there since the very beginning.
The job’s not finished. There are more opportunities to conquer.
This report provides an assessment of those opportunities, structured into three main categories: validators, end users, and developers.
For each category, we outline the current opportunities, discuss their impact, and summarize ongoing or proposed solutions and optimizations. The goal is to provide a clear technical overview of where impactful opportunities exist in Solana today and how the ecosystem is already addressing them to take the network’s sustained performance, reliability, and developer adoption even higher.
Validators
Transaction Duplication and Flooding
During periods of high market volatility, Solana’s validators are often overwhelmed by bots flooding the network with duplicate transactions. These bots attempt to gain an edge in arbitrage or liquidation by spamming the same transaction multiple times, hoping one instance will be processed first. In one notable incident in early 2022, a surge of duplicate transactions inundated a block leader’s Transaction Processing Unit (TPU), causing overall network throughput to plummet from thousands of transactions per second to merely tens. This degraded state persisted for several hours, effectively grinding the network’s capacity down to a crawl.
Solana introduced Stake-Weighted Quality of Service (SWQoS) as a partial mitigation to this issue. SWQoS gives priority to transactions from validators with higher stake when leaders are deciding which transactions to include, thereby filtering out a portion of low-value or repetitive transactions. This mechanism has reduced the worst impacts of transaction flooding for the leader nodes. However, non-leader validators still receive and must process many of these duplicate packets, wasting computation and bandwidth on transactions that ultimately never confirm. In other words, the spam problem has been pushed to the edges: regular validators bear the brunt of handling duplicates that get filtered out before reaching a leader’s block. This excess workload can hinder validator performance and increase hardware requirements.
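The proportional-allocation idea behind SWQoS can be illustrated with a small sketch (the function and numbers are hypothetical; the real implementation allocates QUIC connection capacity from staked peers, not abstract packet slots):

```python
# Hypothetical sketch of stake-weighted QoS: a leader divides its ingress
# packet capacity among staked peers in proportion to their stake, so an
# unstaked spammer cannot crowd out traffic from staked validators.

def swqos_capacity(stakes, total_capacity):
    """Allocate ingress packet slots proportionally to each peer's stake."""
    total_stake = sum(stakes.values())
    return {
        peer: (stake * total_capacity) // total_stake
        for peer, stake in stakes.items()
    }

# A validator holding 70% of stake gets 70% of the leader's ingress capacity.
caps = swqos_capacity({"validator_a": 700, "validator_b": 200, "validator_c": 100},
                      total_capacity=1000)
```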

The recently introduced DoubleZero filtering mechanism further combats transaction duplication. DoubleZero adds a layer of transaction filtering before data even reaches the leader validators (the “inner ring” of the network). In essence, it serves as a pre-gossip filter that identifies and drops duplicate or low-quality transactions at the network edge, preventing them from propagating inward. By intercepting spam earlier in the flow, DoubleZero filtering can significantly reduce superfluous transaction traffic cluster-wide. This added barrier, combined with SWQoS prioritization, helps ensure that genuine, unique transactions are relayed to leaders, while redundant spam gets pruned out much sooner. Collectively, these measures aim to preserve network throughput and stability during high-stress periods by relieving validators of unnecessary processing work.
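The core of such edge deduplication can be sketched as a time-windowed seen-set (an illustrative model of the idea, not DoubleZero's actual wire protocol or data structures):

```python
class DedupFilter:
    """Time-windowed duplicate filter: drop packets whose transaction
    signature was already seen within the last `ttl_seconds`. An
    illustrative model of edge deduplication, not DoubleZero's protocol."""

    def __init__(self, ttl_seconds=2.0):
        self.ttl = ttl_seconds
        self.seen = {}  # signature -> timestamp of first sighting

    def admit(self, signature, now):
        # Evict expired entries so the seen-set does not grow unboundedly.
        self.seen = {s: t for s, t in self.seen.items() if now - t < self.ttl}
        if signature in self.seen:
            return False  # duplicate within the window: drop at the edge
        self.seen[signature] = now
        return True

f = DedupFilter(ttl_seconds=2.0)
first = f.admit("sig_abc", now=0.0)   # admitted: first sighting
dup = f.admit("sig_abc", now=0.5)     # dropped: duplicate within 2 s
later = f.admit("sig_abc", now=5.0)   # admitted again: old entry expired
```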

Validator Operation Costs
Operating a Solana validator is both bandwidth-intensive and financially costly. A unique aspect of Solana’s consensus is that every validator must continually vote on the chain’s state by sending vote transactions on-chain for each block (or slot). With a slot time of 400 milliseconds, this means every validator submits a vote transaction 2.5 times per second to signal its agreement on the network’s fork choice. Each vote carries a small fee (approximately 0.000005 SOL base fee). Over a full day, this results in roughly 216,000 vote transactions per validator. Multiplying out the cost, a single validator might incur about 1.08 SOL in vote fees per day, which is on the order of 400 SOL per year (for example, around $60,000 annually if SOL trades at $150). This represents a significant ongoing expense just to participate in consensus, before considering any additional costs.
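The vote-cost arithmetic above can be reproduced directly:

```python
# Reproducing the vote-cost estimate: one vote per 400 ms slot at the base fee.
SECONDS_PER_DAY = 86_400
VOTES_PER_SECOND = 2.5       # one vote per 400 ms slot
BASE_FEE_SOL = 0.000005      # approximate base fee per vote transaction

votes_per_day = int(VOTES_PER_SECOND * SECONDS_PER_DAY)  # 216,000 votes
sol_per_day = votes_per_day * BASE_FEE_SOL               # ~1.08 SOL
sol_per_year = sol_per_day * 365                         # ~394 SOL (order of 400)
usd_per_year = sol_per_year * 150                        # ~$59k at $150 per SOL
```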
In addition to fee costs, validators must invest in high-performance hardware and infrastructure. Solana’s high throughput and low latency targets demand powerful servers with fast multi-core CPUs, ample RAM, high-speed SSD storage, and very robust network connectivity. Keeping up with a torrent of transactions and executing smart contracts in parallel (Solana’s runtime is highly parallelized) means that validators run near enterprise-grade hardware. These requirements can pose a barrier to entry and raise concerns about decentralization, as smaller operators may struggle to afford top-tier hardware and continuous operation at these specs. In recognition of this, the Solana community has taken steps to improve validator economics. Notably, in 2024, a governance proposal (SIMD-0096) was passed to redirect 100% of all user-paid priority fees to validators (previously, half of the fees were burned). This change increased the revenue for validators during periods of congestion, helping offset their operational costs and aligning network incentives to sustain a healthy validator set.
A major inefficiency today is that vote transactions dominate the ledger. Analyses show that roughly 70–80% of all transactions recorded on Solana are these automatic validator votes (rather than user-generated transactions). They also consume roughly 10% of each block’s compute budget. In effect, a large fraction of Solana’s advertised TPS is “noise” from an end-user perspective, since most of it comes from consensus maintenance overhead. Reducing this overhead is a priority for protocol designers, both to lower costs for validators and to free up more throughput for actual economic activity.
Two key optimizations are being explored in Alpenglow, the next consensus update:
- BLS Signature Aggregation (Milestone Certificates): Instead of every validator gossiping an individual vote transaction each slot, validators could aggregate their votes off-chain and only transmit a concise certificate with an aggregated signature using the Boneh–Lynn–Shacham (BLS) scheme when certain quorum thresholds are met. For example, a validator would locally combine signatures and emit a tiny certificate when, say, 60% of stake has voted to confirm a block (a “pre-commit” milestone), and another when 80% have committed (finality). Each aggregated BLS signature is only 48 bytes, so a few certificates could replace thousands of Ed25519 vote signatures. The effect is a drastic reduction in on-chain voting traffic: rather than tens of thousands of vote transactions per second, the chain might only see a handful of aggregate votes per slot. Validators would still sign every block (maintaining accountability, since the signatures can be stored off-chain for audit), but the data transmitted and recorded on-chain would drop by orders of magnitude. Internal estimates suggest that what costs on the order of 400 SOL per validator annually in vote fees could drop to single-digit SOL with this scheme. In short, BLS aggregation would preserve fast consensus decisions while slashing bandwidth and fee costs for validators.
- Rotating Voting Committees: A complementary approach is to have only a subset of validators vote on-chain at any given time. Using a verifiable random function (VRF) or similar randomness beacon, the protocol can select a small committee of validators (e.g., 500 out of the thousands) to cast votes for the next N slots (for example, for the next few minutes worth of blocks). Only this committee’s votes would be transmitted and included on-chain, while the rest of the validators refrain from sending votes (they would instead observe and validate the committee’s vote certificates off-chain). The committee membership rotates frequently (to ensure no single validator or group holds disproportionate influence for long), and across time, every validator’s voting power still averages out according to stake. This design, already used in other modern protocols (such as Ethereum’s consensus committees, Algorand’s sortition, or similar ideas in Sui), would dramatically cut the number of votes each slot without sacrificing the security of stake-weighted consensus. By having perhaps a few hundred votes per slot instead of several thousand, the bandwidth and processing burden of voting would shrink. It would also reduce the surface for denial-of-service attacks, since an attacker would have to overwhelm only the small committee rather than the entire validator set.
Crucially, these two strategies can be combined. Solana’s long-term vision includes both milestone certificates (BLS aggregated votes) and rotating committees. In tandem, a small randomly-chosen committee could aggregate its votes into just a few tiny signatures in each slot, driving the on-chain voting overhead to nearly zero. This would preserve Solana’s hallmark sub-second confirmation times and strong security while making validator operations far more efficient. For validators, that means lower network traffic, fewer votes to broadcast (and pay for), and ultimately a more sustainable economic model – all of which bolsters decentralization by lowering the cost to participate in the network as it scales.
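A toy sketch of the two ideas together, assuming a plain seeded RNG in place of a real VRF and a simple stake tally in place of actual BLS signature aggregation:

```python
import random

def select_committee(stakes, size, seed):
    """Stake-weighted sortition sketch: sample a voting committee with
    selection probability proportional to stake. A real protocol would
    derive randomness from a VRF; a seeded RNG stands in here. Sampling
    is with replacement, so effective voting power still mirrors stake."""
    rng = random.Random(seed)
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=size)

def certificate_reached(voted_stake, total_stake, threshold):
    """Emit an aggregate certificate once voted stake crosses a milestone
    (e.g. threshold=0.60 for pre-commit, 0.80 for finality)."""
    return voted_stake >= threshold * total_stake

committee = select_committee({"a": 50, "b": 30, "c": 20}, size=10, seed=7)
```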
Client Diversity and Resilience
Another major concern for Solana’s validator community is the lack of diversity in node software clients. Client monoculture (where virtually all validators run the same software implementation) poses a systemic risk to the network. If a critical bug exists in the dominant client, it could affect more than a third of the stake and stall the network’s finality, or worse, over two-thirds and compromise consensus safety. Unfortunately, Solana’s history provides several examples underscoring this risk. All of Solana’s early validators ran the original Rust-based client, and when bugs emerged, they resulted in full network outages rather than isolated failures. For instance, a bug in the Turbine propagation protocol caused a six-hour network halt in December 2020; an inconsistency in transaction processing under heavy load triggered a 17-hour outage in September 2021; and a flaw related to durable nonce transactions led to a consensus failure in June 2022. In each case, because every validator was running identical code, a single error mode brought down the entire cluster. There was no alternative implementation to carry the network through the incident, forcing validators to coordinate out-of-band and patch the sole client before restarting.
To avoid repeating these scenarios, Solana is actively pursuing client diversification as a pillar of network resilience. The ideal target is at least three independent, production-grade validator clients, each built from a separate codebase by a different team (a bug affecting more than 33% of validator stake could already stall the network). This way, a bug in one client is unlikely to exist in the others, and a failure in one portion of the network would not cascade to everyone. The Solana Foundation and other ecosystem players have been funding and supporting multiple new clients to achieve this. Presently, there are several major implementations in development or early use:
- Agave – The new name for the original Rust client, maintained by the team at Anza. This is the reference implementation that most validators currently run, and it continues to be improved for stability and performance within the Rust ecosystem.
- Firedancer – A high-performance validator client written in C by Jump Crypto. Firedancer is focused on maximizing throughput and efficiency; in laboratory tests, it has demonstrated the ability to process over 1,000,000 TPS (far beyond current network loads). It achieves this via low-level, close-to-hardware optimizations and heavy parallelization (including kernel-bypass networking techniques).
- Sig – A client implemented in Zig by Syndica, optimized for fast read access and minimal resource usage for certain operations. This client explores different design trade-offs, focusing on node operations like RPC handling and rapid ledger reads.
- Mithril – A lightweight full node implementation in Go by Overclock Labs. Mithril aims to provide a full Solana node that is easier to run (with lower resource requirements), which could be useful for increasing accessibility and decentralization (allowing more people to run nodes, even if not participating in consensus).
Fostering this diversity not only reduces the chance of a single bug halting the entire network but also spurs innovation. Over time, competition and cross-pollination of ideas can lead to a more robust overall protocol implementation. There is a legitimate concern that, eventually, validators might gravitate toward whichever client proves fastest or most resource-efficient (Firedancer, with its million-TPS benchmark, is a prime candidate). However, performance shouldn’t be the sole factor deciding client adoption. Different clients may cater to different niches (small nodes vs. large stakers), and hopefully, future improvements to the Rust client (Agave) will narrow the performance gap, preventing an excessive centralization pull toward a single option. Even when there is one majority client, the importance of having a backup minority client cannot be overstated. In Ethereum, for example, validators are advised to run different fallback consensus and execution clients to mitigate similar risks.
Encouragingly, awareness of client risk is growing in the community. In early 2025, the Foundation rolled out a policy on the testnet requiring validators that receive foundation delegation to run a special hybrid client nicknamed “Frankendancer” (essentially a testnet build combining parts of Firedancer with the main client). Validators who refused risked losing the Foundation’s delegation support. This carrot-and-stick approach quickly achieved its goal: within days, a large portion of testnet validators switched to Frankendancer, creating a more heterogeneous environment to battle-test multi-client consensus under real conditions. This strong stance signals that client diversity is no longer just an aspirational goal for Solana but is becoming an operational mandate. In the long run, maintaining multiple robust clients will be key to Solana’s stability, security, and ability to upgrade or fix issues without catastrophically interrupting the network.
End Users
Transaction Congestion and Throughput Saturation
Solana is designed for high throughput, yet demand for transaction processing is often at or beyond its current capacity, leading to persistent congestion. This congestion is not just a temporary surge phenomenon; it has become a structural characteristic of the network under heavy use. Blocks on Solana are almost always filled to their maximum allowable content, and incoming transaction traffic continually exceeds what the network can handle in a single slot. For example, just before a validator becomes the leader (block producer) for its allotted slots, it can be inundated with a burst of around 400,000 transaction packets from the network. After the leader node applies its initial checks — signature verification, duplicate detection (dropping by transaction hash), and account locks to ensure no conflicting writes — it might still end up with roughly 28,000 valid transactions queued in its memory. However, a single block currently can carry only on the order of 1,500 transactions (given the block compute-unit limit and typical transaction complexity). Even across a standard leader rotation of 4 consecutive slots (~1.6 seconds, producing about 4 blocks), a leader can only fit maybe 6,000 transactions total from that queue into its blocks. The rest – easily 20,000+ remaining transactions – cannot be included and will spill over for the next leader to handle. Because the next leader is simultaneously receiving its own flood of new transactions, a backlog perpetually exists. In short, the network’s pipeline often operates in a continuous state of saturation, where supply (transaction submissions) outstrips processing capacity at the configured base fee levels.
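The saturation arithmetic above, spelled out:

```python
# One leader rotation under saturation: queue size vs. what fits on-chain.
queued_txs = 28_000      # valid transactions left after the leader's checks
txs_per_block = 1_500    # rough per-block fit at typical transaction complexity
leader_slots = 4         # one rotation: 4 consecutive slots (~1.6 s)

included = txs_per_block * leader_slots   # 6,000 transactions make it on-chain
spillover = queued_txs - included         # 22,000 roll over to the next leader
```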
One immediate consequence of this constant congestion is a surge in fees and increasingly aggressive competition in the fee market. Solana’s fee model has several components: a mandatory base fee (which rises during congestion), an optional user-set priority fee (to tip validators for inclusion), and, with the advent of MEV markets, a Jito block-builder tip for bundling transactions. When 28,000+ transactions are vying for roughly 1,500 places in a block, users who attach higher fees have their transactions prioritized for inclusion. Users who set fees too low risk seeing their valid transactions dropped from validator queues entirely, without ever being processed. Essentially, the fee market becomes a strict sorting mechanism: only the top bidders consistently get into blocks during peak periods. This can be frustrating for users, as transactions that aren’t outright invalid can still vanish if they don’t make the cut in time.
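The sorting behavior can be sketched as follows (transaction shape and fee units are illustrative, not Solana's wire format):

```python
# Sketch of the fee market as a strict sorting mechanism: when the queue
# exceeds block capacity, only the highest-tipping transactions get in.

def fill_block(pending, capacity):
    """Take the `capacity` highest-priority-fee transactions; the rest wait
    (and may eventually be dropped from validator queues)."""
    ranked = sorted(pending, key=lambda tx: tx["priority_fee"], reverse=True)
    return ranked[:capacity]

pending = [
    {"id": "a", "priority_fee": 50},
    {"id": "b", "priority_fee": 5},
    {"id": "c", "priority_fee": 500},
]
block = fill_block(pending, capacity=2)   # "c" and "a" make it; "b" waits
```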
Another factor exacerbating congestion is the overhead of vote transactions, as discussed earlier. Presently, vote transactions (from validators) outnumber user-generated transactions roughly 3 to 1, but they are simpler, so they only consume about 10% of every block’s compute budget. This means that a chunk of block space is always occupied by consensus maintenance rather than user activity. If those votes could be removed from on-chain processing (for instance, via the off-chain aggregation schemes like milestone certificates or committee voting in Alpenglow), that block space and compute capacity (on the order of a 10% boost) would be immediately freed for user transactions. In practical terms, simply shifting votes off-chain would translate to roughly a 10% increase in throughput for actual economic transactions, with no changes to hardware or block time. This is one reason the Solana community is eager to implement vote-efficient consensus upgrades.
To tackle the broader congestion issue, Solana’s roadmap is exploring increases in block capacity alongside software performance improvements. Currently, the hard cap is 60 million compute units (CUs) per block. Increasing the compute budget could proportionally increase the number of transactions that fit in each block (all else being equal). However, raising the limit is only viable if the validator clients can execute that many transactions within the fixed 400 ms slot time without lagging or forking. This is where the development of high-performance validator clients becomes crucial. The new alternative clients discussed earlier (Firedancer, etc.) are specifically engineered to provide more headroom.
Together, these advancements in client software mean the network should be able to push towards 100M CU per block without lengthening the slot time or increasing the risk of forks due to slow processing. In turn, that could translate to thousands more transactions per second of real throughput, alleviating the pressure of backlog overflows. Importantly, these scaling improvements come hand-in-hand with the client diversity benefits, meaning Solana can gain capacity while also reducing its reliance on any single software stack. In summary, transaction congestion remains a significant challenge, but a combination of removing unnecessary overhead (like vote transactions) and systematically increasing block limits (enabled by faster validator software) is charting a path to greatly expand Solana’s throughput in the near future.
Unreliable RPC Responses
From the end-user’s perspective (e.g., someone using a wallet or a decentralized app), the reliability of transaction submission and confirmation is critical. Lately, users have observed that during peak congestion, the success rate of transaction submissions via RPC (Remote Procedure Call) nodes can degrade. In May 2025, for instance, telemetry from several major RPC providers showed that about 12% of sendTransaction requests were failing to result in a confirmed transaction on-chain during volatile periods. What’s particularly problematic is that many of these failures were “silent” – meaning the RPC call did not return an error, yet the transaction never actually landed in a block (and ultimately never appeared in the transaction status ledger). This can be frustrating and confusing: users are left wondering if they should resubmit the transaction, and if so, whether they risk double-spending or if the first submission might yet go through. It undermines confidence in the network’s responsiveness.
Geographical latency further worsens the user experience with RPCs. If a trader or user is physically far from the majority of Solana validators or RPC endpoints, the round-trip time can be substantial (e.g., 150–200 milliseconds from South Africa or South America to an RPC in Europe or North America). Many decentralized applications require a sequence of interactive calls — for example, consider an algorithmic trader’s workflow: place an order, then poll the order book or market state, potentially cancel or modify the order, and finally settle a trade. This might involve a dozen sequential RPC calls, each dependent on the previous one’s result. At ~200 ms per round trip, even 10-12 calls can introduce 2 to 3 seconds of delay in total. In fast-moving markets, a few seconds is an eternity — prices may change and the opportunity might be lost by the time the final action is taken. Even if none of those calls fail, the cumulative latency makes the interface feel sluggish and can render certain high-frequency trading strategies or rapid NFT minting attempts impractical on Solana compared to faster environments.
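Because each call depends on the previous result, the round trips are strictly serial, and the delay compounds:

```python
# With serial, dependent RPC calls, round-trip latency adds up linearly.
def total_latency_ms(num_calls, rtt_ms):
    """Each call must wait for the previous response before it can be sent."""
    return num_calls * rtt_ms

chain = total_latency_ms(12, 200)    # 2,400 ms for the trading workflow above
single = total_latency_ms(1, 200)    # 200 ms if the same logic ran RPC-side
```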
To address both reliability and latency issues with RPC, the Solana ecosystem is developing a more advanced programmable RPC layer. One effort led by RockawayX is reimagining the transaction routing process. Instead of the current simplistic approach (where an RPC node just blindly forwards transactions to the network and hopes for the best), a programmable RPC will have a smart scheduler and router:
- It will monitor the cluster’s state in real time, including the leader schedule and network congestion. If a particular validator is about to become the leader for the next slot, the RPC service can proactively forward pending user transactions directly to that validator’s TPU, rather than relying on the normal gossip propagation. This direct injection increases the chance that those transactions make it into the very next block, bypassing some of the randomness and delay of the network.
- The RPC layer can dynamically choose the best path and node for a given transaction. For example, if one RPC node (or pathway) is overloaded or dropping packets, the system can reroute subsequent transactions to another, better-performing RPC endpoint behind the scenes. From the user’s perspective, it’s still a single interface, but behind it the system load-balances and avoids bottlenecks.
- Perhaps most innovatively, the programmable RPC introduces a concept akin to stored procedures in databases. Developers will be able to deploy custom logic to run at the RPC node itself. Rather than a dApp making five separate calls to fetch data and then deriving a result on the client side, a developer could upload a small script or query that the RPC executes locally (close to the data). The RPC would then return only the final result. This reduces the number of back-and-forth calls over the internet and can slash latency significantly for complex read or compute queries. It also offloads some computation to the infrastructure side, which can be optimized for speed.
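Leader-aware forwarding, the first capability above, can be sketched as a simple schedule lookup (names, schedule shape, and addresses are illustrative, not RockawayX's actual API):

```python
# Hypothetical sketch of leader-aware routing: knowing the leader schedule,
# an RPC service forwards a transaction straight to the upcoming leader's
# TPU address instead of relying on gossip propagation.

def next_leader_tpu(schedule, tpu_addrs, current_slot):
    """Look up which validator leads the next slot and return its TPU address."""
    leader = schedule[current_slot + 1]
    return tpu_addrs[leader]

schedule = {100: "val_a", 101: "val_b", 102: "val_b"}
tpu_addrs = {"val_a": "10.0.0.1:8003", "val_b": "10.0.0.2:8003"}
target = next_leader_tpu(schedule, tpu_addrs, current_slot=100)  # val_b's TPU
```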
In a recent development, Agave v3.0 introduced responsiveness upgrades to the subscription server, which now prioritizes incoming messages (such as subscription requests and PINGs) over outgoing notifications. This change delivers faster, more reliable real-time updates for dApps using PubSub WebSockets.
Overall, a smarter RPC layer should reduce the frequency of “silent” transaction drops by ensuring transactions take an optimal path to a leader. It should also improve global user experience by minimizing needless delays and failures. Users would see more consistent transaction confirmations and faster reaction times, bringing Solana’s interaction latency closer to what people expect from Web2 services. This is especially important for trading platforms, real-time gaming, and other latency-sensitive applications on Solana.
Latency and Block Finality
Solana’s default block time of 400 milliseconds makes it one of the fastest layer-1 blockchains in terms of raw block production speed. In practice, many applications treat a single block (or maybe two blocks) as sufficient confirmation for most purposes (since the chance of a fork reversion after one block is low under normal conditions). However, the network’s formal finality – the point at which a block is irrevocably confirmed – arrives after an accumulation of about 32 consecutive blocks (under the network’s commitment scheme, a block rooted 32 ancestors deep is irreversible), which is roughly 12.8 seconds in typical operation. For comparison, users in Web2 are accustomed to interactions that take on the order of only 100–200 ms to feel instantaneous (for example, seeing a stock quote update or receiving an order confirmation on a retail website). Even 400 ms, while quick by blockchain standards, is 2–4 times slower than the near-instant feedback from centralized services. And a 12-second finality, while fine for many use cases like payments or NFTs, is too slow for ultra-low-latency trading environments. Competing specialized chains or rollups have started to target sub-second block times; for instance, Hyperliquid (a DeFi-focused rollup/exchange chain) boasts block times under 100 ms. Traditional centralized exchanges operate even faster, in the sub-millisecond range within single data centers. Thus, for Solana to broaden its appeal to latency-sensitive financial applications (like high-frequency trading or real-time bidding systems), there is a need to cut down both the time to first confirmation and the time to finality by a significant margin.
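The latency figures above follow directly from the slot time:

```python
# First-confirmation and finality times implied by the commitment scheme.
SLOT_MS = 400
first_confirmation_ms = 1 * SLOT_MS    # one block: 400 ms
finality_ms = 32 * SLOT_MS             # 32 rooted blocks: 12,800 ms (~12.8 s)
```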
To push latency lower, Solana’s engineers have conceived a comprehensive initiative called Alpenglow. Alpenglow is not a single feature but rather a collection of coordinated changes to Solana’s block production, propagation, and consensus mechanisms, all aimed at achieving sub-second end-to-end transaction finality without sacrificing the open participation of validators. Alpenglow’s design has three main pillars:
- Microsliced Block Production: Under the current protocol, a leader assembles a complete block (containing all transactions up to the block’s compute-unit limit, roughly 1,500 transactions), converts that block into ~4,000 data “shreds” for Turbine propagation, and only then begins broadcasting those shreds to the network. This means even the first byte of the block isn’t sent out until the entire block is finished. Alpenglow changes this by splitting each block into a series of smaller slices. As soon as the leader finishes assembling and signing the first slice of the block, it immediately starts transmitting that slice to peers, while simultaneously working on the next slice. The network no longer waits for the slowest transaction in a block to be processed before beginning transmission of the fastest; instead, it streams out the block in pieces, cutting idle time and shortening the apparent block delivery latency.
- Single-Hop Broadcast via Stake-Weighted Relayers: Solana’s current gossip layer (Turbine) propagates shreds in a tree-like, two-hop fashion: the leader sends shreds to a first tier of validators, who then fan them out to others in subsequent hops. This propagation can introduce variability and delays, especially if any node in the chain has a slow connection. Alpenglow revamps this by assigning each shred to a relayer node (selected in a stake-weighted manner, so large-stake, high-bandwidth nodes are more likely to be relayers). The leader sends each shred directly to the chosen relayer, and then that relayer immediately forwards the shred to all other validators in the network in parallel. To make this feasible, validators will likely need very high-bandwidth networking, and they can leverage DoubleZero cables to send data securely and fast to many peers at once.
- Two-Round Fast Finality: Alpenglow also revisits Solana’s consensus confirmation rules. Instead of waiting for 31 descendant blocks as in the current consensus (which ensures >66% of stake voted on a chain), the proposal introduces a two-stage BLS-based voting scheme. In this scheme, validators will issue votes in two rounds for each block: a pre-commit vote at about 60% stake approval and a final commit vote at about 80% stake. These percentages refer to the fraction of total stake that has endorsed the block in each round. By collecting a supermajority of 80%, the network can consider the block finalized much more quickly, because the threshold is high enough to ensure safety with certain new assumptions. Specifically, this approach tolerates up to 20% malicious or actively faulty stake and another 20% that might be slow or offline (for a total of 40% non-participation) while still finalizing. This is a slight relaxation of the classical Byzantine fault tolerance model (which assumes at most 33% faulty), but it remains extremely secure – an attacker would need to control 20% of total stake and coordinate a disruption of another 20%, which is economically and practically unfeasible (attaining 20% of SOL stake would cost tens of billions of dollars). The benefit is that with a two-round voting cadence (and using the BLS aggregated certificates described earlier), economic finality can be achieved in well under 1 second in the common case. In other words, once about 80% of stake signs off, that block is done and irreversible, potentially as fast as a single slot or two.
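The two-round milestone logic can be sketched as a stake-share check (thresholds follow the 60% / 80% figures above; real votes would be BLS-aggregated rather than tallied like this):

```python
def milestone(voted_stake, total_stake):
    """Classify a block by the share of stake that has voted for it,
    using the two-round thresholds described above."""
    share = voted_stake / total_stake
    if share >= 0.80:
        return "finalized"      # final-commit supermajority: irreversible
    if share >= 0.60:
        return "pre-committed"  # first-round milestone certificate
    return "pending"
```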
By combining these three pillars – microslicing, single-hop broadcast, and two-round fast finality – Alpenglow drastically lowers latency across the board. Early testing on devnets and testnets that incorporate Alpenglow changes has shown promising results. Measurements indicate that the time from a leader finishing a block to that block being fully finalized can drop to around 150 ms (observed range of 100–380 ms), which is an impressive feat.
Future Scaling Vision (Yakovenko’s Endgame)
Looking beyond the near-term upgrades, Solana’s architects are also planning for truly massive scalability to meet the demands of global adoption. Solana, as of 2025, is already among the highest-performing blockchains (processing on the order of 1,500 TPS with 400 ms blocks and very low fees). Yet, when compared to traditional financial infrastructure like the Nasdaq stock exchange (which routinely handles 500,000–1,000,000 transactions per second with sub-millisecond finality), there is a gap that must be closed for blockchains to handle world-scale volumes. Bridging this gap requires not just incremental improvements but a reimagining of Solana’s architecture to increase parallelism and throughput by orders of magnitude without centralizing the network.
Anatoly Yakovenko (Solana’s co-founder) has outlined an “endgame” vision that proposes radical changes to reach this scale. A core idea is to explode the concept of a single leader/slot into many parallel tracks:
- Instead of one leader producing a block at a time, Solana could have thousands of concurrent leaders. In Yakovenko’s vision, there might be 10,000 or more leader schedules operating in parallel, with perhaps 4 to 16 leaders randomly chosen per block to collaboratively produce parts of the next block. This high degree of concurrency would make it far harder for any single leader to censor transactions (since many leaders work at once) and could multiply throughput dramatically by utilizing many nodes at the same time.
- Block times could potentially shrink further to around 120 milliseconds (or even lower), a target that is more feasible once the Alpenglow improvements and faster networks are in place. Shorter slots mean faster turnaround and more blocks per second.
- To manage the complexity of consensus with so many leaders, the proposal suggests forming smaller subcommittees for voting. Rather than every validator communicating with every other for consensus, the network could elect on the order of 200–400 consensus nodes per epoch (with an epoch being several hours). These nodes would handle the voting and consensus for finalizing blocks, reducing the communication overhead drastically. Since the committee is frequently rotated and randomly chosen, the security remains based on the full stake distribution, but each given time window uses a leaner set of nodes to finalize blocks. This approach keeps consensus efficient even as the validator count grows and as leaders proliferate.
- Perhaps the most far-reaching change is a shift to asynchronous execution of transactions relative to consensus. In the current Solana design (and most blockchains), the leader node executes transactions in lockstep with proposing them, and every validator re-executes all transactions in the block to verify it, all within the span of the block’s slot time. The endgame proposal would decouple transaction execution from the voting on transaction ordering. In an asynchronous execution model, the network’s nodes agree on the ordering of transactions (and basic block contents) first, and the actual state execution could be carried out in parallel or even on separate specialized execution nodes. Consensus nodes would not need to perform all the transaction processing in real-time, which lowers the hardware requirements for participating in consensus. Execution could happen on different machines that take the ordered list of transactions and compute the new state, potentially feeding results back. This separation of concerns could greatly improve cache usage and allow hardware to be specialized (consensus vs. execution roles), and it means validators with less powerful machines could still participate in consensus because they aren’t doing all the heavy lifting of running every smart contract instantly.
- SIMD-0083: Relax Entry Constraints, set for activation in Agave 3.0, removes the rule that transactions within a block entry must not conflict with each other. Previously, any entry containing conflicting transactions (i.e., transactions that both write to the same account, or where one reads an account while another writes it) would invalidate the entire block. Removing this constraint paves the way for the asynchronous execution model described above.
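The conflict rule in the last bullet can be sketched as a check over each transaction's read and write sets. This is a hedged illustration with hypothetical types — account keys are simplified to strings, whereas real transactions carry 32-byte pubkeys:

```rust
use std::collections::HashSet;

// Sketch of the entry-level conflict rule that SIMD-0083 removes:
// two transactions conflict if both write the same account, or if one
// writes an account that the other reads.
struct Tx {
    reads: HashSet<String>,
    writes: HashSet<String>,
}

fn set(keys: &[&str]) -> HashSet<String> {
    keys.iter().map(|k| k.to_string()).collect()
}

fn conflicts(a: &Tx, b: &Tx) -> bool {
    a.writes.iter().any(|k| b.writes.contains(k)) // write-write
        || a.writes.iter().any(|k| b.reads.contains(k)) // write vs. read
        || b.writes.iter().any(|k| a.reads.contains(k)) // read vs. write
}

fn main() {
    let transfer = Tx { reads: set(&["fee_payer"]), writes: set(&["alice", "bob"]) };
    let swap = Tx { reads: set(&["pool_state"]), writes: set(&["bob", "pool_vault"]) };
    let read_only = Tx { reads: set(&["alice"]), writes: set(&[]) };

    assert!(conflicts(&transfer, &swap)); // both write "bob"
    assert!(conflicts(&transfer, &read_only)); // transfer writes what read_only reads
    assert!(!conflicts(&swap, &read_only)); // disjoint account sets
    println!("conflict checks passed");
}
```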
In summary, Solana’s long-term roadmap (as championed by Yakovenko and others) is to transform the network architecture to meet world-scale demand. The combination of concurrent block production, ultra-fast slot times, streamlined consensus committees, and decoupled execution could enable Solana to serve billions of users and the most demanding applications (like high-frequency trading, global payments, and IoT-scale messaging) on a single unified state blockchain. It is an ambitious vision, but one that is actively being researched and incrementally implemented. If successful, Solana would not only maintain its lead among high-performance blockchains but potentially achieve parity with or surpass centralized systems in throughput and responsiveness, all while preserving the core values of openness and decentralization.
Developers
Steep Development Learning Curve
Building smart contracts and applications on Solana has proven to be both highly rewarding and challenging, especially in the early years of the project. Solana’s runtime (called Sealevel) introduces a unique parallel execution model for smart contracts, and the primary language for on-chain development is Rust, a powerful but complex systems programming language. This combination meant that, unlike the Ethereum ecosystem, where new developers could lean on a mature stack (Solidity, which is relatively straightforward, and a wealth of tools and frameworks like Truffle, Hardhat, OpenZeppelin libraries, etc.), early Solana developers often had to start from first principles. Between 2018 and 2020, Solana’s developer ecosystem was nascent: there were few tutorials, minimal tooling, and one had to learn concepts like explicit account handling, byte serialization, and Rust’s strict compile-time checks and ownership model. Many patterns familiar to Ethereum devs (such as easily using pre-built ERC-20 contracts or forking code from existing projects) were not yet available in Solana’s world, which led to a perception that Solana development was only for very experienced engineers willing to climb a steep learning curve.
Rust itself, while an excellent language for performance and safety, is known to have a daunting learning phase. Its compiler is strict, enforcing memory safety through a borrowing and ownership system that is initially confusing to those coming from more forgiving languages like JavaScript or Python (or even Solidity). In the blockchain context, Rust had been used in projects like Polkadot/Substrate, but it wasn’t the norm for writing smart contracts. Thus, developers attracted to Solana’s performance had to grapple with learning Rust and new paradigms of parallel transaction execution and account-based state management that differ from the Ethereum Virtual Machine model. Simple tasks could feel complex: for example, an Ethereum developer might spin up a local network and deploy a token contract in minutes using Hardhat and OpenZeppelin libraries, whereas an early Solana developer in 2020 might have had to manually code significant functionality and navigate low-level details to achieve the same outcome.
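To make the ownership hurdle concrete, here is a minimal, Solana-agnostic illustration of the rules the borrow checker enforces (the account-balance framing is purely illustrative):

```rust
// Minimal illustration of Rust's ownership and borrowing rules: each
// value has a single owner, shared (immutable) borrows may coexist,
// and mutable access must be exclusive.
fn main() {
    let mut balances = vec![100u64, 250, 40];

    // Shared borrows: reading through references is always allowed.
    let total: u64 = balances.iter().sum();

    // The shared borrow above has ended, so mutation is permitted here.
    // Holding `balances.iter()` across this line would not compile.
    balances[0] += total;

    assert_eq!(balances[0], 490); // 100 + (100 + 250 + 40)
}
```

Newcomers from JavaScript or Python typically fight exactly this pattern: the compiler rejects any code that mutates a value while a reference to it is still alive, which is what makes Rust memory-safe without a garbage collector.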
Tooling and developer infrastructure on Solana started out very basic. In 2020 and 2021, there was no equivalent to an all-in-one framework like Hardhat. The initial development experience required using the Solana CLI tools, writing tests in JavaScript or Rust with a lot of boilerplate, and dealing with less polished local node emulators. Debugging was particularly painful – diagnosing issues often meant combing through log outputs or program errors without the convenience of breakpoints or interactive debugging. Continuous integration and deployment pipelines were underdeveloped or had to be built by teams from scratch. All these gaps made Solana development feel “closer to the metal” and time-intensive.
However, the ecosystem has rapidly evolved and improved on the developer experience. By late 2021 and especially through 2022 and 2023, several frameworks and libraries emerged that significantly streamline Solana programming:
- Anchor: Anchor became the flagship framework for Solana smart contract development (often likened to Solana’s equivalent of Truffle/Hardhat on Ethereum). It provides a declarative way to define accounts and instructions, auto-generates an IDL (Interface Definition Language) for front-end use, and uses Rust macros to reduce boilerplate. Anchor simplified common patterns (like token transfers, error handling, and upgradable program logic) and has been widely adopted, making it much easier for new developers to write and deploy programs.
- Solana Program Library (SPL): The SPL is a collection of audited, open-source programs that implement standard functionality – for example, the SPL Token program (equivalent to ERC-20) provides a ready-made token contract that everyone can use, rather than reinventing it for each project. Other SPL programs include associated token accounts, token swaps, staking, etc. This library gives developers building blocks to compose into their own applications, much as Ethereum developers rely on OpenZeppelin’s standardized contracts.
- Better Local Testing and Dev Tools: The community introduced improved local validator environments (like solana-test-validator), which could mimic network behavior for testing, as well as frameworks for writing tests in Rust (so developers can unit-test their contracts similarly to how they would in Ethereum). Tools for linting, static analysis, and security (like rudimentary fuzzers or analyzers) also started to appear.
- Education and Examples: As time went on, more tutorials, example codebases, and template projects became available. Hackathons sponsored by the Solana Foundation and educational programs such as Ackee's School of Solana produced reference implementations and open-sourced projects that newcomers could learn from, reducing the barrier to entry.
By 2024, the development experience on Solana had improved by leaps and bounds. It’s still acknowledged that Solana’s dev environment is catching up to Ethereum’s in some respects – for instance, Ethereum’s ecosystem benefits from many years of refinement, with things like easy contract verification, formal analysis tools, and battle-tested infrastructure. Some of the more advanced conveniences (like step-by-step debugging of on-chain code or very mature profilers) are only now emerging for Solana. Nonetheless, the gap is closing quickly.
One indicator of Solana’s progress is the growth of its developer community. According to an industry report by Electric Capital, Solana added more new developers in late 2024 than any other blockchain platform. In fact, 2024 was the first year that the number of new developers joining the Solana ecosystem surpassed those joining Ethereum. More than 7,000 new developers contributed to Solana projects in that year, even while the overall number of active crypto developers slightly declined industry-wide. Furthermore, by the end of 2024, Solana was second only to Ethereum in total monthly active developers, with around 2,500 monthly active developers writing or maintaining Solana code. Notably, a significant fraction of these developers were new to blockchain entirely (meaning Solana attracted talent beyond the existing pool of Ethereum/Solidity developers). This momentum suggests that the improvements in tooling, along with Solana’s performance advantages, are resonating with builders.
In summary, while early Solana development demanded specialized knowledge and lots of patience, the environment has matured rapidly. Many of the pain points (lack of frameworks, scarce libraries, poor debugging support) are being addressed through community efforts and foundation support. Developers now have access to frameworks like Anchor, a rich standard library in SPL, and growing documentation. The learning curve for Solana, especially around Rust and parallel programming, is still non-trivial, but those who overcome it are rewarded with the ability to build on one of the fastest chains. As abstractions improve (e.g., higher-level languages or better SDKs on top of Rust, or even alternative VM support in the future) and as more educational resources proliferate, we expect the developer onboarding to become even smoother. The explosive growth in the Solana developer community is a strong testament that these challenges are being overcome and that there is enormous interest in building within this ecosystem.
Smart Contract Verification and Transparency
In the Ethereum world, it has long been standard practice to publish and verify smart contract source code on public block explorers (like Etherscan) once contracts are deployed. This practice enhances transparency and trust: any user can audit the live code of popular protocols, and the community generally expects that if a contract is not verified (i.e., source code matching the on-chain bytecode is not publicly available), something might be amiss. Most major Ethereum DeFi projects—Uniswap, Aave, Lido, Spark, Ethena, and others—openly verify their contracts, and often even open-source the code under permissive licenses. The culture strongly rewards openness; an unverified contract might be viewed with suspicion or outright avoided by experienced users.
Historically, Solana’s ecosystem lagged in this aspect of transparency. Until recently, many prominent Solana on-chain programs (the equivalent of smart contracts) were not open-source or verified on explorers. For example, among the top dApps, Pump.Fun, Jupiter, OKX DEX, Kamino, Marinade Finance, Orca, Fragmetric, and Pyth Network are not verified; only Raydium and Drift are. This low rate of contract verification had two primary causes: one technical and one cultural.
Technical challenges impeded verification because Solana’s build process was not originally deterministic. On Ethereum, given a Solidity source file, a specific compiler version, and settings, one can reliably produce the exact bytecode that was deployed (enabling easy verification by matching hashes). Solana programs, on the other hand, are compiled to BPF bytecode via the Rust toolchain, and the output could vary depending on trivial factors like the compiler version, the operating system, or other environment differences. This made it difficult for developers to reproduce the exact binary that was on-chain, and thus, they couldn’t easily prove that a published source corresponds 1:1 to the deployed program. The Solana core team recognized this and, as early as 2020, had open issues requesting deterministic builds. Progress was slow, but by 2023, the community developed solutions for verifiable builds. One major contribution came from Ellipsis Labs, which released a Solana Verifiable Build CLI tool. This tool, along with enhancements to Anchor, allows developers to compile their programs in a Dockerized, pinned environment – essentially specifying an exact container with all dependencies (compiler version, Rust toolchain, etc.) so that anyone can recreate the build. Using these fixed environments, the compiled output becomes byte-for-byte reproducible. With this breakthrough, it became possible to reliably verify Solana programs: a developer can publish their source code and the Docker build recipe, and anyone else can build it to confirm the output matches the on-chain deployment. While this process isn’t as one-click simple as Ethereum’s (where Etherscan handles much of the compiler selection automatically), it effectively solves the technical barrier to contract verification on Solana. Now it’s largely a matter of developer effort and tooling to make it more seamless.
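The final comparison step can be sketched as follows. This is a simplified illustration, not the actual solana-verify implementation: real tooling compares SHA-256 digests of the deployed program, while std's `DefaultHasher` stands in here to keep the sketch dependency-free:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative sketch of verifiable-build checking: hash the locally
// reproduced build output and the on-chain program bytes, then compare.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

fn builds_match(local_build: &[u8], on_chain: &[u8]) -> bool {
    // On-chain program accounts may carry trailing zero padding beyond
    // the executable bytes, so trim it before comparing.
    let end = on_chain
        .iter()
        .rposition(|&b| b != 0)
        .map_or(0, |i| i + 1);
    digest(local_build) == digest(&on_chain[..end])
}

fn main() {
    let local = vec![0x7f, 0x45, 0x4c, 0x46, 0x01, 0x02]; // locally built artifact
    let mut deployed = local.clone();
    deployed.extend([0u8; 16]); // simulated zero padding in the program account
    assert!(builds_match(&local, &deployed));
    println!("build verified: local output matches on-chain bytes");
}
```

Because the Dockerized, pinned environment makes the local build byte-for-byte reproducible, a matching digest is sufficient evidence that the published source corresponds to the deployed program.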
The second aspect is cultural. In Solana’s early DeFi boom (around 2020–2021), many projects were venture-funded startups racing to build novel products (DEXs, lending platforms, etc.) in what was then a less crowded ecosystem. There was a perception among some teams that keeping their program code closed-source could preserve a competitive advantage, at least until they established market dominance. Some founders feared that if they open-sourced too early, copycat projects would quickly clone their work and steal market share. Unlike Ethereum, where any attempt at a closed-source contract would draw immediate scrutiny, Solana, at that time, had a user base less attuned to checking contract code (and the explorer infrastructure for verified code was not well-developed either). As a result, there was less community pressure to verify, and many Solana users interacted with protocols without demanding transparency. In essence, the norms hadn’t fully caught up to those of Ethereum’s community when it came to open source expectations.
This began to change as the ecosystem matured. High-profile security incidents and the general industry trend towards openness have driven home that transparency is crucial for user trust. The Solana Foundation and ecosystem leaders started explicitly encouraging projects to verify and open-source their code. For example, in April 2023, the Solana Foundation made an announcement heralding “a new era of transparent, verifiable programs on Solana”. The messaging emphasized that as Solana targets mainstream adoption, it must uphold the trustless ethos by making program logic publicly auditable. They highlighted the new tools (deterministic builds, etc.) that remove the old excuses, and they appealed to developers to adopt Ethereum-like standards in publishing their code.
Since then, we’ve seen a positive shift: more and more new Solana programs are launched with their source code available, and existing projects have started to verify their contracts on explorer platforms as the feature becomes available. Over time, one can expect Solana’s culture to fully embrace open source transparency, converging on the best practices seen in Ethereum. This cultural shift, combined with the technical ability to verify, will significantly improve security (more eyes on code) and user confidence in the Solana network. It also aligns Solana with the broader blockchain community’s expectation that decentralization isn’t just about running nodes, but also about open and inspectable code. In conclusion, while Solana’s early growth prioritized performance and rapid development sometimes at the expense of openness, the trend is decisively moving towards greater transparency. With verifiable build processes now in place and strong advocacy from the Foundation, smart contract verification is becoming the norm, reinforcing Solana’s credibility as a trustworthy, mature platform for DeFi and Web3 applications.
Sources:
- https://www.helius.dev/blog/solana-outages-complete-history
- https://www.helius.dev/blog/alpenglow
- https://www.binance.com/en/square/post/01-08-2025-solana-intensifies-testing-of-firedancer-to-boost-blockchain-speed-18682479207433
- https://drive.google.com/file/d/10mkBvMXUPB3Hh9wHvYnV9TDSZ4MdYV70/view