Blockchain Scalability

By Dmitriy Kim on ALTCOIN MAGAZINE

A brief overview of what I have so far. It's a project intending to build an alternative blockchain network, fixing the existing scalability problems with an original technological approach. The white paper is devoted to describing this approach, which still remains somewhat vague even after the extensive description.

What is the scalability problem? At some point we began to observe slow transaction confirmation and high transaction fees on the Bitcoin network. This is determined by the blockchain architecture. All pending transactions fall into a special pool, where they remain until miners begin to add them to blocks. When the number of transactions increased dramatically (which happened around the beginning of 2017), the number of transactions stuck in the pool became quite significant.

Miners don't pick transactions in the order they arrive; there is a different scheme. Each transaction carries an additional amount of money (the transaction fee) that goes to the miner as a reward for adding this transaction to a block, with all the hassle and heavy calculations necessary to hash the block and append it to the chain. So miners pick the transactions that pay the highest fees. Under this application of market principles to the confirmation process, the fees naturally go up: the number of transactions increases, the miners cannot process them all in an adequate time, and they favor the transactions whose senders are ready to pay more.

In one article the situation was compared to a bus stop with buses (having only, say, twelve seats) arriving every ten minutes, while hundreds of people stand at the stop and many are ready to pay extra to get ahead.

So, what is the initial cause of this bottleneck?
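The fee-priority selection described above can be sketched as a greedy fill of a size-limited block from a pending pool. This is a toy model with made-up transaction sizes and fees, not Bitcoin's actual mempool logic:

```python
import heapq

def select_transactions(mempool, block_limit_bytes):
    """Greedily pick pending transactions with the highest fee per byte
    until the block size limit is reached (a simplified model)."""
    # Build a max-heap keyed on fee rate (negated, since heapq is a min-heap).
    heap = [(-tx["fee"] / tx["size"], i, tx) for i, tx in enumerate(mempool)]
    heapq.heapify(heap)
    block, used = [], 0
    while heap:
        _, _, tx = heapq.heappop(heap)
        if used + tx["size"] <= block_limit_bytes:
            block.append(tx)
            used += tx["size"]
    return block

# Hypothetical pending pool: fees in satoshis, sizes in bytes.
pool = [
    {"id": "a", "fee": 500,  "size": 250},  # 2.0 sat/byte
    {"id": "b", "fee": 2000, "size": 400},  # 5.0 sat/byte
    {"id": "c", "fee": 300,  "size": 300},  # 1.0 sat/byte
]
picked = select_transactions(pool, 700)
print([tx["id"] for tx in picked])  # ['b', 'a'] — highest fee rates that fit
```

The low-fee transaction "c" stays in the pool, which is exactly the "stuck transactions" effect described above.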
In other words, why is the process of transaction confirmation so slow, and why can the Bitcoin network only process about seven transactions per second? According to many experts, it has to do with the limited size of the block (1 MB). It's logical: every block requires hashing, which in turn requires a lot of computational resources. The smaller the block, the fewer transactions fit into it, which leads to a higher number of blocks and more work to process the transactions packed into those smaller blocks.

From an ideological point of view, many people claim that this situation changes the Bitcoin paradigm from peer-to-peer money exchange to the function of a settlement layer. In other words, it becomes too expensive for ordinary people to conduct Bitcoin transactions, and low-amount transactions stop making sense because the confirmation fees become comparable to the amount of money being transacted. Transactions would then mostly be conducted between bigger organizations (intermediaries) that transact large sums of money and can afford to pay high fees. Hence the term "settlement layer."

Speaking about a solution, there are two opinions on how to overcome the scalability situation. The first group insists that the block capacity should be increased. One project in this direction was SegWit, a soft fork that raised effective capacity without changing the 1 MB base block size. (Also, the principles of Bitcoin Cash, which hard-forked to larger blocks, have to do with block size among other things, but I need to check.)

The problem with this approach is that the limited block size guaranteed some kind of predictability on the Bitcoin mining market. In other words, it controls the character and speed of the hashing process and the rate at which new bitcoins are added into circulation.
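The seven-transactions-per-second figure can be roughly reproduced from the block parameters. The average transaction size below is an assumed round number, not a measured value:

```python
# Back-of-envelope estimate of Bitcoin's throughput ceiling.
BLOCK_SIZE_BYTES = 1_000_000    # the classic 1 MB block size limit
AVG_TX_SIZE_BYTES = 250         # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600    # target: one block every ten minutes

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES    # 4000 transactions
txs_per_second = txs_per_block / BLOCK_INTERVAL_SECONDS  # about 6.7 tx/s
print(round(txs_per_second, 1))  # 6.7
```

With slightly different assumed transaction sizes the result shifts, which is why the figure is usually quoted as "about seven."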
(Which is the failsafe mechanism preventing uncontrollable Bitcoin emission, the way I understand it.)

Plus, with larger blocks, it would be harder for full nodes to hold a whole copy of the blockchain. (I don't really understand why that is, but anyway.) The role of full nodes in the blockchain is to propagate pending transactions across the network, as well as to propagate hashed blocks, checking their validity in the process. Therefore they need a full copy of the blockchain to be able to make those checks. One important aspect of this scheme is that an operation is considered correct only if all full nodes agree that it's correct (full consensus); if one full node states otherwise, that's enough for the network to reject the operation.

So with a larger block size, the hardware requirements for full nodes will increase, and fewer machines will be able to perform this function. If the network is only supported by a limited number of powerful servers, presumably belonging to large organizations, another risk emerges: an entity or a group controlling more than 51 percent of the computational power on the network can compromise it by manipulating transactions for fraudulent purposes. The remaining nodes on the network won't be able to stop this, because they simply won't be able to keep up with the computations.

A different approach to solving the problem of low performance and scalability of the Bitcoin network is, in fact, splitting it into a number of local networks with the main Bitcoin blockchain as the core of the system. In this case, transactions are conducted separately on local networks and verified only within those networks; then the aggregated result (or whatever it is) is added to and checked on the main network. The smaller-scale networks are probably easier to hijack, since they have a limited number of nodes verifying transactions.
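The idea of conducting many transfers off the main chain and committing only an aggregated result can be sketched as simple netting. This is a toy model of channel-style settlement, not the actual protocol of Lightning or any specific project:

```python
from collections import defaultdict

def net_settlement(off_chain_transfers):
    """Collapse many off-chain transfers into one net balance change
    per participant — which is all the main chain needs to record."""
    balances = defaultdict(int)
    for sender, receiver, amount in off_chain_transfers:
        balances[sender] -= amount
        balances[receiver] += amount
    # Only the non-zero net changes go on-chain.
    return {who: delta for who, delta in balances.items() if delta != 0}

# Three hypothetical off-chain payments collapse into one settlement.
transfers = [
    ("alice", "bob", 5),
    ("bob", "alice", 3),
    ("alice", "bob", 1),
]
print(net_settlement(transfers))  # {'alice': -3, 'bob': 3}
```

Three payments become one on-chain record, which is where the throughput gain comes from; the trade-off, as noted above, is that the off-chain verification happens among far fewer nodes.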
There are other technical and ideological problems: in this case, it wouldn't be a truly decentralized exchange independent from anybody. Local networks will be controlled by smaller groups of people related to specific entities and organizations, which would eventually defeat the idea of the Bitcoin network in its initial form. But, nonetheless, there are existing projects implementing this paradigm, like the Lightning Network, platforms utilizing the technology of state channels, and other experimental solutions.

In addition to the problem of computational difficulty and the slow speed of block hashing, there are other technical aspects that new projects try to alter to create alternative blockchains with higher performance. In the project I study, there is a different approach to building Merkle trees, which are used to verify the validity of transactions within a block. Without getting into details of how Merkle trees work: in the classical Bitcoin architecture, each block contains the root of the Merkle tree representing the transactions in that block. The project claims that the existing implementation of Merkle trees in the blockchain is not optimal and that, by tweaking it a bit, it's possible to increase the speed of transaction verification within a block (or of checking the correctness of a newly formed block).

At this point, things are a bit hazy in my head, but there are several facts I remember. Miners check the validity of the previous block to avoid working on the wrong branch of the chain, which would waste their time and resources. This process of testing the validity of a block has to do with transaction verification, Merkle trees, etc. One of the objectives of the SegWit project was to optimize this process, thereby accelerating mining.
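The Merkle root mentioned above can be sketched as follows. This is a simplified version of Bitcoin's scheme (double SHA-256, with the last hash duplicated at odd-sized levels), operating on placeholder transaction hashes rather than real serialized transactions:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes):
    """Fold a list of transaction hashes up to a single 32-byte root.
    At each level, pairs are concatenated and hashed; an odd hash out
    is paired with itself."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the odd one out
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Placeholder "transactions" — real Bitcoin hashes serialized tx data.
txs = [sha256d(f"tx{i}".encode()) for i in range(3)]
root = merkle_root(txs)
print(root.hex())  # one 32-byte root commits to all transactions
```

Because the root commits to every transaction, changing or reordering any transaction changes the root, which is what makes cheap block-validity checks possible.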
Also, as a side note, the Ethereum blockchain maintains three Merkle trees in each block, which makes it easier to extract information about transactions, their status, the state of accounts, and so on.

Originally published in ALTCOIN MAGAZINE on Medium.
