Why Is Bitcoin Difficulty Chart So Famous? Chart Information

How EpiK Protocol “Saved the Miners” from Filecoin with the E2P Storage Model?

How EpiK Protocol “Saved the Miners” from Filecoin with the E2P Storage Model?

On October 20, Eric Yao, Head of EpiK China, and Leo, Co-Founder & CTO of EpiK, visited Deep Chain Online Salon, and discussed “How EpiK saved the miners eliminated by Filecoin by launching E2P storage model”. ‘?” The following is a transcript of the sharing.
Sharing Session
Eric: Hello, everyone, I’m Eric, graduated from School of Information Science, Tsinghua University. My Master’s research was on data storage and big data computing, and I published a number of industry top conference papers.
Since 2013, I have invested in Bitcoin, Ethereum, Ripple, Dogcoin, EOS and other well-known blockchain projects, and have been settling in the chain circle as an early technology-based investor and industry observer with 2 years of blockchain experience. I am also a blockchain community initiator and technology evangelist
Leo: Hi, I’m Leo, I’m the CTO of EpiK. Before I got involved in founding EpiK, I spent 3 to 4 years working on blockchain, public chain, wallets, browsers, decentralized exchanges, task distribution platforms, smart contracts, etc., and I’ve made some great products. EpiK is an answer to the question we’ve been asking for years about how blockchain should be landed, and we hope that EpiK is fortunate enough to be an answer for you as well.
Q & A
Deep Chain Finance:
First of all, let me ask Eric, on October 15, Filecoin’s main website launched, which aroused everyone’s attention, but at the same time, the calls for fork within Filecoin never stopped. The EpiK protocol is one of them. What I want to know is, what kind of project is EpiK Protocol? For what reason did you choose to fork in the first place? What are the differences between the forked project and Filecoin itself?
First of all, let me answer the first question, what kind of project is EpiK Protocol.
With the Fourth Industrial Revolution already upon us, comprehensive intelligence is one of the core goals of this stage, and the key to comprehensive intelligence is how to make machines understand what humans know and learn new knowledge based on what they already know. And the knowledge graph scale is a key step towards full intelligence.
In order to solve the many challenges of building large-scale knowledge graphs, the EpiK Protocol was born. EpiK Protocol is a decentralized, hyper-scale knowledge graph that organizes and incentivizes knowledge through decentralized storage technology, decentralized autonomous organizations, and generalized economic models. Members of the global community will expand the horizons of artificial intelligence into a smarter future by organizing all areas of human knowledge into a knowledge map that will be shared and continuously updated for the eternal knowledge vault of humanity
And then, for what reason was the fork chosen in the first place?
EpiK’s project founders are all senior blockchain industry practitioners and have been closely following the industry development and application scenarios, among which decentralized storage is a very fresh application scenario.
However, in the development process of Filecoin, the team found that due to some design mechanisms and historical reasons, the team found that Filecoin had some deviations from the original intention of the project at that time, such as the overly harsh penalty mechanism triggered by the threat to weaken security, and the emergence of the computing power competition leading to the emergence of computing power monopoly by large miners, thus monopolizing the packaging rights, which can be brushed with computing power by uploading useless data themselves.
The emergence of these problems will cause the data environment on Filecoin to get worse and worse, which will lead to the lack of real value of the data in the chain, high data redundancy, and the difficulty of commercializing the project to land.
After paying attention to the above problems, the project owner proposes to introduce multi-party roles and a decentralized collaboration platform DAO to ensure the high value of the data on the chain through a reasonable economic model and incentive mechanism, and store the high-value data: knowledge graph on the blockchain through decentralized storage, so that the lack of value of the data on the chain and the monopoly of large miners’ computing power can be solved to a large extent.
Finally, what differences exist between the forked project and Filecoin itself?
On the basis of the above-mentioned issues, EpiK’s design is very different from Filecoin, first of all, EpiK is more focused in terms of business model, and it faces a different market and track from the cloud storage market where Filecoin is located because decentralized storage has no advantage over professional centralized cloud storage in terms of storage cost and user experience.
EpiK focuses on building a decentralized knowledge graph, which reduces data redundancy and safeguards the value of data in the distributed storage chain while preventing the knowledge graph from being tampered with by a few people, thus making the commercialization of the entire project reasonable and feasible.
From the perspective of ecological construction, EpiK treats miners more friendly and solves the pain point of Filecoin to a large extent, firstly, it changes the storage collateral and commitment collateral of Filecoin to one-time collateral.
Miners participating in EpiK Protocol are only required to pledge 1000 EPK per miner, and only once before mining, not in each sector.
What is the concept of 1000 EPKs, you only need to participate in pre-mining for about 50 days to get this portion of the tokens used for pledging. The EPK pre-mining campaign is currently underway, and it runs from early September to December, with a daily release of 50,000 ERC-20 standard EPKs, and the pre-mining nodes whose applications are approved will divide these tokens according to the mining ratio of the day, and these tokens can be exchanged 1:1 directly after they are launched on the main network. This move will continue to expand the number of miners eligible to participate in EPK mining.
Secondly, EpiK has a more lenient penalty mechanism, which is different from Filecoin’s official consensus, storage and contract penalties, because the protocol can only be uploaded by field experts, which is the “Expert to Person” mode. Every miner needs to be backed up, which means that if one or more miners are offline in the network, it will not have much impact on the network, and the miner who fails to upload the proof of time and space in time due to being offline will only be forfeited by the authorities for the effective computing power of this sector, not forfeiting the pledged coins.
If the miner can re-submit the proof of time and space within 28 days, he will regain the power.
Unlike Filecoin’s 32GB sectors, EpiK’s encapsulated sectors are smaller, only 8M each, which will solve Filecoin’s sector space wastage problem to a great extent, and all miners have the opportunity to complete the fast encapsulation, which is very friendly to miners with small computing power.
The data and quality constraints will also ensure that the effective computing power gap between large and small miners will not be closed.
Finally, unlike Filecoin’s P2P data uploading model, EpiK changes the data uploading and maintenance to E2P uploading, that is, field experts upload and ensure the quality and value of the data on the chain, and at the same time introduce the game relationship between data storage roles and data generation roles through a rational economic model to ensure the stability of the whole system and the continuous high-quality output of the data on the chain.
Deep Chain Finance:
Eric, on the eve of Filecoin’s mainline launch, issues such as Filecoin’s pre-collateral have aroused a lot of controversy among the miners. In your opinion, what kind of impact will Filecoin bring to itself and the whole distributed storage ecosystem after it launches? Do you think that the current confusing FIL prices are reasonable and what should be the normal price of FIL?
Filecoin mainnet has launched and many potential problems have been exposed, such as the aforementioned high pre-security problem, the storage resource waste and computing power monopoly caused by unreasonable sector encapsulation, and the harsh penalty mechanism, etc. These problems are quite serious, and will greatly affect the development of Filecoin ecology.
These problems are relatively serious, and will greatly affect the development of Filecoin ecology, here are two examples to illustrate. For example, the problem of big miners computing power monopoly, now after the big miners have monopolized computing power, there will be a very delicate state — — the miners save a file data with ordinary users. There is no way to verify this matter in the chain, whether what he saved is uploaded by himself or someone else. And after the big miners have monopolized computing power, there will be a very delicate state — — the miners will save a file data with ordinary users, there is no way to verify this matter in the chain, whether what he saved is uploaded by himself or someone else. Because I can fake another identity to upload data for myself, but that leads to the fact that for any miner I go to choose which data to save. I have only one goal, and that is to brush my computing power and how fast I can brush my computing power.
There is no difference between saving other people’s data and saving my own data in the matter of computing power. When I save someone else’s data, I don’t know that data. Somewhere in the world, the bandwidth quality between me and him may not be good enough.
The best option is to store my own local data, which makes sense, and that results in no one being able to store data on the chain at all. They only store their own data, because it’s the most economical for them, and the network has essentially no storage utility, no one is providing storage for the masses of retail users.
The harsh penalty mechanism will also severely deplete the miner’s profits, because DDOS attacks are actually a very common attack technique for the attacker, and for a big miner, he can get a very high profit in a short period of time if he attacks other customers, and this thing is a profitable thing for all big miners.
Now as far as the status quo is concerned, the vast majority of miners are actually not very well maintained, so they are not very well protected against these low-DDOS attacks. So the penalty regime is grim for them.
The contradiction between the unreasonable system and the demand will inevitably lead to the evolution of the system in a more reasonable direction, so there will be many forked projects that are more reasonable in terms of mechanism, thus attracting Filecoin miners and a diversion of storage power.
Since each project is in the field of decentralized storage track, the demand for miners is similar or even compatible with each other, so miners will tend to fork the projects with better economic benefits and business scenarios, so as to filter out the projects with real value on the ground.
For the chaotic FIL price, because FIL is also a project that has gone through several years, carrying too many expectations, so it can only be said that the current situation has its own reasons for existence. As for the reasonable price of FIL there is no way to make a prediction because in the long run, it is necessary to consider the commercialization of the project to land and the value of the actual chain of data. In other words, we need to keep observing whether Filecoin will become a game of computing power or a real value carrier.
Deep Chain Finance:
Leo, we just mentioned that the pre-collateral issue of Filecoin caused the dissatisfaction of miners, and after Filecoin launches on the main website, the second round of space race test coins were directly turned into real coins, and the official selling of FIL hit the market phenomenon, so many miners said they were betrayed. What I want to know is, EpiK’s main motto is “save the miners eliminated by Filecoin”, how to deal with the various problems of Filecoin, and how will EpiK achieve “save”?
Originally Filecoin’s tacit approval of the computing power makeup behavior was to declare that the official directly chose to abandon the small miners. And this test coin turned real coin also hurt the interests of the loyal big miners in one cut, we do not know why these low-level problems, we can only regret.
EpiK didn’t do it to fork Filecoin, but because EpiK to build a shared knowledge graph ecology, had to integrate decentralized storage in, so the most hardcore Filecoin’s PoRep and PoSt decentralized verification technology was chosen. In order to ensure the quality of knowledge graph data, EpiK only allows community-voted field experts to upload data, so EpiK naturally prevents miners from making up computing power, and there is no reason for the data that has no value to take up such an expensive decentralized storage resource.
With the inability to make up computing power, the difference between big miners and small miners is minimal when the amount of knowledge graph data is small.
We can’t say that we can save the big miners, but we are definitely the optimal choice for the small miners who are currently in the market to be eliminated by Filecoin.
Deep Chain Finance:
Let me ask Eric: According to EpiK protocol, EpiK adopts the E2P model, which allows only experts in the field who are voted to upload their data. This is very different from Filecoin’s P2P model, which allows individuals to upload data as they wish. In your opinion, what are the advantages of the E2P model? If only voted experts can upload data, does that mean that the EpiK protocol is not available to everyone?
First, let me explain the advantages of the E2P model over the P2P model.
There are five roles in the DAO ecosystem: miner, coin holder, field expert, bounty hunter and gateway. These five roles allocate the EPKs generated every day when the main network is launched.
The miner owns 75% of the EPKs, the field expert owns 9% of the EPKs, and the voting user shares 1% of the EPKs.
The other 15% of the EPK will fluctuate based on the daily traffic to the network, and the 15% is partly a game between the miner and the field expert.
The first describes the relationship between the two roles.
The first group of field experts are selected by the Foundation, who cover different areas of knowledge (a wide range of knowledge here, including not only serious subjects, but also home, food, travel, etc.) This group of field experts can recommend the next group of field experts, and the recommended experts only need to get 100,000 EPK votes to become field experts.
The field expert’s role is to submit high-quality data to the miner, who is responsible for encapsulating this data into blocks.
Network activity is judged by the amount of EPKs pledged by the entire network for daily traffic (1 EPK = 10 MB/day), with a higher percentage indicating higher data demand, which requires the miner to increase bandwidth quality.
If the data demand decreases, this requires field experts to provide higher quality data. This is similar to a library with more visitors needing more seats, i.e., paying the miner to upgrade the bandwidth.
When there are fewer visitors, more money is needed to buy better quality books to attract visitors, i.e., money for bounty hunters and field experts to generate more quality knowledge graph data. The game between miners and field experts is the most important game in the ecosystem, unlike the game between the authorities and big miners in the Filecoin ecosystem.
The game relationship between data producers and data storers and a more rational economic model will inevitably lead to an E2P model that generates stored on-chain data of much higher quality than the P2P model, and the quality of bandwidth for data access will be better than the P2P model, resulting in greater business value and better landing scenarios.
I will then answer the question of whether this means that the EpiK protocol will not be universally accessible to all.
The E2P model only qualifies the quality of the data generated and stored, not the roles in the ecosystem; on the contrary, with the introduction of the DAO model, the variety of roles introduced in the EpiK ecosystem (which includes the roles of ordinary people) is not limited. (Bounty hunters who can be competent in their tasks) gives roles and possibilities for how everyone can participate in the system in a more logical way.
For example, a miner with computing power can provide storage, a person with a certain domain knowledge can apply to become an expert (this includes history, technology, travel, comics, food, etc.), and a person willing to mark and correct data can become a bounty hunter.
The presence of various efficient support tools from the project owner will lower the barriers to entry for various roles, thus allowing different people to do their part in the system and together contribute to the ongoing generation of a high-quality decentralized knowledge graph.
Deep Chain Finance:
Leo, some time ago, EpiK released a white paper and an economy whitepaper, explaining the EpiK concept from the perspective of technology and economy model respectively. What I would like to ask is, what are the shortcomings of the current distributed storage projects, and how will EpiK protocol be improved?
Distributed storage can easily be misunderstood as those of Ali’s OceanDB, but in the field of blockchain, we should focus on decentralized storage first.
There is a big problem with the decentralized storage on the market now, which is “why not eat meat porridge”.
How to understand it? Decentralized storage is cheaper than centralized storage because of its technical principle, and if it is, the centralized storage is too rubbish for comparison.
What incentive does the average user have to spend more money on decentralized storage to store data?
Is it safer?
Existence miners can shut down at any time on decentralized storage by no means save a share of security in Ariadne and Amazon each.
More private?
There’s no difference between encrypted presence on decentralized storage and encrypted presence on Amazon.
The 10,000 gigabytes of bandwidth in decentralized storage simply doesn’t compare to the fiber in a centralized server room. This is the root problem of the business model, no one is using it, no one is buying it, so what’s the big vision.
The goal of EpiK is to guide all community participants in the co-construction and sharing of field knowledge graph data, which is the best way for robots to understand human knowledge, and the more knowledge graph data there is, the more knowledge a robot has, the more intelligent it is exponentially, i.e., EpiK uses decentralized storage technology. The value of exponentially growing data is captured with linearly growing hardware costs, and that’s where the buy-in for EPK comes in.
Organized data is worth a lot more than organized hard drives, and there is a demand for EPK when robots have the need for intelligence.
Deep Chain Finance:
Let me ask Leo, how many forked projects does Filecoin have so far, roughly? Do you think there will be more or less waves of fork after the mainnet launches? Have the requirements of the miners at large changed when it comes to participation?
We don’t have specific statistics, now that the main network launches, we feel that forking projects will increase, there are so many restricted miners in the market that they need to be organized efficiently.
However, we currently see that most forked projects are simply modifying the parameters of Filecoin’s economy model, which is undesirable, and this level of modification can’t change the status quo of miners making up computing power, and the change to the market is just to make some of the big miners feel more comfortable digging up, which won’t help to promote the decentralized storage ecology to land.
We need more reasonable landing scenarios so that idle mining resources can be turned into effective productivity, pitching a 100x coin instead of committing to one Fomo sentiment after another.
Deep Chain Finance:
How far along is the EpiK Protocol project, Eric? What other big moves are coming in the near future?
The development of the EpiK Protocol is divided into 5 major phases.
(a) Phase I testing of the network “Obelisk”.
Phase II Main Network 1.0 “Rosetta”.
Phase III Main Network 2.0 “Hammurabi”.
(a) The Phase IV Enrichment Knowledge Mapping Toolkit.
The fifth stage is to enrich the knowledge graph application ecology.
Currently in the first phase of testing network “Obelisk”, anyone can sign up to participate in the test network pre-mining test to obtain ERC20 EPK tokens, after the mainnet exchange on a one-to-one basis.
We have recently launched ERC20 EPK on Uniswap, you can buy and sell it freely on Uniswap or download our EpiK mobile wallet.
In addition, we will soon launch the EpiK Bounty platform, and welcome all community members to do tasks together to build the EpiK community. At the same time, we are also pushing forward the centralized exchange for token listing.
Users’ Questions
User 1:
Some KOLs said, Filecoin consumed its value in the next few years, so it will plunge, what do you think?
First of all, the judgment of the market is to correspond to the cycle, not optimistic about the FIL first judgment to do is not optimistic about the economic model of the project, or not optimistic about the distributed storage track.
First of all, we are very confident in the distributed storage track and will certainly face a process of growth and decline, so as to make a choice for a better project.
Since the existing group of miners and the computing power already produced is fixed, and since EpiK miners and FIL miners are compatible, anytime miners will also make a choice for more promising and economically viable projects.
Filecoin consumes the value of the next few years this time, so it will plunge.
Regarding the market issues, the plunge is not a prediction, in the industry or to keep learning iteration and value judgment. Because up and down market sentiment is one aspect, there will be more very important factors. For example, the big washout in March this year, so it can only be said that it will slow down the development of the FIL community. But prices are indeed unpredictable.
Actually, in the end, if there are no applications and no one really uploads data, the market value will drop, so what are the landing applications of EpiK?
Leo: The best and most direct application of EpiK’s knowledge graph is the question and answer system, which can be an intelligent legal advisor, an intelligent medical advisor, an intelligent chef, an intelligent tour guide, an intelligent game strategy, and so on.
submitted by EpiK-Protocol to u/EpiK-Protocol [link] [comments]

Mining for Profitability - Horizen (formerly ZenCash) Thanks Early GPU Miners

Mining for Profitability - Horizen (formerly ZenCash) Thanks Early GPU Miners
Thank you for inviting Horizen to the GPU mining AMA!
ZEN had a great run of GPU mining that lasted well over a year, and brought lots of value to the early Zclassic miners. It is mined using Equihash protocol, and there have been ASIC miners available for the algorithm since about June of 2018. GPU mining is not really profitable for Horizen at this point in time.
We’ve got a lot of miners in the Horizen community, and many GPU miners also buy ASIC miners. Happy to talk about algorithm changes, security, and any other aspect of mining in the questions below. There are also links to the Horizen website, blog post, etc. below.
So, if I’m not here to ask you to mine, hold, and love ZEN, what can I offer? Notes on some of the lessons I’ve learned about maximizing mining profitability. An update on Horizen - there is life after moving on from GPU mining. As well as answering your questions during the next 7 days.

Mining for Profitability - Horizen (formerly ZenCash) Thanks Early GPU Miners

Author: Rolf Versluis - co-founder of Horizen

In GPU mining, just like in many of the activities involved with Bitcoin and cryptocurrencies, there is both a cycle and a progression. The Bitcoin price cycle is fairly steady, and by creating a personal handbook of actions to take during the cycle, GPU miners can maximize their profitability.
Maximizing profitability isn't the only aspect of GPU mining that is important, of course, but it is helpful to be able to invest in new hardware, and be able to have enough time to spend on building and maintaining the GPU miners. If it was a constant process that also involved losing money, then it wouldn't be as much fun.

Technology Progression

For a given mining algorithm, there is definitely a technology progression. We can look back on the technology that was used to mine Bitcoin and see how it first started off as Central Processing Unit (CPU) mining, then it moved to Graphical Processing Unit (GPU) mining, then Field Programmable Gate Array (FPGA), and then Application Specific Integrated Circuit (ASIC).
Throughout this evolution we have witnessed a variety of unsavory business practices that unfortunately still happen on occasion, like ASIC Miner manufacturers taking pre-orders 6 months in advance, GPU manufacturers creating commercial cards for large farms that are difficult for retail customers to secure and ASIC Miner manufacturers mining on gear for months before making it available for sale.
When a new crypto-currency is created, in many cases a new mining algorithm is created also. This is important, because if an existing algorithm was used, the coin would be open to a 51% attack from day one, and may not even be able to build a valid blockchain.
Because there's such a focus on profitable software, developers for GPU mining applications are usually able to write a mining application fairly rapidly, then iterate it to the limit of current GPU technology. If it looks like a promising new cryptocurrency, FPGA stream developers and ASIC Hardware Developers start working on their designs at the same time.
The people who create the hashing algorithms run by the miners are usually not very familiar with the design capabilities of Hardware manufacturers. Building application-specific semiconductors is an industry that's almost 60 years old now, and FPGA’s have been around for almost 35 years. This is an industry that has very experienced engineers using advanced design and modeling tools.
Promising cryptocurrencies are usually ones that are deploying new technology, or going after a big market, and who have at least a team of talented software developers. In the best case, the project has a full-stack business team involving development, project management, systems administration, marketing, sales, and leadership. This is the type of project that attracts early investment from the market, which will drive the price of the coin up significantly in the first year.
For any cryptocurrency that's a worthwhile investment of time, money, and electricity for the hashing, there will be a ASIC miners developed for it. Instead of fighting this technology progression, GPU miners may be better off recognizing it as inevitable, and taking advantage of the cryptocurrency cycle to maximize GPU mining profitability instead.

Cryptocurrency Price Cycle

For quality crypto projects, in addition to the one-way technology progression of CPU -> GPU -> FPGA -> ASIC, there is an upward price progression. More importantly, there is a cryptocurrency price cycle that oscillates around an overall upgrade price progression. Plotted against time, a cycle with an upward progressions looks like a sine wave with an ever increasing average value, which is what we see so far with the Bitcoin price.

Cryptocurrency price cycle and progression for miners
This means mining promising new cryptocurrencies with GPU miners, holding them as the price rises, and being ready to sell a significant portion in the first year. Just about every cryptocurrency is going to have a sharp price rise at some point, whether through institutional investor interest or by being the target of a pump-and-dump operation. It’s especially likely in the first year, while the supply is low and there is not much trading volume or liquidity on exchanges.
Miners need to operate in the world of government money, as well as cryptocurrency. The people who run mining businesses at some point have to start selling their mining proceeds to pay the bills, and to buy new equipment as the existing equipment becomes obsolete. Working to maximize profitability means more than just mining new cryptocurrencies, it also means learning when to sell and how to manage money.

Managing Cash for Miners

The worst thing that can happen to a business is to run out of cash. When that happens, the business usually shuts down and goes into bankruptcy. Sometimes an investor comes in and picks up the pieces, but at the point the former owners become employees.
There are two sides to managing cash - one is earning it, the other is spending it, and the cryptocurrency price cycle can tell the GPU miner when it is the best time to do certain things. A market top and bottom is easy to recognize in hindsight, and harder to see when in the middle of it. Even if a miner is able to recognize the tops and bottoms, it is difficult to act when there is so much hype and positivity at the top of the cycle, and so much gloom and doom at the bottom.
A decent rule of thumb for the last few cycles appears to be that at the top and bottom of the cycle BTC is 10x as expensive compared to USD as the last cycle. Newer crypto projects tend to have bigger price swings than Bitcoin, and during the rising of the pricing cycle there is the possibility that an altcoin will have a rise to 100x its starting price.
Taking profits from selling altcoins during the rise is important, but so is maintaining a reserve. In order to catch a 100x move, it may be worth the risk to put some of the altcoin on an exchange and set a very high limit order. For the larger cryptocurrencies like Bitcoin it is important to set trailing sell stops on the way up, and to not buy back in for at least a month if a sell stop gets triggered. Being able to read price charts, see support and resistance areas for price, and knowing how to set sell orders are an important part of mining profitability.

Actions to Take During the Cycle

As the cycle starts to rise from the bottom, this is a good time to buy mining hardware - it will be inexpensive. Also to mine and buy altcoins, which are usually the first to see a price rise, and will have larger price increases than Bitcoin.
On the rise of the cycle, this is a good time to see which altcoins are doing well from a project fundamentals standpoint, and which ones look like they are undergoing accumulation from investors.
Halfway through the rise of the cycle is the time to start selling altcoins for the larger project cryptos like Bitcoin. Miners will miss some of the profit at the top of the cycle, but will not run out of cash by doing this. This is also the time to stop buying mining hardware. Don’t worry, you’ll be able to pick up that same hardware used for a fraction of the price at the next bottom.
As the price nears the top of the cycle, sell enough Bitcoin and other cryptocurrencies to meet the following projected costs:
  • Mining electricity costs for the next 12 months
  • Planned investment into new miners for the next cycle
  • Additional funds needed for things like supporting a family or buying a Lambo
  • Taxes on all the capital gains from the sale of cryptocurrencies
It may be worth selling 70-90% of crypto holdings, maintaining a reserve in case there is second upward move caused by government bankruptcies. But selling a large part of the crypto is helpful to maintaining profitability and having enough cash reserves to make it through the bottom part of the next cycle.
As the cycle has peaked and starts to decline, this is a good time to start investing in mining facilities and other infrastructure, brush up on trading skills, count your winnings, and take some vacation.
At the bottom of the cycle, it is time to start buying both used and new mining equipment. The bottom can be hard to recognize.
If you can continue to mine all the way through bottom part of the cryptocurrency pricing cycle, paying with the funds sold near the top, you will have a profitable and enjoyable cryptocurrency mining business. Any cryptocurrency you are able to hold onto will benefit from the price progression in the next higher cycle phase.

An Update on Horizen - formerly ZenCash

The team at Horizen recognizes the important part that GPU miners played in the early success of Zclassic and ZenCash, and there is always a welcoming attitude to any of ZEN miners, past and present. About 1 year after ZenCash launched, ASIC miners became available for the Equihash algorithm. Looking at a chart of mining difficulty over time shows when it was time for GPU miners to move to mining other cryptocurrencies.

Horizen Historical Block Difficulty Graph
Looking at the hashrate chart, it is straightforward to see that ASIC miners were deployed starting June 2018. It appears that there was a jump in mining hashrate in October of 2017. This may have been larger GPU farms switching over to mine Horizen, FPGA’s on the network, or early version of Equihash ASIC miners that were kept private.
The team understands the importance of the cryptocurrency price cycle as it affects the funds from the Horizen treasury and the investments that can be made. 20% of each block mined is sent to the Horizen non-profit foundation for use to improve the project. Just like miners have to manage money, the team has to decide whether to spend funds when the price is high or convert it to another form in preparation for the bottom part of the cycle.
During the rise and upper part of the last price cycle Horizen was working hard to maximize the value of the project through many different ways, including spending on research and development, project management, marketing, business development with exchanges and merchants, and working to create adoption in all the countries of the world.
During the lower half of the cycle Horizen has reduced the team to the essentials, and worked to build a base of users, relationships with investors, exchanges, and merchants, and continue to develop the higher priority software projects. Lower priority software development, going to trade shows, and paying for business partnerships like exchanges and applications have all been completely stopped.
Miners are still a very important part of the Horizen ecosystem, earning 60% of the block reward. 20% goes to node operators, with 20% to the foundation. In the summer of 2018 the consensus algorithm was modified slightly to make it much more difficult for any group of miners to perform a 51% attack on Horizen. This has so far proven effective.
The team is strong, we provide monthly updates on a YouTube live stream on the first Wednesday of each month where all questions asked during the stream are addressed, and our marketing team works to develop awareness of Horizen worldwide. New wallet software was released recently, and it is the foundation application for people to use and manage their ZEN going forward.
Horizen is a Proof of Work cryptocurrency, and there is no plan to change that by the current development team. If there is a security or centralization concern, there may be change to the algorithm, but that appears unlikely at this time, as the hidden chain mining penalty looks like it is effective in stopping 51% attacks.
During 2019 and 2020 the Horizen team plans to release many new software updates:
  • Sidechains modification to main software
  • Sidechain Software Development Kit
  • Governance and Treasury application running on a sidechain
  • Node tracking and payments running on a sidechain
  • Conversion from blockchain to a Proof of Work BlockDAG using Equihash mining algorithm
After these updates are working well, the team will work to transition Horizen over to a governance model where major decisions and the allocation of treasury funds are done through a form of democratic voting. At this point all the software developed by Horizen is expected to be open source.
When the governance is transitioned, the project should be as decentralized as possible. The goal of decentralization is to enable resilience and preventing the capture of the project by regulators, government, criminal organizations, large corporations, or a small group of individuals.
Everyone involved with Horizen can be proud of what we have accomplished together so far. Miners who were there for the early mining and growth of the project played a large part in securing the network, evangelizing to new community members, and helping to create liquidity on new exchanges. Miners are still a very important part of the project and community. Together we can look forward to achieving many new goals in the future.

Here are some links to find out more about Horizen.
Horizen Website – https://horizen.global
Horizen Blog – https://blog.horizen.global
Horizen Reddit - https://www.reddit.com/Horizen/
Horizen Discord – https://discord.gg/SuaMBTb
Horizen Github – https://github.com/ZencashOfficial
Horizen Forum – https://forum.horizen.global/
Horizen Twitter – https://twitter.com/horizenglobal
Horizen Telegram – https://t.me/horizencommunity
Horizen on Bitcointalk – https://bitcointalk.org/index.php?topic=2047435.0
Horizen YouTube Channel – https://www.youtube.com/c/Horizen/
Buy or Sell Horizen
Horizen on CoinMarketCap – https://coinmarketcap.com/currencies/zencash/

About the Author:

Rolf Versluis is Co-Founder and Executive Advisor of the privacy oriented cryptocurrency Horizen. He also operates multiple private cryptocurrency mining facilities with hundreds of operational systems, and has a blog and YouTube channel on crypto mining called Block Operations.
Rolf applies his engineering background as well as management and leadership experience from running a 60 person IT company in Atlanta and as a US Navy nuclear submarine officer operating out of Hawaii to help grow and improve the businesses in which he is involved.
Thank you again for the Ask Me Anything - please do. I'll be checking the post and answering questions actively from 28 Feb to 6 Mar 2019 - Rolf
submitted by Blockops to gpumining [link] [comments]

An extensive list of blockchain courses, resources and articles to help you get a job working with blockchain.

u/Maximus_no and me spent some time at work collecting and analyzing learning material for blockchain development. The list contains resources for developers, as well as business analysts/consultants looking to learn more about blockchain use-cases and solutions.

Certifications and Courses

IIB Council
Link to course: IIB council : Certified Blockchain Professional
C|BP is an In-Depth, Industry Agnostic, Hands-On Training and Certification Course specifically tailored for Industry Professionals and Developers interested in implementing emerging technologies in the Data-Driven Markets and Digitized Economies.
The IIB Council Certified Blockchain Professional (C|BP) Course was developed to help respective aspiring professionals gain excessive knowledge in Blockchain technology and its implication on businesses.


C|BP is developed in line with the latest industry trends to help current and aspiring Professionals evolve in their career by implementing the latest knowledge in blockchain technology. This course will help professionals understand the foundation of Blockchain technology and the opportunities this emerging technology is offering.


If you are a Developer and you are willing to learn blockchain technology this course is for you. You will learn to build and model Blockchain solutions and Blockchain-based applications for enterprises and businesses in multiple Blockchain Technologies.

Certified Blockchain Business Foundations (CBBF)

This exam is designed for non-technical business professionals who require basic knowledge about Blockchain and how it will be executed within an organization. This exam is NOT appropriate for technology professionals seeking to gain deeper understanding of Blockchain technology implementation or programming.

A person who holds this certification demonstrates their knowledge of:

· What is Blockchain? (What exactly is it?)
· Non-Technical Technology Overview (How does it work?)
· Benefits of Blockchain (Why should anyone consider this?)
· Use Cases (Where and for what apps is it appropriate?)
· Adoption (Who is using it and for what?)
· Future of Blockchain (What is the future?)

Certified Blockchain Solution Architect (CBSA)

A person who holds this certification demonstrates their ability to:

· Architect blockchain solutions
· Work effectively with blockchain engineers and technical leaders
· Choose appropriate blockchain systems for various use cases
· Work effectively with both public and permissioned blockchain systems

This exam will prove that a student completely understands:

· The difference between proof of work, proof of stake, and other proof systems and why they exist
· Why cryptocurrency is needed on certain types of blockchains
· The difference between public, private, and permissioned blockchains
· How blocks are written to the blockchain
· Where cryptography fits into blockchain and the most commonly used systems
· Common use cases for public blockchains
· Common use cases for private & permissioned blockchains
· What is needed to launch your own blockchain
· Common problems & considerations in working with public blockchains
· Awareness of the tech behind common blockchains
· When is mining needed and when it is not
· Byzantine Fault Tolerance
· Consensus among blockchains
· What is hashing
· How addresses, public keys, and private keys work
· What is a smart contract
· Security in blockchain
· Brief history of blockchain
· The programming languages of the most common blockchains
· Common testing and deployment practices for blockchains and blockchain-based apps

Certified Blockchain Developer - Ethereum (CBDE)

A person who holds this certification demonstrates their ability to:

· Plan and prepare production ready applications for the Ethereum blockchain
· Write, test, and deploy secure Solidity smart contracts
· Understand and work with Ethereum fees
· Work within the bounds and limitations of the Ethereum blockchain
· Use the essential tooling and systems needed to work with the Ethereum ecosystem

This exam will prove that a student completely understands how to:

· Implement web3.js
· Write and compile Solidity smart contracts
· Create secure smart contracts
· Deploy smart contracts both the live and test Ethereum networks
· Calculate Ethereum gas costs
· Unit test smart contracts
· Run an Ethereum node on development machines

Princeton: Sixty free lectures from Princeton on bitcoin and cryptocurrencies. Avg length ~15 mins

Basic course with focus on Bitcoin. After this course, you’ll know everything you need to be able to separate fact from fiction when reading claims about Bitcoin and other cryptocurrencies. You’ll have the conceptual foundations you need to engineer secure software that interacts with the Bitcoin network. And you’ll be able to integrate ideas from Bitcoin in your own projects.


· A mid / basic understanding of blockchain technology and its long-term implications for business, coupled with knowledge of its relationship to other emerging technologies such as AI and IoT
· An economic framework for identifying blockchain-based solutions to challenges within your own context, guided by the knowledge of cryptoeconomics expert Christian Catalini
· Recognition of your newfound blockchain knowledge in the form of a certificate of completion from the MIT Sloan School of Management — one of the world’s leading business schools
Orientation Module: Welcome to Your Online Campus
Module 1: An introduction to blockchain technology
Module 2: Bitcoin and the curse of the double-spending problem
Module 3: Costless verification: Blockchain technology and the last mile problem
Module 4: Bootstrapping network effects through blockchain technology and cryptoeconomics
Module 5: Using tokens to design new types of digital platforms
Module 6: The future of blockchain technology, AI, and digital privacy

Oxford Blockchain Strategy Programme

· A mid / basic understanding of what blockchain is and how it works, as well as insights into how it will affect the future of industry and of your organization.
· The ability to make better strategic business decisions by utilizing the Oxford Blockchain Strategic framework, the Oxford Blockchain Regulation framework, the Oxford Blockchain Ecosystem map, and drawing on your knowledge of blockchain and affiliated industries and technologies.
· A certificate of attendance from Oxford Saïd as validation of your newfound blockchain knowledge and skills, as well as access to a global network of like-minded business leaders and innovators.
Module 1: Understanding blockchain
Module 2: The blockchain ecosystem
Module 3: Innovations in value transfer
Module 4: Decentralized apps and smart contracts
Module 5: Transforming enterprise business models
Module 6: Blockchain frontiers

Resources and Articles

Introduction to Distributed Ledger Technologies (DLT) https://www.ibm.com/developerworks/cloud/library/cl-blockchain-basics-intro-bluemix-trs/
Tomas’s Personal Favourite: 150+ Resources for going from web-dev to blockchain engineer https://github.com/benstew/blockchain-for-software-engineers
Hyperledger Frameworks Hyperledger is widely regarded as the most mature open-source framework for building private & permissioned blockchains.
Tutorials: https://www.hyperledger.org/resources/training
R3 Corda Open-source developer frameworks for building private, permissioned blockchains. A little better than Hyperledger on features like privacy and secure channels. Used mostly in financial applications.
Ethereum, Solidity, dApps and Smart-Contracts
Ethereum & Solidity Course (favourite): https://www.udemy.com/ethereum-and-solidity-the-complete-developers-guide/
An Introduction to Ethereum’s Token Standards: https://medium.com/coinmonks/anatomy-of-an-erc-an-exhaustive-survey-8bc1a323b541
How To Create Your First ERC20 Token: https://medium.com/bitfwd/how-to-do-an-ico-on-ethereum-in-less-than-20-minutes-a0062219374
Ethereum Developer Tools [Comprehensive List]: https://github.com/ConsenSys/ethereum-developer-tools-list/blob/masteREADME.md
CryptoZombies – Learn to code dApps through game-development: https://cryptozombies.io/
Intro to Ethereum Development: https://hackernoon.com/ethereum-development-walkthrough-part-1-smart-contracts-b3979e6e573e
Notes from Consensys Academy Participant (free): https://github.com/ScottWorks/ConsenSys-Academy-Notes
AWS Ethereum Templates: https://aws.amazon.com/blogs/aws/get-started-with-blockchain-using-the-new-aws-blockchain-templates/
Create dApps with better user-experience: https://blog.hellobloom.io/how-to-make-a-user-friendly-ethereum-dapp-5a7e5ea6df22
Solidity YouTube Course: https://www.youtube.com/channel/UCaWes1eWQ9TbzA695gl_PtA
[UX &UI] Designing a decentralized profile dApp: https://uxdesign.cc/designing-a-decentralized-profile-dapp-ab12ead4ab56
Scaling Solutions on Ethereum: https://media.consensys.net/the-state-of-scaling-ethereum-b4d095dbafae
Different Platforms for dApps and Smart-Contracts
While Ethereum is the most mature dApp framework with both the best developer tools, resources and community, there are other public blockchain platforms. Third generation blockchains are trying to solve Ethereum’s scaling and performance issues. Here is an overview of dApp platforms that can be worth looking into:
NEO - https://neo.org/ The second most mature dApp platform. NEO has better scalability and performance than Ethereum and has 1’000 TPS to ETH’s 15 by utilizing a dBFT consensus algorithm. While better infrastructure, NEO does not have the maturity of Ethereum’s developer tools, documentation and community.
A writeup on why a company chose to develop on NEO and not Ethereum: https://medium.com/orbismesh/why-we-chose-neo-over-ethereum-37fc9208ffa0
Cardano - https://www.cardano.org/en/home/ While still in alpha with a long and ambitious roadmap ahead of it, Cardano is one of the most anticipated dApp platforms out there. IOHK, the research and engineering company that maintains Cardano, has listed a lot of great resources and scientific papers that is worth looking into.
An Intro to Cardano: https://hackernoon.com/cardano-ethereum-and-neo-killer-or-overhyped-and-overpriced-8fcd5f8abcdf
IOHK Scientific Papers - https://iohk.io/research/papers/
Stellar - https://www.stellar.org/ If moving value fast from one party to another by using smart-contracts is the goal, Stellar Lumens is your platform. Initially as an open-source fork from Ripple, Stellar has become one of the mature frameworks for financial applications. Stellar’s focus lies in interoperability with legacy financial systems and cheap/fast value transfer. It’s smart-contract capability is rather limited in comparison to Ethereum and HyperLedger, so take that in consideration.
Ripplewww.ripple.com Ripple and its close cousin, Stellar, is two of the most well-known cryptocurrencies and DLT frameworks meant for the financial sector. Ripple enables instant settlement between banks for international transactions.

Consensus Algorithms

[Proof of Work] - very short, cuz it's well-known.
[1] Bitcoin - to generate a new block miner must generate hash of the new block header that is in line with given requirements.
Others: Ethereum, Litecoin etc.
[Hybrid of PoW and PoS]
[2] Decred - hybrid of “proof of work” and “proof of stake”. Blocks are created about every 5 minutes. Nodes in the network looking for a solution with a known difficulty to create a block (PoW). Once the solution is found it is broadcast to the network. The network then verifies the solution. Stakeholders who have locked some DCR in return for a ticket* now have the chance to vote on the block (PoS). 5 tickets are chosen pseudo-randomly from the ticket pool and if at least 3 of 5 vote ‘yes’ the block is permanently added to the blockchain. Both miners and voters are compensated with DCR : PoS - 30% and PoW - 60% of about 30 new Decred issued with a block. * 1 ticket = ability to cast 1 vote. Stakeholders must wait an average of 28 days (8,192 blocks) to vote their tickets.
[Proof of Stake]
[3] Nxt - The more tokens are held by account, the greater chance that account will earn the right to generate a block. The total reward received as a result of block generation is the sum of the transaction fees located within the block. Three values are key to determining which account is eligible to generate a block, which account earns the right to generate a block, and which block is taken to be the authoritative one in times of conflict: base target value, target value and cumulative difficulty. Each block on the chain has a generation signature parameter. To participate in the block's forging process, an active account digitally signs the generation signature of the previous block with its own public key. This creates a 64-byte signature, which is then hashed using SHA256. The first 8 bytes of the resulting hash are converted to a number, referred to as the account hit. The hit is compared to the current target value(active balance). If the computed hit is lower than the target, then the next block can be generated.
[4] Peercoin (chain-based proof of stake) - coin age parameter. Hybrid PoW and PoS algorithm. The longer your Peercoins have been stationary in your account (to a maximum of 90 days), the more power (coin age) they have to mint a block. The act of minting a block requires the consumption of coin age value, and the network determines consensus by selecting the chain with the largest total consumed coin age. Reward - minting + 1% yearly.
[5] Reddcoin (Proof of stake Velocity) - quite similar to Peercoin, difference: not linear coin-aging function (new coins gain weight quickly, and old coins gain weight increasingly slowly) to encourage Nodes Activity. Node with most coin age weight have a bigger chance to create block. To create block Node should calculate right hash. Block reward - interest on the weighted age of coins/ 5% annual interest in PoSV phase.
[6] Ethereum (Casper) - uses modified BFT consensus. Blocks will be created using PoW. In the Casper Phase 1 implementation for Ethereum, the “proposal mechanism" is the existing proof of work chain, modified to have a greatly reduced block reward. Blocks will be validated by set of Validators. Block is finalised when 2/3 of validators voted for it (not the number of validators is counted, but their deposit size). Block creator rewarded with Block Reward + Transaction FEES.
[7] Lisk (Delegated Proof-of-stake) - Lisk stakeholders vote with vote transaction (the weight of the vote depends on the amount of Lisk the stakeholder possess) and choose 101 Delegates, who create all blocks in the blockchain. One delegate creates 1 block within 1 round (1 round contains 101 blocks) -> At the beginning of each round, each delegate is assigned a slot indicating their position in the block generation process -> Delegate includes up to 25 transactions into the block, signs it and broadcasts it to the network -> As >51% of available peers agreed that this block is acceptable to be created (Broadhash consensus), a new block is added to the blockchain. *Any account may become a delegate, but only accounts with the required stake (no info how much) are allowed to generate blocks. Block reward - minted Lisks and transaction fees (fees for all 101 blocks are collected firstly and then are divided between delegates). Blocks appears every 10 sec.
[8] Cardano (Ouroboros Proof of Stake) - Blocks(slots) are created by Slot Leaders. Slot Leaders for N Epoch are chosen during n-1 Epoch. Slot Leaders are elected from the group of ADA stakeholders who have enough stake. Election process consist of 3 phases: Commitment phase: each elector generates a random value (secret), signs it and commit as message to network (other electors) saved in to block. -> Reveal phase: Each elector sends special value to open a commitment, all this values (opening) are put into the block. -> Recovery phase: each elector verifies that commitments and openings match and extracts the secrets and forms a SEED (randomly generated bytes string based on secrets). All electors get the same SEED. -> Follow the Satoshi algorithm : Elector who have coin which corresponded to SEED become a SLOT LEADER and get a right to create a block. Slot Leader is rewarded with minted ADA and transactions Fee.
[9] Tezos (Proof Of Stake) - generic and self-amending crypto-ledger. At the beginning of each cycle (2048 blocks), a random seed is derived from numbers that block miners chose and committed to in the penultimate cycle, and revealed in the last. -> Using this random seed, a follow the coin strategy (similar to Follow The Satoshi) is used to allocate mining rights and signing rights to stakeholders for the next cycle*. -> Blocks are mined by a random stakeholder (the miner) and includes multiple signatures of the previous block provided by random stakeholders (the signers). Mining and signing both offer a small reward but also require making a one cycle safety deposit to be forfeited in the event of a double mining or double signing.
· the more coins (rolls) you have - the more your chance to be a minesigner.
[10] Tendermint (Byzantine Fault Tolerance) - A proposal is signed and published by the designated proposer at each round. The proposer is chosen by a deterministic and non-choking round robin selection algorithm that selects proposers in proportion to their voting power. The proposer create the block, that should be validated by >2/3 of Validators, as follow: Propose -> Prevote -> Precommit -> Commit. Proposer rewarded with Transaction FEES.
[11] Tron (Byzantine Fault Tolerance) - This blockhain is still on development stage. Consensus algorithm = PoS + BFT (similar to Tendermint): PoS algorithm chooses a node as Proposer, this node has the power to generate a block. -> Proposer broadcasts a block that it want to release. -> Block enters the Prevote stage. It takes >2/3 of nodes' confirmations to enter the next stage. -> As the block is prevoted, it enters Precommit stage and needs >2/3 of node's confirmation to go further. -> As >2/3 of nodes have precommited the block it's commited to the blockchain with height +1. New blocks appears every 15 sec.
[12] NEO (Delegated Byzantine Fault Tolerance) - Consensus nodes* are elected by NEO holders -> The Speaker is identified (based on algorithm) -> He broadcasts proposal to create block -> Each Delegate (other consensus nodes) validates proposal -> Each Delegate sends response to other Delegates -> Delegate reaches consensus after receiving 2/3 positive responses -> Each Delegate signs the block and publishes it-> Each Delegate receives a full block. Block reward 6 GAS distributed proportionally in accordance with the NEO holding ratio among NEO holders. Speaker rewarded with transaction fees (mostly 0). * Stake 1000 GAS to nominate yourself for Bookkeeping(Consensus Node)
[13] EOS (Delegated Proof of Stake) - those who hold tokens on a blockchain adopting the EOS.IO software may select* block producers through a continuous approval voting system and anyone may choose to participate in block production and will be given an opportunity to produce blocks proportional to the total votes they have received relative to all other producers. At the start of each round 21 unique block producers are chosen. The top 20 by total approval are automatically chosen every round and the last producer is chosen proportional to their number of votes relative to other producers. Block should be confirmed by 2/3 or more of elected Block producers. Block Producer rewarded with Block rewards. *the more EOS tokens a stakeholder owns, the greater their voting power
[The XRP Ledger Consensus Process]
[14] Ripple - Each node receives transaction from external applications -> Each Node forms public list of all valid (not included into last ledger (=block)) transactions aka (Candidate Set) -> Nodes merge its candidate set with UNLs(Unique Node List) candidate sets and vote on the veracity of all transactions (1st round of consensus) -> all transactions that received at least 50% votes are passed on the next round (many rounds may take place) -> final round of consensus requires that min 80% of Nodes UNL agreeing on transactions. It means that at least 80% of Validating nodes should have same Candidate SET of transactions -> after that each Validating node computes a new ledger (=block) with all transactions (with 80% UNL agreement) and calculate ledger hash, signs and broadcasts -> All Validating nodes compare their ledgers hash -> Nodes of the network recognize a ledger instance as validated when a 80% of the peers have signed and broadcast the same validation hash. -> Process repeats. Ledger creation process lasts 5 sec(?). Each transaction includes transaction fee (min 0,00001 XRP) which is destroyed. No block rewards.
[The Stellar consensus protocol]
[15] Stellar (Federated Byzantine Agreement) - quite similar to Ripple. Key difference - quorum slice.
[Proof of Burn]
[16] Slimcoin - to get the right to write blocks Node should “burn” amount of coins. The more coins Node “burns” more chances it has to create blocks (for long period) -> Nodes address gets a score called Effective Burnt Coins that determines chance to find blocks. Block creator rewarded with block rewards.
[Proof of Importance]
[17] NEM - Only accounts that have min 10k vested coins are eligible to harvest (create a block). Accounts with higher importance scores have higher probabilities of harvesting a block. The higher amount of vested coins, the higher the account’s Importance score. And the higher amount of transactions that satisfy following conditions: - transactions sum min 1k coins, - transactions made within last 30 days, - recipient have 10k vested coins too, - the higher account’s Important score. Harvester is rewarded with fees for the transactions in the block. A new block is created approx. every 65 sec.
[Proof of Devotion]
[18] Nebulas (Proof of Devotion + BFT) - quite similar to POI, the PoD selects the accounts with high influence. All accounts are ranked according to their liquidity and propagation (Nebulas Rank) -> Top-ranked accounts are selected -> Chosen accounts pay deposit and are qualified as the blocks Validators* -> Algorithm pseudo-randomly chooses block Proposer -> After a new block is proposed, Validators Set (each Validator is charged a deposit) participate in a round of BFT-Style voting to verify block (1. Prepare stage -> 2. Commit Stage. Validators should have > 2/3 of total deposits to validate Block) -> Block is added. Block rewards : each Validator rewarded with 1 NAS. *Validators Set is dynamic, changes in Set may occur after Epoch change.
[IOTA Algorithm]
[19] IOTA - uses DAG (Directed Acyclic Graph) instead of blockchain (TANGLE equal to Ledger). Graph consist of transactions (not blocks). To issue a new transaction Node must approve 2 random other Transactions (not confirmed). Each transaction should be validate n(?) times. By validating PAST(2) transactions whole Network achieves Consensus. in Order to issue transaction Node: 1. Sign transaction with private key 2. choose two other Transactions to validate based on MCMC(Markov chain Monte Carlo) algorithm, check if 2 transactions are valid (node will never approve conflicting transactions) 3. make some PoW(similar to HashCash). -> New Transaction broadcasted to Network. Node don’t receive reward or fee.
[PBFT + PoW]
[20] Yobicash - uses PBFT and also PoW. Nodes reach consensus on transactions by querying other nodes. A node asks its peers about the state of a transaction: if it is known or not, and if it is a doublespending transaction or not. As follow : Node receives new transaction -> Checks if valid -> queries all known nodes for missing transactions (check if already in DAG ) -> queries 2/3 nodes for doublepsending and possibility -> if everything is ok add to DAG. Reward - nodes receive transaction fees + minting coins.
[Proof of Space/Proof of Capacity]
[21] Filecoin (Power Fault Tolerance) - the probability that the network elects a miner(Leader) to create a new block (it is referred to as the voting power of the miner) is proportional to storage currently in use in relation to the rest of the network. Each node has Power - storage in use verified with Proof of Spacetime by nodes. Leaders extend the chain by creating a block and propagating it to the network. There can be an empty block (when no leader). A block is committed if the majority of the participants add their weight on the chain where the block belongs to, by extending the chain or by signing blocks. Block creator rewarded with Block reward + transaction fees.
[Proof of Elapsed Time (POET)]
[22] Hyperledger Sawtooth - Goal - to solve BFT Validating Nodes limitation. Works only with intel’s SGX. PoET uses a random leader election model or a lottery based election model based on SGX, where the protocol randomly selects the next leader to finalize the block. Every validator requests a wait time from an enclave (a trusted function). -> The validator with the shortest wait time for a particular transaction block is elected the leader. -> The BlockPublisher is responsible for creating candidate blocks to extend the current chain. He takes direction from the consensus algorithm for when to create a block and when to publish a block. He creates, Finalizes, Signs Block and broadcast it -> Block Validators check block -> Block is created on top of blockchain.
[23] Byteball (Delegated Byzantine Fault Tolerance) - only verified nodes are allowed to be Validation nodes (list of requirements https://github.com/byteball/byteball-witness). Users choose in transaction set of 12 Validating nodes. Validating nodes(Witnesses) receive transaction fees.
[24] Nano - uses DAG, PoW (HashCash). Nano uses a block-lattice structure. Each account has its own blockchain (account-chain) equivalent to the account’s transaction/balance history. To add transaction user should make some HashCash PoW -> When user creates transaction Send Block appears on his blockchain and Receive block appears on Recipients blockchain. -> Peers in View receive Block -> Peers verify block (Double spending and check if already in the ledger) -> Peers achieve consensus and add block. In case of Fork (when 2 or more signed blocks reference the same previous block): Nano network resolves forks via a balance-weighted voting system where representative nodes vote for the block they observe, as >50% of weighted votes received, consensus achieved and block is retained in the Node’s ledger (block that lose the vote is discarded).
[25] Holochain - uses distributed hash table (DHT). Instead of trying to manage global consensus for every change to a huge blockchain ledger, every participant has their own signed hash chain. In case of multi-party transaction, it is signed to each party's chain. Each party signs the exact same transaction with links to each of their previous chain entries. After data is signed to local chains, it is shared to a DHT where every neighbor node validate it. Any consensus algorithms can be built on top of Holochain.
[26] Komodo ('Delegated' Delayed Proof of Work (dPoW)) - end-to-end blockchain solutions. DPoW consensus mechanism does not recognize The Longest Chain Rule to resolve a conflict in the network, instead the dPoW looks to backups it inserted previously into the chosen PoW blockchain. The process of inserting backups of Komodo transactions into a secure PoW is “notarization.” Notarisation is performed by the elected Notary nodes. Roughly every ten minutes, the Notary nodes perform a special block hash mined on the Komodo blockchain and take note of the overall Komodo blockchain “height”. The notary nodes process this specifc block so that their signatures are cryptographically included within the content of the notarized data. There are sixty-four “Notary nodes” elected by a stake-weighted vote, where ownership of KMD represents stake in the election. They are a special type of blockchain miner, having certain features in their underlying code that enable them to maintain an effective and cost-efcient blockchain and they periodically receives the privilege to mine a block on “easy difculty.”
Source: https://www.reddit.com/CryptoTechnology/comments/7znnq8/my_brief_observation_of_most_common_consensus/
Whitepapers Worth Looking Into:
IOTA -http://iotatoken.com/IOTA_Whitepaper.pdf
NANO -https://nano.org/en/whitepaper
Bitcoin -https://bitcoin.org/bitcoin.pdf
Ethereum: https://github.com/ethereum/wiki/wiki/White-Paper
Ethereum Plasma (Omise-GO) -https://plasma.io/plasma.pdf
Cardano - https://eprint.iacr.org/2016/889.pdf
submitted by heart_mind_body to CryptoCurrency [link] [comments]

We may have to accept higher fees until September 10th

The cause is the crazy difficulty swing in bcash, which affects bitcoin to some degree. You can see this here. The swing frequency is roughly once every 3 days.
What happens is that the stupidly designed difficulty adjustment algorithm in bcash, which, by the way, deviates from the design of Satoshi Nakamoto's white paper and thus from the bitcoin consensus, causes violent swings in difficulty and hash rate, because miners jump over to mine bcash when its difficulty is very low, mining 30 to 50 blocks per hour rather than the originally envisioned 6.
Conversely, during the high-difficulty phase almost all miners leave bcash, which has already led to block rates of fewer than one block every two hours, almost a standstill. Here is an illustrating graph.
While the miners are not mining bitcoin, the bitcoin block rate goes down to around 4 blocks per hour. During the time when miners jump back on bitcoin, it averages around 8 blocks per hour. Because the low-block-rate phase is longer, at least two days, the total average block rate is below 6, so bitcoin accumulates a backlog that leads to higher fees.
This is unpleasant, but it is also a mitigating effect, because it makes mining bitcoin more attractive, so fewer miners desert. Another mitigating effect is that very-low-value transactions become too expensive, so the total volume of transactions decreases.
Yet another positive effect is that the Segregated Witness upgrade will soon show first effects, increasing the total block size, but this will still take a little time.
What could happen is that some users see the craziness and the utterly stupid design of bcash and sell those coins before everybody else also notices and sells. A lower bcash price will make mining bcash less attractive and will thus also alleviate the problem.
submitted by hgmichna to Bitcoin [link] [comments]

The Great NiceHash Profit Explanation - for Sellers (the guys with the GPUs & CPUs)

Let's make a couple of things crystal clear about what you are not doing here:
But hey, I'm running MINING software!
What the hell am I doing then?!?
Who makes Profit, and how?
How is it possible everyone is making a profit?
Why do profits skyrocket, and will it last (and will this happen again)?
But my profits are decreasing all the time >:[
But why?!? I’m supposed to make lotsa money out of this!!!
But WHY!!!
  1. Interest hype -> Influx of Fiat money -> Coins quotes skyrocket -> Influx of miners -> Difficulty skyrockets -> Most of the price uptrend is choked within weeks, since it’s now harder to mine new blocks.
  2. Interest hype drains out -> Fiat money influx declines -> Coins quotes halt or even fall -> Miners still hold on to their dream -> Difficulty stays up high, even rises -> Earnings decrease, maybe even sharply, as it's still harder to mine new blocks, that may be even paid less.
So, how to judge what’s going on with my profits?
Simple breakdown of the relationship of BTC payouts by NiceHash, BTC/ALT Coins rates, and Fiat value:
BTC quote | ALTs quotes | BTC payout | Fiat value ----------------------------------------------------- UP | UP | stable*) | UP stable | UP | UP | UP UP | stable | DOWN | stable*) stable | stable | stable | stable DOWN | stable | UP | stable*) stable | DOWN | DOWN | DOWN DOWN | DOWN | stable*) | DOWN 
Some rather obvious remarks:
More help:
Disclaimer: I'm a user - Seller like you - not in any way associated with NiceHash; this is my personal view & conclusion about some more or less obvious basics in Crypto mining and particularly using NiceHash.
Comments & critics welcome...
submitted by t_3 to NiceHash [link] [comments]

INT - Comparison with Other IoT Projects

What defines a good IoT project? Defining this will help us understand what some of the problems they might struggle with and which projects excel in those areas. IoT will be a huge industry in the coming years. The true Internet 3.0 will be one of seamless data and value transfer. There will be a tremendous amount of devices connected to this network, from your light bulbs to your refrigerator to your car, all autonomously transacting together in an ever growing network in concert, creating an intelligent, seamless world of satisfying wants and needs.
Let’s use the vastness of what the future state of this network is to be as our basis of what makes a good project.
In that future we will need very high scalability to accommodate the exponential growth in transaction volume that will occur. The network doesn’t need to have the ability to do high transactions per second in the beginning, just a robust plan to grow that ability as the network develops. We’ve seen this issue already with Bitcoin on an admittedly small market penetration. If scaling isn’t a one of the more prominent parts of your framework, that is a glaring hole.
Second to scalability is applicability. One size does not fit all in this space. Some uses will need real-time streaming of data where fast and cheap transactions are key and others will need heavier transactions full of data to be analyzed by the network for predictive uses. Some uses will need smart contracts so that devices can execute actions autonomously and others will need the ability to encrypt data and to transact anonymously to protect the privacy of the users in this future of hyper-connectivity. We cannot possibly predict the all of the future needs of this network so the ease of adaptability in a network of high applicability is a must.
In order for this network to have the high level of applicability mentioned, it would need to have access to real world data outside of it’s network to work off of or even to transact with. This interoperability can come in several forms. I am not a maximalist, thinking that there will be one clear winner in any space. So it is easy, therefore, to imagine that we would want to be able to interact with some other networks for payment/settlement or data gathering. Maybe autonomously paying for bills with Bitcoin or Monero, maybe smart contracts that will need to be fed additional data from the Internet or maybe even sending an auto invite for a wine tasting for the wine shipment that’s been RFID’d and tracked through WTC. In either case, in order to afford the highest applicability, the network will need the ability to interact with outside networks.
How the network gains consensus is often something that is overlooked in the discussion of network suitability. If the network is to support a myriad of application and transaction types, the consensus mechanism must be able to handle it without choking the network or restricting transaction type. PoW can become a bottleneck as the competition for block reward requires an increase in difficulty for block generation, you therefore have to allow time for this computation in between blocks, often leading to less than optimal block times for fast transactions. This can create a transaction backlog as we have seen before. PoS can solve some of these issues but is not immune to this either. A novel approach to gaining consensus will have to be made if it is going to handle the variety and volume to be seen.
All of this can be combined to create a network that is best equipped to take on the IoT ecosystem. But the penetration into the market will be solely held back by the difficulty in connecting and interacting with the network from the perspective of manufacturers and their devices. Having to learn a new code language in order to write a smart contract or create a node or if there are strict requirements on the hardware capability of the devices, these are all barriers that make it harder and more expensive for companies to work with the network. Ultimately, despite how perfect or feature packed your network is, a manufacturer will more likely develop devices for those that are easy to work with.
In short, what the network needs to focus on is:
-Scalability – How does it globally scale?
-Applicability – Does it have data transfer ability, fast, cheap transactions, smart contracts, privacy?
-Interoperability – Can it communicate with the outside world, other blockchains?
-Consensus – Will it gain consensus in a way that supports scalability and applicability?
-Developability – Will it be easy for manufactures to develop devices and interact with the network?
The idea of using blockchain technology to be the basis of the IoT ecosystem is not a new idea. There are several projects out there now that are aiming at tackling the problem. Below you will see a high level breakdown of those projects with some pros and cons from how I interpret the best solution to be. You will also see some supply chain projects listed below. Supply chain solutions are just small niches in the larger IoT ecosystem. Item birth record, manufacturing history, package tracking can all be “Things” which the Internet of Things track. In fact, INT already has leaked some information hinting that they are cooperating with pharmaceutical companies to track the manufacture and packaging of the drugs they produce. INT may someday include WTC or VEN as one of its subchains feeding in information into the ecosystem.
IOTA is a feeless and blockchain-less network called a directed acyclic graph. In my opinion, this creates more issues than it fixes.
The key to keeping IOTA feeless is that there are no miners to pay because the work associated with verifying a transaction is distributed to among all users, with each user verifying two separate transactions for their one. This creates some problems both in the enabling of smart contracts and the ability to create user privacy. Most privacy methods (zk-SNARKs in specific) require the one doing the verifying to use computationally intensive cryptography which are outside the capability of most devices on the IoT network (a weather sensor isn’t going to be able to build the ZK proof of a transaction every second or two). In a network where the device does the verifying of a transaction, cryptographic privacy becomes impractical. And even if there were a few systems capable of processing those transactions, there is no reward for doing the extra work. Fees keep the network safe by incentivizing honesty in the nodes, by paying those who have to work harder to verify a certain transaction, and by making it expensive to attack the network or disrupt privacy (Sybil Attacks).
IOTA also doesn’t have and may never have the ability to enable smart contracts. By the very nature of the Tangle (a chain of transactions with only partial structure unlike a linear and organized blockchain), establishing the correct time order of transactions is difficult, and in some situations, impossible. Even if the transactions have been time stamped, there is no way to verify them and are therefore open to spoofing. Knowing transaction order is absolutely vital to executing step based smart contracts.
There does exist a subset of smart contracts that do not require a strong time order of transactions in order to operate properly. But accepting this just limits the use cases of the network. In any case, smart contracts will not be able to operate directly on chain in IOTA. There will need to be a trusted off chain Oracle that watches transactions, establishes timelines, and runs the smart contract network
-Scalability – High
-Applicability – Low, no smart contracts, no privacy, not able to run on lightweight devices
-Interoperability – Maybe, Oracle possibility
-Consensus – Low, DAG won’t support simple IoT devices and I don’t see all devices confirming other transactions as a reality
-Developability – To be seen, currently working with many manufacturers
Ethereum is the granddaddy of smart contract blockchain. It is, arguably, in the best position to be the center point of the IoT ecosystem. Adoption is wide ranging, it is fast, cheap to transact with and well known; it is a Turing complete decentralized virtual computer that can do anything if you have enough gas and memory. But some of the things that make it the most advanced, will hold it back from being the best choice.
Turing completeness means that the programming language is complete (can describe any problem) and can solve any problem given that there is enough gas to pay for it and enough memory to run the code. You could therefore, create an infinite variety of different smart contracts. This infinite variability makes it impossible to create zk-SNARK verifiers efficiently enough to not cost more gas than is currently available in the block. Implementing zk-SNARKs in Ethereum would therefore require significant changes to the smart contract structure to only allow a small subset of contracts to permit zk-SNARK transactions. That would mean a wholesale change to the Ethereum Virtual Machine. Even in Zcash, where zk-SNARK is successfully implemented for a single, simple transaction type, they had to encode some of the network’s consensus rules into zk-SNARKs to limit the possible outcomes of the proof (Like changing the question of where are you in the US to where are you in the US along these given highways) to limit the computation time required to construct the proof.
Previously I wrote about how INT is using the Double Chain Consensus algorithm to allow easy scaling, segregation of network traffic and blockchain size by breaking the network down into separate cells, each with their own nodes and blockchains. This is building on lessons learned from single chain blockchains like Bitcoin. Ethereum, which is also a single chain blockchain, also suffers from these congestion issues as we have seen from the latest Cryptokitties craze. Although far less of an impact than that which has been seen with Bitcoin, transaction times grew as did the fees associated. Ethereum has proposed a new, second layer solution to solve the scaling issue: Sharding. Sharding draws from the traditional scaling technique called database sharding, which splits up pieces of a database and stores them on separate servers where each server points to the other. The goal of this is to have distinct nodes that store and verify a small set of transactions then tie them up to a larger chain, where all the other nodes communicate. If a node needs to know about a transaction on another chain, it finds another node with that information. What does this sound like? This is as close to an explanation of the Double Chain architecture as to what INT themselves provided in their whitepaper.
-Scalability – Neutral, has current struggles but there are some proposals to fix this
-Applicability – Medium, has endless smart contract possibilities, no privacy currently with some proposals to fix this
-Interoperability – Maybe, Oracle possibility
-Consensus – Medium, PoW currently with proposals to change to better scaling and future proofing.
-Developability – To be seen
A young project, made up of several accredited academics in cryptography, machine learning and data security. This is one of the most technically supported whitepapers I have read.They set out to solve scalability in the relay/subchain architecture proposed by Polkadot and used by INT. This architecture lends well to scaling and adaptability, as there is no end to the amount of subchains you can add to the network, given node and consensus bandwidth.
The way they look to address privacy is interesting. On the main parent (or relay) chain, they plan on implementing some of the technology from Monero, namely, ring signatures, bulletproofs and stealth addresses. While these are proven and respected technologies, this presents some worries as these techniques are known to not be lightweight and it takes away from the inherent generality of the core of the network. I believe the core should be as general and lightweight as possible to allow for scaling, ease of update, and adaptability. With adding this functionality, all data and transactions are made private and untraceable and therefore put through heavier computation. There are some applications where this is not optimal. A data stream may need to be read from many devices where encrypting it requires decryption for every use. A plain, public and traceable network would allow this simple use. This specificity should be made at the subchain level.
Subchains will have the ability to define their needs in terms of block times, smart contracting needs, etc. This lends to high applicability.
They address interoperability directly by laying out the framework for pegging (transaction on one chain causing a transaction on another), and cross-chain communication.
They do not address anywhere in the whitepaper the storage of data in the network. IoT devices will not be transaction only devices, they will need to maintain data, transmit data and query data. Without the ability to do so, the network will be crippled in its application.
IoTeX will use a variation of DPoS as the consensus mechanism. They are not specific on how this mechanism will work with no talk of data flow and node communication diagram. This will be their biggest hurdle and why I believe it was left out of the white paper. Cryptography and theory is easy to elaborate on within each specific subject but tying it all together, subchains with smart contracts, transacting with other side chains, with ring signatures, bulletproofs and stealth addresses on the main chain, will be a challenge that I am not sure can be done efficiently.
They may be well positioned to make this work but you are talking about having some of the core concepts of your network being based on problems that haven’t been solved and computationally heavy technologies, namely private transactions within smart contracts. So while all the theory and technical explanations make my pants tight, the realist in me will believe it when he sees it.
-Scalability – Neutral to medium, has the framework to address it with some issues that will hold it back.
-Applicability – Medium, has smart contract possibilities, privacy baked into network, no data framework
-Interoperability – Medium, inherent in the network design
-Consensus – Low, inherent private transactions may choke network. Consensus mechanism not at all laid out.
-Developability – To be seen, not mentioned.
CPC puts a lot of their focus on data storage. They recognize that one of the core needs of an IoT network will be the ability to quickly store and reference large amounts of data and that this has to be separate from the transactional basis of the network as to not slow it down. They propose solving this using distributed hash tables (DHT) in the same fashion as INT, which stores data in a decentralized fashion so no one source owns the complete record. This system is much the same as the one used by BitTorrent, which allows data to be available regardless of which nodes will be online at a given time. The data privacy issue is solved by using client side encryption with one-to-many public key cryptography allowing many devices to decrypt a singly encrypted file while no two devices share the same key.
This data layer will be run on a separate, parallel chain as to not clog the network and to enable scalability. In spite of this, they don’t discuss how they will scale on the main chain. In order to partially solve this, it will use a two layer consensus structure centered on PoS to increase consensus efficiency. This two layer system will still require the main layer to do the entirety of the verification and block generation. This will be a scaling issue where the network will have no division of labor to segregate congestion to not affect the whole network.
They do recognize that the main chain would not be robust or reliable enough to handle high frequency or real-time devices and therefore propose side chains for those device types. Despite this, they are adding a significant amount of functionality (smart contracts, data interpretation) to the main chain instead of a more general and light weight main chain, which constrains the possible applications for the network and also makes it more difficult to upgrade the network.
So while this project, on the surface level (not very technical whitepaper), seems to be a robust and well thought out framework, it doesn’t lend itself to an all-encompassing IoT network but more for a narrower, data centric, IoT application.
-Scalability – Neutral to medium, has the framework to address it somewhat, too much responsibility and functionality on the main chain may slow it down.
-Applicability – Medium, has smart contract possibilities, elaborate data storage solution with privacy in mind as well has high frequency applications thought out
-Interoperability – Low, not discussed
-Consensus – Low to medium, discussed solution has high reliance on single chain
-Developability – To be seen, not mentioned.
The whitepaper reads like someone just grabbed some of the big hitters in crypto buzzword bingo and threw them in there and explained what they were using Wikipedia. It says nothing about how they will tie it all together, economically incentivize the security of the network or maintain the data structures. I have a feeling none of them actually have any idea how to do any of this. For Christ sake they explain blockchain as the core of the “Solutions” portion of their whitepaper. This project is not worth any more analysis.
Centralization and trust. Not very well thought out at this stage. DPoS consensus on a single chain. Not much more than that.
Waltonchain focuses on tracking and validating the manufacture and shipping of items using RFID technology. The structure will have a main chain/subchain framework, which will allow the network to segregate traffic and infinitely scale by the addition of subchains given available nodes and main chain bandwidth.
DPoST (Stake & Trust) will be the core of their consensus mechanism, which adds trust to the traditional staking structure. This trust is based on the age of the coins in the staker’s node. The longer that node has held the coins, combined with the amount of coins held, the more likely that node will be elected to create the block. I am not sure how I feel about this but generally dislike trust.
Waltonchain's framework will also allow smart contracts on the main chain. Again, this level of main chain specificity worries me at scale and difficulty in upgrading. This smart contract core also does not lend itself to private transactions. In this small subset of IoT ecosystem, that does not matter as the whole basis of tracking is open and public records.
The whitepaper is not very technical so I cannot comment to their technical completeness or exact implementation strategy.
This implementation of the relay/subchain framework is a very narrow and under-utilized application. As I said before, WTC may someday just be one part of a larger IoT ecosystem while interacting with another IoT network. This will not be an all-encompassing network.
-Scalability – High, main/subchain framework infinitely scales
-Applicability – Low to medium, their application is narrow
-Interoperability – Medium, the framework will allow it seamlessly
-Consensus – Neutral, should not choke the network but adds trust to the equation
-Developability – N/A, this is a more centralized project and development will likely be with the WTC
\*Let me preface this by saying I realize there is a place for centralized, corporatized, non-open source projects in this space.* Although I know this project is focused mainly on wider, more general business uses for blockchain, I was requested to include it in this analysis. I have edited my original comment as it was more opinionated and therefore determined not to be productive to the conversation. If you would like to get a feel for my opinion, the original text is in the comments below.\**
This project doesn't have much data to go off as the white paper does not contain much technical detail. It is focused on how they are positioning themselves to enable wider adoption of blockchain technology in the corporate ecosystem.
They also spend a fair amount of time covering their node structure and planned governance. What this reveals is a PoS and PoA combined system with levels of nodes and related reward. Several of the node types require KYC (Know Your Customer) to establish trust in order to be part of the block creating pool.
Again there is not much technically that we can glean from this whitepaper. What is known is that this is not directed at a IoT market and will be a PoS and PoA Ethereum-like network with trusted node setup.
I will leave out the grading points as there is not enough information to properly determine where they are at.
So under this same lens, how does INT stack up? INT borrows their framework from Polkadot, which is a relay/subchain architecture. This framework allows for infinite scaling by the addition of subchains given available nodes and relay chain bandwidth. Custom functionality in subchains allows the one setting up the subchain to define the requirements, be it private transactions, state transaction free data chain, smart contracts, etc. This also lends to endless applicability. The main chain is inherently simple in it’s functionality as to not restrict any uses or future updates in technology or advances.
The consensus structure also takes a novel two-tiered approach in separating validating from block generation in an effort to further enable scaling by removing the block generation choke point from the side chains to the central relay chain. This leaves the subchain nodes to only validate transactions with a light DPoS allowing a free flowing transaction highway.
INT also recognizes the strong need for an IoT network to have robust and efficient data handling and storage. They are utilizing a decentralize storage system using DHT much like the BitTorrent system. This combined with the network implementation of all of the communication protocols (TCP/IP, UDP/IP, MANET) build the framework of a network that will effortlessly integrate any device type for any application.
The multi-chain framework easily accommodates interoperability between established networks like the Internet and enables pegging with other blockchains with a few simple transaction type inclusions. With this cross chain communication, manufactures wouldn’t have to negotiate their needs to fit an established blockchain, they could create their own subchain to fit their needs and interact with the greater network through the relay.
The team also understands the development hurdles facing the environment. They plan to solve this by standardizing requirements for communication and data exchange. They have heavy ties with several manufacturers and are currently developing a IoT router to be the gateway to the network.
-Scalability – High, relay/subchain framework enables infinite scalability
-Applicability – High, highest I could find for IoT. Subchains can be created for every possible application.
-Interoperability – High, able to add established networks for data support and cross chain transactions
-Consensus – High, the only structure that separates the two responsibilities of verifying and block generation to further enable scaling and not choke applicability.
-Developability – Medium, network is set up for ease of development with well-known language and subchain capability. Already working with device manufacturers. To be seen.
So with all that said, INT may be in the best place to tackle this space with their chosen framework and philosophy. They set out to accomplish more than WTC or VEN in a network that is better equipped than IOTA or Ethereum. If they can excecute on what they have laid out, there is no reason that they won’t become the market leader, easily overtaking the market cap of VeChain ($2.5Bn, $10 INT) in the short term and IOTA ($7Bn, $28 INT) in the medium term.
submitted by Graytrain to INT_Chain [link] [comments]

DAG Technology Analysis and Measurement

The report produced by the fire block chain coins Institute, author: Yuan Yuming, Hu Zhiwei, PDF version please read the original text download
The Fire Coin Blockchain Application Research Institute conducts research on distributed ledger technology based on directed acyclic graph (DAG) data structure from a technical perspective, and through the specific technical test of typical representative project IOTA, the main research results are obtained:
Report body
1 Introduction
Blockchain is a distributed ledger technology, and distributed ledger technology is not limited to the "blockchain" technology. In the wave of digital economic development, more distributed ledger technology is being explored and applied in order to improve the original technology and meet more practical business application scenarios. Directed Acylic Graph (hereinafter referred to as "DAG") is one of the representatives.
What is DAG technology and the design behind it? What is the actual application effect?We attempted to obtain analytical conclusions through deep analysis of DAG technology and actual test runs of representative project IOTA.
It should also be noted that the results of the indicator data obtained from the test are not and should not be considered as proof or confirmation of the final effect of the IOTA platform or project. Hereby declare.
2. Main conclusions
After research and test analysis, we have the following main conclusions and technical recommendations:
3.DAG Introduction
3.1. Introduction to DAG Principle
DAG (Directed Acyclic Graph) is a data structure that represents a directed graph, and in this graph, it cannot return to this point (no loop) from any vertex, as shown in the figure. Shown as follows:
After the DAG technology-based distributed ledger (hereinafter referred to as DAG) technology has been proposed in recent years, many people think that it is hopeful to replace the blockchain technology in the narrow sense. Because the goal of DAG at design time is to preserve the advantages of the blockchain and to improve the shortcomings of the blockchain.
Different from the traditional linear blockchain structure, the transaction record of the distributed ledger platform represented by IOTA forms a relational structure with a directed acyclic graph, as shown in the following figure.
3.2. DAG characteristics
Due to the different data structure from the previous blockchain, the DAG-based distributed ledger technology has the characteristics of high scalability, high concurrency and is suitable for IoT scenarios.
3.2.1. High scalability, high concurrency
The data synchronization mechanism of traditional linear blockchains (such as Ethereum) is synchronous, which may cause network congestion. The DAG network adopts an asynchronous communication mechanism, allowing concurrent writing. Multiple nodes can simultaneously trade at different tempos without having a clear sequence. Therefore, the data of the network may be inconsistent at the same time, but it will eventually be synchronized.

3.2.2. Applicable to IoT scenarios

In the traditional blockchain network, there are many transactions in each block. The miners are packaged and sent uniformly, involving multiple users. In the DAG network, there is no concept of “block”, the smallest unit of the network. It is a "transaction", each new transaction needs to verify the first two transactions, so the DAG network does not need miners to pass the trust, transfer does not require a fee, which makes DAG technology suitable for small payments.
4. Analysis of technical ideas
Trilemma, or "trilemma", means that in a particular situation, only two of the three advantageous options can be selected or one of the three adverse choices must be chosen. This type of selection dilemma has related cases in various fields such as religion, law, philosophy, economics, and business management.Blockchain is no exception. The impossible triangle in the blockchain is: Scalability, Decentralization, and Security can only choose two of them.
If you analyze DAG technology according to this idea, according to the previous introduction, then DAG has undoubtedly occupied the two aspects of decentralization and scalability. The decentralization and scalability of the DAG can be considered as two-sided, because of the asynchronous accounting features brought about by the DAG data structure, while achieving the high degree of decentralization of the participating network nodes and the scalability of the transaction.
5. There is a problem
Since the characteristics of the data structure bring decentralization and scalability at the same time, it is speculated that the security is a hidden danger according to the theory of impossible triangles. But because DAG is a relatively innovative and special structure, can it be more perfect to achieve security? This is not the case from the actual results.
5.1. Double flower problem
The characteristics of DAG asynchronous communication make it possible for a double-flower attack. For example, an attacker adds two conflicting transactions (double spending) at two different locations on the network, and the transactions are continuously forward-checked in the network until they appear on the verification path of the same transaction, and the network discovers the conflict. At this time, the common ancestor nodes that the two transactions are gathered together can determine which transaction is a double-flower attack.
If the trading path is too short, there will be a problem like "Blowball": when most transactions are "lazy" in extreme cases, only the early trading, the trading network will form a minority. Early transactions are the core central topology. This is not a good thing for DAGs that rely on ever-increasing transactions to increase network reliability.
Therefore, at present, for the double flower problem, it is necessary to comprehensively consider the actual situation for design. Different DAG networks have their own solutions.
5.2. Shadow chain problem
Due to the potential problem of double flowers, when an attacker can build a sufficient number of transactions, it is possible to fork a fraudulent branch (shadow chain) from the real network data, which contains a double flower transaction, and then this The branch is merged into the DAG network, and in this case it is possible for this branch to replace the original transaction data.
6. Introduction to the current improvement plan
At present, the project mainly guarantees safety by sacrificing the native characteristics of some DAGs.
The IOTA project uses the Markov chain Monte Carlo (MCMC) approach to solve this problem. The IOTA introduces the concept of Cumulative Weight for transactions to record the number of times the transaction has been cited in order to indicate the importance of its transaction. The MCMC algorithm selects the existing transactions in the current network as a reference for the newly added transactions by weighting the random weights of the accumulated weights. That is, the more referenced the transaction path, the easier it is to be selected by the algorithm. The walk strategy has also been optimized in version 1.5.0 to control the "width" of the transaction topology to a reasonable range, making the network more secure.
However, at the beginning of the platform startup, due to the limited number of participating nodes and transactions, it is difficult to prevent a malicious organization from sending a large number of malicious transactions through a large number of nodes to cause the entire network to be attacked by the shadow chain. Therefore, an authoritative arbitration institution is needed to determine the validity of the transaction. In IOTA, this node is a Coordinator, which periodically snapshots the current transaction data network (Tangle); the transactions contained in the snapshot are confirmed as valid transactions. But Coordinator doesn't always exist. As the entire network runs and grows, IOTA will cancel the Coordinator at some point in the future.
The Byteball improvement program features its design for the witness and the main chain. Because the structure of DAG brings a lot of transactions with partial order, and to avoid double flowers, it is necessary to establish a full order relationship for these transactions to form a transaction backbone. An earlier transaction on the main chain is considered a valid transaction.Witnesses, who are held by well-known users or institutions, form a main chain by constantly sending transactions to confirm other user transactions.
The above scheme may also bring different changes to the platform based on the DAG structure. Taking IOTA as an example, because of the introduction of Coordinator, the decentralization characteristics are reduced to some extent.
7. Actual operation
7.1. Positive effects
In addition to solving security problems, the above solutions can also solve the smart contract problem to some extent.
Due to the two potential problems caused by the native features of DAG: (1) The transaction duration is uncontrollable. The current mechanism for requesting retransmission requires some complicated timeout mechanism design on the client side, hoping for a simple one-time confirmation mechanism. (2) There is no global sorting mechanism, which results in limited types of operations supported by the system. Therefore, on the distributed ledger platform based on DAG technology, it is difficult to implement Turing's complete intelligent contract system.
In order to ensure that the smart contract can run, an organization is needed to do the above work. The current Coordinator or main chain can achieve similar results.
7.2. Negative effects
As one of the most intuitive indicators, DAG's TPS should theoretically be unlimited. If the maximum TPS of the IOTA platform is compared to the capacity of a factory, then the daily operation of TPS is the daily production of the plant.
For the largest TPS, the April 2017 IOTA stress test showed that the network had transaction processing capabilities of 112 CTPS and 895 TPS. This is the result of a small test network consisting of 250 nodes.
For the daily operation of TPS, from the data that is currently publicly available, the average TPS of the main network in the near future is about 8.2, and the CTPS (the number of confirmed transactions per second) is about 2.7.
The average average TPS of the test network is about 4, and the CTPS is about 3.
Data source discord bot: generic-iota-bot#5760
Is this related to the existence of Coordinator? Actual testing is needed to further demonstrate.
8. Measured analysis
The operational statistics of the open test network are related to many factors.For further analysis, we continue to use the IOTA platform as an example to build a private test environment for technical measurement analysis.
8.1. Test Architecture
The relationship between the components we built this test is shown below.
among them:
8.2. Testing the hardware environment
The server uses Amazon AWS EC2 C5.4xlarge: 16 core 3GHz, Intel Xeon Platinum 8124M CPU, 32GB memory, 10Gbps LAN network between servers, communication delay (ping) is less than 1ms, operating system is Ubuntu 16.04.
8.3. Test scenarios and results analysis

8.3.1. Default PoW Difficulty Value

Although there is no concept such as “miners”, the IOTA node still needs to prove the workload before sending the transaction to avoid sending a large number of transactions to flood the network. The Minimum Weight Magnitude is similar to Bitcoin. The result of PoW should be the number of digits of "9", 9 of which is "000" in the ternary used by IOTA. The IOTA difficulty value can be set before the node is started.
Currently for the production network, the difficulty value of the IOTA is set to 14; the test network is set to 9. Therefore, we first use the test network's default difficulty value of 9 to test, get the following test results.
Since each IOTA's bundle contains multiple transfers, the actual processed TPS will be higher than the send rate. But by executing the script that parses zmq, it can be observed that the current TPS is very low. Another phenomenon is that the number of requests that can be sent successfully per second is also low.
After analysis, the reason is that the test uses VPS, so in PoW, the CPU is mainly used for calculation, so the transaction speed is mainly affected by the transmission speed.

8.3.2. Decrease the PoW difficulty value

Re-test the difficulty value to 1 and get the following results.
As can be seen from the results, TPS will increase after the difficulty is reduced. Therefore, the current TPS of the IOTA project does not reach the bottleneck where the Coordinator is located, but mainly because of the hardware and network of the client itself that sends the transaction. The IOTA community is currently working on the implementation of FPGA-based Curl algorithm and CPU instruction set optimization. Our test results also confirm that we can continue to explore the performance potential of the DAG platform in this way.

8.3.3. Reduce the number of test network nodes

Due to the characteristics of DAG, the actual TPS of the platform and the number of network nodes may also be related. Therefore, when the difficulty value is kept at 1, the number of network nodes is reduced to 10 and the test is repeated to obtain the following results.
As can be seen from the results, as the number of nodes decreases, the actual processing of TPS also decreases, and is lower than the transmission rate. This shows that in a DAG environment, maintaining a sufficient size node will facilitate the processing of the transaction.
9. Reference materials
submitted by i0tal0ver to Iota [link] [comments]

AMD's Growing CPU Advantage Over Intel

AMD's Growing CPU Advantage Over Intel Mar. 1.18 | About: Advanced Micro (AMD)
Raymond Caron, Ph.D. Tech, solar, natural resources, energy (315 followers) Summary AMD's past and economic hazards. AMD's Current market conditions. AMD Zen CPU advantage over Intel. AMD is primarily a CPU fabrication company with much experience and a great history in that respect. They hold patents for 64-bit processing, as well as ARM based processing patents, and GPU architecture patents. AMD built a name for itself in the mid-to-late 90’s when they introduced the K-series CPU’s to good reviews followed by the Athlon series in ‘99. AMD was profitable, they bought the companies NexGen, Alchemy Semiconductor, and ATI. Past Economic Hazards If AMD has such a great history, then what happened? Before I go over the technical advantage that AMD has over Intel, it’s worth looking to see how AMD failed in the past, and to see if those hazards still present a risk to AMD. As for investment purposes we’re more interested in AMD’s turning a profit. AMD suffered from intermittent CPU fabrication problems, and was also the victim of sustained anti-competitive behaviour from Intel who interfered with AMD’s attempts to sell its CPU’s to the market through Sony, Hitachi, Toshiba, Fujitsu, NEC, Dell, Gateway, HP, Acer, and Lenovo. Intel was investigated and/or fined by multiple countries including Japan, Korea, USA, and EU. These hazard needs to be examined to see if history will repeat itself. There have been some rather large changes in the market since then.
1) The EU has shown they are not averse to leveling large fines, and Intel is still fighting the guilty verdict from the last EU fine levied against them; they’ve already lost one appeal. It’s conceivable to expect that the EU, and other countries, would prosecute Intel again. This is compounded by the recent security problems with Intel CPU’s and the fact that Intel sold these CPU’s under false advertising as secure when Intel knew they were not. Here are some of the largest fines dished out by the EU
2) The Internet has evolved from Web 1.0 to 2.0. Consumers are increasing their online presence each year. This reduces the clout that Intel can wield over the market as AMD can more easily sell to consumers through smaller Internet based companies.
3) Traditional distributors (HP, Dell, Lenovo, etc.) are struggling. All of these companies have had recent issues with declining revenue due to Internet competition, and ARM competition. These companies are struggling for sales and this reduces the clout that Intel has over them, as Intel is no longer able to ensure their future. It no longer pays to be in the club. These points are summarized in the graph below, from Statista, which shows “ODM Direct” sales and “other sales” increasing their market share from 2009 to Q3 2017. 4) AMD spun off Global Foundries as a separate company. AMD has a fabrication agreement with Global Foundries, but is also free to fabricate at another foundry such as TSMC, where AMD has recently announced they will be printing Vega at 7nm.
5) Global Foundries developed the capability to fabricate at 16nm, 14nm, and 12nm alongside Samsung, and IBM, and bought the process from IBM to fabricate at 7nm. These three companies have been cooperating to develop new fabrication nodes.
6) The computer market has grown much larger since the mid-90’s – 2006 when AMD last had a significant tangible advantage over Intel, as computer sales rose steadily until 2011 before starting a slow decline, see Statista graph below. The decline corresponds directly to the loss of competition in the marketplace between AMD and Intel, when AMD released the Bulldozer CPU in 2011. Tablets also became available starting in 2010 and contributed to the fall in computer sales which started falling in 2012. It’s important to note that computer shipments did not fall in 2017, they remained static, and AMD’s GPU market share rose in Q4 2017 at the expense of Nvidia and Intel.
7) In terms of fabrication, AMD has access to 7nm on Global Foundries as well as through TSMC. It’s unlikely that AMD will experience CPU fabrication problems in the future. This is something of a reversal of fortunes as Intel is now experiencing issues with its 10nm fabrication facilities which are behind schedule by more than 2 years, and maybe longer. It would be costly for Intel to use another foundry to print their CPU’s due to the overhead that their current foundries have on their bottom line. If Intel is unable to get the 10nm process working, they’re going to have difficulty competing with AMD. AMD: Current market conditions In 2011 AMD released its Bulldozer line of CPU’s to poor reviews and was relegated to selling on the discount market where sales margins are low. Since that time AMD’s profits have been largely determined by the performance of its GPU and Semi-Custom business. Analysts have become accustomed to looking at AMD’s revenue from a GPU perspective, which isn’t currently being seen in a positive light due to the relation between AMD GPU’s and cryptocurrency mining.
The market views cryptocurrency as further risk to AMD. When Bitcoin was introduced it was also mined with GPU’s. When the currency switched to ASIC circuits (a basic inexpensive and simple circuit) for increased profitability (ASIC’s are cheaper because they’re simple), the GPU’s purchased for mining were resold on the market and ended up competing with and hurting new AMD GPU sales. There is also perceived risk to AMD from Nvidia which has favorable reviews for its Pascal GPU offerings. While AMD has been selling GPU’s they haven’t increased GPU supply due to cryptocurrency demand, while Nvidia has. This resulted in a very high cost for AMD GPU’s relative to Nvidia’s. There are strategic reasons for AMD’s current position:
1) While the AMD GPU’s are profitable and greatly desired for cryptocurrency mining, AMD’s market access is through 3rd party resellers whom enjoy the revenue from marked-up GPU sales. AMD most likely makes lower margins on GPU sales relative to the Zen CPU sales due to higher fabrication costs associated with the fabrication of larger size dies and the corresponding lower yield. For reference I’ve included the size of AMD’s and Nvidia’s GPU’s as well as AMD’s Ryzen CPU and Intel’s Coffee lake 8th generation CPU. This suggests that if AMD had to pick and choose between products, they’d focus on Zen due higher yield and revenue from sales and an increase in margin.
2) If AMD maintained historical levels of GPU production in the face of cryptocurrency demand, while increasing production for Zen products, they would maximize potential income for highest margin products (EPYC), while reducing future vulnerability to second-hand GPU sales being resold on the market. 3) AMD was burned in the past from second hand GPU’s and want to avoid repeating that experience. AMD stated several times that the cryptocurrency boom was not factored into forward looking statements, meaning they haven’t produced more GPU’s to expect more GPU sales.
In contrast, Nvidia increased its production of GPU’s due to cryptocurrency demand, as AMD did in the past. Since their Pascal GPU has entered its 2nd year on the market and is capable of running video games for years to come (1080p and 4k gaming), Nvidia will be entering a position where they will be competing directly with older GPU’s used for mining, that are as capable as the cards Nvidia is currently selling. Second-hand GPU’s from mining are known to function very well, with only a need to replace the fan. This is because semiconductors work best in a steady state, as opposed to being turned on and off, so it will endure less wear when used 24/7.
The market is also pessimistic regarding AMD’s P/E ratio. The market is accustomed to evaluating stocks using the P/E ratio. This statistical test is not actually accurate in evaluating new companies, or companies going into or coming out of bankruptcy. It is more accurate in evaluating companies that have a consistent business operating trend over time.
“Similarly, a company with very low earnings now may command a very high P/E ratio even though it isn’t necessarily overvalued. The company may have just IPO’d and growth expectations are very high, or expectations remain high since the company dominates the technology in its space.” P/E Ratio: Problems With The P/E I regard the pessimism surrounding AMD stock due to GPU’s and past history as a positive trait, because the threat is minor. While AMD is experiencing competitive problems with its GPU’s in gaming AMD holds an advantage in Blockchain processing which stands to be a larger and more lucrative market. I also believe that AMD’s progress with Zen, particularly with EPYC and the recent Meltdown related security and performance issues with all Intel CPU offerings far outweigh any GPU turbulence. This turns the pessimism surrounding AMD regarding its GPU’s into a stock benefit. 1) A pessimistic group prevents the stock from becoming a bubble. -It provides a counter argument against hype relating to product launches that are not proven by earnings. Which is unfortunately a historical trend for AMD as they have had difficulty selling server CPU’s, and consumer CPU’s in the past due to market interference by Intel. 2) It creates predictable daily, weekly, monthly, quarterly fluctuations in the stock price that can be used, to generate income. 3) Due to recent product launches and market conditions (Zen architecture advantage, 12nm node launching, Meltdown performance flaw affecting all Intel CPU’s, Intel’s problems with 10nm) and the fact that AMD is once again selling a competitive product, AMD is making more money each quarter. Therefore the base price of AMD’s stock will rise with earnings, as we’re seeing. This is also a form of investment security, where perceived losses are returned over time, due to a stock that is in a long-term upward trajectory due to new products reaching a responsive market.
4) AMD remains a cheap stock. While it’s volatile it’s stuck in a long-term upward trend due to market conditions and new product launches. An investor can buy more stock (with a limited budget) to maximize earnings. This is advantage also means that the stock is more easily manipulated, as seen during the Q3 2017 ER.
5) The pessimism is unfounded. The cryptocurrency craze hasn’t died, it increased – fell – and recovered. The second hand market did not see an influx of mining GPU’s as mining remains profitable.
6) Blockchain is an emerging market, that will eclipse the gaming market in size due to the wide breath of applications across various industries. Vega is a highly desired product for Blockchain applications as AMD has retained a processing and performance advantage over Nvidia. There are more and rapidly growing applications for Blockchain every day, all (or most) of which will require GPU’s. For instance Microsoft, The Golem supercomputer, IBM, HP, Oracle, Red Hat, and others. Long-term upwards trend AMD is at the beginning of a long-term upward trend supported by a comprehensive and competitive product portfolio that is still being delivered to the market, AMD referred to this as product ramping. AMD’s most effective products with Zen is EPYC, and the Raven Ridge APU. EPYC entered the market in mid-December and was completely sold out by mid-January, but has since been restocked. Intel remains uncompetitive in that industry as their CPU offerings are retarded by a 40% performance flaw due to Meltdown patches. Server CPU sales command the highest margins for both Intel and AMD.
The AMD Raven Ridge APU was recently released to excellent reviews. The APU is significant due to high GPU prices driven buy cryptocurrency, and the fact that the APU is a CPU/GPU hybrid which has the performance to play games available today at 1080p. The APU also supports the Vulcan API, which can call upon multiple GPU’s to increase performance, so a system can be upgraded with an AMD or Nvidia GPU that supports Vulcan API at a later date for increased performance for those games or workloads that been programmed to support it. Or the APU can be replaced when the prices of GPU’s fall.
AMD also stands to benefit as Intel confirmed that their new 10 nm fabrication node is behind in technical capability relative to the Samsung, TSMC, and Global Foundries 7 nm fabrication process. This brings into questions Intel’s competitiveness in 2019 and beyond. Take-Away • AMD was uncompetitive with respect to CPU’s from 2011 to 2017 • When AMD was competitive, from 1996 to 2011 they did record profit and bought 3 companies including ATI. • AMD CPU business suffered from: • Market manipulation from Intel. • Intel fined by EU, Japan, Korea, and settled with the USA • Foundry productivity and upgrade complications • AMD has changed • Global Foundries spun off as an independent business • Has developed 14nm &12nm, and is implementing 7nm fabrication • Intel late on 10nm, is less competitive than 7nm node • AMD to fabricate products using multiple foundries (TSMC, Global Foundries) • The market has changed • More AMD products are available on the Internet and both the adoption of the Internet and the size of the Internet retail market has exploded, thanks to the success of smartphones and tablets. • Consumer habits have changed, more people shop online each year. Traditional retailers have lost market share. • Computer market is larger (on-average), but has been declining. While Computer shipments declined in Q2 and Q3 2017, AMD sold more CPU’s. • AMD was uncompetitive with respect to CPU’s from 2011 to 2017. • Analysts look to GPU and Semi-Custom sales for revenue. • Cryptocurrency boom intensified, no crash occurred. • AMD did not increase GPU production to meet cryptocurrency demand. • Blockchain represents a new growth potential for AMD GPU’s. • Pessimism acts as security against a stock bubble & corresponding bust. • Creates cyclical volatility in the stock that can be used to generate profit. • P/E ratio is misleading when used to evaluate AMD. • AMD has long-term growth potential. • 2017 AMD releases competitive product portfolio. • Since Zen was released in March 2017 AMD has beat ER expectations. • AMD returns to profitability in 2017. • AMD taking measureable market share from Intel in OEM CPU Desktop and in CPU market. • High margin server product EPYC released in December 2017 before worst ever CPU security bug found in Intel CPU’s that are hit with detrimental 40% performance patch. • Ryzen APU (Raven Ridge) announced in February 2018, to meet gaming GPU shortage created by high GPU demand for cryptocurrency mining. • Blockchain is a long-term growth opportunity for AMD. • Intel is behind the competition for the next CPU fabrication node. AMD’s growing CPU advantage over Intel About AMD’s Zen Zen is a technical breakthrough in CPU architecture because it’s a modular design and because it is a small CPU while providing similar or better performance than the Intel competition.
Since Zen was released in March 2017, we’ve seen AMD go from 18% CPU market share in the OEM consumer desktops to essentially 50% market share, this was also supported by comments from Lisa Su during the Q3 2017 ER call, by MindFactory.de, and by Amazon sales of CPU’s. We also saw AMD increase its market share of total desktop CPU’s. We also started seeing market share flux between AMD and Intel as new CPU’s are released. Zen is a technical breakthrough supported by a few general guidelines relating to electronics. This provides AMD with an across the board CPU market advantage over Intel for every CPU market addressed.
1) The larger the CPU the lower the yield. - Zen architecture that makes up Ryzen, Threadripper, and EPYC is smaller (44 mm2 compared to 151 mm2 for Coffee Lake). A larger CPU means fewer CPU’s made during fabrication per wafer. AMD will have roughly 3x the fabrication yield for each Zen printed compared to each Coffee Lake printed, therefore each CPU has a much lower cost of manufacturing.
2) The larger the CPU the harder it is to fabricate without errors. - The chance that a CPU will be perfectly fabricated falls exponentially with increasing surface area. Intel will have fewer high quality CPU’s printed compared to AMD. This means that AMD will make a higher margin on each CPU sold. AMD’s supply of perfect printed Ryzen’s (1800X) are so high that the company had to give them away at a reduced cost in order to meet supply demands for the cheaper Ryzen 5 1600X. If you bought a 1600X in August/September, you probably ended up with an 1800X.
3) Larger CPU’s are harder to fabricate without errors on smaller nodes. -The technical capability to fabricate CPU’s at smaller nodes becomes more difficult due to the higher precision that is required to fabricate at a smaller node, and due to the corresponding increase in errors. “A second reason for the slowdown is that it’s simply getting harder to design, inspect and test chips at advanced nodes. Physical effects such as heat, electrostatic discharge and electromagnetic interference are more pronounced at 7nm than at 28nm. It also takes more power to drive signals through skinny wires, and circuits are more sensitive to test and inspection, as well as to thermal migration across a chip. All of that needs to be accounted for and simulated using multi-physics simulation, emulation and prototyping.“ Is 7nm The Last Major Node? “Simply put, the first generation of 10nm requires small processors to ensure high yields. Intel seems to be putting the smaller die sizes (i.e. anything under 15W for a laptop) into the 10nm Cannon Lake bucket, while the larger 35W+ chips will be on 14++ Coffee Lake, a tried and tested sub-node for larger CPUs. While the desktop sits on 14++ for a bit longer, it gives time for Intel to further develop their 10nm fabrication abilities, leading to their 10+ process for larger chips by working their other large chip segments (FPGA, MIC) first.” There are plenty of steps where errors can be created within a fabricated CPU. This is most likely the culprit behind Intel’s inability to launch its 10nm fabrication process. They’re simply unable to print such a large CPU on such a small node with high enough yields to make the process competitive. Intel thought they were ahead of the competition with respect to printing large CPU’s on a small node, until AMD avoided the issue completely by designing a smaller modular CPU. Intel avoided any mention of its 10nm node during its Q4 2017 ER, which I interpret as bad news for Intel shareholders. If you have nothing good to say, then you don’t say anything. Intel having nothing to say about something that is fundamentally critical to its success as a company can’t be good. Intel is on track however to deliver hybrid CPU’s where some small components are printed on 10nm. It’s recently also come to light that Intel’s 10nm node is less competitive than the Global Foundries, Samsung, and TSMC 7nm nodes, which means that Intel is now firmly behind in CPU fabrication. 4) AMD Zen is a new architecture built from the ground up. Intel’s CPU’s are built on-top of older architecture developed with 30-yr old strategies, some of which we’ve recently discovered are flawed. This resulted in the Meltdown flaw, the Spectre flaws, and also includes the ME, and AMT bugs in Intel CPU’s. While AMD is still affected by Spectre, AMD has only ever acknowledged that they’re completely susceptible to Spectre 1, as AMD considers Spectre 2 to be difficult to exploit on an AMD Zen CPU. “It is much more difficult on all AMD CPUs, because BTB entries are not aliased - the attacker must know (and be able to execute arbitrary code at) the exact address of the targeted branch instruction.” Technical Analysis of Spectre & Meltdown * Amd Further reading Spectre and Meltdown: Linux creator Linus Torvalds criticises Intel's 'garbage' patches | ZDNet FYI: Processor bugs are everywhere - just ask Intel and AMD Meltdown and Spectre: Good news for AMD users, (more) bad news for Intel Cybersecurity agency: The only sure defense against huge chip flaw is a new chip Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign Take-Away • AMD Zen enjoys a CPU fabrication yield advantage over Intel • AMD Zen enjoys higher yield of high quality CPU’s • Intel’s CPU’s are affected with 40% performance drop due to Meltdown flaw that affect server CPU sales.
AMD stock drivers 1) EPYC • -A critically acclaimed CPU that is sold at a discount compared to Intel. • -Is not affected by 40% software slow-downs due to Meltdown. 2) Raven Ridge desktop APU • - Targets unfed GPU market which has been stifled due to cryptocurrency demand - Customers can upgrade to a new CPU or add a GPU at a later date without changing the motherboard. • - AM4 motherboard supported until 2020. 3) Vega GPU sales to Intel for 8th generation CPU’s with integrated graphics. • - AMD gains access to the complete desktop and mobile market through Intel.
4) Mobile Ryzen APU sales • -Providing gaming capability in a compact power envelope.
5) Ryzen and Threadripper sales • -Fabricated on 12nm in April. • -May eliminate Intel’s last remaining CPU advantage in IPC single core processing. • -AM4 motherboard supported until 2020. • -7nm Ryzen on track for early 2019. 6) Others: Vega, Polaris, Semi-custom, etc. • -I consider any positive developments here to be gravy. Conclusion While in the past Intel interfered with AMD's ability to bring it's products to market, the market has changed. The internet has grown significantly and is now a large market that dominates when in computer sales. It's questionable if Intel still has the influence to affect this new market, and doing so would most certainly result in fines and further bad press.
AMD's foundry problems were turned into an advantage over Intel.
AMD's more recent past was heavily influenced by the failure of the Bulldozer line of CPU's that dragged on AMD's bottom line from 2011 to 2017.
AMD's Zen line of CPU's is a breakthrough that exploits an alternative, superior strategy, in chip design which results in a smaller CPU. A smaller CPU enjoys compounded yield and quality advantages over Intel's CPU architecture. Intel's lead in CPU performance will at the very least be challenged and will more likely come to an end in 2018, until they release a redesigned CPU.
I previously targeted AMD to be worth $20 by the end of Q4 2017 ER. This was based on the speed that Intel was able to get products to market, in comparison AMD is much slower. I believe the stock should be there, but the GPU related story was prominent due to cryptocurrency craze. Financial analysts need more time to catch on to what’s happening with AMD, they need an ER that is driven by CPU sales. I believe that the Q1 2018 is the ER to do that. AMD had EPYC stock in stores when the Meltdown and Spectre flaws hit the news. These CPU’s were sold out by mid-January and are large margin sales.
There are many variables at play within the market, however barring any disruptions I’d expect that AMD will be worth $20 at some point in 2018 due these market drivers. If AMD sold enough EPYC CPU’s due to Intel’s ongoing CPU security problems, then it may occur following the ER in Q1 2018. However, if anything is customary with AMD, it’s that these things always take longer than expected.
submitted by kchia124 to AMD_Stock [link] [comments]

BITCOIN MINING DIFFICULTY EXPLAINED IN 10 MINUTES! Bitcoin Q&A: What is difficulty targeting? Mining Difficulty - Simply Explained - YouTube Blockchain/Bitcoin for beginners 9: Bitcoin difficulty, target, BITS - all you need to know BITCOIN DIFFICULTY ADJUSTMENT  Satoshi Nakamoto's Wallet  Market Analysis and Bitcoin News

Recently, the on-chain analyst Willy Woo has developed a graph that points out the best opportunity windows to buy bitcoin and how to have long-term return. Baptized as Bitcoin Difficulty Ribbon, the system uses an indicator called ribbon difficulty (ribbon interface difficulty). In this way, he was able to identify which were the best ... The Bitcoin difficulty chart provides the current Bitcoin difficulty (BTC diff) target as well as a historical data graph visualizing Bitcoin mining difficulty chart values with BTC difficulty adjustments (both increases and decreases) defaulted to today with timeline options of 1 day, 1 week, 1 month, 3 months, 6 months, 1 year, 3 years, and all time Bitcoin’s mining difficulty continues “staggering” increase. According to data from CoinMetrics, it’s not just Bitcoin’s hashrate that has gone parabolic. In a Jan. 15 tweet, the company shared a graph showing that Bitcoin’s mining difficulty has continued to rise, calling its increase “staggering.” As Bitcoin’s hash rate increases, mining difficulty also increases. That’s exactly what’s been happening with Bitcoin since the beginning of 2020. As estimated by CoinMetrics, a research firm specialising in the cryptocurrency market, Bitcoin’s mining difficulty has been steadily increasing at a rate of 8% over the last four days. Before we even begin to understand what bitcoin mining difficulty means, we need to know how mining works. We have covered this topic in detail before, so we will just give you a little overview before getting into the different nuances of difficulty. Following that, we will look at how mining difficulty is calculated and how it changes to suit the network’s needs.

[index] [39346] [45969] [1697] [47408] [42567] [49552] [42816] [34299] [48630] [46643]


01:18 Market Update 02:18 BTC Difficulty and Hash Rate Drop 05:01 Satoshi Nakomoto Won't Sell Bitcoin 07:28 eToro Market Analysis 10:59 Paxful in India 13:36 IOST NFT Collectibles and Mystery Box ... Mining Bitcoin or Ethereum is a hard task for your computer. But why? And what does the difficulty have to do with the security of blockchains? Learn all abo... Live Bitcoin Trading With Crypto Trading Robot DeriBot on Deribit DeriBot Alternative channel 932 watching Live now Crypto Mining Difficulty 101 - Everything You Need to Know - Duration: 18:40. To read more with regards to Bitcoin wallet card, litecoin wallet card, please visit website the following: http://www.cryptocoinwalletcards.com/ Tags: asic ... What is crypto mining difficulty, how is it adjusted, what is the point of a block time? Vosk explains how the difficulty for mining a block reward is adjusted when mining Bitcoin on sha-256 or ...