An Introduction to Ethereum and Smart Contracts: Bitcoin ...

Mockingbird X.0

Imagine if there was one desk that all stories could cross, so that at 4am a media plan could be decided upon and disseminated, where all news outlets coordinated to set the goalposts of debate and hyper-focused on specific issues to drive a narrative to control how you vote and how you spend money; where Internet shills were given marching orders in tandem with what was shown on television, printed in newspapers and spread throughout articles on the World Wide Web.
https://i.imgur.com/Elnci0M.png
In the past, we had Operation Mockingbird, where the program was supremely confident that it could control stories around the world, even issuing instructions to cover up any story about a possible “Yeti” sighting, should it turn out they were real.
https://i.imgur.com/121LXqy.png
If, in 1959, the government was confident in its ability to control a story about a Yeti, then what is their level of confidence in controlling stories, today?
https://i.imgur.com/jQFVYew.png
https://i.imgur.com/ZKMYGJj.png
In fact, we have a recent example of a situation similar to the Yeti. When Bill Clinton and Loretta Lynch met on the tarmac to spike the Hillary email investigation, the FBI was so confident the leak hadn’t come from them that their entire focus was finding the leaker, starting with searching within the local PD. We have documentation that demonstrates the confidence the upper levels of the FBI have when dealing with the media.
https://i.imgur.com/IbjDOkI.png
https://i.imgur.com/NH86ozU.png
The marriage between mainstream media and government is a literal one and this arrangement is perfectly legal.
https://i.imgur.com/OAd4vpf.png
But, this problem extends far beyond politics; the private sector, the scientific community, even advice forums are shilled heavily. People are paid to cause anxiety, recommend people break up and otherwise sow depression and nervousness. This is due to a correlating force that employs “systems psychodynamics”, focusing on “tension centered” strategies to create “organizational paradoxes” by targeting people’s basic assumptions about the world around them to create division and provide distraction.
https://i.imgur.com/6OEWYFN.png
https://i.imgur.com/iG4sdD4.png
https://i.imgur.com/e89Rx6B.png
https://i.imgur.com/uotm9Cg.png
https://i.imgur.com/74wt9tD.png
In this day and age, it is even easier to manage these concepts and push a controlled narrative from a central figure than it has ever been. Allen & Co is a “boutique investment firm” that managed the merger between Disney and Fox and operates as an overseeing force for nearly all media and Internet shill armies, while having its fingers in sports, social media, video games, health insurance, etc.
https://i.imgur.com/zlpBh3c.png
https://i.imgur.com/e5ZvFFJ.png
Former director of the CIA and John Brennan’s former superior, George Tenet holds the reins of Allen & Co. The cast of characters involves a lot of the usual suspects.
https://i.imgur.com/3OlrX7G.png
In 1973, Allen & Company bought a stake in Columbia Pictures. When the business was sold in 1982 to Coca-Cola, it netted a significant profit. Since then, Herbert Allen, Jr. has had a place on Coca-Cola's board of directors.
Since its founding in 1982, the Allen & Company Sun Valley Conference has regularly drawn high-profile attendees such as Bill Gates, Warren Buffett, Rupert Murdoch, Barry Diller, Michael Eisner, Oprah Winfrey, Robert Johnson, Andy Grove, Richard Parsons, and Donald Keough.
Allen & Co. was one of ten underwriters for the Google initial public offering in 2004. In 2007, Allen was sole advisor to Activision in its $18 billion merger with Vivendi Games. In 2011, the New York Mets hired Allen & Co. to sell a minority stake of the team. That deal later fell apart. In November 2013, Allen & Co. was one of seven underwriters on the initial public offering of Twitter. Allen & Co. was the adviser of Facebook in its $19 billion acquisition of WhatsApp in February 2014.
In 2015, Allen & Co. was the advisor to Time Warner in its $80 billion 2015 merger with Charter Communications, AOL in its acquisition by Verizon, Centene Corporation in its $6.8 billion acquisition of Health Net, and eBay in its separation from PayPal.
In 2016, Allen & Co was the lead advisor to Time Warner in its $108 billion acquisition by AT&T, LinkedIn for its merger talks with Microsoft, Walmart in its $3.3 billion purchase of Jet.com, and Verizon in its $4.8 billion acquisition of Yahoo!. In 2017, Allen & Co. was the advisor to Chewy.com in PetSmart’s $3.35 billion purchase of the online retailer.
Allen & Co throws the Sun Valley Conference every year, where you get a glimpse of who shows up. Harvey Weinstein, though a past visitor, was not invited last year.
https://en.wikipedia.org/wiki/Allen_%26_Company_Sun_Valley_Conference
Previous conference guests have included Bill and Melinda Gates, Warren and Susan Buffett, Tony Blair, Google founders Larry Page and Sergey Brin, Allen alumnus and former Philippine Senator Mar Roxas, Google Chairman Eric Schmidt, Quicken Loans Founder & Chairman Dan Gilbert, Yahoo! co-founder Jerry Yang, financier George Soros, Facebook founder Mark Zuckerberg, Media Mogul Rupert Murdoch, eBay CEO Meg Whitman, BET founder Robert Johnson, Time Warner Chairman Richard Parsons, Nike founder and chairman Phil Knight, Dell founder and CEO Michael Dell, NBA player LeBron James, Professor and Entrepreneur Sebastian Thrun, Governor Chris Christie, entertainer Dan Chandler, Katharine Graham of The Washington Post, Diane Sawyer, InterActiveCorp Chairman Barry Diller, Linkedin co-founder Reid Hoffman, entrepreneur Wences Casares, EXOR and FCA Chairman John Elkann, Sandro Salsano from Salsano Group, and Washington Post CEO Donald E. Graham, Ivanka Trump and Jared Kushner, and Oprah Winfrey.
https://i.imgur.com/VZ0OtFa.png
George Tenet, with the reins of Allen & Co in his hands, is able to single-handedly steer the entire Mockingbird apparatus, from cable television to video games to Internet shills, from a singular location, determining the spectrum of allowable debate. Not only are they able to target people’s conscious psychology, they can target people’s endocrine systems with food and pornography, leaving people unaware, on a conscious level, of how their moods and behavior are being manipulated.
https://i.imgur.com/mA3MzTB.png
"The problem with George Tenet is that he doesn't seem to care to get his facts straight. He is not meticulous. He is willing to make up stories that suit his purposes and to suppress information that does not."
"Sadly but fittingly, 'At the Center of the Storm' is likely to remind us that sometimes what lies at the center of a storm is a deafening silence."
https://i.imgur.com/YHMJnnP.png
Tenet joined President-elect Bill Clinton's national security transition team in November 1992. Clinton appointed Tenet Senior Director for Intelligence Programs at the National Security Council, where he served from 1993 to 1995. Tenet was appointed Deputy Director of Central Intelligence in July 1995. Tenet held the position of DCI from July 1997 to July 2004. Citing "personal reasons," Tenet submitted his resignation to President Bush on June 3, 2004. Tenet said his resignation "was a personal decision and had only one basis—in fact, the well-being of my wonderful family—nothing more and nothing less." In February 2008, he became a managing director at investment bank Allen & Company.
https://i.imgur.com/JnGHqOS.png
We have the documentation that demonstrates what these people could possibly be doing with all of these tools of manipulation at their fingertips.
The term for it is “covert political action”, for which all media put before your eyes serves as a veneer… a reality-TV-show facade over a darker modus operandi.
https://i.imgur.com/vZC4D29.png
https://www.cia.gov/library/center-for-the-study-of-intelligence/kent-csi/vol36no3/html/v36i3a05p_0001.htm
It is now clear that we are facing an implacable enemy whose avowed objective is world domination by whatever means and at whatever costs. There are no rules in such a game. Hitherto acceptable norms of human conduct do not apply. If the US is to survive, longstanding American concepts of "fair play" must be reconsidered. We must develop effective espionage and counterespionage services and must learn to subvert, sabotage and destroy our enemies by more clever, more sophisticated means than those used against us. It may become necessary that the American people be made acquainted with, understand and support this fundamentally repugnant philosophy.
http://www.nbcnews.com/id/3340677/t/cia-operatives-shadowy-war-force/
Intelligence historian Jeffrey T. Richelson says the S.A. has covered a variety of missions. The group, which recently was reorganized, has had about 200 officers, divided among several groups: the Special Operations Group; the Foreign Training Group, which trains foreign police and intelligence officers; the Propaganda and Political Action Group, which handles disinformation; the Computer Operations Group, which handles information warfare; and the Proprietary Management Staff, which manages whatever companies the CIA sets up as covers for the S.A.
Scientology as a CIA Political Action Group – “It is a continuing arrangement…”: https://mikemcclaughry.wordpress.com/2015/08/25/scientology-as-a-cia-political-action-group-it-is-a-continuing-arrangement/
…Those operations we inaugurated in the years 1955-7 are still secret, but, for present purposes, I can say all that’s worth saying about them in a few sentences – after, that is, I offer these few words of wisdom. The ‘perfect’ political action operation is, by definition, uneventful. Nothing ‘happens’ in it. It is a continuing arrangement, neither a process nor a series of actions proceeding at a starting point and ending with a conclusion.
CIA FBI NSA Personnel Active in Scientology: https://i.imgur.com/acu2Eti.png
Consider the number of forces that can be contained within a single “political action group” in the form of a “boutique investment firm,” where all sides of political arguments are predetermined by a selected group of actors who have been planted, compromised or leveraged in some way in order to control the way they spin their message.
https://i.imgur.com/tU4MD4S.png
The evidence of this coordinated effort is overwhelming and the “consensus” that you see on TV, in sports, in Hollywood, in the news and on the Internet is fabricated.
Under the guise of a fake account a posting is made which looks legitimate and is towards the truth - but the critical point is that it has a VERY WEAK PREMISE without substantive proof to back the posting. Once this is done then under alternative fake accounts a very strong position in your favour is slowly introduced over the life of the posting. It is IMPERATIVE that both sides are initially presented, so the uninformed reader cannot determine which side is the truth. As postings and replies are made the stronger 'evidence' or disinformation in your favour is slowly 'seeded in.'
Thus the uninformed reader will most likely develop the same position as you, and if their position is against you their opposition to your posting will be most likely dropped. However in some cases where the forum members are highly educated and can counter your disinformation with real facts and linked postings, you can then 'abort' the consensus cracking by initiating a 'forum slide.'
When you find yourself feeling like common sense and common courtesy aren’t as common as they ought to be, it is because there is a massive psychological operation, controlled from the top down, to ensure that as many people as possible are caught in a “tension based” mental loop inflicted on them by people acting with purpose to achieve goals that are not in the interest of the general population; it is a method of operating in a secret and corrupt manner without consequences.
Notice that Jeffrey Katzenberg, formerly of Disney and intertwined with Allen & Co, funds the Young Turks. He is the perfect example of the relationship between media and politics.
Katzenberg has also been involved in politics. With his active support of Hillary Clinton and Barack Obama, he was called "one of Hollywood's premier political kingmakers and one of the Democratic Party's top national fundraisers."
With cash from Jeffrey Katzenberg, The Young Turks looks to grow paid subscribers:
https://digiday.com/media/with-cash-from-katzenberg-the-young-turks-look-to-grow-paid-subscribers/
Last week, former DreamWorks Animation CEO Jeffrey Katzenberg’s new mobile entertainment company WndrCo was part of a $20 million funding round in TYT Network, which oversees 30 news and commentary shows covering politics, pop culture, sports and more. This includes the flagship “The Young Turks” program that streams live on YouTube every day. Other investors in the round included venture capital firms Greycroft Partners, E.ventures and 3L Capital, which led the round. This brings total funding for Young Turks to $24 million.
How Hollywood's Political Donors Are Changing Strategies for the Trump Era:
https://www.hollywoodreporter.com/features/hollywood-political-donors-are-changing-strategy-post-trump-1150545
Hollywood activism long has been depicted as a club controlled by a handful of powerful white men: Katzenberg, Spielberg, Lear, David Geffen, Haim Saban and Bob Iger are the names most often mentioned. But a new generation of power brokers is ascendant, including J.J. Abrams and his wife, Katie McGrath, cited for their personal donations and bundling skills; Shonda Rhimes, who held a get-out-the-vote rally at USC's Galen Center on Sept. 28 that drew 10,000 people; CAA's Darnell Strom, who has hosted events for Nevada congresswoman Jacky Rosen and Arizona congresswoman Kyrsten Sinema; and former Spotify executive Troy Carter, who held three fundraisers for Maryland gubernatorial candidate Ben Jealous (Carter also was a fundraiser for President Obama).
Soros Group Buys Viacom's DreamWorks Film Library:
https://www.forbes.com/2006/03/17/soros-viacom-dreamworks-cx_gl_0317autofacescan11.html#541a895f1f22
Viacom, after splitting off from Les Moonves' CBS, still holds Paramount Pictures, and that movie studio in December agreed to acquire DreamWorks SKG, the creative shop founded by the Hollywood triumvirate of Steven Spielberg, David Geffen and Jeffrey Katzenberg (a former exec at The Walt Disney Co.). DreamWorks Animation had been spun off into a separate company.
Now it's time for Freston to make back some money--and who better to do a little business with than George Soros? The billionaire financier leads a consortium of Soros Strategic Partners LP and Dune Entertainment II LLC, which together are buying the DreamWorks library--a collection of 59 flicks, including Saving Private Ryan, Gladiator, and American Beauty.
The money you spend on media and junk food and in taxes goes to these groups, who then decide how best to market to you so that they decide how you vote, by creating a fake consensus to trick you into thinking that you want something other than what is best for you; this will inevitably result in more money being funneled to the top, creating further separation between the super rich and the average person. The goal will be to assert creeping authoritarianism by generating outrage against policies and issues they hate. Part of manipulating your basic assumptions is also to use schadenfreude (think canned laughter on TV) against characters who support the causes that might actually do you the most good (which reaffirms and strengthens your confirmation biases along predetermined political lines).
https://i.imgur.com/PW1cRtj.png
We have a population being taught to hate socialism and love capitalism when the truth is no country is practicing either. These terms are merely disguises for political oligarchies where the collection of wealth is less about getting themselves rich and more about keeping everyone else poor.
What can you guess about the world around you if it turned out that every consensus that was forced on you was fake?
How much money would it take to make it look like 51% of the Internet believed in completely idiotic ideas? Combine shill operations with automation and AI’s, and the cost becomes a good investment relative to the return when measured in political power.
Even the people who are well intentioned and very vocal do not have to consciously be aware that they are working for a political action group. A covert political group will always prefer an unwitting tool to help push their agenda, so that they can remain in the shadows.
FDA Admonishes Drug Maker Over Kim Kardashian Instagram Endorsement https://www.forbes.com/sites/davidkroll/2015/08/11/fda-spanks-drug-maker-over-kim-kardashian-instagram-endorsement/#25174a29587b
The OSS files offer details about other agents than famous chef, Julia Child; including Supreme Court Justice Arthur Goldberg, major league catcher Moe Berg, historian Arthur Schlesinger Jr., and actor Sterling Hayden. http://www.nbcnews.com/id/26186498/ns/us_news-security/t/julia-child-cooked-double-life-spy/
USA Today: Businesses and organizations may refer to it as a tool for competitive advantage and marketing; but make no mistake http://archive.is/37tK3
Shareblue accounts caught in /politics posting links to Shareblue without disclosing their affiliation http://archive.is/7HAkr
Psy Group developed elaborate information operations for commercial clients and political candidates around the world http://archive.is/BBblQ
Top mod of /Mechanical_Gifs tries to sell subreddit on ebay for 999.00 dollars. http://archive.is/kU1Ly
Shill posts picture of a dog in a hammock with the brand clearly visible without indicating that it's an ad in the title of the post http://archive.is/Mfdk9
Arstechnica: GCHQs menu of tools spreads disinformation across Internet- “Effects capabilities” allow analysts to twist truth subtly or spam relentlessly. http://arstechnica.com/security/2014/07/ghcqs-chinese-menu-of-tools-spread-disinformation-across-internet/
Samsung Electronics Fined for Fake Online Comments http://bits.blogs.nytimes.com/2013/10/24/samsung-electronics-fined-for-fake-online-comments/?_r=0
Discover Magazine: Researchers Uncover Twitter Bot Army That’s 350 http://blogs.discovermagazine.com/d-brief/2017/01/20/twitter-bot-army/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A%20DiscoverTechnology%20%28Discover%20Technology%29#.WIMl-oiLTnA
Times of Israel - The internet: Israel’s new PR battlefield http://blogs.timesofisrael.com/the-rise-of-digital-diplomacy-could-be-changing-israels-media-image/
Time: Social Media Manipulation? When “Indie” Bloggers and Businesses Get Cozy http://business.time.com/2013/04/22/social-media-manipulation-when-indie-bloggers-and-businesses-get-cozy/
Content-Driven Detection of Campaigns in Social Media [PDF] http://faculty.cs.tamu.edu/caverlee/pubs/lee11cikm.pdf
the law preventing them from using this in America was repealed http://foreignpolicy.com/2013/07/14/u-s-repeals-propaganda-ban-spreads-government-made-news-to-americans/
Redditor who works for a potato mailing company admits to being a shill. He shows off his 27 thousand dollars he made in /pics
http://i.imgur.com/CcTHwdS.png
Screenshot of post since it was removed. http://i.imgur.com/k9g0WF8.png
Just thought I'd contribute to this thread http://imgur.com/OpSos4u
CNN: A PR firm has revealed that it is behind two blogs that previously appeared to be created by independent supporters of Wal-Mart. The blogs Working Families for Wal-mart and subsidiary site Paid Critics are written by 3 employees of PR firm Edelman http://money.cnn.com/2006/10/20/news/companies/walmart_blogs/index.htm
Vice: Your Government Wants to Militarize Social Media to Influence Your Beliefs http://motherboard.vice.com/read/your-government-wants-to-militarize-social-media-to-influence-your-beliefs
BBC News: China's Internet spin doctors http://news.bbc.co.uk/2/hi/7783640.stm
BBC News: US plans to 'fight the net' revealed http://news.bbc.co.uk/2/hi/americas/4655196.stm
Wall Street Journal: Turkey's Government Forms 6 http://online.wsj.com/news/articles/SB10001424127887323527004579079151479634742?mg=reno64-wsj&url=http%3A%2F%2Fonline.wsj.com%2Farticle%2FSB10001424127887323527004579079151479634742.html
Fake product reviews may be pervasive http://phys.org/news/2013-07-fake-product-pervasive.html#nRlv
USA Today: The co-owner of a major Pentagon propaganda contractor publicly admitted that he was behind a series of websites used in an attempt to discredit two USA TODAY journalists who had reported on the contractor. http://usatoday30.usatoday.com/news/military/story/2012-05-24/Leonie-usa-today-propaganda-pentagon/55190450/1
ADWEEK: Marketing on Reddit Is Scary http://www.adweek.com/news/technology/marketing-reddit-scary-these-success-stories-show-big-potential-168278
BBC- How online chatbots are already tricking you- Intelligent machines that can pass for humans have long been dreamed of http://www.bbc.com/future/story/20140609-how-online-bots-are-tricking-you
BBC news: Amazon targets 1 http://www.bbc.com/news/technology-34565631
BBC: More than four times as many tweets were made by automated accounts in favour of Donald Trump around the first US presidential debate as by those backing Hillary Clinton http://www.bbc.com/news/technology-37684418
Fake five-star reviews being bought and sold online - Fake online reviews are being openly traded on the internet
http://www.bbc.com/news/technology-43907695
http://www.bbc.com/news/world-asia-20982985
Bloomberg: How to Hack an Election [and influence voters with fake social media accounts] http://www.bloomberg.com/features/2016-how-to-hack-an-election/
"Internet Reputation Management http://www.bloomberg.com/news/articles/2008-04-30/do-reputation-management-services-work-businessweek-business-news-stock-market-and-financial-advice
Buzzfeed: Documents Show How Russia’s Troll Army Hit America http://www.buzzfeed.com/maxseddon/documents-show-how-russias-troll-army-hit-america#.ki8Mz97ly
The Rise of Social Bots http://www.cacm.acm.org/magazines/2016/7/204021-the-rise-of-social-bots/fulltext
CBC News- Canadian government monitors online forums http://www.cbc.ca/news/canada/bureaucrats-monitor-online-forums-1.906351
Chicago Tribune: Nutrition for sale: How Kellogg worked with 'independent experts' to tout cereal http://www.chicagotribune.com/business/ct-kellogg-independent-experts-cereal-20161121-story.html
DailyKos: HBGary: Automated social media management http://www.dailykos.com/story/2011/02/16/945768/-UPDATED-The-HB-Gary-Email-That-Should-Concern-Us-All
Meme Warfare Center http://www.dtic.mil/dtic/tfulltext/u2/a507172.pdf
Shilling on Reddit is openly admitted to in this Forbes article http://www.forbes.com/sites/julesschroede2016/03/10/the-magic-formula-behind-going-viral-on-reddit/#1d2485b05271
Forbes: From Tinder Bots To 'Cuban Twitter' http://www.forbes.com/sites/kashmirhill/2014/04/17/from-tinder-bots-to-covert-social-networks-welcome-to-cognitive-hacking/#4b78e2d92a7d
Hivemind http://www.hivemind.cc/rank/shills
Huffington Post- Exposing Cyber Shills and Social Media's Underworld http://www.huffingtonpost.com/sam-fiorella/cyber-shills_b_2803801.html
The Independent: Massive British PR firm caught on video: "We've got all sorts of dark arts...The ambition is to drown that negative content and make sure that you have positive content online." They discuss techniques for managing reputations online and creating/maintaining 3rd-party blogs that seem independent. http://www.independent.co.uk/news/uk/politics/caught-on-camera-top-lobbyists-boasting-how-they-influence-the-pm-6272760.html
New York Times: Lifestyle Lift http://www.nytimes.com/2009/07/15/technology/internet/15lift.html?_r=1&emc=eta1
New York Times: Give Yourself 5 Stars? Online http://www.nytimes.com/2013/09/23/technology/give-yourself-4-stars-online-it-might-cost-you.html?src=me&ref=general
NY Times- From a nondescript office building in St. Petersburg http://www.nytimes.com/2015/06/07/magazine/the-agency.html?_r=1
NY Times: Effort to Expose Russia’s ‘Troll Army’ Draws Vicious Retaliation http://www.nytimes.com/2016/05/31/world/europe/russia-finland-nato-trolls.html?_r=1
PBS Frontline Documentary - Generation Like http://www.pbs.org/wgbh/frontline/film/generation-like/
Gamers promote gaming-gambling site on youtube by pretending to hit jackpot without disclosing that they own the site. They tried to retroactively write a disclosure covering their tracks http://www.pcgamer.com/csgo-lotto-investigation-uncovers-colossal-conflict-of-interest/
Raw Story: CENTCOM engages bloggers http://www.rawstory.com/news/2006/Raw_obtains_CENTCOM_email_to_bloggers_1016.html
Raw Story: Air Force ordered software to manage army of fake virtual people http://www.rawstory.com/rs/2011/02/18/revealed-air-force-ordered-software-to-manage-army-of-fake-virtual-people/
Redective http://www.redective.com/?r=e&a=search&s=subreddit&t=redective&q=shills
Salon: Why Reddit moderators are censoring Glenn Greenwald’s latest news story on shills http://www.salon.com/2014/02/28/why_reddit_moderators_are_censoring_glenn_greenwalds_latest_bombshell_partne
The Atlantic: Kim Kardashian was paid to post a selfie on Instagram and Twitter advertising a pharmaceutical product. Sent to 42 million followers on Instagram and 32 million on Twitter http://www.theatlantic.com/health/archive/2015/09/fda-drug-promotion-social-media/404563/
WAR.COM: THE INTERNET AND PSYCHOLOGICAL OPERATIONS http://www.theblackvault.com/documents/ADA389269.pdf
The Guardian: Internet Astroturfing http://www.theguardian.com/commentisfree/libertycentral/2010/dec/13/astroturf-libertarians-internet-democracy
The Guardian: Israel ups the stakes in the propaganda war http://www.theguardian.com/media/2006/nov/20/mondaymediasection.israel
Operation Earnest Voice http://www.theguardian.com/technology/2011/ma17/us-spy-operation-social-networks
The Guardian: British army creates team of Facebook warriors http://www.theguardian.com/uk-news/2015/jan/31/british-army-facebook-warriors-77th-brigade
The Guardian: US military studied how to influence Twitter [and Reddit] users in Darpa-funded research [2014] http://www.theguardian.com/world/2014/jul/08/darpa-social-networks-research-twitter-influence-studies
The Guardian: Chinese officials flood the Chinese internet with positive social media posts to distract their population http://www.theguardian.com/world/2016/may/20/chinese-officials-create-488m-social-media-posts-a-year-study-finds
Times of Israel: Israeli government paying bilingual students to spread propaganda online primarily to international communities without having to identify themselves as working for the government. "The [student] union will operate computer rooms for the project...it was decided to establish a permanent structure of activity on the Internet through the students at academic institutions in the country." http://www.timesofisrael.com/pmo-stealthily-recruiting-students-for-online-advocacy/
USA Today: Lord & Taylor settles FTC charges over paid Instagram posts http://www.usatoday.com/story/money/2016/03/15/lord--taylor-settles-ftc-charges-over-paid-instagram-posts/81801972/
Researcher's algorithm weeds out people using multiple online accounts to spread propaganda - Based on word choice http://www.utsa.edu/today/2016/10/astroturfing.html
http://www.webinknow.com/2008/12/the-us-air-force-armed-with-social-media.html
Wired: Powered by rapid advances in artificial intelligence http://www.wired.co.uk/magazine/archive/2015/06/wired-world-2015/robot-propaganda
Wired: Clinton Staff and Volunteers Busted for Astroturfing [in 2007] http://www.wired.com/2007/12/clinton-staff-a/
Wired: Pro-Government Twitter Bots Try to Hush Mexican Activists http://www.wired.com/2015/08/pro-government-twitter-bots-try-hush-mexican-activists/
Wired: Microsoft http://www.wired.com/2015/09/ftc-machinima-microsoft-youtube/
Wired: Military Report: Secretly ‘Recruit or Hire Bloggers’ http://www.wired.com/dangerroom/2008/03/report-recruit/
Wired: Air Force Releases ‘Counter-Blog’ Marching Orders http://www.wired.com/dangerroom/2009/01/usaf-blog-respo/
Reddit Secrets https://archive.fo/NAwBx
Reddit Secrets https://archive.fo/SCWN7
Boostupvotes.com https://archive.fo/WdbYQ
"Once we isolate key people https://archive.is/PoUMo
GCHQ has their own internet shilling program https://en.wikipedia.org/wiki/Joint_Threat_Research_Intelligence_Group
Russia https://en.wikipedia.org/wiki/State-sponsored_Internet_sockpuppetry
US also operates in conjunction with the UK to collect and share intelligence data https://en.wikipedia.org/wiki/UKUSA_Agreement
Glenn Greenwald: How Covert Agents Infiltrate the Internet to Manipulate https://firstlook.org/theintercept/2014/02/24/jtrig-manipulation/
Glenn Greenwald: Hacking Online Polls and Other Ways British Spies Seek to Control the Internet https://firstlook.org/theintercept/2014/07/14/manipulating-online-polls-ways-british-spies-seek-control-internet/
Here is a direct link to your image for the benefit of mobile users https://imgur.com/OpSos4u.jpg
Reddit for iPhone https://itunes.apple.com/us/app/reddit-the-official-app/id1064216828?mt=8
Why Satoshi Nakamoto Has Gone https://medium.com/@ducktatosatoshi-nakamoto-has-gone-4cef923d7acd
What I learned selling my Reddit accounts https://medium.com/@Rob79/what-i-learned-selling-my-reddit-accounts-c5e9f6348005#.u5zt0mti3
Artificial intelligence chatbots will overwhelm human speech online; the rise of MADCOMs https://medium.com/artificial-intelligence-policy-laws-and-ethics/artificial-intelligence-chatbots-will-overwhelm-human-speech-online-the-rise-of-madcoms-e007818f31a1
How Reddit Got Huge: Tons of Fake Accounts - According to Reddit cofounder Steve Huffman https://motherboard.vice.com/en_us/article/how-reddit-got-huge-tons-of-fake-accounts--2
Whistleblower and subsequent investigation: Paid trolls on /Bitcoin https://np.reddit.com/Bitcoin/comments/34m7yn/professional_bitcoin_trolls_exist/cqwjdlw
Confession of Hillary Shill from /SandersForPresident https://np.reddit.com/conspiracy/comments/3rncq9/confession_of_hillary_shill_from/
Why do I exist? https://np.reddit.com/DirectImageLinkerBot/wiki/index
Already a direct link? https://np.reddit.com/DirectImageLinkerBot/wiki/res_links
Here's the thread. https://np.reddit.com/HailCorporate/comments/3gl8zi/that_potato_mailing_company_is_at_it_again/
/netsec talks about gaming reddit via sockpuppets and how online discourse is (easily) manipulated. https://np.reddit.com/netsec/comments/38wl43/we_used_sock_puppets_in_rnetsec_last_year_and_are
Redditor comes clean about being paid to chat on Reddit. They work to promote a politician https://np.reddit.com/offmychest/comments/3gk56y/i_get_paid_to_chat_on_reddit/
Shill whistleblower https://np.reddit.com/politics/comments/rtr6b/a_very_interesting_insight_into_how_certain/
Russian bots were active on Reddit last year https://np.reddit.com/RussiaLago/comments/76cq4d/exclusive_we_can_now_definitively_state_that/?st=j8s7535j&sh=36805d5d
The Bush and Gore campaigns of 2000 used methods similar to the Chinese government for conducting “guided discussions” in chatrooms designed to influence citizens https://np.reddit.com/shills/comments/3xhoq8/til_the_advent_of_social_media_offers_new_routes/?st=j0o5xr9c&sh=3662f0dc
source paper. https://np.reddit.com/shills/comments/4d3l3s/government_agents_and_their_allies_might_ente
or Click Here. https://np.reddit.com/shills/comments/4kdq7n/astroturfing_information_megathread_revision_8/?st=iwlbcoon&sh=9e44591e Alleged paid shill leaks details of organization and actions.
https://np.reddit.com/shills/comments/4wl19alleged_paid_shill_leaks_details_of_organization/?st=irktcssh&sh=8713f4be
Shill Confessions and Additional Information https://np.reddit.com/shills/comments/5pzcnx/shill_confessions_and_additional_information/?st=izz0ga8r&sh=43621acd
Corporate and governmental manipulation of Wikipedia articles https://np.reddit.com/shills/comments/5sb7pi/new_york_times_corporate_editing_of_wikipedia/?st=iyteny9b&sh=b488263f
Ex -MMA fighter and ex-police officer exposes corrupt police practices https://np.reddit.com/shills/comments/6jn27s/ex_mma_fighter_and_expolice_officer_exposes/
User pushes InfoWars links on Reddit https://np.reddit.com/shills/comments/6uau99/chemicals_in_reddit_are_turning_memes_gay_take/?st=j6r0g2om&sh=96f3dbf4
Some websites use shill accounts to spam their competitor's articles https://np.reddit.com/TheoryOfReddit/comments/1ja4nf/lets_talk_about_those_playing_reddit_with/?st=iunay35w&sh=d841095d
User posts video using GoPro https://np.reddit.com/videos/comments/2ejpbb/yes_it_is_true_i_boiled_my_gopro_to_get_you_this/ck0btnb/?context=3&st=j0qt0xnf&sh=ef13ba81
Fracking shill whistleblower spills the beans on Fracking Internet PR https://np.reddit.com/worldnews/comments/31wo57/the_chevron_tapes_video_shows_oil_giant_allegedly/cq5uhse?context=3
https://i.imgur.com/Q3gjFg9.jpg
https://i.imgur.com/q2uFIV0.jpg
TOP SECRET SPECIAL HANDLING NOFORN
CENTRAL INTELLIGENCE AGENCY
Directorate of Operations
October 16, 1964
MEMORANDUM FOR THE DIRECTOR OF THE CIA
Subject: After action report of
Operation CUCKOO (TS)
INTRODUCTION

1) Operation CUCKOO was part of the overall operation CLEANSWEEP, aimed at eliminating domestic opposition to activities undertaken by the Central Intelligence Agency's special activities division, in main regard to operation GUILLOTINE.

2) Operation CUCKOO was approved by the Joint Chiefs of Staff, Department of Defense and the office of The President of the United States as a covert domestic action to be under taken within the limits of Washington D.C as outlined by Secret Executive Order 37.

3) Following the publishing of the Warren Commission report, former special agent Mary Pinchot Meyer (Operation MOCKINGBIRD, Operation SIREN), who was also married to Cord Meyer (Operation MOCKINGBIRD, Operation GUILLOTINE), threatened to disclose the details of several Special Activities Division operations, including but not limited to Operation SIREN and Operation GUILLOTINE.
4) It was deemed necessary by senior Directorate of Operations members to initiate Operation CUCKOO as an extension of Operation CLEANSWEEP on November 30th, after Mary Pinchot Meyer threatened to report her knowledge of Operation GUILLOTINE and the details of her work in Operation SIREN from her affair with the former President.

5) Special Activities Division was given the green light after briefing president Johnson on the situation. The situation report was forwarded to the Department of Defense and the Joint Chiefs of staff, who both approved of the parameters of the operation, as outlined under article C of secret executive order 37 (see attached copy of article).
​PLANNING STAGES
6) 8 members of the special activities division handpicked by operation lead William King Harvey began planning for the operation on October 3rd, with planned execution before October 16th.

7) The (?) of the operation was set as the neighborhood of Georgetown along the Potomac river, where the operators would observe, take notes on routines, and eventually carry out the operation.

8) After noting Meyer's routines, Edward "Eddy" Reid was picked as the operation point man who would intersect Meyer on her walk on October 12th, with lead William King Harvey providing long range support if necessary from across the Chesapeake and Ohio canal (see illustration A for detailed map).

9) Edward Reid was planned to be dressed in the manner of a homeless black man, due to his resemblance to a local trash collector (later found out to be Raymond Crump) who inhabits the AO and the path along which Reid was planned to intersect Meyer.
submitted by The_Web_Of_Slime to Intelligence [link] [comments]

Be careful with RaiBlocks. It's a coin with a lack of notion of confirmations/finality. Your coins are never really confirmed.

I'm sure I'll be accused of spreading FUD, so some brief notes about my bio:
  • I've been involved in cryptocurrency consensus and scalability research since 2011; I was the first to propose sidechains and sharding, back in 2011 when very few people were concerned about scaling
  • I co-authored two academic, peer-reviewed papers on consensus, one is called Proof-of-Activity, another called "Cryptocurrencies without proof-of-work" (Proof-of-consensus)
  • I identified weaknesses in the Peercoin consensus algorithm back when it was released in 2012, which resulted in several consensus algorithm changes; I also pointed out flaws in Mastercoin, which led to changes in how the development process is organized
  • so yeah, I "spread FUD" occasionally, but my FUD is well-justified
Now about RaiBlocks. I do not want to do a full review and identify actual exploitable weaknesses. I just want to point out some red flags which I discovered while reading the whitepaper. Whether these problems are actually exploitable is another question...
So let's start from the fact that there are two white papers. When you google "RaiBlocks white paper", you can find the old one, here.
It defines a concept of confirmations. Some quotes:
  • When a node receives a send block to an account it controls, it first runs the confirmation procedure followed by adding the block into its ledger.
  • ... voting nodes will sign the block with their voting key and publish a confirm message. A message is considered confirmed if there are no conflicting blocks and a 50% vote quorum has been reached. If there is a conflicting block the node will wait 4 voting periods, 1 minute total, and confirm the winning block.
This is a clear definition of confirmation. There might be some subtle issues in it, but in the normal case this algorithm will work. But it's, basically, a fantastically inefficient version of proof-of-stake, which requires orders of magnitude more bandwidth than necessary. Note that this paper doesn't describe delegation, so you have all nodes voting for each transaction, thus wasting millions of times more traffic than necessary.
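To make the quorum rule concrete, here is a minimal Python sketch (my own illustration, not RaiBlocks' actual code); the voter names and weights are invented for the example:

```python
def is_confirmed(block_hash, votes, total_weight):
    """votes: dict mapping voter -> (voted_block_hash, voter_weight).
    Confirmed once strictly more than half of the total voting weight
    has signed off on this exact block."""
    weight_for_block = sum(weight for voted_hash, weight in votes.values()
                           if voted_hash == block_hash)
    return weight_for_block > total_weight / 2

# Three voters with made-up weights; two of them vote for block "abc".
votes = {"v1": ("abc", 40), "v2": ("abc", 35), "v3": ("xyz", 25)}
print(is_confirmed("abc", votes, total_weight=100))  # True  (75 > 50)
print(is_confirmed("xyz", votes, total_weight=100))  # False (25 <= 50)
```

Without delegation, every voting node has to broadcast a signed vote like this for every single transaction, which is where the bandwidth complaint comes from.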
I think at some point Colin LeMahieu realised that he implemented a shitty version of PoS which doesn't scale, and tried to make it scale. You can find the new version of the paper on the Raiblocks.net web site. It's much more sciency looking. It seems that Colin was desperate to improve the protocol without changing the architecture. So you see some mental contortions. First:
Since agreements in RaiBlocks are reached quickly, on the order of milliseconds to seconds, we can present the user with two familiar categories of incoming transactions: settled and unsettled. Settled transactions are transactions where an account has generated receive blocks. Unsettled transactions have not yet been incorporated in to the receiver’s cumulative balance. This is a replacement for the more complex and unfamiliar confirmations metric in other cryptocurrencies.
So Colin tells us that we do not need a notion of "confirmed" and can use a notion of "settled" instead. So what's the difference?
Well, Colin is honest with us: settled doesn't mean confirmed. It only means that your node has acknowledged reception of coins, but that doesn't mean that coins are finally yours. There's no notion of finality in this system. Delegates can replace blocks with their votes at any time, so your money might disappear weeks after it was settled.
Without explicit voting on every transaction, you don't have a notion of confirmation or finality.
Another red flag:
... a fork must be the result of poor programming or malicious intent (double-spend) by the account’s owner. Upon detection, a representative will create a vote referencing the block ˆbi in it’s ledger and broadcast it to the network.
So conflicts, or forks, are resolved through voting. But how are they detected?
If a node can identify a conflict, it might be able to resolve it. But detection of discrepancies is one of the major topics of consensus.
E.g. suppose Alice's node received version 1 of a block, while Bob's node received version 2. If they do not communicate with each other, they won't be aware of the conflict.
So how are conflicts detected in the RaiBlocks? The paper doesn't define this, but it mentions that block messages are sent between nodes, so a node can detect conflict when it receives different versions of blocks from different peers.
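As a rough illustration of that detection mechanism (my own sketch, not the RaiBlocks codebase), a node could simply remember the first block it saw extending each account chain and flag anything different:

```python
# Map (account, previous_block_hash) -> first block hash seen for that position.
seen = {}

def observe_block(account, previous_hash, block_hash):
    key = (account, previous_hash)
    if key in seen and seen[key] != block_hash:
        return "conflict"   # two blocks claim the same predecessor: a fork
    seen[key] = block_hash
    return "ok"

print(observe_block("alice", "prev1", "blockA"))  # ok
print(observe_block("alice", "prev1", "blockB"))  # conflict
# The catch: this only works if both versions actually arrive. If one of the
# UDP packets is dropped, the node never sees the second version and the
# conflict goes unnoticed.
```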
So conflict detection is possible in this model, but is it reliable? There's no evidence for that.
In theory, if you can guarantee that every message is delivered, you can achieve reliable conflict detection. But in practice, networks are not reliable. And you do not want full connectivity anyway (each node talking with every other node is fantastically expensive). And on top of that, RaiBlocks uses the UDP network protocol, which is unreliable. There's no guarantee of message delivery. And if messages are lost, a conflict might go undetected, and thus Alice's node will think she received coins from Bob while the rest of the network will think otherwise.
This topic is not discussed in the paper.
RaiBlocks, not having a proper blockchain, also lacks a cheap way to compare the state of two nodes. In Bitcoin you only need to compare the latest hash: if the hash is the same, then the nodes are in perfect sync. But in RaiBlocks you have multiple "blockchains", one per account, so basically you have to compare the state of every account to check that you are in sync. This is incredibly wasteful.
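A toy comparison of the two synchronization models (hypothetical data structures, not either project's real wire protocol) shows the asymmetry:

```python
def in_sync_single_chain(my_tip_hash, peer_tip_hash):
    # One chain, one tip: a single comparison settles whether we are in sync.
    return my_tip_hash == peer_tip_hash

def in_sync_block_lattice(my_frontiers, peer_frontiers):
    # One chain per account: frontiers map account -> latest block hash,
    # so the comparison has to touch every account.
    return my_frontiers == peer_frontiers

my_frontiers = {"acct%d" % i: "hash%d" % i for i in range(100_000)}
peer_frontiers = dict(my_frontiers)
print(in_sync_single_chain("deadbeef", "deadbeef"))          # one comparison
print(in_sync_block_lattice(my_frontiers, peer_frontiers))   # 100,000 comparisons
```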
So, to summarize, I'd describe RaiBlocks as "UDP coin". It might work quite well if network conditions are good and messages are delivered. It can even tolerate some degree of packet loss. But there's no proof that it works in all conditions, in fact, the paper avoids the topic. There's no notion of confirmation. You never know if you received coins or not. There are probably many conditions in which the system would fail.
I'm not interested in finding an actual failure, it's not a good use of my time. So treat the above as the opinion of a guy who has significant knowledge about consensus algorithms, based on reading the RaiBlocks papers. Feel free to ignore it. :)
submitted by killerstorm to CryptoCurrency [link] [comments]

Which type of curren(t) do you want to see(cy)? An analysis of the intention behind bitcoin(s). [Part 2]

Part 1
It's been a bit of time since the first post, during which I believe things have crystallised further as to the intentions of the three primary bitcoin variants. I was going to go on a long-winded journey to try to weave together the various bits and pieces to let the reader discern for themselves, but there's simply too much material that needs to be covered and the effort it would require is not something that I can invest right now.
Firstly we must define what bitcoin actually is. Many people think of bitcoin as a unit of a digital currency, like a dollar in your bank but without a physical substrate. That's kind of correct as a way to explain its likeness to something many people are familiar with, but it's actually a bit more nuanced than that. If we look at a wallet from 2011 that has never moved any coins, we can find that there are now multiple "bitcoins" on multiple different blockchains. This post will discuss the main three variants, which are Bitcoin Core, Bitcoin Cash and Bitcoin SV. In this respect many people are still hotly debating which is the REAL bitcoin variant and which bitcoins you want to be "investing" in.
The genius of bitcoin was not in defining a class of non-physical objects to send around. Why bitcoin was so revolutionary is that it combined cryptography, economics, law, computer science, networking, mathematics, etc. and created a protocol which was basically a rule set to be followed, which creates a game of incentives that provides security to a p2p network to prevent double spends. The game theory is extremely important to understand. When a transaction is made on the bitcoin network your wallet essentially generates a string of characters which includes your public cryptographic key, a signature which is derived from the private key:pub key pair, the hash of the previous transaction and an address derived from a public key of the person you want to send the coins to. Because each transaction includes the hash of the previous transaction (a hash is something that will always generate the same 64 character string result from EXACTLY the same data inputs), the transactions are literally chained together, and each block likewise commits to the hash of the block before it. Bitcoin and the blockchain are thus defined in the technical white paper which accompanied the release client as a chain of digital signatures.
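For readers who have never looked at a hash function, here is a tiny, self-contained Python illustration (a toy sketch, not Bitcoin's real transaction serialization) of the two properties relied on above: identical input always gives the identical 64-character digest, and each record can commit to the hash of the one before it:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()      # always 64 hex characters

print(sha256_hex(b"hello"))   # identical every time this is run
print(sha256_hex(b"hello"))

prev_hash = "00" * 32                            # pretend genesis reference
for payload in [b"tx: alice -> bob 1 BTC", b"tx: bob -> carol 1 BTC"]:
    record = prev_hash.encode() + payload        # commit to the previous hash
    prev_hash = sha256_hex(record)
    print(prev_hash)
# Changing any earlier payload changes every hash that follows it,
# which is what "chained together" means in practice.
```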
The miners validate transactions on the network and compete with one another to detect double spends on the network. If a miner finds the correct solution to the current block (and in doing so is the one who writes all the transactions that have occurred since the last block was found into the next block) and says that a transaction is confirmed, but the rest of the network disagrees that the transactions occurred in the order that this miner says (i.e. a double spend), then the network will reject the version of the blockchain that that miner is working on. In that respect the miners are incentivised to check each other's work and ensure the majority are working on the correct version of the chain. The miners are thus bound by the game theoretical design of NAKAMOTO CONSENSUS and are the ENFORCERS of the rule set. It is important to note the term ENFORCER rather than RULE CREATOR, as this is defined in the white paper, which is a document copyrighted by Satoshi Nakamoto in 2009.
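A stripped-down sketch of that rejection logic (hypothetical helper names, not real node software) looks something like this: every node validates blocks independently and follows the valid chain with the most work, so a lone miner confirming a double spend is simply ignored:

```python
def best_chain(candidate_chains, is_valid_block):
    # Keep only chains in which every block passes validation.
    valid = [chain for chain in candidate_chains
             if all(is_valid_block(block) for block in chain)]
    # "Work" is simplified to chain length here; real nodes sum per-block work.
    return max(valid, key=len, default=[])

honest = ["b1", "b2", "b3", "b4"]
cheater = ["b1", "b2", "x3"]          # contains an invalid (double-spend) block
print(best_chain([honest, cheater],
                 is_valid_block=lambda b: not b.startswith("x")))
# ['b1', 'b2', 'b3', 'b4']  -- the cheating miner's version is rejected outright
```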

Now if we look at the three primary variants of bitcoin, keeping in mind these important defining characteristics of what the bitcoin protocol actually is, we can make an argument that the variants that changed some of these defining attributes are no longer bitcoin, rather than trying to argue based off market appraisal, which essentially defines bitcoin as a social media consensus rather than a set-in-stone rule set.
BITCOIN CORE: On first examination Bitcoin Core appears to be the incumbent bitcoin that many are being led to believe is the "true" bitcoin and the others are knock-off scams. The outward stated rationale behind the bitcoin core variant is that computational resources, bandwidth and storage are scarce, and that before increasing the size of each block to allow for more transactions we should be increasing the efficiency with which the data being fed in to a block is stored. In order to achieve this, one of the first suggested implementations was a process known as SegWit (segregating the witness data). This means that when you construct a bitcoin transaction, instead of the transaction data containing the inputs as public key and signature + hash + address(to), the signature (witness) data is moved out of the part of the transaction that is hashed to produce the transaction ID, which frees up space and allows more transactions to fill the block. More of the history of the proposal can be read about here (bearing in mind that article is published by Bitcoin Magazine, which was founded by ethereum devs Vitalik and Mihai and can't necessarily be trusted to give an unbiased record of events). The idea of a SegWit-like solution was proposed as early as 2012 by the likes of Greg Maxwell, Luke Dashjr and Peter Todd in an apparent effort to "FIX" transaction malleability and enable side chains. Those familiar with the motto "problem reaction solution" may understand here that the problem being presented may not always be an authentic problem, and it may actually just be necessary preparation for implementing a desired solution.
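As a purely conceptual sketch of the segregated-witness idea (this is not Bitcoin's actual transaction format, just an illustration of the principle), the signature ("witness") travels with the transaction but is excluded from the data hashed into the transaction ID, which is also why it removes signature malleability:

```python
import hashlib

def txid(tx: dict) -> str:
    # Only the "core" fields are hashed; the witness is deliberately left out.
    core = "|".join([tx["from"], tx["to"], str(tx["amount"]), tx["prev_txid"]])
    return hashlib.sha256(core.encode()).hexdigest()

tx = {"from": "alice_pubkey", "to": "bob_address", "amount": 1,
      "prev_txid": "ab" * 32, "witness": "alice_signature"}

print(txid(tx))
tx["witness"] = "a differently-encoded but equally valid signature"
print(txid(tx))   # identical: changing the witness no longer changes the txid
```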
The real technical arguments as to whether moving signature data outside of the transaction actually invalidates the definition of bitcoin as being a chain of digital signatures are outside my realm of expertise, but instead we can examine the character of the individuals and groups involved in endorsing such a solution. Greg Maxwell is a hard-to-know individual that has been involved with bitcoin since its very early days, but in some articles he portrays himself as one of bitcoin's harshest early critics. Before that he worked with Mozilla and Wikipedia, and a few mentions of him can be found on some old linux sites or such. He has no entry on Wikipedia other than a non-hyperlinked listing as the CTO of Blockstream. Blockstream was a company founded by Greg Maxwell and Adam Back; in business registration documents only Adam Back is listed as the business contact, registered by James Murdock as the agent. They received funding from a number of VC firms but also Joi Ito and Reid Hoffman, and there are suggestions that MIT Media Lab and the Digital Currency Initiative were involved as well. For those paying attention, Joi Ito and Reid Hoffman have links to Jeffrey Epstein and his offsider Ghislaine Maxwell.

Ghislaine is the daughter of publishing tycoon and fraudster Robert Maxwell (Ján Ludvík Hyman Binyamin Hoch, a Yiddish-speaking Orthodox Czech). It is emerging that the Maxwells are implicated with Mossad and involved in many different psyops throughout the last decades. Greg Maxwell is verified as nullc, but a few months ago he was outed as using sock puppets, posting as another reddit user, contrarian__, who also admits to being Jewish in one of his comments, as does the former. Greg has had a colourful history in his role as a bitcoin core developer, successfully ousting two of the developers put there by Satoshi (Gavin Andresen and Mike Hearn) and being referred to by Andresen as a toxic troll, along with his counterpart Samson Mow. At this point, rather than crafting the narrative around Greg, I will provide a few links for the reader to assess on their own time:
  1. https://coinspice.io/news/btc-dev-gregory-maxwell-fake-social-media-account-accusations-nonsense/
  2. https://www.trustnodes.com/2017/06/06/making-gregory-maxwell-bitcoin-core-committer-huge-mistake-says-gavin-andresen
  3. https://www.ccn.com/gavin-andresen-samson-mow-and-greg-maxwell-toxic-trolls//
  4. https://www.nytimes.com/2016/01/17/business/dealbook/the-bitcoin-believer-who-gave-up.html
  5. https://www.coindesk.com/mozilla-accepting-bitcoin-donations
  6. https://spectrum.ieee.org/tech-talk/computing/networks/the-bitcoin-for-is-a-coup
  7. https://www.reddit.com/btc/comments/68pusp/gavin_andresen_on_twitter_im_looking_for_beta/dh1cmfl/
  8. https://www.reddit.com/btc/comments/d14qee/can_someone_post_the_details_of_the_relationships/?ref=tokendaily
  9. https://www.coindesk.com/court-docs-detail-sexual-misconduct-allegations-against-bitcoin-consultant-peter-todd
  10. https://coinspice.io/news/billionaire-jeffrey-epstein-btc-maximalist-bitcoin-is-a-store-of-value-not-a-currency/
  11. https://www.dailymail.co.uk/news/article-7579851/More-300-paedophiles-arrested-worldwide-massive-child-abuse-website-taken-down.html
  12. https://news.bitcoin.com/risks-segregated-witness-opening-door-mining-cartels-undermine-bitcoin-network/
  13. https://micky.com.au/craig-wrights-crackpot-bitcoin-theory-covered-by-uks-financial-times/
  14. https://www.reddit.com/btc/comments/74se80/wikipedia_admins_gregory_maxwell_of_blockstream/

Now I could just go on dumping more and more articles but that doesn't really weave it all together. Essentially it is very well possible that the 'FIX' of bitcoin proposed with SegWit was done by those who are moral reprobates who have been rubbing shoulders with money launderers and human traffickers. Gregory Maxwell was removed from Wikipedia, worked with Mozilla (who donated a quarter of a million to MIT media labs) and had a relationship with Joi Ito, and the company he founded received funding from people associated with Epstein who have demonstrated their poor character and dishonesty and attempted to wage toxic wars against those early bitcoin developers who wished to scale bitcoin as per the white paper and without changing consensus rules or signature structures.
The argument that BTC is bitcoin because the exchanges and the market have chosen is not necessarily a logical supposition when the vast majority of the money that has flowed in to inflate the price of BTC comes from a cryptographic USD token that was created by Brock Pierce (Mighty Ducks child star, Hollywood pedo scandal, Digital Entertainment Network) who attended Jeffrey Epstein's island for conferences. The group Tether, who issues the USDT, has been getting nailed by the New York Attorney General's office, with claims of $1.4 trillion in damages from their dodgy practices. Brock Pierce has since distanced himself from Tether, but Blockstream still works closely with them and they are now exploring issuing tether on the ethereum network. Tether lost its US banking partner in early 2017, before the monstrous run up in bitcoin prices. Afterwards they alleged they had full reserves of USD; however, they were never audited and were printing hundreds of millions of dollars of tether each week during peak mania which was used to buy bitcoin (which was then used as collateral to issue more tether against the bitcoin they bought at a value they inflated). Around $30m in USDT is crossing between China and Russia daily, and when some of the groups also related to USDT/Tether were raided they were found in possession of hundreds of thousands of dollars worth of counterfeit physical US bills.
Because of all this it then becomes important to reassess the arguments that were made for the implementation of pegged sidechains, segregated witness and other second layer solutions. If preventing the bitcoin blockchain from bloating was the main argument for second layer solutions, what was the plan for scaling the data related to the records of transactions that occur on the second layer? You will then need to rely on less robust ways of securing the second layer than Proof of Work, but still have the same amount of data to contend with; unless there were plans all along for second layer solutions to enable records to be deleted/pruned to facilitate money laundering and violation of laws put in place to prevent banking secrecy etc.
There's much more to it as well and I encourage anyone interested to go digging on their own in to this murky cesspit. Although I know very well what sort of stuff Epstein has been up to I have been out of the loop and haven't familiarised myself with everyone involved in his network that is coming to light.
Stay tuned for part 3 which will be an analysis of the shit show that is the Bitcoin Cash variant...
submitted by whipnil to C_S_T [link] [comments]

[uncensored-r/CryptoCurrency] Be careful with RaiBlocks. It's a coin with a lack of notion of confirmations/finality. Your coin...

The following post by killerstorm is being replicated because some comments within the post (but not the post itself) have been openly removed.
The original post can be found(in censored form) at this link:
np.reddit.com/ CryptoCurrency/comments/7oax4e
The original post's content was as follows:
I'm sure I'll be accused of spreading FUD, so some brief notes about my bio:
  • I've been involved in cryptocurrency consensus and scalability research since 2011; I was the first to propose sidechains and sharding, back in 2011 when very few people were concerned about scaling
  • I co-authored two academic, peer-reviewed papers on consensus, one is called Proof-of-Activity, another called "Cryptocurrencies without proof-of-work" (Proof-of-consensus)
  • I identified weaknesses in the Peercoin consensus algorithm back when it was released in 2012, which resulted in several consensus algorithm changes; I also pointed out flaws in Mastercoin, which led to changes in how the development process is organized
  • so yeah, I "spread FUD" occasionally, but my FUD is well-justified
Now about RaiBlocks. I do not want to do a full review and identify actual exploitable weaknesses. I just want to point out some red flags which I discovered while reading the whitepaper. Whether these problems are actually exploitable is another question...
So let's start from the fact that there are two white papers. When you google "RaiBlocks white paper", you can find the old one, here.
It defines a concept of confirmations. Some quotes:
  • When a node receives a send block to an account it controls, it first runs the confirmation procedure followed by adding the block into its ledger.
  • ... voting nodes will sign the block with their voting key and publish a confirm message. A message is considered confirmed if there are no conflicting blocks and a 50% vote quorum has been reached. If there is a conflicting block the node will wait 4 voting periods, 1 minute total, and confirm the winning block.
This is a clear definition of confirmation. There might be some subtle issues in it, but in the normal case this algorithm will work. But it's, basically, a fantastically inefficient version of proof-of-stake, which requires orders of magnitude more bandwidth than necessary. Note that this paper doesn't describe delegation, so you have all nodes voting for each transaction, thus wasting millions of times more traffic than necessary.
I think at some point Colin LeMahieu realised that he implemented a shitty version of PoS which doesn't scale, and tried to make it scale. You can find the new version of the paper on the Raiblocks.net web site. It's much more sciency looking. It seems that Colin was desperate to improve the protocol without changing the architecture. So you see some mental contortions. First:
Since agreements in RaiBlocks are reached quickly, on the order of milliseconds to seconds, we can present the user with two familiar categories of incoming transactions: settled and unsettled. Settled transactions are transactions where an account has generated receive blocks. Unsettled transactions have not yet been incorporated in to the receiver’s cumulative balance. This is a replacement for the more complex and unfamiliar confirmations metric in other cryptocurrencies.
So Colin tells us that we do not need a notion of "confirmed" and can use a notion of "settled" instead. So what's the difference?
Well, Colin is honest with us: settled doesn't mean confirmed. It only means that your node has acknowledged reception of coins, but that doesn't mean that the coins are finally yours. There's no notion of finality in this system. Delegates can replace blocks with their votes at any time, so your money might disappear weeks after it was settled.
Without explicit voting on every transaction, you don't have a notion of confirmation or finality.
Another red flag:
... a fork must be the result of poor programming or malicious intent (double-spend) by the account’s owner. Upon detection, a representative will create a vote referencing the block ˆbi in it’s ledger and broadcast it to the network.
So conflicts, or forks, are resolved through voting. But how are they detected?
If a node can identify a conflict, it might be able to resolve it. But detection of discrepancy is one of the major topics of consensus.
E.g. suppose Alice's node received version 1 of a block, while Bob's node received version 2. If they do not communicate, they won't be aware of the conflict.
So how are conflicts detected in RaiBlocks? The paper doesn't define this, but it mentions that block messages are sent between nodes, so a node can detect a conflict when it receives different versions of a block from different peers.
So conflict detection is possible in this model, but is it reliable? There's no evidence for that.
In theory, if you can guarantee that every message is delivered, you can achieve reliable conflict detection. But in practice, networks are not reliable. And you do not want full connectivity anyway (each node talking with each other node is fantastically expensive). And on top of that, RaiBlocks uses the UDP network protocol, which is unreliable. There's no guarantee of message delivery. And if messages are lost, a conflict might go undetected, thus Alice's node will think she received coins from Bob while the rest of the network will think otherwise.
This topic is not discussed in the paper.
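As a toy illustration of the point (naive one-shot gossip over a lossy transport, not RaiBlocks' actual networking code; the loss rate is an assumption):
    import random

    random.seed(1)
    LOSS_RATE = 0.10   # assumed 10% packet loss, just for the example

    def broadcast(block, nodes):
        # naive one-shot gossip: each node either receives the block or silently misses it
        for node in nodes:
            if random.random() > LOSS_RATE:
                node.setdefault("alice_account", set()).add(block)

    nodes = [dict() for _ in range(50)]
    broadcast("block_v1", nodes)   # honest version
    broadcast("block_v2", nodes)   # conflicting double-spend version

    aware = sum(1 for n in nodes if len(n.get("alice_account", ())) == 2)
    print(f"{aware}/{len(nodes)} nodes see both versions")
    print(f"{len(nodes) - aware}/{len(nodes)} nodes never learn there was a fork")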
RaiBlocks, not having a proper blockchain, also lacks a way to compare the state of two nodes. In Bitcoin you only need to compare the latest hash: if the hash is the same, then the nodes are in perfect sync. But in RaiBlocks you have multiple "blockchains", one for each account, so basically you have to compare the state of every account to check that you are in sync. This is incredibly wasteful.
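A minimal sketch of the difference (hypothetical data structures, not either codebase):
    import hashlib

    def h(*parts):
        return hashlib.sha256(b"|".join(p.encode() for p in parts)).hexdigest()

    # Bitcoin-style: one chain, one tip hash summarises the entire state
    tip_a = h("block_500000")
    tip_b = h("block_500000")
    print("in sync (one comparison):", tip_a == tip_b)

    # Block-lattice-style: one head ("frontier") per account (toy account count)
    frontiers_a = {f"acct{i}": h(f"acct{i}", "head") for i in range(100_000)}
    frontiers_b = dict(frontiers_a)
    frontiers_b["acct42"] = h("acct42", "other_head")

    # syncing requires comparing (or merkle-izing) every entry, not one hash
    diff = [k for k in frontiers_a if frontiers_a[k] != frontiers_b[k]]
    print("accounts out of sync:", diff)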
So, to summarize, I'd describe RaiBlocks as "UDP coin". It might work quite well if network conditions are good and messages are delivered. It can even tolerate some degree of packet loss. But there's no proof that it works in all conditions, in fact, the paper avoids the topic. There's no notion of confirmation. You never know if you received coins or not. There are probably many conditions in which the system would fail.
I'm not interested in finding an actual failure, it's not a good use of my time. So treat the above as the opinion of a guy who has significant knowledge of consensus algorithms, formed upon reading the RaiBlocks papers. Feel free to ignore it. :)
submitted by censorship_notifier to noncensored_bitcoin [link] [comments]

Why NYA is an attack on Bitcoin and why it will fail (long)

I wrote a rather lengthy response to a reddit post that I think is worth sharing, especially for newcomers, to dispel some false narratives about S2X and Barry Silbert's New-York Agreement aka hostile takeover attempt of Bitcoin that is doomed to fail.
big block hard-liners wanted block size only, no SegWit.
Which doesn't make any logical sense. A lot of fud was actively being spread about how segwit was unsafe (such as the ANYONECANSPEND fud) but segwit is of course working as intended thanks to the world class engineering of the Bitcoin Core developers. This led to the suspicion that BitMain was behind the opposition to segwit. BitMain miners use "covert AsicBoost" which is a technique that allows their rigs to use less electricity than competing mining equipment. However, segwit introduced changes to Bitcoin that made using covert AsicBoost impossible, which would explain their fierce opposition to segwit. We're talking big money here - the AsicBoost advantage is worth US$ 100 million according to expert estimates.
After segwit was finalized, the Bitcoin software was programmed to activate segwit but not before 95% of the hashpower signalled to be ready. After all, miners are tasked with creating valid blocks and should be given the opportunity to update their software for protocol changes such as segwit. As a courtesy to the miners, the Bitcoin software basically said: "ok, segwit is here, but I'll politely hold off its activation until 95% of you say that you're ready to deal with this protocol change".
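For reference, here is roughly how that readiness signalling is counted (a simplified BIP9-style sketch in Python; the example window of block versions is made up):
    SEGWIT_BIT = 1            # segwit readiness was signalled via version bit 1
    WINDOW = 2016             # blocks per signalling period
    THRESHOLD = 0.95

    def signals(version, bit=SEGWIT_BIT):
        # BIP9: the top version bits must be 001, and the chosen bit must be set
        return (version >> 29) == 0b001 and (version >> bit) & 1 == 1

    # pretend window: ~30% of miners set the bit, the rest don't (illustrative only)
    window = [0x20000002] * 605 + [0x20000000] * 1411
    share = sum(signals(v) for v in window) / WINDOW
    print(f"signalling share: {share:.1%} -> locked in: {share >= THRESHOLD}")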
Sadly, mining is heavily centralized, and segwit was never getting activated due to the opposition of a few or perhaps even a single person: Jihan Wu of BitMain. As an aside, the centralization of hash power is also a direct result of AsicBoost. How this works: since AsicBoosted rigs are able to mine more efficiently than their competitors, these rigs drive up the difficulty and with that the average amount of hashes required to find a block. This in turn causes less efficient rigs to mine at a loss because they need to expend more energy to find a block. As a result, BitMain competitors got pushed out and BitMain became the dominant self-mining ASIC manufacturer.
After segwit was finalized, it required 95% of the hashpower to activate but it never gained more than around 30%. So 70% of hash power abused the courtesy of the Bitcoin software to wait until they were ready for activation and refused to give the go ahead. This went on for months and worst case it would have taken until August 2018 before segwit would activate.
let's do a compromise- we do SegWit AND we hard fork
In March 2017 a pseudonymous user called Shaolin Fry created BIP148 which is a softfork that invalidates any block that wouldn't signal segwit readiness starting August 1st 2017. This also became known as the UASF (User-Activated Soft Fork, as opposed to the original miner-activated soft fork that didn't work as intended). This patch saw significant adoption and miners would soon be forced to signal segwit or else see their blocks being invalidated by the network, which would cause them significant financial losses.
In May 2017, after BIP148, the backroom New-York Agreement (NYA) was created by Barry Silbert's Digital Currency Group together with businesses in the Bitcoin space such as BitPay, and almost all miners. The NYA was the beginning of an outright misinformation campaign.
The NYA was trumpeted to be a "compromise". Miners would finally agree to activate segwit. In return, Bitcoin would hardfork and double its capacity on top of the doubling already achieved by segwit. In reality, BIP148 was already going to force miners to signal the activation of segwit. Also, developers and most users were notably absent in this NYA. So, given that segwit was already unstoppable because of BIP148, the parties around the table had to "compromise" to do something that they all wanted: hardfork Bitcoin to increase its capacity.
Or, is it all in fact really about increasing capacity? After all, segwit already achieved this. Bcash was created which doubled block size as well but without segwit. And then there is good old Litecoin having four times the transaction capacity of Bitcoin and segwit. Plenty of working alternatives that obsolete the need for yet another altcoin. So, perhaps transaction capacity is used as an excuse to reach a different goal. Let's explore.
Apparently after not-so-careful study of the Bitcoin whitepaper, the NYA participants came up with an absurd redefinition of what is "Bitcoin". According to this bizarre definition, they started to claim that Bitcoin is being defined as:
  1. Any blockchain that has the most cumulative hashpower behind it (measured from the Genesis block at the inception of Bitcoin):
  2. Using the SHA256 hashing algorithm;
  3. Having the current difficulty adjustment algorithm (resetting difficulty every 2016 blocks).
Ad 1. Note that it starts with "any blockchain". This also includes blockchains that contain invalid blocks, in other words, blocks that Bitcoin nodes would reject.
This is of course bizarre but it is exactly what the NYA participants claim. It effectively puts all power in the hands of miners. Instead of nodes validating blocks, according to this novel and absurd interpretation of Bitcoin it will be miners that call the shots. Whatever block a miner produces will be valid as long as they mine on top of their own block, because that chain will then have the most cumulative hash power. Nodes become mere distributors of blocks and lose all their authority as they can no longer decide over the validity of a block. MinerCoin is born.
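To make the difference concrete, here is a small sketch of the two selection rules (hypothetical data, not Core's actual code); the whole dispute is whether the validity check happens at all:
    # Each candidate tip: cumulative work plus its blocks (a block is just a dict here).
    def valid(block):
        # stand-in for full validation: script checks, no inflation, no double spends...
        return not block.get("creates_coins_from_nothing", False)

    def best_tip_bitcoin(tips):
        # nodes first discard chains containing invalid blocks, THEN pick most work
        ok = [t for t in tips if all(valid(b) for b in t["blocks"])]
        return max(ok, key=lambda t: t["work"], default=None)

    def best_tip_nya_definition(tips):
        # "any blockchain with the most cumulative hashpower" - validity never checked
        return max(tips, key=lambda t: t["work"])

    honest = {"name": "honest", "work": 100, "blocks": [{}]}
    attack = {"name": "miner-inflated", "work": 101,
              "blocks": [{"creates_coins_from_nothing": True}]}

    print(best_tip_bitcoin([honest, attack])["name"])         # honest
    print(best_tip_nya_definition([honest, attack])["name"])  # miner-inflated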
The Bitcoin whitepaper actually mentions this scenario where a majority of the hashpower takes over the network and starts producing invalid blocks and refers to it as being an attack. It is worth quoting this section 8, second paragraph in its entirety:
"As such, the verification is reliable as long as honest nodes control the network, but is more vulnerable if the network is overpowered by an attacker. While network nodes can verify transactions for themselves, the simplified method can be fooled by an attacker's fabricated transactions for as long as the attacker can continue to overpower the network. One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user's software to download the full block and alerted transactions to confirm the inconsistency. Businesses that receive frequent payments will probably still want to run their own nodes for more independent security and quicker verification." (emphasises mine).
Any doubt left whether "most hashpower wins" is an attack should be removed by a telling remark in the release notes of 0.3.19:
"Safe mode can still be triggered by seeing a longer (greater total PoW) invalid block chain."
As mentioned, miners representing 95% of all hash power participate in the NYA. They are currently expressing their support for the NYA by putting "NYA" inside blocks. The NYA participants intend to remove their hash power from Bitcoin completely and point it towards their altcoin. To double down on their claim that Bitcoin is defined by hashpower, they show some serious audacity by referring to their altcoin as... "Bitcoin". Anyone not part of the NYA refers to their coin as segwit2x, S2X or sometimes 2x.
The NYA participants proceed to proclaim victory. They reason that with all hash power on their blockchain and hardly any left for Bitcoin, "legacy Bitcoin" will be stuck as blocks will be created so slowly that Bitcoin becomes unusable, forcing everyone to switch to the "real" Bitcoin (sic). In other words, part of the plan was to remove hash power from Bitcoin to disrupt it and force users into their altcoin.
Of course, Bitcoin Core would not just sit idle and let such an attack happen. There are several ways to defend against this attack. As a last resort, an emergency difficulty reset combined with a change in the PoW algorithm can be deployed to get Bitcoin going again.
This is not likely to be necessary however as miners simply can't afford to mine a coin that has a small fraction of the value of Bitcoin. They have large bills to pay, which is impossible by mining a coin that has half or even less of the value of Bitcoin. In other words, miners would bankrupt themselves unless their altcoin attains the same value as Bitcoin. Given the lack of user, community and developer support it is safe to say that this is not going to happen. Their coin will have only a small fraction of the value of Bitcoin and miners have no choice but to continue mining Bitcoin in order to receive the income necessary to pay for their huge operational expenses.
A moment was set for the hardfork: at block 494,784 a big block will be produced such that it is invalid for the current Bitcoin network, which will discard it.
Of course, some nodes must accept the new, bigger S2X blocks. Therefore, Jeff Garzik (co-founder of a company called Bloq) started out to create btc1, which is a fork of the Bitcoin node software adapted such that it accepts blocks up to twice the size, so that the segwit2x altcoin can exist. Note the 1 in btc1 which refers to their version numbering. Bitcoin Core releases are still 0.x but btc1 is numbered 1.x. This is to send the message that they have released the real Bitcoin that is now no longer a beta 0.x release but a production ready 1.x. This notwithstanding the fact that btc1 is a copy of Bitcoin 0.14 with some minor changes and without any significant development, causing it to quickly fall behind Bitcoin.
The NYA participants go on to claim that when hash power is on the btc1 blockchain, and Bitcoin is dead as a result because no or hardly any new blocks are being created, then the Bitcoin Core developers have no choice but to start contributing to their btc1 github controlled by Jeff Garzik.
In the NYA end state, Bitcoin is a coin of which miners set the consensus rules, and the Core developers sheepishly contribute to software in a repository controlled by Jeff Garzik or whoever pays him.
Needless to say, this is never ever going to happen.
The small block hard-liners are now against 2x and want SegWit only.
There is no such thing as small block hardliners. As is probably clear by now, NYA is not about block size. It is about control over Bitcoin. As a matter of fact, Bitcoin Core has never closed the door on a block size increase. In the scaling roadmap published in December 2015, Bitcoin Core notes:
"Finally--at some point the capacity increases from the above may not be enough. Delivery on relay improvements, segwit fraud proofs, dynamic block size controls, and other advances in technology will reduce the risk and therefore controversy around moderate block size increase proposals (such as 2/4/8 rescaled to respect segwit's increase). Bitcoin will be able to move forward with these increases when improvements and understanding render their risks widely acceptable relative to the risks of not deploying them. In Bitcoin Core we should keep patches ready to implement them as the need and the will arises, to keep the basic software engineering from being the limiting factor."
Bitcoin Core literally says here very clearly that further increases of block size are on the table as an option in the future.
For my personal opinion-
I hope that your personal opinion has changed after taking notes of the above.
submitted by trilli0nn to Bitcoin [link] [comments]

A Lightning Tx is *NOT* a bitcoin Tx, and here's why:

User ABrandsen is spreading his lies and misinformation about this subject again on /bitcoin. It is getting annoying.
What is a bitcoin transaction?
A Bitcoin transaction isn't just a signed message, because that description completely leaves out the blockchain and thus the core innovation of Bitcoin. This means that the actual definition of a Bitcoin transaction is a transaction which is confirmed on the Bitcoin blockchain. Or, to a lesser extent, something which at least has the potential and reasonable expectation of confirming on the blockchain.
To reduce Bitcoin to some small subset of itself to fit a certain narrative is, in my book, downright evil.
While I have some disagreements with jratcliff63367, he is spot on with his comment:
"A LN transaction is, in fact, a bitcoin transaction. However, it's also a zero confirmation transaction. Not an ordinary zero confirmation transaction though. This one is backed up, not only by the signatures, but also some game theory."
In reality a zero-conf was meant to be a transaction for which you did not detect a double spend getting broadcast (within a few seconds), so you could reasonably assume that your transaction would confirm. A lightning transaction is another beast altogether, better in some ways, worse in others.
Why isn't it a Bitcoin transaction?
It's actually very simple: a Bitcoin transaction has very different security characteristics from a Lightning transaction.
For a normal transaction, the security goes up for every confirmation. So if you send a high value transaction, you can wait an amount of time to make the attack more expensive than any gains which can be had from double spending. An attack becomes infeasible pretty quickly. Awesome!
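The whitepaper even gives the math for this (section 11); here it is as a small script so you can see how quickly the attacker's success probability drops with each confirmation (q is the attacker's share of hash power):
    from math import exp, factorial

    def attacker_success(q, z):
        """Probability an attacker with hashpower share q catches up from z blocks
        behind (Satoshi's formula from the Bitcoin whitepaper, section 11)."""
        p = 1.0 - q
        lam = z * (q / p)
        total = 1.0
        for k in range(z + 1):
            poisson = exp(-lam) * lam ** k / factorial(k)
            total -= poisson * (1 - (q / p) ** (z - k))
        return total

    for z in (0, 1, 2, 4, 6):
        print(f"q=10%, {z} confirmations: {attacker_success(0.10, z):.6f}")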
With Lightning you will have all the security you can get instantly. Which is also awesome. But you need to stay online to prevent getting double-spent. Furthermore, even if you are online, miners can still block the correct settlement transaction and steal your funds.
Maybe that is why Core (supporters) are complaining about miner centralisation. Because for normal Bitcoin transactions security is gained through incentives. Which means that even if there is only one miner, it still costs a certain amount of money to double spend. With Lightning that stops being true. Centralised mining would mean it costs nothing to steal Lightning funds.
Lightning is literally turning Bitcoin on its head. And all under the pretext of being conservative. I assure you: it's not.
Disclaimer
I like Lightning, it is cool tech. And we should not try to stop it.
I just think that pushing it as the only scaling solution is very dangerous. More so if you need to lie to get people to use it. Even more so if you need to kill off Bitcoin to push people into it.
There is a sickness spreading in Bitcoin.
submitted by seweso to btc [link] [comments]

UPDATE: We studied Blue Apron to figure out how to ship Maryland crabs to your home

We have come a long way since our first post (6 months ago), here!
I plan to continue updating our progress every 6 months, highlighting our mistakes and our hits in hopes you can utilize some ideas to help your ecommerce and the difficult business that is fresh (24 hours) perishable shipping.
Who we are: https://www.cameronsseafood.com/. In 1985 my Dad and Uncle started the Maryland Seafood business and today it does $20 million in gross revenue each year. We sell raw and cooked seafood, and prepared dishes at 14 locations — 11 storefronts and three trucks — and we have over 1,000,000 customers in the Baltimore-Washington-Philadelphia market. On June 24th 2017 my cousin and I started the nationwide home shipping business as a separate entity. The operation is run by me, my wife, dad, uncle, brother, cousin and 60 employees. I have no ownership in the stores, food trucks, and franchises. My uncle owns and handles all that.
My Background: the business was named after me in 1985 as I am the oldest son of 6 children. My main business is apartment brokerage and investing. I have been a MD, DC, and VA broker for 17 years www.idealrealty.com. I sell 100+ unit complexes to institutions and high net-worth individuals.
Coolest Online Customers: Gilbert Arenas and Mia Khalifa
What seafood do we sell online: Virtually everything but, 85% of sales are Maryland crabs, Maryland crab cakes, Maryland crab soups, and Free shipping samplers.
What we do: we ship freshly cooked Maryland Blue Crabs, Crab cakes and seafood to your door in 24 hours after being caught in the Chesapeake Bay, Maryland. We send you seafood that is 3 days fresher than the grocery store. Btw, we accept bitcoin!
Where do we get our Seafood? Chesapeake Bay, Maryland for Maryland products, using our own crabbers and contracted crabbers over the past 32 years. Although our COGS is 30%, shipping with 1-2 day delivery is very expensive, with the packaging materials outweighing the FedEx fees. We ship it fresh with Snow/King crab legs, soft shells (in off-season) and lobster tails being the items we ship frozen. Some items we receive frozen like Bee Gee shrimp from Louisiana.
We are True Blue Certified: to be True Blue certified, participating food service establishments commit that at least 75% of their annual crab usage will be from Maryland harvested or processed crabs.
Startup Leverage: We do have some amazing advantages and you should tap into yours: 1) We don't pay rent because we operate out of my uncle's seafood headquarters. 2) We don't need employees to handle extra orders (my partner handles up to 50 orders a day by himself) because we can use our existing employees. 3) We don't have "employees" - we contract existing employees, meaning you don't have to pay 15% tax. 4) We don't have food spoilage because we buy only what we need from the stores each morning.
Online Profit Margins: We aim for 35% gross margins with our cost of goods sold at 30%. However, packaging and shipping costs wipe out most of it while paid-advertisement has wiped out the rest, leaving us with 10% gross for the first 6 months. 1) We eliminated AdWords since our ROI/customer acquisition costs were too high. 2) We reduced all packaging costs through trial and error. We eliminated anything not necessary then negotiated each material with three vendors. You need to create a bidding war. 3) We negotiated shipping rates by switching vendors 3x. We formed a strategic partnership to tap into their FedEx account. With a growing customer base we are on track to hit 30% gross next year but it's possible to hit 40% and 10% net.
Free Shipping Model: We offer free shipping to 29 states (1-2 day zones through FedEx ground network) when a customer spends over $200. Since our average order is $160 we think that’s a solid minimum order. We offer flat-rate air shipping everywhere else. National shipping is $94.99 or $79.99 when they spend $200+. We offer many free shipping sampler combos to local and regional customers. It’s too expensive to ship nationally without ridiculous pricing. That’s ok, if we can capitalize on the 29 ground states we will hit our $20,000,000 number. We don’t make any money on shipping, and I wish we could. Shipping page.
Chargeback Fraud: people are creative and fraud has cost us thousands. We cannot require signatures on shipments without incurring a $4.50 fee, and what if the person isn't home? FedEx will return the box to their hub, subjecting it to transit issues and spoilage. A lot of our customers order our food as gifts so the billing and shipping don't match. We learned you can get expensive software that charges a per transaction fee. It's only worth it at higher volume, but you can do your own fraud detection. For example, look up the shipping address in Google Maps. Google the person and look for articles about them to show they live in the state. Modify your payment processor's security features so you can monitor the results. We noted most fraudsters order our frozen items (to store or resell them) so we carefully review each frozen order with wide eyes.
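If you want to roll your own basic screening, it can literally be a handful of rules. This is a simplified sketch, not our actual process, and the thresholds and domains are made up:
    def fraud_flags(order):
        """Return a list of reasons an order deserves a manual look before shipping."""
        flags = []
        if order["billing_state"] != order["shipping_state"]:
            flags.append("billing/shipping mismatch (could be a gift - verify)")
        if order["items_frozen_only"]:
            flags.append("frozen-only order (resale pattern seen in chargebacks)")
        if order["total"] > 500:
            flags.append("high ticket - look up the shipping address and the buyer")
        if order["email_domain"] in {"tempmail.example", "mailinator.com"}:
            flags.append("disposable email")
        return flags

    order = {"billing_state": "MD", "shipping_state": "CA",
             "items_frozen_only": True, "total": 620,
             "email_domain": "mailinator.com"}
    for reason in fraud_flags(order):
        print("REVIEW:", reason)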
Losses: We have made many errors totaling $15,000. Shipping wrong items, missing items, item arrives late or spoiled, gel packs melt, things happen. The important thing is to address the root cause, which helped us lower our losses rate from 15% down to 5% with a 3% goal in mind for 2018.
Shipping – pit FedEx vs UPS and save money. Make sure the "rates" include a residential fee and fuel fees. Also know that, like new credit cards, they will give you introductory rates that eventually run out and use your monthly sales volume to adjust up/down. Negotiate longer intro rate periods if you can! UPS offers insurance on the entire sale and will grant 25% off next day air on any bad deliveries and charge $1.80 per $100, but there is a catch. Your customers need to provide you photo proofs, and UPS has to be at fault to receive a claim (late delivery, which occurs less than 1% of the time, or forgetting to deliver). However, UPS has an abysmal Saturday ground delivery network as it's new as of August 2017, whereas FedEx has the entire network open. UPS has a smaller ground delivery range than FedEx too. No brainer for us, we chose FedEx. We don't take insurance because it's a loss. This will depend on your line of business.
Packaging Perishables – we reverse engineered Blue Apron and competitors to figure out how to ship fresh (and live) seafood. It also teaches you where to find suppliers (use manufacturers not resellers as they have a markup). Call them and form relationships.
Gel Packs: It takes 5 weeks to properly freeze a gel pack! I thought our business was doomed when I learned this because how can I store that many gel packs and replenish them within my walk-in freezer? Solution: we pay for pre-frozen ones and have pallets stored at -10. We learned this from ordering from Blue Apron and calling the gel pack manufacturer.
Boxes: to ship perishable seafood you probably need an insulated cooler and corrugated box kit. Since we started, we reduced costs by 30% by searching for a manufacturer (not a distributor) that can cut costs and store surplus for us. Costs include freight so find someone local within 1-2 hours of your HQ.
Customer Service: We sell seafood but we are in the customer service business. We are open 7 days per week and either I or my brother will answer your phone calls (888-404-7454 x1). Our competitors are only open 5-days per week. We offer cash refunds and reshipments on any customer complaint. Our competitors may give you a credit on your next order…The customer is always right and we ensure 100% satisfaction guaranteed. This has converted customers to repeat customers. We treat each customer as we want to be treated. Give a little, get a lot.
Website: I know you think I am biased because my wife created our site from scratch but she did an amazing job for her first ecommerce site! We modify content daily and advertise to our email list once per week with discount codes. This would have cost me $10,000 to $30,000 with all the changes we have made. It’s constantly evolving and the project never ends. Find a good partner that will grow with you. No 3rd party will put in the passion a strategic partner could offer. Try offering a lower hourly rate but give them a piece of the action for the difference.
Advertising: The best advertisement for us has been word-of-mouth. We carry 5-star reviews on Facebook but getting satisfied customers to review is hard (after a sale they receive an email asking them to rate their experience). We thought about offering a coupon but it feels like a bribe. We do offer a coupon once someone abandons their cart to remarket. We send out weekly coupons via mailing list and we offer weekly storewide specials (the real savings happen when you sign up). Social media is free, get good at it. Learn which outlets suit your business. For us, Facebook and Instagram work whereas Twitter has no traction. I learned ads on social media don't convert. Nobody wants to be spammed ads. They want to discuss a topic and engage on pictures, videos, and education about your field. They will find a way to buy from you. Instead of offering a coupon teach them a recipe, explain why a Maryland Crab is the world's best crab (in the Chesapeake Bay, due to the specific climate, the Blue Crabs lie dormant for 6 months and form a layer of fat on their meat which gives them their sweet, buttery flavor!). You see, that's interesting! When you post ask yourself how will this engage an audience? You want to advertise? Then try doing giveaways using www.gleam.io, which has amazing social networking tools to spread the word.
Facebook is another animal where most of our success has been through remarketing. Currently, we are brainstorming both organic and paid Facebook ideas…I’m open to any suggestions. Getting customers to your homepage is the hardest part. Once they get there, your site has to convert them. When we started, we used Adwords to bring attention to our product pages but we had no other supportive information to convert them. We recrafted each page to stand on its own (assuming they never leave that page) and doubled our conversion rates!
We outsource our SEO/AdWords to a company that we learned about through our first Reddit post. SEO can take at least 6+ months to build up your keywords on the rankings list. You need to be on the 1st page or you won't convert traffic. We started with most organic keyword rankings on the 64th page and have almost all of our keywords now on the 3rd page. By February most of our keywords should be on the 1st page! Many things went into this including getting quality backlinks, blogging 6 times per month with SEO rich content, carefully titling each page, section, and product; and keyword/URL optimization.
Adwords: We foolishly spent $42,000 on AdWords and ended our campaign with $37 cost per conversion and 186.29% ROI, which doesn’t allow us to make profit during the off-season (crabs are seasonal from April to November) so we will try again in Q2, 2018.
Influencers: overall this hasn't been profitable. We have social media influencers with 100k+ dedicated seafood/food followers whereby we grant them a vanity link and discount but it hasn't worked. We belong to several influencer networks where they receive 8% for posting banner ads; this has only brought in $10,000…
Mia Khalifa: We reached out to Mia as she has the strongest influence (4m+ followers) for a Maryland native that loves our seafood. We sent her food and she spent a week hyping the brand including social media posts, PMs and a featured Twitch episode about Cameron's. It definitely drove tremendous traffic, although we can only ship to the USA due to the transit time lag of customs. We look forward to working more with her.
Gilbert Arenas: I'm a huge Gilbert and Wizards fan! He replied to Mia's post and a PM worked to get his interest. He is a real character and orders a lot of our seafood each month. He loves the high-protein variety that (Maryland) seafood provides. Chicken and vegetables do get boring.
Washington Post: We were featured in the Washington Post on Dec 1st, see here. They did a good job summarizing our business so far. We have also been featured in Forbes, New York Times, Huffington Post and more. How? I googled the food reviewer from each of the above and figured out their contact info. Sent them a 2-line email asking them to review our food and boom!
Videos: We started sharing videos of the entire process so you can see the experience before you risk ordering fresh seafood online. We plan to continue posting new videos in 2018 and I'd love feedback on what you would like to see.
What we do
First Customer Unboxing
Another unboxing
Resteam Maryland Crabs (gif recipe style)
Packaging demo
2018 Goals
Please provide us any feedback or ideas. We want to get better and need your help.
Discount code "holiday" will save you 10% on all orders and we accept Bitcoin!
submitted by comikins to Entrepreneur [link] [comments]

Merkle Trees and Mountain Ranges - Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments

Original link: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html
Unedited text and originally written by:

Peter Todd pete at petertodd.org
Tue May 17 13:23:11 UTC 2016
Previous message: [bitcoin-dev] Bip44 extension for P2SH/P2WSH/...
Next message: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
# Motivation

UTXO growth is a serious concern for Bitcoin's long-term decentralization. To
run a competitive mining operation potentially the entire UTXO set must be in
RAM to achieve competitive latency; your larger, more centralized, competitors
will have the UTXO set in RAM. Mining is a zero-sum game, so the extra latency
of not doing so if they do directly impacts your profit margin. Secondly,
having possession of the UTXO set is one of the minimum requirements to run a
full node; the larger the set the harder it is to run a full node.

Currently the maximum size of the UTXO set is unbounded as there is no
consensus rule that limits growth, other than the block-size limit itself; as
of writing the UTXO set is 1.3GB in the on-disk, compressed serialization,
which expands to significantly more in memory. UTXO growth is driven by a
number of factors, including the fact that there is little incentive to merge
inputs, lost coins, dust outputs that can't be economically spent, and
non-btc-value-transfer "blockchain" use-cases such as anti-replay oracles and
timestamping.

We don't have good tools to combat UTXO growth. Segregated Witness proposes to
give witness space a 75% discount, in part to make reducing the UTXO set size
by spending txouts cheaper. While this may change wallets to more often spend
dust, it's hard to imagine an incentive sufficiently strong to discourage most,
let alone all, UTXO growing behavior.

For example, timestamping applications often create unspendable outputs due to
ease of implementation, and because doing so is an easy way to make sure that
the data required to reconstruct the timestamp proof won't get lost - all
Bitcoin full nodes are forced to keep a copy of it. Similarly anti-replay
use-cases like using the UTXO set for key rotation piggyback on the uniquely
strong security and decentralization guarantee that Bitcoin provides; it's very
difficult - perhaps impossible - to provide these applications with
alternatives that are equally secure. These non-btc-value-transfer use-cases
can often afford to pay far higher fees per UTXO created than competing
btc-value-transfer use-cases; many users could afford to spend $50 to register
a new PGP key, yet would rather not spend $50 in fees to create a standard two
output transaction. Effective techniques to resist miner censorship exist, so
without resorting to whitelists blocking non-btc-value-transfer use-cases as
"spam" is not a long-term, incentive compatible, solution.

A hard upper limit on UTXO set size could create a more level playing field in
the form of fixed minimum requirements to run a performant Bitcoin node, and
make the issue of UTXO "spam" less important. However, making any coins
unspendable, regardless of age or value, is a politically untenable economic
change.


# TXO Commitments

With a merkle tree committing to the state of all transaction outputs, both spent
and unspent, we can provide a method of compactly proving the current state of
an output. This lets us "archive" less frequently accessed parts of the UTXO
set, allowing full nodes to discard the associated data, still providing a
mechanism to spend those archived outputs by proving to those nodes that the
outputs are in fact unspent.

Specifically TXO commitments proposes a Merkle Mountain Range¹ (MMR), a
type of deterministic, indexable, insertion ordered merkle tree, which allows
new items to be cheaply appended to the tree with minimal storage requirements,
just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
never removed; if an output is spent its status is updated in place. Both the
state of a specific item in the MMR, as well the validity of changes to items
in the MMR, can be proven with log2(n) sized proofs consisting of a merkle path
to the tip of the tree.
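
As a rough illustration of the append behaviour, here's a toy sketch in Python
(not a consensus-grade implementation):

    import hashlib

    def h(left, right):
        return hashlib.sha256(left + right).digest()

    def leaf(data):
        return hashlib.sha256(b"leaf:" + data).digest()

    def mmr_append(peaks, item):
        """Append one item to the MMR.  `peaks` is a list of (height, digest)
        mountain tips, highest first; equal-height peaks are merged, so at most
        log2(n) tips remain."""
        carry = (0, leaf(item))
        while peaks and peaks[-1][0] == carry[0]:
            height, digest = peaks.pop()
            carry = (height + 1, h(digest, carry[1]))
        peaks.append(carry)
        return peaks

    peaks = []
    for name in [b"a", b"b", b"c", b"d", b"e"]:
        mmr_append(peaks, name)
    # 5 leaves -> mountains of 4 and 1, i.e. two tips, matching the state #2
    # example later in this post
    print([(height, digest.hex()[:8]) for height, digest in peaks])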

At an extreme, with TXO commitments we could even have no UTXO set at all,
entirely eliminating the UTXO growth problem. Transactions would simply be
accompanied by TXO commitment proofs showing that the outputs they wanted to
spend were still unspent; nodes could update the state of the TXO MMR purely
from TXO commitment proofs. However, the log2(n) bandwidth overhead per txin is
substantial, so a more realistic implementation is to have a UTXO cache for
recent transactions, with TXO commitments acting as an alternative for the (rare)
event that an old txout needs to be spent.

Proofs can be generated and added to transactions without the involvement of
the signers, even after the fact; there's no need for the proof itself to be
signed and the proof is not part of the transaction hash. Anyone with access to
TXO MMR data can (re)generate missing proofs, so minimal, if any, changes are
required to wallet software to make use of TXO commitments.


## Delayed Commitments

TXO commitments aren't a new idea - the author proposed them years ago in
response to UTXO commitments. However it's critical for small miners' orphan
rates that block validation be fast, and so far it has proven difficult to
create (U)TXO implementations with acceptable performance; updating and
recalculating cryptographically hashed merkleized datasets is inherently more
work than not doing so. Fortunately if we maintain a UTXO set for recent
outputs, TXO commitments are only needed when spending old, archived, outputs.
We can take advantage of this by delaying the commitment, allowing it to be
calculated well in advance of it actually being used, thus changing a
latency-critical task into a much easier average throughput problem.

Concretely each block B_i commits to the TXO set state as of block B_{i-n}, in
other words what the TXO commitment would have been n blocks ago, if not for
the n block delay. Since that commitment only depends on the contents of the
blockchain up until block B_{i-n}, the contents of any block after are
irrelevant to the calculation.
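
As a toy illustration of the schedule (the delay value here is arbitrary):

    DELAY = 100  # example value of n; the real choice is a consensus parameter

    def committed_state_height(block_height, delay=DELAY):
        # block B_i carries the TXO commitment for the state as of block B_{i-n}
        return block_height - delay

    for i in (500_000, 500_001, 500_144):
        print(f"block {i} commits to TXO state as of block {committed_state_height(i)}")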


## Implementation

Our proposed high-performance/low-latency delayed commitment full-node
implementation needs to store the following data:

1) UTXO set

Low-latency K:V map of txouts definitely known to be unspent. Similar to
existing UTXO implementation, but with the key difference that old,
unspent, outputs may be pruned from the UTXO set.


2) STXO set

Low-latency set of transaction outputs known to have been spent by
transactions after the most recent TXO commitment, but created prior to the
TXO commitment.


3) TXO journal

FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
must be low-latency; removals can be high-latency.


4) TXO MMR list

Prunable, ordered list of TXO MMR's, mainly the highest pending commitment,
backed by a reference counted, cryptographically hashed object store
indexed by digest (similar to how git repos work). High-latency ok. We'll
cover this in more in detail later.


### Fast-Path: Verifying a Txout Spend In a Block

When a transaction output is spent by a transaction in a block we have two
cases:

1) Recently created output

Output created after the most recent TXO commitment, so it should be in the
UTXO set; the transaction spending it does not need a TXO commitment proof.
Remove the output from the UTXO set and append it to the TXO journal.

2) Archived output

Output created prior to the most recent TXO commitment, so there's no
guarantee it's in the UTXO set; transaction will have a TXO commitment
proof for the most recent TXO commitment showing that it was unspent.
Check that the output isn't already in the STXO set (double-spent), and if
not add it. Append the output and TXO commitment proof to the TXO journal.

In both cases recording an output as spent requires no more than two key:value
updates, and one journal append. The existing UTXO set requires one key:value
update per spend, so we can expect new block validation latency to be within 2x
of the status quo even in the worst case of 100% archived output spends.
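
A sketch of the two cases above in Python-flavoured pseudocode; plain dicts and
sets stand in for the real databases, and verify_proof is a placeholder:

    utxo_set = {}        # outpoint -> txout data, for outputs newer than the last commitment
    stxo_set = set()     # archived outputs spent since the last commitment
    txo_journal = []     # FIFO consumed later by the background commitment task

    def verify_proof(outpoint, proof):
        # placeholder: check the merkle path against the most recent TXO commitment
        return True

    def spend(outpoint, txo_proof=None):
        if outpoint in utxo_set:                      # case 1: recently created output
            del utxo_set[outpoint]
            txo_journal.append(("spend", outpoint))
            return True
        # case 2: archived output - must come with a TXO commitment proof
        if txo_proof is None or not verify_proof(outpoint, txo_proof):
            return False
        if outpoint in stxo_set:                      # already spent since the commitment
            return False
        stxo_set.add(outpoint)
        txo_journal.append(("spend-archived", outpoint, txo_proof))
        return True

    utxo_set["txid:0"] = {"value": 50_000}
    print(spend("txid:0"))                       # recent output: no proof needed
    print(spend("old_txid:3", txo_proof="..."))  # archived output: proof required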


### Slow-Path: Calculating Pending TXO Commitments

In a low-priority background task we flush the TXO journal, recording the
outputs spent by each block in the TXO MMR, and hashing MMR data to obtain the
TXO commitment digest. Additionally this background task removes STXO's that
have been recorded in TXO commitments, and prunes TXO commitment data no longer
needed.

Throughput for the TXO commitment calculation will be worse than the existing
UTXO only scheme. This impacts bulk verification, e.g. initial block download.
That said, TXO commitments provides other possible tradeoffs that can mitigate
impact of slower validation throughput, such as skipping validation of old
history, as well as fraud proof approaches.


### TXO MMR Implementation Details

Each TXO MMR state is a modification of the previous one with most information
shared, so we can space-efficiently store a large number of TXO commitment
states, where each state is a small delta of the previous state, by sharing
unchanged data between each state; cycles are impossible in merkelized data
structures, so simple reference counting is sufficient for garbage collection.
Data no longer needed can be pruned by dropping it from the database, and
unpruned by adding it again. Since everything is committed to via cryptographic
hash, we're guaranteed that regardless of where we get the data, after
unpruning we'll have the right data.
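
A toy version of such a store, content-addressed and reference counted; nothing
here is the proposed wire or disk format:

    import hashlib, json

    class ObjectStore:
        """Content-addressed store with reference counting, git-object style."""
        def __init__(self):
            self.objects = {}   # digest -> (refcount, serialized node)

        def put(self, node):
            blob = json.dumps(node, sort_keys=True).encode()
            digest = hashlib.sha256(blob).hexdigest()
            count, _ = self.objects.get(digest, (0, blob))
            self.objects[digest] = (count + 1, blob)
            return digest

        def release(self, digest):
            # drop one reference; at zero the data is pruned (it can always be
            # "unpruned" later by re-adding it, and the digest proves it's unchanged)
            count, blob = self.objects[digest]
            if count == 1:
                del self.objects[digest]
            else:
                self.objects[digest] = (count - 1, blob)

    store = ObjectStore()
    left = store.put({"leaf": "a", "spent": False})
    right = store.put({"leaf": "b", "spent": False})
    root = store.put({"children": [left, right]})
    store.release(left)     # pruned until someone re-adds (unprunes) the same data
    print(len(store.objects), "objects retained")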

Let's look at how the TXO MMR works in detail. Consider the following TXO MMR
with two txouts, which we'll call state #0:

     0
    / \
   a   b

If we add another entry we get state #1:

       1
      / \
     0   \
    / \   \
   a   b   c

Note how 100% of the state #0 data was reused in commitment #1. Let's
add two more entries to get state #2:

        2
       / \
      2   \
     / \   \
    /   \   \
   /     \   \
  0       2   \
 / \     / \   \
a   b   c   d   e

This time part of state #1 wasn't reused - it wasn't a perfect binary
tree - but we've still got a lot of re-use.

Now suppose state #2 is committed into the blockchain by the most recent block.
Future transactions attempting to spend outputs created as of state #2 are
obliged to prove that they are unspent; essentially they're forced to provide
part of the state #2 MMR data. This lets us prune that data, discarding it,
leaving us with only the bare minimum data we need to append new txouts to the
TXO MMR, the tips of the perfect binary trees ("mountains") within the MMR:

        2
       / \
      2   \
           \
            \
             \
              \
               \
                e

Note that we're glossing over some nuance here about exactly what data needs to
be kept; depending on the details of the implementation the only data we need
for nodes "2" and "e" may be their hash digest.

Adding three more txouts results in state #3:

                3
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        2               3
                       / \
                      /   \
                     /     \
                    3       3
                   / \     / \
                  e   f   g   h

Suppose recently created txout f is spent. We have all the data required to
update the MMR, giving us state #4. It modifies two inner nodes and one leaf
node:

                4
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        2               4
                       / \
                      /   \
                     /     \
                    4       3
                   / \     / \
                  e  (f)  g   h

If an archived txout is spent, the transaction is required to provide the merkle
path to the most recently committed TXO, in our case state #2. If txout b is
spent that means the transaction must provide the following data from state #2:

        2
       /
      2
     /
    /
   /
  0
   \
    b

We can add that data to our local knowledge of the TXO MMR, unpruning part of
it:

                4
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        2               4
       /               / \
      /               /   \
     /               /     \
    0               4       3
     \             / \     / \
      b           e  (f)  g   h

Remember, we haven't _modified_ state #4 yet; we just have more data about it.
When we mark txout b as spent we get state #5:

                5
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        5               4
       /               / \
      /               /   \
     /               /     \
    5               4       3
     \             / \     / \
     (b)          e  (f)  g   h

Secondly by now state #3 has been committed into the chain, and transactions
that want to spend txouts created as of state #3 must provide a TXO proof
consisting of state #3 data. The leaf nodes for outputs g and h, and the inner
node above them, are part of state #3, so we prune them:

                5
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        5               4
       /               /
      /               /
     /               /
    5               4
     \             / \
     (b)          e  (f)

Finally, let's put this all together, by spending txouts a, c, and g, and
creating three new txouts i, j, and k. State #3 was the most recently committed
state, so the transactions spending a and g are providing merkle paths up to
it. This includes part of the state #2 data:

                3
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        2               3
       / \               \
      /   \               \
     /     \               \
    0       2               3
   /       /               /
  a       c               g

After unpruning we have the following data for state #5:

                5
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        5               4
       / \             / \
      /   \           /   \
     /     \         /     \
    5       2       4       3
   / \     /       / \     /
  a  (b)  c       e  (f)  g

That's sufficient to mark the three outputs as spent and add the three new
txouts, resulting in state #6:

                      6
                     / \
                    /   \
                   /     \
                  /       \
                 /         \
                6           \
               / \           \
              /   \           \
             /     \           \
            /       \           \
           /         \           \
          /           \           \
         /             \           \
        6               6           \
       / \             / \           \
      /   \           /   \           6
     /     \         /     \         / \
    6       6       4       6       6   \
   / \     /       / \     /       / \   \
 (a) (b) (c)      e  (f) (g)      i   j   k

Again, state #4 related data can be pruned. In addition, depending on how the
STXO set is implemented, we may also be able to prune data related to spent txouts
after that state, including inner nodes where all txouts under them have been
spent (more on pruning spent inner nodes later).


### Consensus and Pruning

It's important to note that pruning behavior is consensus critical: a full node
that is missing data due to pruning it too soon will fall out of consensus, and
a miner that fails to include a merkle proof that is required by the consensus
is creating an invalid block. At the same time many full nodes will have
significantly more data on hand than the bare minimum so they can help wallets
make transactions spending old coins; implementations should strongly consider
separating the data that is, and isn't, strictly required for consensus.

A reasonable approach for the low-level cryptography may be to actually treat
the two cases differently, with the TXO commitments committing to what data
does and does not need to be kept on hand by the UTXO expiration rules. On the
other hand, leaving that uncommitted allows for certain types of soft-forks
where the protocol is changed to require more data than it previously did.


### Consensus Critical Storage Overheads

Only the UTXO and STXO sets need to be kept on fast random access storage.
Since STXO set entries can only be created by spending a UTXO - and are smaller
than a UTXO entry - we can guarantee that the peak size of the UTXO and STXO
sets combined will always be less than the peak size of the UTXO set alone in
the existing UTXO-only scheme (though the combined size can be temporarily
higher than what the UTXO set size alone would be when large numbers of
archived txouts are spent).

TXO journal entries and unpruned entries in the TXO MMR have log2(n) maximum
overhead per entry: a unique merkle path to a TXO commitment (by "unique" we
mean that no other entry shares data with it). On a reasonably fast system the
TXO journal will be flushed quickly, converting it into TXO MMR data; the TXO
journal will never be more than a few blocks in size.

Transactions spending non-archived txouts are not required to provide any TXO
commitment data; we must have that data on hand in the form of one TXO MMR
entry per UTXO. Once spent however the TXO MMR leaf node associated with that
non-archived txout can be immediately pruned - it's no longer in the UTXO set
so any attempt to spend it will fail; the data is now immutable and we'll never
need it again. Inner nodes in the TXO MMR can also be pruned if all leafs under
them are fully spent; detecting this is easy if the TXO MMR is a merkle-sum tree,
with each inner node committing to the sum of the unspent txouts under it.
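
Sketch of the merkle-sum idea (illustrative only): each inner node carries the
count of unspent txouts beneath it, so "prunable" is just a zero sum:

    import hashlib

    def sum_node(left, right):
        """Inner node of a merkle-sum tree: commits to its children and to the
        total number of unspent txouts underneath."""
        digest = hashlib.sha256(left["digest"] + right["digest"]).digest()
        return {"digest": digest, "unspent": left["unspent"] + right["unspent"]}

    def leaf_node(txid, spent):
        return {"digest": hashlib.sha256(txid).digest(), "unspent": 0 if spent else 1}

    a = leaf_node(b"txout-a", spent=True)
    b = leaf_node(b"txout-b", spent=True)
    c = leaf_node(b"txout-c", spent=False)
    d = leaf_node(b"txout-d", spent=True)

    ab, cd = sum_node(a, b), sum_node(c, d)
    print("ab prunable:", ab["unspent"] == 0)   # True: everything under it is spent
    print("cd prunable:", cd["unspent"] == 0)   # False: c is still unspent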

When an archived txout is spent the transaction is required to provide a merkle
path to the most recent TXO commitment. As shown above that path is sufficient
information to unprune the necessary nodes in the TXO MMR and apply the spend
immediately, reducing this case to the TXO journal size question (non-consensus
critical overhead is a different question, which we'll address in the next
section).

Taking all this into account the only significant storage overhead of our TXO
commitments scheme when compared to the status quo is the log2(n) merkle path
overhead; as long as less than 1/log2(n) of the UTXO set consists of active,
non-archived UTXO's, we've come out ahead, even in the unrealistic case where
all storage available is equally fast. In the real world that isn't yet the
case - even SSD's are significantly slower than RAM.


### Non-Consensus Critical Storage Overheads

Transactions spending archived txouts pose two challenges:

1) Obtaining up-to-date TXO commitment proofs

2) Updating those proofs as blocks are mined

The first challenge can be handled by specialized archival nodes, not unlike
how some nodes make transaction data available to wallets via bloom filters or
the Electrum protocol. There's a whole variety of options available, and the
the data can be easily sharded to scale horizontally; the data is
self-validating allowing horizontal scaling without trust.

While miners and relay nodes don't need to be concerned about the initial
commitment proof, updating that proof is another matter. If a node aggressively
prunes old versions of the TXO MMR as it calculates pending TXO commitments, it
won't have the data available to update the TXO commitment proof to be against
the next block, when that block is found; the child nodes of the TXO MMR tip
are guaranteed to have changed, yet aggressive pruning would have discarded that
data.

Relay nodes could ignore this problem if they simply accept the fact that
they'll only be able to fully relay the transaction once, when it is initially
broadcast, and won't be able to provide mempool functionality after the initial
relay. Modulo high-latency mixnets, this is probably acceptable; the author has
previously argued that relay nodes don't need a mempool² at all.

For a miner though not having the data necessary to update the proofs as blocks
are found means potentially losing out on transaction fees. So how much extra
data is necessary to make this a non-issue?

Since the TXO MMR is insertion ordered, spending a non-archived txout can only
invalidate the upper nodes of the archived txout's TXO MMR proof (if this
isn't clear, imagine a two-level scheme, with a per-block TXO MMRs, committed
by a master MMR for all blocks). The maximum number of relevant inner nodes
changed is log2(n) per block, so if there are n non-archival blocks between the
most recent TXO commitment and the pending TXO MMR tip, we have to store
log2(n)*n inner nodes - on the order of a few dozen MB even when n is a
(seemingly ridiculously high) year worth of blocks.

Archived txout spends on the other hand can invalidate TXO MMR proofs at any
level - consider the case of two adjacent txouts being spent. To guarantee
success requires storing full proofs. However, they're limited by the blocksize
limit, and additionally are expected to be relatively uncommon. For example, if
1% of 1MB blocks was archival spends, our hypothetical year long TXO commitment
delay is only a few hundred MB of data with low-IO-performance requirements.
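
Back-of-the-envelope check of those two numbers, assuming 32-byte hashes:

    from math import log2

    BLOCKS_PER_YEAR = 144 * 365          # ~52,560
    HASH_BYTES = 32

    # inner nodes invalidated by non-archival spends: ~log2(n) per block
    inner_nodes = BLOCKS_PER_YEAR * log2(BLOCKS_PER_YEAR)
    print(f"~{inner_nodes * HASH_BYTES / 1e6:.0f} MB of inner nodes")   # a few dozen MB

    # full proofs for archival spends: assume 1% of each 1 MB block
    archival_proof_bytes = BLOCKS_PER_YEAR * 0.01 * 1_000_000
    print(f"~{archival_proof_bytes / 1e6:.0f} MB of archival-spend proofs")  # a few hundred MB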


## Security Model

Of course, a TXO commitment delay of a year sounds ridiculous. Even the slowest
imaginable computer isn't going to need more than a few blocks of TXO
commitment delay to keep up ~100% of the time, and there's no reason why we
can't have the UTXO archive delay be significantly longer than the TXO
commitment delay.

However, as with UTXO commitments, TXO commitments raise issues with Bitcoin's
security model by allowing miners to profitably mine transactions
without bothering to validate prior history. At the extreme, if there was no
commitment delay at all, at the cost of a bit of extra network bandwidth
"full" nodes could operate and even mine blocks completely statelessly by
expecting all transactions to include "proof" that their inputs are unspent; a
TXO commitment proof for a commitment you haven't verified isn't a proof that a
transaction output is unspent, it's a proof that some miners claimed the txout
was unspent.

At one extreme, we could simply implement TXO commitments in a "virtual"
fashion, without miners actually including the TXO commitment digest in their
blocks at all. Full nodes would be forced to compute the commitment from
scratch, in the same way they are forced to compute the UTXO state, or total
work. Of course a full node operator who doesn't want to verify old history can
get a copy of the TXO state from a trusted source - no different from how you
could get a copy of the UTXO set from a trusted source.

A more pragmatic approach is to accept that people will do that anyway, and
instead assume that sufficiently old blocks are valid. But how old is
"sufficiently old"? First of all, if your full node implementation comes "from
the factory" with a reasonably up-to-date minimum accepted total-work
thresholdⁱ - in other words it won't accept a chain with less than that amount
of total work - it may be reasonable to assume any Sybil attacker with
sufficient hashing power to make a forked chain meeting that threshold with,
say, six months worth of blocks has enough hashing power to threaten the main
chain as well.

That leaves public attempts to falsify TXO commitments, done out in the open by
the majority of hashing power. In this circumstance the "assumed valid"
threshold determines how long the attack would have to go on before full nodes
start accepting the invalid chain, or at least, newly installed/recently reset
full nodes. The minimum age that we can "assume valid" is a tradeoff between
political/social/technical concerns; we probably want at least a few weeks to
guarantee the defenders a chance to organise themselves.

With this in mind, a longer-than-technically-necessary TXO commitment delayʲ
may help ensure that full node software actually validates some minimum number
of blocks out-of-the-box, without taking shortcuts. However this can be
achieved in a wide variety of ways, such as the author's prev-block-proof
proposal³, fraud proofs, or even a PoW with an inner loop dependent on
blockchain data. Like UTXO commitments, TXO commitments are also potentially
very useful in reducing the need for SPV wallet software to trust third parties
providing them with transaction data.

i) Checkpoints that reject any chain without a specific block are a more
common, if uglier, way of achieving this protection.

j) A good homework problem is to figure out how the TXO commitment could be
designed such that the delay could be reduced in a soft-fork.


## Further Work

While we've shown that TXO commitments certainly could be implemented without
increasing peak IO bandwidth/block validation latency significantly with the
delayed commitment approach, we're far from being certain that they should be
implemented this way (or at all).

1) Can a TXO commitment scheme be optimized sufficiently to be used directly
without a commitment delay? Obviously it'd be preferable to avoid all the above
complexity entirely.

2) Is it possible to use a metric other than age, e.g. priority? While this
complicates the pruning logic, it could use the UTXO set space more
efficiently, especially if your goal is to prioritise bitcoin value-transfer
over other uses (though if "normal" wallets nearly never need to use TXO
commitments proofs to spend outputs, the infrastructure to actually do this may
rot).

3) Should UTXO archiving be based on a fixed size UTXO set, rather than an
age/priority/etc. threshold?

4) By fixing the problem (or possibly just "fixing" the problem) are we
encouraging/legitimising blockchain use-cases other than BTC value transfer?
Should we?

5) Instead of TXO commitment proofs counting towards the blocksize limit, can
we use a different miner fairness/decentralization metric/incentive? For
instance it might be reasonable for the TXO commitment proof size to be
discounted, or ignored entirely, if a proof-of-propagation scheme (e.g.
thinblocks) is used to ensure all miners have received the proof in advance.

6) How does this interact with fraud proofs? Obviously furthering dependency on
non-cryptographically-committed STXO/UTXO databases is incompatible with the
modularized validation approach to implementing fraud proofs.


# References

1) "Merkle Mountain Ranges",
Peter Todd, OpenTimestamps, Mar 18 2013,
https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md

2) "Do we really need a mempool? (for relay nodes)",
Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009479.html

3) "Segregated witnesses and validationless mining",
Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html

--
https://petertodd.org 'peter'[:-1]@petertodd.org
submitted by Godballz to CryptoTechnology [link] [comments]

Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, SETI@home, Folding@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or SETI@home, Folding@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.
The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).
No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage.
Proven, mature sharding architectures like the ones powering Google Search, Folding@home, SETI@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.
Longer Summary:
People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you.
Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, Folding@home, SETI@home, or PrimeGrid.
But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else.
Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture.
This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure.
The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
  • Core / Blockstream erroneously insist that the entire blockchain must always be downloadable, storable and verifiable on a single node, as dictated by the least powerful nodes in the system (eg, u/bitusher in Costa Rica, or u/Luke-Jr in the underserved backwoods of Florida); and
  • Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees.
These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, Folding@home, SETI@home, or PrimeGrid and other successful distributed sharding-based projects have already been successfully doing for years.
Details:
Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers.
The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain).
Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
BUIP024: Extension Blocks with Address Sharding
https://np.reddit.com/btc/comments/54afm7/buip024_extension_blocks_with_address_sharding/
Why aren't we as a community talking about Sharding as a scaling solution?
https://np.reddit.com/Bitcoin/comments/3u1m36/why_arent_we_as_a_community_talking_about/
(There are some detailed, partially encouraging comments from u/petertodd in that thread.)
[Brainstorming] Could Bitcoin ever scale like BitTorrent, using something like "mempool sharding"?
https://np.reddit.com/btc/comments/3v070a/brainstorming_could_bitcoin_ever_scale_like/
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
https://np.reddit.com/btc/comments/3wtwa7/brainstorming_lets_fork_smarter_not_harder_can_we/
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
https://np.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/
Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest, and safest approach towards massive on-chain scaling.
A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"
A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system.
Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards.
(Maybe you can already see where this is going...)
Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on.
Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.)
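As a toy illustration of the "instant sharding" convention just described, the following Python sketch assigns an address to one of 58 shards by its final base58 character and checks that sender and receiver fall in the same shard. The function names and the placeholder strings are purely illustrative assumptions, not part of any proposal.

```python
# The standard base58 alphabet used by Bitcoin addresses (58 characters).
BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def shard_of(address: str) -> int:
    """Assign an address to one of 58 shards by its final base58 character."""
    return BASE58_ALPHABET.index(address[-1])

def same_shard(send_from: str, send_to: str) -> bool:
    """The 'convention' from the thought experiment: a transaction follows
    this scheme only if both addresses land in the same shard."""
    return shard_of(send_from) == shard_of(send_to)

# Placeholder strings, not real addresses - shard_of only inspects the last character.
assert same_shard("placeholder-address-ending-in-Q", "another-placeholder-Q")
```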
Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
  • A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code (see the sketch after this list) - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction. The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
  • If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and also the mining hash power on each of the shards would end up being roughly 1/58 of what it is now. In general, many people might agree that decreased mining rewards would actually be a good thing (spreading out mining rewards among more people, instead of the current problems where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough needed to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
  • This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
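As promised in the first bullet above, here is one possible way to assign a whole transaction, with multiple "from" and "to" addresses, to a single shard deterministically. BUIP024 defines its own address-sharding rule, so treat the hash-based criterion and the names below as illustrative assumptions only.

```python
import hashlib

def tx_shard(from_addrs: list[str], to_addrs: list[str], n_shards: int = 58) -> int:
    """One possible deterministic rule: hash the sorted set of all addresses
    involved in the transaction and reduce modulo the shard count, so every
    node computes the same shard with no coordination."""
    material = "|".join(sorted(set(from_addrs) | set(to_addrs))).encode()
    digest = hashlib.sha256(material).digest()
    return int.from_bytes(digest[:4], "big") % n_shards

# Every node gets the same answer for the same transaction.
assert tx_shard(["addr-a", "addr-b"], ["addr-c"]) == tx_shard(["addr-b", "addr-a"], ["addr-c"])
```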
(Also, the fact that a simplified address-based sharding mechanism can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.)
Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) - would be easy to understand and do.
Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees.
Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.
The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.
Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC.
Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe.
We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections.
Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel and then the resulting (sub-)solutions are recomposed to provide the overall search results.
It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain.
Some people might object that those systems are different from Bitcoin.
But we should remember that preventing double-spends (the main thing that Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems.
The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
  • an appropriate "decompose" operation,
  • an appropriate "recompose" operation,
  • the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors.
This allows you to decompose the problem into tiny sub-problems, solving each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency.
The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
  • (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable.
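The (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE) property can be checked mechanically on a toy version of the double-spend search. The sketch below (plain Python, hypothetical names) shards a list of spent outpoints so that identical outpoints always land in the same shard, solves each shard independently, and recomposes the per-shard answers.

```python
from collections import Counter
from typing import Iterable, List, Set

def solve(spends: Iterable[str]) -> Set[str]:
    """Single-machine reference answer: outpoints that appear more than once."""
    counts = Counter(spends)
    return {op for op, n in counts.items() if n > 1}

def decompose(spends: Iterable[str], n_shards: int) -> List[List[str]]:
    # The same outpoint always lands in the same shard, so a duplicate can
    # never be split across two shards. (Built-in hash() is stable within
    # one Python run, which is all this demo needs.)
    shards = [[] for _ in range(n_shards)]
    for op in spends:
        shards[hash(op) % n_shards].append(op)
    return shards

def sub_solve(shard: List[str]) -> Set[str]:
    return solve(shard)                  # each worker solves its own small piece

def recompose(partials: Iterable[Set[str]]) -> Set[str]:
    result = set()
    for partial in partials:
        result |= partial                # union of the per-shard answers
    return result

spends = ["a:0", "b:1", "a:0", "c:2", "d:0", "c:2"]
assert recompose(map(sub_solve, decompose(spends, 4))) == solve(spends) == {"a:0", "c:2"}
```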
In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any "original work" to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many examples of successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture).
That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.
ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work.
Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Gregory Meredith presenting some operators for scaling Ethereum - in just a half page of code:
https://youtu.be/uzahKc_ukfM?t=1101
Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it.
The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing).
The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth.
This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline).
Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have many more nodes than the Bitcoin network.
The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance).
Folding@home and SETI@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands of these BOINC-based projects, which are all based on sharding the data set.
Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation.
In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home
Using distributed computing, SETI@home sends the millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources.
Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, it is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
Data is merged into a database using SETI@home computers in Berkeley.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
Active users: 121,780 (January 2015)
PrimeGrid
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Active users 8,382 (March 2016)
MapReduce
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
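The students-by-first-name example in the paragraph above can be written out in a few lines. This is only a single-process illustration of the Map()/Reduce() pattern, with hypothetical function names, not Google's actual framework.

```python
from collections import defaultdict
from itertools import chain

students = ["Alice", "Bob", "Alice", "Carol", "Bob", "Alice"]

def map_fn(name: str):
    # Map(): emit (key, value) pairs - here, one queue entry per student name.
    yield (name, 1)

def reduce_fn(name: str, values):
    # Reduce(): summarize each queue - here, count its entries.
    return (name, sum(values))

# Shuffle step: group the mapped pairs by key, as the framework would.
groups = defaultdict(list)
for key, value in chain.from_iterable(map(map_fn, students)):
    groups[key].append(value)

frequencies = dict(reduce_fn(k, vs) for k, vs in groups.items())
print(frequencies)   # {'Alice': 3, 'Bob': 2, 'Carol': 1}
```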
How can we go about developing sharding approaches for Bitcoin?
We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines.
The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution.
BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions.
And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements.
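The key point about distributed trustless Merkle trees is that a node holding only a branch (an inclusion proof) plus the root can verify membership without storing the whole tree; a shard holding the relevant subtree can serve the branch. A minimal sketch, assuming Bitcoin-style double-SHA-256 and hypothetical function names:

```python
import hashlib

def h(data: bytes) -> bytes:
    # Bitcoin-style double SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_branch(leaf: bytes, branch: list, root: bytes) -> bool:
    """Check an inclusion proof: `branch` is the list of (sibling_hash, side)
    pairs from leaf to root ('L' if the sibling sits on the left, 'R' if on
    the right). Only the proof and the root are needed, not the full tree."""
    node = h(leaf)
    for sibling, side in branch:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

leaves = [b"tx-a", b"tx-b"]
root = merkle_root(leaves)
# For a two-leaf tree, the proof for "tx-a" is just its sibling's hash on the right.
assert verify_branch(b"tx-a", [(h(b"tx-b"), "R")], root)
```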
So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin?
First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed.
In the case of Bitcoin, the problem involves:
  • sequentializing (serializing) APPEND operations to a blockchain data structure
  • in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"?
Yes we can!
Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and Folding@home, SETI@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s).
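Viewed this way, the double-spend check becomes a routed lookup: a deterministic function maps each outpoint to exactly one shard, and only that shard's slice of the spent-output index has to be searched. A minimal sketch under those assumptions (the shard count, routing rule, and names are illustrative):

```python
import hashlib

N_SHARDS = 58

def shard_for(outpoint: str) -> int:
    """Deterministically route an outpoint ("txid:index") to one shard."""
    return int.from_bytes(hashlib.sha256(outpoint.encode()).digest()[:4], "big") % N_SHARDS

# Each entry stands in for a machine that stores only its slice of spent outpoints.
spent_index = [set() for _ in range(N_SHARDS)]

def is_double_spend(outpoint: str) -> bool:
    # Only the single responsible shard is searched - not the whole chain.
    return outpoint in spent_index[shard_for(outpoint)]

def record_spend(outpoint: str) -> None:
    spent_index[shard_for(outpoint)].add(outpoint)

record_spend("deadbeef:0")
assert is_double_spend("deadbeef:0") and not is_double_spend("deadbeef:1")
```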
So, let's imagine how a possible future sharding-based architecture of Bitcoin might look.
We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine.
Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) create a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them.
This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin.
Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines.
This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees.
In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.
Conclusion
Bitcoin isn't the only project in the world which is permissionless and distributed.
Other projects (BOINC-based permissionless decentralized Folding@home, SETI@home, and PrimeGrid - as well as Google's (permissioned centralized) MapReduce-based search engine) have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, many more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Although certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based scaling approaches to massive on-chain scaling (perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or even perhaps because their owners such as AXA and PwC don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth), emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - and so we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.
submitted by ydtm to btc

Related videos: Bitcoin 51% Attack - Clearly Explained; BTC Transaction Accelerator; Bitcoin Transaction Accelerator: Speed Up Bitcoin Transaction; Chris Pacia - Bitcoin Cash and Open Bazaar; Bitcoin Transaction Accelerator

Bitcoin manages the double-spending problem by implementing a confirmation mechanism and maintaining a universal ledger (called the "blockchain"), which records a chronologically-ordered, time-stamped history of transactions from the very start of its operation in 2009. In computer science, the double-spending problem refers to the risk that digital money, being just data, could be spent more than once: the value of a currency unit ends up split between two indistinguishable copies, which can be considered a market failure, since a currency system in which value comes apart from the currency itself is useless. In 2009, someone writing under the alias of Satoshi Nakamoto released the Bitcoin whitepaper, which set out to solve exactly this problem: how can double-spending be prevented without a central authority acting as arbiter to each transaction? To be fair, this problem had been on researchers' minds for some time. Later implementations have continued to build tooling around double-spend detection; for example, Bitcoin SV v1.0.6 (release code name "Push") added functions to provide and verify Merkle proofs, ZeroMQ notifications on double-spend detection, and p2p broadcast of double-spend detection for network-wide awareness, alongside mAPI v1.2 (push-based callback notifications for Merkle proofs and double spends) and SPV Channels v1.0.0.


Bitcoin 51% Attack - Clearly Explained

Responding to this "Keyword: Crypto" podcast episode: https://anchor.fm/keywordcrypto/episodes/Mario-Gibey-DESTROYS-NANO-ekhfsa (re: Mario Gibney DESTROYS $NA...). Related videos: Bitcoin 51% Attack - Clearly Explained; How To Double Spend Your Stuck Bitcoin Transaction with FSS-RBF; Speed Up My Bitcoin Transaction - How It Works; Panel Discussion - Double-spend Proofs Versus Double-spend Relay; Lightning Network vs. Bitcoin Cash; Nicolas Courtois On Hash Rate 51 and Protection Against Double Spending In Bitcoin and Other Crypto.
