Stuff The Internet Says On Scalability For January 25th, 2019

Wake up! It's HighScalability time:

My god, it's full of synapses! (3D map of a fly's brain)

Do you like this sort of Stuff? Please go to Patreon and do what comes naturally. Need cloud? Stand under Explain the Cloud Like I'm 10 (35 nearly 5 star reviews).

  • 10%: Netflix captured screen time in US; 8.3 million: concurrent Fortnite players;  773 Million: Record "Collection #1" Data Breach; 284M+: Reddit monthly views; 1 billion: people impacted by data breaches; 1st: seed germinated on the moon; 4x: k8s api growth from v1 to v1.4; 7x: faster PyPy python; 9B: gallons of water/day for lawns; 2.6 terabytes: largest data leak in history; $14B: serverless market by 2024; 100 million: Alexas sold; 51%: mobile games share of global market; 160 TB: total data transfer during re:Invent 2018; 100+ million: stackoverflow users; 40%: increase in median data usage; 3%: drop in Comcast's network spending; 1 billion: tweets about gaming in 2018; 53%: investment of Baidu, Alibaba, and Tencent in China's 190 major AI companies; 104,954: hard drives used by Backblaze to store 750 petabytes of data; 9,100: IBM US patents; 1.4bn: active Apple devices; 

  • Quotable Quotes:
    • Vincent Deluard: If technology is everywhere, the tech sector no longer exists. If the tech sector no longer exists, its premium is no longer justified. 
    • Brenon Daly: There’s a new exit off Sand Hill Road that’s proving increasingly popular for startups. Rather than following the well-worn path that leads into another venture portfolio, startups are taking an unexpected turn into private equity (PE) holdings at a record rate. For the first time in history, a VC-backed startup in 2018 was more likely to sell to a PE buyer than a fellow VC-backed company
    • Jason Lee:  tech stocks really do look like goners. Publicly traded companies that are classified as “tech” now trade at one of the smallest premiums in history, according to a recent JP Morgan analyst note. The most famous of these companies—the so-called faangs, of Facebook, Apple, Amazon, Netflix, and Google—have seen their price-earnings ratios collapse by more than 60 percent in the past two years
    • Rob Pike (1984): A collection of impressions after doing a week’s work (rather than demo reception) at PARC. Enough is known about the good stuff at PARC that I will concentrate on the bad stuff. The following may therefore leave too negative an impression, and I apologize. Nonetheless...A few years ago, PARC probably had most of the good ideas. But I don’t think they ran far enough with them, and they didn’t take in enough new ones. The Smalltalk group is astonishingly insular, almost childlike, but is just now opening up, looking at other systems with open-eyed curiosity and fascination. The people there are absorbing much, and I think the next generation of Smalltalk will be much more modern, something I would find always comfortable. However, it will probably be another model Earth. 
    • Sascha Segan: The processor in the Samsung Galaxy S10 performs better than the latest iPhones on most measures.
    • George Dyson: The next revolution will be the rise of analog systems that can no longer be mastered by digital programming. For those who sought to control nature through programmable machines, it responds by allowing us to build machines whose nature is that they can no longer be controlled by programs.
    • @TServerless: We sat with a solution architect, apparently they are aware of the latency issue and suggested to ditch api gw and build our own solution. Right now api gw is good enough for our poc, but definitely not for our production load. I'll be more than happy to talk about it in person
    • Rajesh Menon: If machines are going to be seeing these images and video more than humans, then why don’t we think about redesigning the cameras purely for machines? ... Because like a fly’s eye, what matters in the AI world isn’t so much the high-quality of a single data source but rather the proliferation of data sources
    • Dieter Bohn: As in any platform war, the numbers come out front and center, and Amazon has the lead on many of those numbers: more than 150 products with Alexa built in, more than 28,000 smart home devices that work with Alexa made by more than 4,500 different manufacturers, and over 70,000 Alexa skills. The numbers for Google Assistant were lower across the board the last time we heard them, but it’s likely Google will use CES to check in with new ones.
    • Mark Fontecchio: VC has turned into an industry characterized by ‘fewer, but bigger.’ That’s true in funding as well as exits. The M&A KnowledgeBase tallied the sale of just 603 startups in 2018, the second-fewest exits in the past half-decade. Proceeds from those deals, however, smashed all records. Last year’s total of $83bn in announced or estimated deal value almost eclipsed the total for the three previous years combined.
    • Brewer: We are at a spot where we can move past seeing the wafer as an incremental piece. In fact, it’s worse than that. Some people make microprocessors. Some people make memories. We’re going to look back on that someday at what a simplistic, archaic view of the world that is. I hope AI is the disruptive piece that will help change our industry.
    • @r0wdy_: I find this to be very true in infosec. There's no shortage of people who would make great hires. But: A) We want to pay them sh*t or B) We build experience walls because we don't want to train
    • @etherealmind: "In 3Q18, for the first time, qtrly vendor revenues from IT infrastructure product sales into cloud environments surpassed revenues from sales into traditional IT environments, accounting for 50.9% of the tot worldwide IT infrastructure vendor revenues, up 43.6% a yr ago" IDC
    • @allenholub: "Agility at scale" is nuts. Scale the work down, not the process up. I've never seen an Agile process that was "scaled up" retain its agility. Can very large systems be built in an agile way? Sure. Should they be? Probably not. Build smaller cooperative systems instead.
    • rsweeney21: Ex-Netflix here. When I was there we had two metrics we cared about: hours watched and retention. Whenever we introduced a new feature we carefully measured its impact on those two metrics. Ultimately, retention was the most important metric, but hours watched was a leading indicator for retention. I always had a bit of a moral dilemma working at Netflix because my goal was to get people to spend more time watching Netflix, which I didn't really think was good for humanity.
    • neunhoef: ArangoDB is native multi-model, which means it is a document store (JSON), a graph database and a key/value store, all in one engine and with a uniform query language which supports all three data models and allows mixing them, even in a single query. Furthermore, ArangoDB is designed as a fault-tolerant distributed and scalable system. In addition it is extensible with user defined JavaScript code running in a sandbox in the database server. Finally, we do our best to make this distributed system devops friendly with good tooling and k8s integration.
    • tonyjstark: I don't understand the run for GraphQL everywhere, does everybody query sparse and deep nested data on their website? From my experience, Apollo works well except when it doesn't and then you have a lot of magic going on. Zeplin works quite well for our team and creates a nice connection to our designers. Storybook on the other hand not so much. At first we developers used it, then we had to update some things for Apollo but Storybook was not ready for that. Now everything runs again but nobody uses Storybook actively anymore...I have the feeling that the software industry is often driven by personal preference instead of sane decisions. I see projects that use micro services without any reason, using React for static content, dockerizing everything to a ridiculous amount, K8s because why not. All of that because it's interesting for the developer not because it's good for the user.
    • Michael Feldman: We expect Rome will turn up in a growing number of HPC systems, large and small, throughout 2019. Unless AMD has a major hiccup, 2019 is going to be a watershed year for the company in the high end of the market.
    • @ben11kehoe: Sure, AWS Lambda is cgi-bin...on a fleet of servers that scales for you, with an OS you never need to patch, with no network attack surface, where each process gets its own entire compute env, billed only for the milliseconds it runs. The execution model isn't the new thing.
    • @nathankpeck: I generally expect under 100ms from the time that a web or mobile client sends the request (after connection and SSL handshake already complete) until the response comes back. This generally only gives code about 20-30ms once you count in the network latency from client to server
    • @Mr_DrinksOnMe: A programmer was arrested for writing unreadable code. He refused to comment.
    • Dropbox: During this time, the size of the server fleet within the internal Dropbox cloud increased by more than 15%. At the same time, the average percentage of machines in a non-functional state or a state requiring repair remained below 0.5%. Finally, during this time period, the headcount of Operations Engineers remained the same.
    • Jonathan Swanson: If you can invent a new business model that monetizes better than your competitors, you will win.
    • Mithuna Thottethodi: We submit that GPGPUs have a fundamental disadvantage over the TPU for these workloads. Further, we argue that the systolic array (first proposed by IBM for DNNs) is a compute unit organization specifically for capturing far more reuse than the GPGPU.
    • Idreos et al: Our intuition is that most designs (and even inventions) are about combining a small set of fundamental concepts in different ways or tunings… Our vision is to build the periodic table of data structures so we can express their massive design space. We take the first step in the paper, presenting a set of first principles that can synthesize an order of magnitude more data structure designs than what has been published in the literature...The quest for the first principles of data structures needs to continue to find the primitives for additional significant classes of designs including updates, compression, concurrency, adaptivity, graphs, spatial data, version control management, and replication.
    • @CodeWisdom: "We build our computer (systems) the way we build our cities: over time, without a plan, on top of ruins." - Ellen Ullman
    • Viviane Callier: Ultimately, however, the incentive to move towards data openness is likely to come from within the research community itself. “It really is just about changing expectations for what people expect of you,” says Broad’s Carpenter. “It’s embarrassing if your lab is the only one that isn’t sharing data.”
    • Ashkan Soltani: None of [the foods in the grocery store] can include arsenic, [but] we're not required to test our products. That's kind of the online regime that we have for digital safety and digital security.
    • sonic0002: Redis uses a “hash-slot” approach, which can be considered as a mix of server-side partitioning and client-side partitioning. It achieves certain advantages over the 3 traditional approaches. The whole range of possible hash codes of keys are divided into 16384 slots. These slots are automatically assigned to different master nodes. The client could get the allocation information easily by using “CLUSTER SLOTS” command (Redis Labs). 
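The slot math in that quote is simple enough to sketch. Below is a minimal, illustrative Python version of Redis Cluster's key-to-slot mapping (CRC16-CCITT/XModem modulo 16384, plus the {hash tag} rule from the Redis Cluster specification); it is a sketch for understanding, not the production implementation:

```python
def crc16(data: bytes) -> int:
    # CRC16-CCITT (XModem) as used by Redis Cluster:
    # polynomial 0x1021, initial value 0x0000, no reflection.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    # Hash-tag rule: if the key contains a non-empty {...} section,
    # only the text inside the braces is hashed, which lets clients
    # force related keys onto the same slot (and thus the same node).
    start = key.find(b'{')
    if start != -1:
        end = key.find(b'}', start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key) % 16384
```

Because the slot assignment is a pure function of the key, any client can compute the owning slot locally and only needs the slot-to-node map (via CLUSTER SLOTS) to route requests.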
    • Susan Rambo: Reliable functional safety that spans 18 to 20 years of service in harsh environments, or under constant use with autonomous taxis or trucks, is a massive undertaking that will require engineering advances in areas such as artificial intelligence, LiDAR, radar, and vehicle-to-vehicle communication. And it will require management of a global supply chain that is populated by startups, chipmakers with no automotive experience, and automotive suppliers with little experience in advanced electronics. At this point no one knows exactly how reliable a 7nm AI system will be, or how effectively it will fail over to another system in case of a malfunction. In fact, no one is even sure what are the right questions to ask during testing.
    • Memory Guy: Current thinking shows MRAM replacing DRAM.  There are two reasons for this: First, MRAM is very fast, perhaps equally as fast as DRAM, whereas most other technologies are not.  Second, the selectors for both MRAM and DRAM are three-terminal devices, so the die sizes for MRAM and DRAM are similar for the same process geometry.  Once DRAM reaches its scaling limit then MRAM should be able to travel down the same cost decline curve that DRAM was originally following.
    • @kellabyte: What I’ve learned is even Intel Optane 3D XPoint can’t overcome really small fsync() calls. Kernel just can’t do it. Where it shines is fairly shallow and medium depth queues of random writes. I haven’t added mixing reads and writes but Optane should REALLY thrive at that.
    • @thedawgyg: If you look at the stats, I finished in 6th place over all on @Hacker0x01 for 2018 calendar year. There are many people that make a lot more than the 30k stated, most of our money comes from private programs
    • @kevohara: Here's a typical #AWS bill for a startup building their MVP primarily on #serverless technology like Lambda. This represents numerous production-ready APIs across three separate environments plus multiple static sites, databases, encryption, messaging, etc. The biggest savings is in compute. The cost for this in traditional cloud infrastructure would be several hundreds of dollars/mo. Several hundred dollars may not seem like a lot, but it's painful to watch the cash burn on unused infrastructure when you are pre-revenue. It all adds up in the end. This doesn't mean that #serverless is a fit for every startup, but it's definitely the ideal cost model for pre-revenue businesses and should at least be considered.
    • Stackoverflow: The biggest expansion in our business this year by far was on Teams and Enterprise. The product idea is simple: give organizations a private version of Stack Overflow Q&A just for their team.
    • @davidgerard: The Rise of Netflix Competitors Has Pushed Consumers Back Toward Piracy: BitTorrent usage has bounced back because there's too many streaming services, and too much exclusive content.  
    • @evolvable: On messaging in #microservices: • Message brokers don’t “decouple” your services. • They remove temporal coupling - the need for downstream service to be available when upstream service  acts. • You still have *domain coupling*: one service needs to speak the other’s language.
    • Matt Schrage: The medium becomes its Message, its ideal, its promise. The Message of a novel medium is not immediately apparent; though it is endogenous, it is not discernible right away.  Like an embryo that from inception contains the promise of adulthood, the medium contains the germ of its Message.
    • Paul Johnston: So maybe start to think about a serverless backend application as a combination of two things: Serverless compute and serverless data. And of the two, I would suggest that it’s more important to get your serverless data right than your compute.
    • @rafaelcodes: ThreadLocals should really be auto-closable. To avoid leaks, one should generally clear any references in a finally block unless one controls the life-cycle of the thread, which is often not the case and the reason to use thread-locals to begin with.
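The quote is about Java's ThreadLocal, but the same hazard exists wherever worker threads are pooled and reused. A minimal Python analogue of the "clear it in a finally block" discipline, using threading.local under a thread pool (all names here are illustrative):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

_request_ctx = threading.local()  # per-thread scratch state

def handle(request_id: str) -> str:
    # Pool threads are reused across tasks, so anything stashed in
    # thread-local state survives into later, unrelated tasks
    # unless it is explicitly cleared.
    _request_ctx.request_id = request_id
    try:
        return f"handled {_request_ctx.request_id}"
    finally:
        # The discipline from the quote: always drop the references,
        # because the task does not control the worker thread's life cycle.
        _request_ctx.__dict__.clear()

with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(handle, ["a", "b", "c", "d"]))
```

Without the finally block, a task that forgets to clean up leaks its state (and anything it references) into whichever task the pool schedules on that thread next.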
    • Joe: it looks like the collective downmarket moves of HPC hardware in the form of accelerators, along with tool sets that don’t necessarily require you learn the lower level programming (CUDA, OpenMP/OpenCL/OpenACC, MPI, …), enable many (more) people to be productive, faster. Add NVMe to the mix, and you have very high performance local storage, which you can use on your local machines. Which means that you can do effective work on your desktop.
    • Charles Fitzgerald: As I keep repeating, CAPEX is both a prerequisite to play in the big boy cloud and confirmation of customer success. Both IBM and Oracle are tens of billions of dollars in cloud infrastructure CAPEX behind Amazon, Google, and Microsoft. Oracle’s spending has at least ticked up, but their spending is not enough to keep pace, much less to have any hope of catching up to the infrastructure of the big three...TL;DR: Thanks for playing IBM and Oracle.
    • @johncutlefish: Kind of funny that we use the word “build” w/software product development. I wonder how that came about.
    • @MarcJBrooker: I disagree with this in some ways. In practice, having multiple deployment units and clear APIs does decouple teams in technical choices, scale, and deployment practices. As orgs and services scale, the inside becomes the outside, and that's hard to do with a monolith.
    • Geoff Huston: The BGP update activity is growing in both the IPv4 and IPv6 domains, but the growth levels are well below the growth in the number of routed prefixes. The ‘clustered’ nature of the Internet, where the diameter of the growing network is kept constant while the density of the network increases, has implied that the dynamic behaviour of BGP, as measured by the average time to reach convergence, has remained very stable in IPv4 and bounded by an upper limit in IPv6.
    • Jon M. Jachimowicz: First, we find that defaults in consumer domains are more effective and in environmental domains are less effective. Second, we find that defaults are more effective when they operate through endorsement (defaults that are seen as conveying what the choice architect thinks the decision-maker should do) or endowment (defaults that are seen as reflecting the status quo).
    • Carl Hewitt: Deep correlation classifiers don't understand concepts. What are they lacking? Lacking in modularity. Doesn't have any kind of argumentation based on evidence. The proposal is to put deep correlation classifiers into MIRO (massive inconsistent robust ontology), to train them concurrently, to monitor the results, and debug them against each other.
    • @matthew_d_green: Maybe when/if the government opens again, we should scrape the NIST and CSRC websites, put all those publications somewhere public. It’s worrying that *every single US cryptography standard* is now unavailable to practitioners.
    • Chris Coyier: Over the last few years, we’ve started to see a significant shift in the role of the front-end developer. As applications have become increasingly JavaScript-heavy there has been a necessity for front-end engineers to understand and practice architectural principles that were traditionally in the domain of back-end developers, such as API design and data modeling.
    • Hillel Wayne: The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast majority of distributed systems outages could have been prevented by slightly-more-comprehensive testing. 
    • @perrymetzger: Rust doesn't have a garbage collector, but the advanced type system checks to make sure that memory and other resources aren't misused. This is nearly unique in production programming languages. It is even safe to share bits of memory on the stack in multithreaded code, because the type system will make sure you didn't make any mistakes leading to concurrency errors. This is _very_ useful, and _very_ unusual. 3/
    • Dropbox: The existing fabric design can scale close to 500 racks while operating in a non-blocking fashion. Making use of an ASIC that offers higher port density, say 64X100G, the existing fabric design can be scaled to support 4x the rack capacity, again non-blocking! As merchant silicon continues to evolve and produce denser chips, having much denser fabrics is very well a possibility. To accommodate relatively higher rack counts, a fabric may span multiple physical suites and outgrow the maximum supported distance on a QSFP-100G-SR4 optic which would require the need to explore potential transceivers: Parallel single mode 4-channel (PSM4) or coarse wavelength division multiplexing four-lane (CWDM4) or any future specifications to achieve connectivity across a fabric spanning physically separated facilities.
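The "4x the rack capacity" figure follows directly from how a non-blocking two-tier Clos fabric scales with switch radix; Dropbox's real fabric has more tiers, but the back-of-the-envelope version is instructive. A sketch under assumed figures (a 32x100G ASIC for the existing design versus the 64x100G part mentioned in the quote):

```python
def edge_ports(radix: int) -> int:
    """Edge capacity of an idealized non-blocking two-tier Clos fabric.

    Each of up to `radix` leaf switches splits its ports evenly,
    half down to racks and half up to the spine layer, so the fabric
    terminates radix * radix / 2 edge-facing ports in total.
    """
    return radix * radix // 2

# Doubling the ASIC port density quadruples edge capacity, since the
# capacity grows with the square of the radix.
for radix in (32, 64):
    print(f"{radix}-port ASIC -> {edge_ports(radix)} edge ports")
```

With a 32-port ASIC the idealized fabric terminates 512 edge ports (consistent with "close to 500 racks" at one uplink per rack), and moving to a 64-port ASIC yields 2048, exactly the 4x in the quote.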
    • Darxide-: I have found my best hires have come from giving code review tests as opposed to programming challenges. Especially senior hires. Write some sh*t code with common gotchas and some hidden gotchas (race conditions etc etc) in the language they are interviewing for. Have them code review it. That shows you 3 things... do they know the language well enough to find the issues, how much attention to detail do they have and how good are they at articulating the issues to a lower level developer. As a senior that's a large amount of the job.
    • Ahmed Mahdy: Lucy was not alone. Another team’s service also lives there – and theirs had a memory leak around the same time the 99.9% latency hiked, as the graph shows. When there’s a memory leak, and the kernel runs low on available physical memory, it tries to free up memory by paging other services. In this scenario, Lucy was the victim and most of its working set was written to the page file.
    • Sabine Hossenfelder: This situation is unprecedented in particle physics. The only reliable prediction we currently have for physics beyond the standard model is that we should eventually see effects of quantum gravity. But for that we would have to reach energies 13 orders of magnitude higher than what even the next collider would deliver. It’s way out of reach. The only thing we can reliably say a next larger collider will do is measure more precisely the properties of the already known fundamental particles. That it may tell us something about dark matter, or dark energy, or the matter-antimatter symmetry is a hope, not a prediction. 
    • @QuinnyPig: The outage sequence: 1. I get alerted for something weird with my site. 2. Twitter lights up with reports of broken things.  … 86. A postcard arrives from my mother telling me the internet is acting weird.  87. @awscloud updates their status page.
    • rseymour: I wouldn’t say HPC is dying, it’s just diffusing from “hard physics” (mostly rote floating point vectorized calculations) into the worlds of bio and data science that have different needs and are often just as much about data processing as anything. Physics like astronomy and particle physics have been dealing with scads of data already and have their own 20-30 year old data formats and processes. 
    • @rbranson: Either I'm missing something (a large possibility) or it would appear that explicit row locks aren't sufficient to guarantee linearizable commits in MySQL:
    • @danveloper: If I ever recover from this last week of work, I'll tell y'all about how a Kafka outage caused a Kubernetes cluster to self-destruct, resulting in the production of 10s of 1,000s of nodes into a Consul gossip mesh that ended up taking out an entire infrastructure. Spoiler: the moral of the story is, "you should just run " doesn't work at large production scale. Sorry in advance to all the folks who thought this was a "Kafka fails on Kubernetes" story. I'm afraid it's not that simple...Something I couldn’t articulate in this write-up without completely de-railing the post is how we were able to rebalance the workloads from the affected Kubernetes clusters to smaller, less-stressed clusters.
    • @pdehlke: "Use boring technologies" "Run all your production systems on kubernetes! It's the future!" Pick one. People have to understand that k8s is still, to steal a phrase from @mipsytipsy, green as the f*cking shire. 
    • @betsythemuffin: Performance work is, hands-down, the single most impostor-syndrome-inducing area of development work I do. I'm pretty good at remediating perf issues within normal business applications, but it's hard to remember that. Because of how perf work has traditionally been taught. 
    • @JeffDStephens: I’ve had similar thoughts. AWS is killing entire lines of business. Yet, each service they lay on top of the cloud adds another layer to decipher. The windows sys admin today will morph into the AWS admin tomorrow.
    • @stevesi: PS/ This is a perfect situation where the first movers end up disadvantaged in the long run. It doesn't pay to be first because you  work through all the problems (first as a supplier or customer) and get all negatives. Second mover gets to have a clean polished "first" effort.
    • Tal Bereznitskey: Serverless, because we like to sleep at night.
    • @mipsytipsy: Dashboards and preaggregates are very useful for other things!  But for debugging software in distributed systems, you need it to be oriented around the request as it hops across your system. This is how your user experiences it, so this is what matters.
    • @behrangsa: There was a time when many Web apps had only one or two dependencies: the database (e.g. MySQL) and the runtime environment (e.g. Tomcat or JBoss). It was possible to run a fully functional version of the app on our laptops. /1
    • @adrianco: Here’s a related thought. Many people on developer job codes in big enterprises (especially banks) don’t write code, they manage and configure vendor software packages. Count the number of people who check code in every month as real devs.
    • @paulg: If Newton's Principia were published today, it would have 4 stars on Amazon. There would be one cluster of 5 star reviews by people saying it had revolutionized their thinking, and another cluster of 1 star reviews by people complaining it was pointless and hard to read.
    • iaabtpbtpnn: We use a graph database at my company (OrientDB, having migrated from Neo4J). I don't really know what we need it for, aside from giving the graph database guy something fun to do. We also use Postgres as our main DB. As far as I can tell, we could implement our graph as standard Postgres tables, and it would work just fine if not better. Plus, there'd be more than one person who knows how to query it.
    • @ajaynairthinks: #serverless tip 7: Your mental model of functions should be that of OS processes rather than instances. Both play within the resource space you provide;have well defined isolation/communication constructs; can be composed into higher constructs; (ideally) cost nothing when idle.
    • @joshroppo: "YAMLgineering" -- @mweagle YAML really arrived at a very opportune time to be cargo culted into every corner of the web stack
    • @orKoN: I build a SaaS product with the pay-per-use model so I simply cannot afford to add 69 KB or else I will be losing money and will have to raise prices
    • Mark Wallace: The Air Force team writing this wartime code draws its inspiration from pop culture and the need to move fast. But its broader mission is more serious: to do away with the red tape that keeps the armed forces from moving past things like whiteboards and plastic cards, even for critical missions. The tools it has worked on were developed and deployed in a fraction of the time and at a fraction of the cost of most contracts that take on similar work. But can it get the Air Force, and the Department of Defense more broadly, to undertake the badly needed update to the way they conceive of, acquire, and deploy the software—and, just maybe, the hardware—that are the tools the armed forces rely on in war?
    • Marc Verstaen: Not so far ahead is quantum networking. With so much data being generated and so much of it traversing networks, I believe it is inevitable that we overhaul networking technology as it's known today. I won't even try to dig into the intricacies of the underlying science, but once it is widely available, consequences will be gigantic: instant network connections. No latency. Even the Einsteinian speed of light limit will be gone. All computers can be considered as one system, regardless of where they are. Computer science as we know it will have to be completely revisited. Science fiction? Not really, there are at least two networks that I know of based on these principles -- one in the Netherlands, the other one in China. Today, their "limited" use case is around encryption keys: the quantum networks are used to exchange quantum keys in a secure way. This article talks about the work that is being done.
    • Michael Segal: A large part of neuronal activity is ongoing cross talk among different brain areas, and sensory input sometimes only appears to play a modulatory role in this internal activity. That’s a very different perspective than the one that you have usually in deep neural networks, in which neurons only get activated, basically, when they are provided with input. So, both anatomically and in terms of functional properties, the brain seems to operate very differently from a deep neural network. There’s still a considerable gap between real intelligence and so-called artificial intelligence.
    • Undrinkable Kool-Aid: Formal methods on the other hand are not code and they de-risk the code portfolio by offloading parts that would be code into something that is not code and hence not a liability.
    • Geoff Huston: Despite this uncertainty, nothing in this routing data indicates any serious cause for alarm in the current trends of growth in the routing system. There is no evidence of the imminent collapse of BGP. None of the BGP metrics indicate that we are seeing such an explosive level of growth in the routing system that it will fundamentally alter the viability of the BGP routing table anytime soon. 
    • websh*t weekly: A Facebook reports that a stranger got a pet killed. Hackernews decides that a software company can make process changes to keep dogs from dying. Some Hackernews suggest maybe checking ahead of time to see if you're handing your dog to someone with a track record of not killing dogs, but this idea is dismissed as wasteful luddite nonsense. Hackernews digs up similar stories of phone app vendors not giving a shit about their customers or employees, and much outrage is expressed. Despite the fact that "not giving a sh*t about customers or employees" is the founding principle of the gig economy, Hackernews thinks these problems can all be solved with fatter databases.
    • Andrew Fikes: We have this blend between OLTP and OLAP, and we are still, as you said, asking: Is there one database to rule them all? We sort of go back and forth on that. I think the pure OLAP workloads, from an efficiency point of view, do work better on things like BigQuery, which have been more multipurposely designed for them. You do see things like external databases out there that do take advantage of some tricks to blend the two of them. But within Google, we have seen pretty strong adoption across the board for Spanner, from the big to the small to the expensive to the not expensive to the SQL to the point queries. We have been working for quite a while filling in all of the gaps that show up in the different workloads.

  • Music is not in the notes. Programs are not in the code. Programming is in the mind of the programmer, like music is in the mind of the musician. AWS For Everyone: New clues emerge about Amazon’s secretive low-code/no-code project. All these attempts ultimately fail because all they usually do is systematize a particular problem space. We'll see if AWS is doing anything different. Good discussion at @forrestbrazeal

  • Google had a busy year: Looking Back at Google’s Research Efforts in 2018. So did Ben Thompson: The 2018 Stratechery Year in Review.  

  • The state of hard drives is good. Backblaze Hard Drive Stats for 2018: In 2018 the big trend was hard drive migration: replacing lower density 2, 3, and 4 TB drives, with 8, 10, 12, and in Q4, 14 TB drives. In 2016 the average size of hard drives in use was 4.5 TB. By 2018 the average size had grown to 7.7 TB. The 2018 annualized failure rate of 1.25% was the lowest by far of any year we’ve recorded.
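An annualized failure rate like that 1.25% is typically computed from drive-days of exposure rather than a calendar snapshot, since drives come and go during the year. A minimal sketch (the example numbers are invented for illustration, not Backblaze's raw counts):

```python
def annualized_failure_rate(drive_days: int, failures: int) -> float:
    """Failures per drive-year of operation: (failures / drive_days) * 365."""
    return failures / (drive_days / 365.0)

# Invented illustration: 10,000 drives spinning all year is 3,650,000
# drive-days; 125 failures over that exposure is a 1.25% annualized rate.
print(f"{annualized_failure_rate(3_650_000, 125):.2%}")  # 1.25%
```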

  • What happens when you try to serve StackOverflow levels of traffic (66 million pages per day, 760 pages/sec on average) solely from Amazon, Microsoft, and Google serverless functions? Serverless at Scale: Serving StackOverflow-like Traffic. As a comparison you can read about SO's actual architecture in a long series of articles that starts here. Cost information would have been interesting. From a performance perspective it may not be as good as SO, but it may be good enough. The results: All cloud providers were able to scale up and serve the traffic. However, the latency distributions were quite different. AWS Lambda was solid: Median response time was always below 100 ms, 95th percentile was below 200 ms, and 99th percentile exceeded 500 ms just once. Google Cloud Storage seemed to have the highest latency out of the three cloud providers. Google Cloud Functions had a bit of a slowdown during the first two minutes of the scale-out but otherwise were quite stable and responsive. Azure Functions had difficulties during the scale-out period and the response time went up to several seconds. The .NET worker appeared to be more performant than the Node.js one, but both showed undesirable spikes when new instances were provisioned.
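Those median/95th/99th numbers are just percentiles taken over per-request latency samples. A minimal sketch of how such a report is derived, using a nearest-rank percentile (the simulated latencies here are stand-ins, not the benchmark's data):

```python
import random

def percentile(samples, p):
    """Nearest-rank percentile: the value at rank round(p% * n) in sorted order."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# Simulated request latencies in ms; each cloud's real distribution differs.
random.seed(1)
latencies = [random.lognormvariate(4.0, 0.6) for _ in range(10_000)]
for p in (50, 95, 99):
    print(f"p{p}: {percentile(latencies, p):.0f} ms")
```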

  • I wonder what happens when your military AI is pot committed and decides to bluff?  Poker-Playing AI Tapped For Military Use In $10M Pentagon Deal. 

  • Takeaways from decades of systems thinking: Improving the performance of the parts of a system taken separately will necessarily improve the performance of the whole. False. In fact, it can destroy an organization; Problems are disciplinary in nature. Effective research is not disciplinary, interdisciplinary, or multidisciplinary; it is transdisciplinary. Systems thinking is holistic; it attempts to derive understanding of parts from the behavior and properties of wholes, rather than derive the behavior and properties of wholes from those of their parts. Disciplines are taken by science to represent different parts of the reality we experience. In effect, science assumes that reality is structured and organized in the same way universities are; The best thing that can be done to a problem is to solve it. False. The best thing that can be done to a problem is to dissolve it, to redesign the entity that has it or its environment so as to eliminate the problem. 

  • K8s. Kafka. Consul. Scale. Upgrades. Connectivity problems. Marathon debugging slogs. Loaded CPUs that cause the control plane to do stooopid things. All characters in a story from Target's excellent experience report: On Infrastructure at Scale: A Cascading Failure of Distributed Systems. Lessons: My mantra “smaller clusters, more of them” is affirmed. The workloads we had in the smaller development Kubernetes clusters were not affected the same as the big one. Same goes for prod. And we would have been hosed if dev and prod were on the same cluster; Shared Docker daemon is a brittle failure point; I’ve had concerns about the sidecars in the past, however after this event I am convinced that having each workload ship with their own logging and metric sidecars is the right thing. If these had been a shared resource to the cluster, the problem almost certainly would have been exacerbated and much harder to fix. Also, Video: Kubernetes the very hard way at Datadog - Building and Scaling K8S Clusters to Power Datadog

  • What's next for database indexing? Do the right thing. Given performance goals and constraints on hardware and efficiency then find the best configuration of the best index structure while adapting as workloads change. Optimal configurations for an LSM and more

  • We were not paid to write this post. Oracle, AWS, and Azure Benchmarking Shootout: For pure cost-effectiveness, Oracle is pretty compelling, especially given the amount of memory and SSD you get compared to other cloud providers. The I/O performance is also excellent. For pure speed, AWS seems to provide the fastest processors, although it’s unclear why Azure’s offering wasn’t more competitive in this area - on paper it seemed like they should be nearly equal.  jaffee1: I know people are going to be skeptical of anything Oracle, which is pretty reasonable given their history. I can only say that their startup program has been helpful and the cloud product has been surprisingly decent.

  • Why do people fall downhill instead of uphill? Because that's the easiest way to fall. That's a huge reason why building secure systems is hard. This is also part of the story—The Reason Software Remains Insecure: "Basically, software remains vulnerable because the benefits created by insecure products far outweigh the downsides. Once that changes, software security will improve—but not a moment before." But that's not the whole of the story. Certainly, if the incentive structure encouraged secure software we would have more secure software, in the same way we have fewer bank robberies because of the incentive structure. Is that really the best way to create secure software? As a programmer, if you want to create secure software, how do you do it? Go figure it out. I dare you. It's not easy. Make building secure software easy for programmers and you'll get secure software. Until then we'll have jails full of programmers who miss the mark and get dumped on by people who have no idea how hard it is to do.

  • What happens after software eats the world? You have compost. Which is great, because compost helps grow the next crop, but make no mistake, the previous harvest has been fully digested.  The Attention Economy Is a Malthusian Trap

  • San Diego added 3,200 smart streetlights, this is the IoT everyone talks so much about, and even with such a simple change they learned things they thought they might learn and things they did not expect to learn: “We thought we were using [parking] spaces 60 percent of the time,” he said, “but data from the streetlights says we are using them 90 percent, which is overutilized. So we are thinking about pricing, whether we should have more parking in certain areas, whether we should expand the metered network...What is revolutionary is to have near real-time information about how people are moving,” Caldwell says. “That gives us the ability to see things we haven’t seen before. We can look at a baseball game at Petco Park downtown, and see how that affects people’s ability to move through the environment. We can see how a law enforcement event, like a traffic stop, impedes the ability to move...Seeing all this information, he says, city managers have been inspired to try to help police and fire vehicles get to emergencies faster.

  • What's next when your homegrown system built on Celery, RabbitMQ, and Cassandra no longer suffices? Scaling Klaviyo’s Event processing Pipeline with Stream Processing: Flink alone stood out to us because it possessed all the attributes we wanted in a framework. Even better, the Flink community was growing fast and the documentation of the software was reasonably detailed. 

  • How do you handle 100 M RPS, 4,000 M users, ±40 M communities, and ±20,000 servers? Scaling to one million RPS. While sharding communities into separate independent pods was the best option at one time, is it anymore? It's scalable, but administering sharded architectures is a nightmare. We now have truly global scale databases that might be a better solution. There's Google Spanner and Microsoft Cosmos DB, which handle multi-master, geo-replication with transparent horizontal scaling. AWS doesn't have a direct competitor yet, but does have Aurora multi-master in preview. You get a single view of your system and the database does all the partitioning and data shuttling for you. All it costs is money.
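The pod approach boils down to a deterministic community-to-pod mapping. A toy sketch (the pod names and hash-mod scheme are illustrative only, not the article's actual code), which also shows why sharded administration hurts:

```python
import hashlib

# Illustrative pod names; a real deployment would have many more.
PODS = ["pod-0", "pod-1", "pod-2", "pod-3"]

def pod_for_community(community: str) -> str:
    """Deterministically route a community to a pod via hash-mod."""
    digest = hashlib.md5(community.encode()).hexdigest()
    return PODS[int(digest, 16) % len(PODS)]

# Every request for a community lands on the same independent pod...
print(pod_for_community("r/programming"))
# ...but changing len(PODS) remaps almost every community, which is the
# resharding pain that global-scale databases promise to hide from you.
```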

  • Need to search for a data set? There's Google Dataset Search. And here's how it works

  • Why we built CockroachDB on top of RocksDB: If, on a final exam at a database class, you asked students whether to build a database on a log-structured merge tree (LSM) or a BTree-based storage engine, 90% of your students would probably respond that the decision hinges on your workload: “LSMs are for write-heavy workloads and BTrees are for read-heavy workloads.”...The main motivation behind RocksDB adoption has nothing to do with its choice of LSM data structure. In our case, the compelling drivers were its rich feature set which turns out to be necessary for a complex product like a distributed database. At Cockroach Labs we use RocksDB as our storage engine and depend on a lot of features that are not available in other storage engines, regardless of their underlying data structure, be it LSM or BTree based. Also, CockroachDB's Consistency Model. Also also, Spanning The Database World With Google. Also also also, AWS Aurora MySQL – HA, DR, and Durability Explained in Simple Terms
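For readers who haven't internalized the trade-off those students recite, a toy LSM in a few lines shows the shape of it (write and read paths only; no WAL, compaction, or bloom filters):

```python
class ToyLSM:
    """Toy LSM tree: an in-memory memtable plus immutable sorted runs."""

    def __init__(self, memtable_limit=4):
        self.memtable = {}
        self.runs = []  # newest flushed run first
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value  # writes touch memory only: fast
        if len(self.memtable) >= self.memtable_limit:
            # Flush: sort once, append an immutable run (an SSTable on
            # disk in a real engine), and start a fresh memtable.
            self.runs.insert(0, sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in self.runs:  # newest-first: reads may touch many runs,
            for k, v in run:   # which is the read amplification BTrees avoid
                if k == key:
                    return v
        return None
```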

  • Why we use Ruby on Rails to build GitLab. The usual reasons and problems with Rails. They do use Go when performance is critical. sytse: GitLab is using a lot of memory for various reasons, all of the big ones (like adding multi-threading which I proposed 3 years ago https://gitlab.com/gitlab-org/gitlab-ce/issues/3592 ) can be solved without a rewrite. I've failed to make this a priority within the company. We had people work on it for some time but the day to day of shipping features, keeping GitLab secure, and other tasks took over. I'll advocate for a dedicated performance team to solve this. 

  • Did you know the human body is full of evolutionary leftovers that no longer serve a purpose? These are called vestigial structures and they’re fascinating. Most software has tons of these too, and we leave them in for pretty much the same reason—nobody knows what they really do and everyone is too afraid to pull them out.

  • Last year's trends according to Network Break: we are seeing brands being eroded by whitebox solutions, we're not turning to brands for solutions as much as we used to; open source might have been the gateway to the cloud. Why go to open source when you can just go to the cloud?; death of the firewall via perimeterless networking. You can't build a perimeter, control the edge, and watch traffic that goes across it. As we see SaaS adoption and public cloud service adoption, you don't have a perimeter anymore. What about people using VPN? How do you monitor and secure all that? Security and monitoring moves to the edge. Rules are moved into the devices, closer to the workload. Identity management becomes even more important without boundaries. Who owns the chokepoint of identity?; Quality of IT management and leadership is sorely lacking, IT managers need to be better at communicating, better at adapting to change, and able to transform IT into a profit center instead of feeding the perception of IT as a cost center. IT should not be outsourced as a cost center. It should generate valuable returns and establish business control.

  • Total outage time was kept to around 40-60 seconds (in the worst-case scenario). WePay's High Availability Solution: Orchestrator, Consul, HAProxy, and pt-heartbeat. Orchestrator to detect failure and complete role transition. Two layers of HAProxy. The first layer of HAProxy sits on the client machines and connects to the remote (second layer) of HAProxy. The second layer of HAProxy is distributed across multiple Google zones that connect to the same set of MySQL servers. Hashicorp Consul for the KV store. Hashicorp consul-template for managing dynamic configs. pt-heartbeat to run on all slaves.
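A hypothetical fragment of what the second-layer HAProxy config might look like (names, addresses, and the port-9200 health check are invented for illustration; WePay's actual config isn't published in this form):

```
# Second-layer HAProxy: TCP-proxies MySQL traffic, using an HTTP health
# endpoint on each database host to decide which server is the writable master.
listen mysql-main
    bind *:3306
    mode tcp
    option httpchk           # health agent reports master/replica state
    server db1 10.0.1.21:3306 check port 9200
    server db2 10.0.2.21:3306 check port 9200 backup
```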

  • Easy to understand example of how to solve a problem on Azure. Scaling Azure Functions to Make 500,000 Requests to Weather.com in Under 3 Minutes. The key, as always, is parallelizing and partitioning. Odd that Weather.com doesn't have a batch interface. Also, How To: Use SNS and SQS to Distribute and Throttle Events
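The parallelize-and-partition pattern is independent of Azure Functions. A minimal sketch in Python (the fetch function, ZIP codes, and partition count are stand-ins for the real per-request weather.com call):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_one(zip_code: str) -> str:
    # Stand-in for the real per-ZIP HTTP request to weather.com.
    return f"forecast for {zip_code}"

def fetch_partition(zip_codes):
    # One worker (one function instance, in the Azure version) per partition.
    return [fetch_one(z) for z in zip_codes]

def fan_out(zip_codes, partitions=8):
    # Partition the work, then run the partitions concurrently.
    chunks = [zip_codes[i::partitions] for i in range(partitions)]
    with ThreadPoolExecutor(max_workers=partitions) as pool:
        results = pool.map(fetch_partition, chunks)
    return [r for chunk in results for r in chunk]

print(len(fan_out([f"{i:05d}" for i in range(1_000)])))  # 1000
```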

  • With NVMe (Non-Volatile Memory Host Controller Interface Specification), JBOF—Just a Bunch of Flash—is the high-speed network datastore of the present. Learn the story in AWS Outposts, Choosing An NVMe Fabric, Parallel NFS Cautions. Accessing flash directly takes 70 or 80 microseconds. With NVMe over RDMA you can access flash remotely in 125 microseconds. With the new datacenter NVMe over TCP standard you can access flash in 160 or 170 microseconds. Compare that to the best EC2-to-S3 transfer time across all regions, at 81 ms. Plus, it makes the datacenter modular. You can attach and reattach flash to different nodes.

  • Consistent Hashing with Bounded Loads:  In this paper, with n clients and n servers, we get a guaranteed max-load of 2 while only moving an expected constant number of clients for each update. We take an arbitrary user specified balancing parameter c=1+ϵ>1. 

  • Mark Callaghan says he's on the "useless" team. Use less CPU. Use less storage. Use less DRAM. Use less network. LOL. Geek code for LSM trees.

  • facebookresearch/LASER (article):  a library to calculate and use multilingual sentence embeddings. Zero-shot transfer across 93 languages. 

  • Balanced Partitioning and Hierarchical Clustering at Scale: We apply balanced graph partitioning to multiple applications including Google Maps driving directions, the serving backend for web search, and finding treatment groups for experimental design. For example, in Google Maps the World map graph is stored in several shards. The navigational queries spanning multiple shards are substantially more expensive than those handled within a shard. Using the methods described in our paper, we can reduce 21% of cross-shard queries by increasing the shard imbalance factor from 0% to 10%. As discussed in our paper, live experiments on real traffic show that the number of multi-shard queries from our cut-optimization techniques is 40% less compared to a baseline Hilbert embedding technique. This, in turn, results in less CPU usage in response to queries.

  • A Hierarchical Model for Device Placement: We introduce a hierarchical model for efficient placement of computational graphs onto hardware devices, especially in heterogeneous environments with a mixture of CPUs, GPUs, and other computational devices. Our method learns to assign graph operations to groups and to allocate those groups to available devices. The grouping and device allocations are learned jointly. The proposed method is trained with policy gradient and requires no human intervention. Experiments with widely-used computer vision and natural language models show that our algorithm can find optimized, non-trivial placements for TensorFlow computational graphs with over 80,000 operations. In addition, our approach outperforms placements by human experts as well as a previous state-of-the-art placement method based on deep reinforcement learning. Our method achieves runtime reductions of up to 60.6% per training step when applied to models such as Neural Machine Translation.

  • Learning Memory Access Patterns: We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers

  • The Case for Learned Index Structures: Our initial results show that by using neural nets we are able to outperform cache-optimized B-Trees by up to 70% in speed while saving an order of magnitude in memory over several real-world data sets. More importantly though, we believe that the idea of replacing core components of a data management system through learned models has far reaching implications for future systems designs and that this work just provides a glimpse of what might be possible.
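The core idea is that an index is just a model of the key-to-position function over sorted data, so a model plus a bounded error window can stand in for tree traversal. A toy sketch using a linear fit (the paper uses staged neural nets; this is merely the smallest possible instance of the idea):

```python
def build(keys):
    """Fit position ~ slope*key + intercept over a sorted key array and
    record the max prediction error, which bounds the search window."""
    n = len(keys)
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2
    var = sum((k - mean_k) ** 2 for k in keys)
    cov = sum((k - mean_k) * (p - mean_p) for p, k in enumerate(keys))
    slope = cov / var
    intercept = mean_p - slope * mean_k
    err = max(abs(p - (slope * k + intercept)) for p, k in enumerate(keys))
    return slope, intercept, int(err) + 1

def lookup(keys, model, key):
    slope, intercept, err = model
    guess = int(slope * key + intercept)  # model predicts the position...
    lo, hi = max(0, guess - err), min(len(keys), guess + err + 1)
    for i in range(lo, hi):  # ...a bounded local search corrects the error
        if keys[i] == key:   # (a real implementation binary-searches here)
            return i
    return -1
```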

  • Consistent Hashing with Bounded Loads: We can think about the servers as bins and clients as balls to have a similar notation with well-studied balls-to-bins stochastic processes. The uniformity objective encourages all bins to have a load roughly equal to the average density (the number of balls divided by the number of bins). For some parameter ε, we set the capacity of each bin to either floor or ceiling of the average load times (1+ε). This extra capacity allows us to design an allocation algorithm that meets the consistency objective in addition to the uniformity property...When providing content hosting services, one must be ready to face a variety of instances with different characteristics. This consistent hashing scheme is ideal for such scenarios as it performs well even for worst-case instances...After making an earlier version of our paper available, engineers at Vimeo found the paper, implemented and open sourced it in haproxy, and used it for their load balancing project at Vimeo. The results were dramatic: applying these algorithmic ideas helped them decrease the cache bandwidth by a factor of almost 8, eliminating a scaling bottleneck. 
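The balls-and-bins description translates into a short algorithm: hash bins onto a ring, cap each bin at ceil(average load × (1+ε)), and let a ball that lands on a full bin walk clockwise to the next bin with spare capacity. A sketch of that idea (not the haproxy implementation):

```python
import bisect
import hashlib
import math

def _h(s: str) -> int:
    # Stable hash so placements survive process restarts.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def assign(balls, bins, eps=0.25):
    """Consistent hashing with bounded loads: every bin is capped at
    ceil(average_load * (1 + eps)); a ball whose bin is full walks
    clockwise to the next bin with spare capacity."""
    ring = sorted((_h(b), b) for b in bins)
    points = [p for p, _ in ring]
    cap = math.ceil(len(balls) / len(bins) * (1 + eps))
    load = {b: 0 for b in bins}
    placement = {}
    for ball in balls:
        i = bisect.bisect(points, _h(ball)) % len(ring)
        while load[ring[i][1]] >= cap:  # the "bounded load" step
            i = (i + 1) % len(ring)
        placement[ball] = ring[i][1]
        load[ring[i][1]] += 1
    return placement
```

With 100 balls, 10 bins, and ε=0.25, no bin ever holds more than ceil(10 × 1.25) = 13 balls, which is the uniformity guarantee the paper trades against movement cost.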

  • Explainable Reasoning over Knowledge Graphs for Recommendation: In this paper, we contribute a new model named Knowledge-aware Path Recurrent Network (KPRN) to exploit knowledge graph for recommendation. KPRN can generate path representations by composing the semantics of both entities and relations. By leveraging the sequential dependencies within a path, we allow effective reasoning on paths to infer the underlying rationale of a user-item interaction.

  • F1 Query: Declarative Querying at Scale: F1 Query is a stand-alone, federated query processing platform that executes SQL queries against data stored in different file-based formats as well as different storage systems (e.g., BigTable, Spanner, Google Spreadsheets, etc.). F1 Query eliminates the need to maintain the traditional distinction between different types of data processing workloads by simultaneously supporting: (i) OLTP-style point queries that affect only a few records; (ii) low-latency OLAP querying of large amounts of data; and (iii) large ETL pipelines transforming data from multiple data sources into formats more suitable for analysis and reporting.