Polymath Engineer Weekly #81
Go vs Rust, Time-series Decoders, Nice Software, TLA+, Automated Deploys, Muscle Building and Philosophers are Idiots
I have created a survey to get feedback from you. It is really short and anonymous, and I would really appreciate it if you took 2 minutes to help me make this site better.
Hello again. Have a good week 😎
Links of the week
A bone-chilling wind blows and you hear: "Why Go and not Rust?"
You start feeling less good. Well, you could answer that Go is what you know, so that's what you used to solve your problem, but that's probably not going to be a satisfactory answer. You were pridefully talking about how fast your tool was in the first place, and it's obvious that the stranger is going to counter your simplistic excuse with all the great benefits that Rust brings over Go.
I’ve written about Go in the past. I think this article is a good addition to those thoughts.
To that end, in “A decoder-only foundation model for time-series forecasting”, we introduce TimesFM, a single forecasting model pre-trained on a large time-series corpus of 100 billion real-world time-points. Compared to the latest large language models (LLMs), TimesFM is much smaller (200M parameters), yet we show that even at such scales, its zero-shot performance on a variety of unseen datasets of different domains and temporal granularities comes close to the state-of-the-art supervised approaches trained explicitly on these datasets.
If you were to criticize blockchain technology from a purely technical perspective, you might point out the flaw that proof of work requires an exponentially increasing amount of computational power, and thus electricity, in order to keep the blockchain database alive over time. You might point out how inefficient of a database it is. But you would be missing the point. Blockchain technology excites investors precisely because of how wasteful it is. Even if we had fusion (!!) it would eat up all that energy and more. It's difficult to express the magnitude of how wasteful this is, and the fact that it's built into the system intentionally is sinister.
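To make the "intentionally wasteful" argument concrete, here is a toy proof-of-work sketch (the function names and parameters are mine, not from any real blockchain): a miner must grind through nonces until a hash meets a difficulty target, and every failed attempt is computation burned on purpose.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1  # each failed attempt is discarded work, by design

# Each extra zero digit multiplies the expected number of attempts by 16,
# so difficulty scales cost exponentially while producing nothing reusable.
nonce = mine("example block", 3)
```

The key point the quote makes is visible here: the loop's only output is proof that energy was spent, and the protocol ratchets `difficulty` up as more compute joins the network.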
TLA+ is a formal methods tool used for model checking. Learning TLA+ is challenging because we need to understand a problem well enough to write its specification, know TLA+'s language, and abstract the problem effectively.
If you believe that writing helps with reasoning, then TLA+ is the tool for you. Writing the spec helps to find gaps in your understanding, and a spec can help debug your design. However, TLA+ is not a substitute for testing.
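If you have never seen a model checker, the idea is simpler than the TLA+ syntax suggests. Below is a minimal Python sketch (my own toy example, not TLA+ itself) of what TLC, the TLA+ model checker, does under the hood: enumerate every reachable state of a small system and check an invariant in each one. The system here is two hypothetical processes competing for a single lock, and the invariant is mutual exclusion.

```python
from collections import deque

# A state is (pcs, lock): the program counter of each process
# ("idle", "waiting", or "critical") and who holds the lock (0, 1, or None).

def successors(state):
    """Yield every state reachable in one step, like a TLA+ next-state relation."""
    pcs, lock = state
    for i in (0, 1):
        if pcs[i] == "idle":
            yield (pcs[:i] + ("waiting",) + pcs[i+1:], lock)
        elif pcs[i] == "waiting" and lock is None:
            yield (pcs[:i] + ("critical",) + pcs[i+1:], i)
        elif pcs[i] == "critical":
            yield (pcs[:i] + ("idle",) + pcs[i+1:], None)

def check(init, invariant):
    """Breadth-first search of the state space; return a counterexample or None."""
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state  # invariant violated in a reachable state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None  # invariant holds everywhere

mutual_exclusion = lambda s: list(s[0]).count("critical") <= 1
init = (("idle", "idle"), None)
assert check(init, mutual_exclusion) is None
```

A real TLA+ spec expresses the same thing declaratively and lets TLC do the exhaustive search, but the payoff is the same: either the invariant holds in every reachable state, or you get a concrete counterexample trace to debug your design with.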
So we need to move fast – and we do move fast. We deploy from our Webapp repository 30-40 times a day to our production fleet, with a median deploy size of 3 PRs. We manage a reasonable PR-to-deploy ratio despite the scale of our system’s inputs.
This video sheds light on CNS fatigue and how it affects strength and muscle building.
Book of the Week
Do you have any more links our community should read? Feel free to post them in the comments.
Have a nice week. 😉