Polymath Engineer Weekly #52
Better inputs to maximize your outputs
Links of the Week
We've all experienced that feeling of excitement when starting a new project. For the first few weeks you can't wait to get on the computer to work. Then, slowly, you get distracted or make up excuses and work on it less. If it's for real work, you forcibly slog your way to the finish line, but every day is painful. If it's for fun, you'll look back years from now and remember what could've been.
Many engineering organisations aspire to have a culture of writing – where decision making and communication happen primarily through writing, rather than through synchronous means such as meetings. There are good reasons for this – writing and reading are generally asynchronous processes, writing can contain visual aids such as diagrams, and, most importantly, writing is an artefact.
Artefacts are essential. Think of books in a library, or research papers in a journal. They can be referred to, can be discovered (given correct organisation, which is the domain of library science) and, for engineers, are greppable.
Other systems, such as Apache Kafka, have followed a different approach in addressing the limitations of ZooKeeper: KRaft, introduced in KIP-500, provides an option to remove the dependency on ZooKeeper.
We feel that this approach replicates the same ZooKeeper/Etcd architecture without significant improvements and does not remove any complexity from the system. Instead, the existing complexity of ZooKeeper has been transferred to the Kafka brokers, replacing the existing battle-tested code with new, unproven code that would just do the same job.
Designing a new system or component provides a good opportunity to examine the problem and past approaches and focus on designing a solution for the current operating environment.
There's a strange number system, featured in the work of a dozen Fields Medalists, that helps solve problems that are intractable with real numbers.
Back on the Moon, however, the spacecraft overflew a 3 km-high cliff — which meant the sensor was absolutely correct when it reported a sudden height change! But since this anomalous (and correct!) report broke the "altitude changes in proportion to our speed & acceleration" rule, the spacecraft computer steadfastly ignored further reports from the altimeter and used other, less accurate, methods to estimate how high it was above the surface.
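The failure mode above can be sketched with a simple plausibility gate. This is a hypothetical illustration, not the spacecraft's actual code: `plausible` and `filter_altimeter` are made-up names, and the rule is reduced to "reject any reading whose implied altitude change exceeds what the known vertical speed and acceleration could produce".

```python
def plausible(prev_alt, new_alt, vertical_speed, accel, dt, margin=1.5):
    """Return True if the altitude change is physically explainable
    from the vehicle's known motion (hypothetical gating rule)."""
    max_change = abs(vertical_speed) * dt + 0.5 * abs(accel) * dt ** 2
    return abs(new_alt - prev_alt) <= margin * max_change

def filter_altimeter(readings, vertical_speed, accel, dt):
    """Keep the last trusted altitude; ignore implausible jumps."""
    trusted = readings[0]
    out = [trusted]
    for r in readings[1:]:
        if plausible(trusted, r, vertical_speed, accel, dt):
            trusted = r
        out.append(trusted)
    return out

# A sudden ~3000 m jump (overflying a cliff) is correct data, yet
# the gate rejects it and keeps reporting the stale altitude —
# exactly the failure described above.
print(filter_altimeter([1000, 995, 990, 3990, 3985],
                       vertical_speed=50, accel=2, dt=0.1))
```

The design tension is visible here: the same gate that protects against a glitching sensor also locks out a sensor that is telling the truth about unusual terrain.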
Grasping AI's potential doom and devising strategies to mitigate its risks, especially those unknown and unpredictable, is a complex task. However, addressing present AI risks before AGI comes into play seems prudent. I appreciate how The Exponential View challenges the predictive ability of AI researchers concerning societal outcomes and notes the absence of specific academics in recent AI risk petitions. This publication demands substantial evidence for existential risk claims and urges a focus on tangible issues instead of sensationalism. "If there's no evidence, then today's models and techniques don't pose an extinction risk within a reasonable timeframe. Let's invest energy in addressing real benefits and harms, rather than stirring public fear against beneficial technology for online engagement and potentially diverting the regulatory agenda away from matters of proven importance."
Book of the Week
Do you have any more links our community should read? Feel free to post them in the comments.
Have a nice week. 😉