Hello again.
Links of the Week
How to Work Hard
The only way to find the limit is by crossing it. Cultivate a sensitivity to the quality of the work you're doing, and then you'll notice if it decreases because you're working too hard. Honesty is critical here, in both directions: you have to notice when you're being lazy, but also when you're working too hard. And if you think there's something admirable about working too hard, get that idea out of your head. You're not merely getting worse results, but getting them because you're showing off — if not to other people, then to yourself.
The Cadence: How to Operate a SaaS Startup
The Cadence is most needed when scaling from 50 to 500 employees. This is when a pivotal transition in the way the startup operates is required. Before then, all of the employees fit into a single room (either physically or virtually), everyone knows what everyone else is working on, and founders can easily run around telling everyone what to build and what to do.
ADHD: It sucks sometimes, actually
If I could snap my fingers and make myself neurotypical, would I do it? I don't think so. Much as it frustrates me, my unique brain is uniquely me. Moreover, I suspect it has been a net asset in achieving what I've achieved in work and life, despite what I expected and was told throughout my younger years.
The Art of Pooling Embeddings 🎨
Sure, the token embeddings of a sequence represent much more information, but does the sentence embedding capture the same information? Or does it grasp information that relates to the sequence as a whole rather than to the individual constituents? Spoiler alert: it’s the latter.
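One common way to collapse token embeddings into a single sequence-level vector is mean pooling. Here is a minimal sketch (plain NumPy; the function name and shapes are illustrative, not taken from the article) that averages the token vectors while ignoring padding:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings into one sentence embedding, skipping padding.

    token_embeddings: (seq_len, dim) matrix of per-token vectors.
    attention_mask:   (seq_len,) array with 1 for real tokens, 0 for padding.
    """
    mask = attention_mask[:, None]           # (seq_len, 1), broadcasts over dim
    summed = (token_embeddings * mask).sum(axis=0)
    count = mask.sum()                       # number of real tokens
    return summed / np.maximum(count, 1)     # guard against division by zero

# Example: 4 tokens (last one is padding), 3-dimensional embeddings
tokens = np.array([[1.0, 0.0, 2.0],
                   [3.0, 1.0, 0.0],
                   [2.0, 2.0, 1.0],
                   [9.0, 9.0, 9.0]])         # padding row, masked out below
mask = np.array([1, 1, 1, 0])
print(mean_pool(tokens, mask))               # -> [2. 1. 1.]
```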
Journey Platform: A low-code tool for creating interactive user workflows
The Journey Platform empowers non-technical and technical users to create complex stateful workflows through a simple drag-and-drop interface. By leveraging a generic workflow definition DSL, along with an action store, an event store, and an attribute store, the platform facilitates the creation of workflows that respond to real-time events, streamlining communication and enhancing user experiences.
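The excerpt doesn't reproduce the DSL itself, so as a purely hypothetical illustration, a declarative workflow definition tying the three stores together might look something like this (every field name below is invented for the sketch):

```python
# Hypothetical shape of a workflow definition; the real Journey DSL is not
# shown here, so all fields are illustrative only.
workflow = {
    "name": "order_follow_up",
    "trigger": {"event": "order.shipped"},       # consumed from the event store
    "steps": [
        {"action": "send_email",                 # looked up in the action store
         "params": {"template": "shipping_update"}},
        {"wait": {"days": 3}},                   # stateful: the workflow pauses here
        {"action": "send_survey",
         "params": {"survey_id": "delivery_feedback"},
         "if": {"attribute": "opted_in",         # read from the attribute store
                "equals": True}},
    ],
}
```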
Understanding Encoder And Decoder LLMs
Fundamentally, both encoder- and decoder-style architectures use the same self-attention layers to encode word tokens. However, the main difference is that encoders are designed to learn embeddings that can be used for various predictive modeling tasks such as classification. In contrast, decoders are designed to generate new texts, for example, answering user queries.
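A toy sketch can make that difference concrete: in simplified single-head self-attention (plain NumPy, no learned projections; the names here are illustrative, not from the article), the decoder style differs from the encoder style only by a causal mask that hides future tokens:

```python
import numpy as np

def self_attention(x: np.ndarray, causal: bool) -> np.ndarray:
    """Toy single-head self-attention over x of shape (seq_len, dim).

    causal=False -> encoder-style: every token attends to every token.
    causal=True  -> decoder-style: token i attends only to tokens 0..i.
    """
    seq_len, dim = x.shape
    scores = x @ x.T / np.sqrt(dim)              # (seq_len, seq_len) similarities
    if causal:
        future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
        scores = np.where(future, -1e9, scores)  # block attention to future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ x                           # weighted mix of token vectors

x = np.random.default_rng(0).normal(size=(4, 8))
enc = self_attention(x, causal=False)  # bidirectional context: classification-style
dec = self_attention(x, causal=True)   # left-to-right context: generation-style
```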
Book of the Week
Super Gut: A Four-Week Plan to Reprogram Your Microbiome, Restore Health, and Lose Weight
Do you have any more links our community should read? Feel free to post them in the comments.
Have a nice week. 😉
Have you read last week's post? Check the archive.