How I Quit Email (and You Can Too)


Today, email turns 44 years old.

If that doesn’t already sound odd, consider this: We upgrade our smartphones and laptops every few years, yet we’re using those very devices to communicate via a crusty old protocol that’s barely changed in nearly half a century.

But there’s a more important, more existential problem: email consumes us. Adobe surveyed 400 American white-collar workers in 2015 and found that on average, we use email six hours a day (or 30+ hours a week).

Several months ago, I decided it was time to pull myself out of this quagmire. Today, on the 44th anniversary of its birth, I am declaring email dead. At least to me. If you’re willing to jump over a few hurdles, you too can free yourself from its clutches.

If you’re not already convinced that it’s time to say goodbye to email, here are a few reminders of why it sucks:

1. It’s not secure (and simply never can be)

Most email travels around the internet in clear text. Even when message bodies are encrypted, which is rare, the metadata still have to be sent in clear text.

Because it’s so prevalent, and because it’s easy, spearphishing attacks have caused dozens of major crises over the years: Sony, the DNC/Podesta and Hillary were all victims of simple, un-sexy email password theft. More recently, Reality Leigh Winner (an NSA whistleblower who allegedly smuggled classified documents out of a SCIF and snail-mailed them to The Intercept) was apprehended in Trump’s first major bust-the-leaker case. Why? Traces left behind by emails sent to the media from her work computer.

2. It’s chatty (and the chat logs live forever)

One email touches dozens of servers as it travels to and fro, leaving a digital trail a mile wide across the internet. The sender and the recipient have no way of knowing who has seen, captured or even altered an email while in transit. Neither party has any control over the security of any of those logs, which varies substantially from one data center/network to another.

3. It’s overrun by spam and near-spam

Despite heroic legislative efforts (e.g. CAN-SPAM) and heroic technical efforts (e.g. Gmail’s spam filters), we still get unsolicited email.

Even if we don’t get actual spam, we often inadvertently (or not) sign up for mailing lists and notifications while shopping online, reading news, etc., leaving our inboxes cluttered with junk, much like snail mail.

4. It’s a CC mishap waiting to happen

We’ve all been on email threads from hell where 20 people somehow end up on the CC line. We’ve all said the wrong thing, had it CC’d to the wrong person and had it come back to bite us. But it gets even more insidious: People can seamlessly add or remove other people from the CC line, either hastening the spread of foot-in-mouth disease or leaving key people out of an important conversation.

Even when we think we know who we’re communicating with, let’s not forget about the endless wonders of BCC.

Even when we’re aware of everything on the TO and CC lines, we have no way of authenticating that sending to someone’s email address will actually result in that someone receiving the message. (Perhaps not, because someone just fell victim to a phishing attack.)

5. It’s the worst possible way ever to share living documents

There are dozens of better ways to collaborate, yet somehow people still send documents as email attachments asking for feedback, creating untoward madness.

Email is a never-ending, relentless time-sink in which the important gets drowned out by the worthless screaming, “Look at me!”

Believe it or not, it wasn’t the above that pushed me to do away with email; rather, it was a conversation I had with my then-10-year-old daughter. At the time she was (and still is) an avid iMessage user. (I’ve never seen so many emoticons!) When I tried to describe email, she asked, “Why is it better than txt?”

And—despite my self-proclaimed mansplaining prowess—I didn’t have a good answer for her.

Why not? Because it’s not better than iMessage. In fact, it’s far, far worse.

On that day I started the process of moving away from email. Fast-forward several months and I’ve reduced my inbox to a healthy, manageable non-urgent notification queue filled entirely with things I actually want to see, put there almost entirely by bots, some of my own design.

If you fancy the same or something similar, consider the following steps:

1. Verify your digital identity

Set up Keybase. It’s super geeky, so it might not be clear what you’re doing, but do it anyway. In layman’s terms, you’re “signing” your digital identities (e.g. Facebook and Twitter) so that people have a way of knowing that when they’re talking to you, they’re really talking to you and not someone (or something) else.

2. Embrace a secure messaging app

Any of these send encrypted messages: iMessage, FaceTime audio (or video), WhatsApp, Facebook Messenger, Google Phone/Messenger, Skype, Twitter DM or Slack. There are hundreds of others. Of course, YMMV based on how much you trust the companies responsible for these apps not to get hacked.

I’m trying to make Signal (by Open Whisper Systems) my go-to messaging app. The UI is a little rough around the edges, but the emphasis on security, disappearing messages and a really slick device onboarding flow more than make up for it. Give it a try.

As an added benefit, your conversations remain organized by person and not by message, which more accurately models the way people communicate IRL.

Ironically, you might get email notifications that you’ve received messages on some of the above platforms, which is okay (see #5).

3. Use Google Docs to collaborate

As with your choice of messaging app, you’re putting your trust in a vendor. Google, from any angle, is a pretty safe bet, especially if you’ve enabled TFA (two-factor authentication) for yourself and all your collaborators.

4. Set up an auto-responder

The auto-responder covers the edge case of someone actually trying to write me an email in the traditional sense. They get a short note asking them to find me on: 1. Facebook, 2. Twitter or 3. Signal (by phone number). That should work for, respectively: 1. people I know, 2. people I don’t know and 3. people who are close enough to me to already have my phone number. Of course nearly all of the auto-responders will end up getting sent to bots — and they certainly won’t mind.
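For the curious, the logic amounts to a canned reply. Below is a toy Ruby sketch of the idea using the mail gem; the addresses and credentials are placeholders, and in practice Gmail’s built-in vacation responder does the same job with zero code:

require 'mail'

Mail.defaults do
  retriever_method :imap, address: 'imap.example.com', port: 993,
                          user_name: 'me@example.com', password: 'secret',
                          enable_ssl: true
  delivery_method :smtp, address: 'smtp.example.com', port: 587,
                         user_name: 'me@example.com', password: 'secret'
end

# Answer the latest arrivals with directions to better channels.
Mail.find(what: :last, count: 10, order: :desc).each do |incoming|
  Mail.deliver do
    to      incoming.from.first
    from    'me@example.com'
    subject "Re: #{incoming.subject}"
    body    "I've quit email. Find me on: 1. Facebook, 2. Twitter or " \
            "3. Signal (if you already have my number)."
  end
end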

5. Fine tune your notifications

I use IFTTT to filter out popular stories from the New York Times and email them to me (usually about five a day, unless Trump forgets to take his medications). I also get daily briefings from the Guardian and the WaPo. I get some mass emails from my daughter’s school, from the lindyhop community and from a few editorial sites I really enjoy (Tasting Table, Urban Daddy, Bold Italic and a few others).

Aside from communicating with bots (e.g. shuttling a NYT article delivered by IFTTT to Pocket so I can read it later), I’ve sent no more than two dozen emails this year. My inbox has become a dumping ground for notifications, none of which is urgent or terribly important. I can keep up with them most of the time. Once in a while, I get behind and I mass-delete everything in my inbox, something I can do with a high level of confidence that I haven’t missed anything important.

I’ve ceased using email for all important (and human!) communication and at the same time turned my inbox into a bespoke, bot-generated “daily briefing” of sorts.

Real conversations need authenticity, reliability and privacy. Bots don’t care about those things, so they get relegated to my once-sacrosanct inbox.

Let’s hand email over to the bots. Humans deserve a better way to communicate.

In Explaining Why He Sacked Comey, Trump Borrows From Mein Kampf

Be forewarned: I’m going to compare Trump to Hitler, again. Before accusing me of violating Godwin’s Law, please understand that his “law” refers to the odds of a Hitler reference approaching 100% in comment threads. Godwin doesn’t mention anything about the opening lines—let alone the entire premise—of a blog post.

So why Hitler? Why again? And why now? Pundits have already jumped on the liar-liar-pants-on-fire bandwagon, but they’re missing something crucial to understanding the latest balderdash to come from Trump, a literal font of nonsense and duplicity.

This time, he lied so bigly, so obviously and with such brazen impunity that his words qualify as a “big lie,” as defined by the Führer himself in Chapter 10 of Mein Kampf:

“All this was inspired by the principle—which is quite true within itself—that in the big lie there is always a certain force of credibility; because the broad masses of a nation are always more easily corrupted in the deeper strata of their emotional nature than consciously or voluntarily; and thus in the primitive simplicity of their minds they more readily fall victims to the big lie than the small lie, since they themselves often tell small lies in little matters but would be ashamed to resort to large-scale falsehoods.

It would never come into their heads to fabricate colossal untruths, and they would not believe that others could have the impudence to distort the truth so infamously. Even though the facts which prove this to be so may be brought clearly to their minds, they will still doubt and waver and will continue to think that there may be some other explanation. For the grossly impudent lie always leaves traces behind it, even after it has been nailed down, a fact which is known to all expert liars in this world and to all who conspire together in the art of lying.”

[Emphasis mine.]

On a number of occasions, I’ve heard the claim that a lie becomes true if repeated often enough. Some even quantify this: It must be repeated at least seven times, they say. Often the qualified and/or quantified version of this sentiment gets attributed—incorrectly—to Hitler.

Hitler never said anything about the importance of repeating the lie, to the best of my knowledge, though repetition surely also had to be part of his strategy (in an epoch before instant mass communication). His description of the evil genius of a “big lie” merely states that the lie’s likelihood of being believed grows proportionally with the level of said lie’s intrinsic preposterousness.

Hitler adds that “the grossly impudent lie always leaves traces behind it.” For evidence of this, one need look no further than Trump’s other attempts at big lies. He had a hand in the infamous birther lie, a big lie whose “traces behind it” literally birthed a movement unto itself. Others that come to mind? The size of the inauguration crowds. The alleged Obama wiretapping stunt. Now this.

Trump’s lie that Comey’s firing had something to do with Clinton’s emails is yet another “big lie.”

If Hitler was correct in his analysis of the efficacy of a “big lie” (and I’m afraid he is), then this lie—Trump’s biggest and most “grossly impudent” to date—is even more dangerous than all the others. Because “in the primitive simplicity of [our] minds” we are inclined to believe it.

Whether we believe it or not, we’ll be stuck with the “traces left behind it.”

Where will we find those “traces” this time around? In the selection process for the new head of the FBI. In the process—and eventual outcome—of the pending investigation into Trump’s alleged Russia connections. In his many, many conflicts of interest, not the least of which is firing the person investigating him. In more investigations of the Clintons, even.

After all, if Comey did get fired for bungling the Clinton email server investigation, we will of course want to know how exactly it was bungled so that the Clintons will finally be “brought to justice,” right?

That, of course, is a trap. If we fall into it, then we help manufacture the many “traces left behind” that will haunt us indefinitely.

An Unlikely Cure for Procrastination

“It always seems impossible until it’s done.” —Nelson Mandela

We all have tasks that—for whatever reason—we just don’t want to do.

They might be as mundane as organizing the garage or as grandiose as building the next Facebook. Small or large, easy or complex, self-rewarding or rooted in obligations to others: regardless of what needs doing, I noticed something recently that consistently helps me break through cycles of procrastination and stay focused on the tasks that matter.

My “ah-ha” moment of introspection about procrastination came when a coworker said, “I’m addicted to working on this project.”

I didn’t doubt that he was telling the truth. People have been addicted to far stranger things than software projects. But the remark made me wonder: Can I improve my productivity by channelling my inner addict?

The answer was a resounding yes. I use and re-use “addiction training” (for lack of a better term) any time I find myself resisting some task that I don’t want to perform.

In order to understand why this works for me—and may also work for you—we need to understand how someone becomes addicted. The word addiction carries with it some serious baggage. Everyone knows how dependence on hard drugs or alcohol can lead to financial and emotional ruin, the destruction of relationships and sometimes even death.

Most people also know that addiction is not a character flaw; rather, it reflects changes in a person’s brain chemistry related to how “rewards” get processed. A shallow dive into neurology explains the chemical nature of addiction, beginning with the prefrontal cortex, a region of the brain associated with logic and decision-making. At first, we consciously set “goals” of getting drunk or high (or working out or having sex) because those things feel good. After a relatively short period of time—with some drugs, just a few doses; with “good” habits, some say 21 days—the motivation to continue the nascent behavior moves from a logical, conscious place to a more Pavlovian one. A new part of the brain takes over: the anterior dorsolateral striatum, wherein we process rewards-based learning.

“In rats seeking cocaine, additional evidence supports the hypothesis that seeking behavior is initially goal-directed, but after extended training becomes habitual and under the control of the anterior dorsolateral striatum (aDLS).” [source]

Once the aDLS has taken over, addicts will feed their addiction at all costs, even when they can plainly reason that “smoking is unhealthy” or “alcohol is ruining my life.” It’s literally beyond their logical control.

The chemistry of addictive drugs, stimulants in particular, facilitates the transition of using drugs from “goal-based” to “habitual.” But how does this apply to my software project—or cleaning my garage?

Here’s what I do when I find myself procrastinating:

  1. Set up an extremely small reward challenge (to trigger the aDLS), e.g. “I’m going to install RVM/ruby and create my Rails project, then I’m going to have a bowl of ice cream.”
  2. Do the extremely small task. (Okay, that was easy and it took less than five minutes.)
  3. Eat the ice cream. (That felt good.)
  4. Go back to procrastinating.
  5. Repeat.

By associating the smallest level of effort with a reward, we can begin to trigger the reward processing module of our brain, effectively feeding our nascent addiction. (Bonus points for substituting “eat a bowl of ice cream” with “go for a run” or some other healthy habit.) After repeating these steps several times, you’ll likely find yourself autonomously attracted to the work you logically don’t want to do. There’s a lesson for agile product owners here too: Stories reduced to their smallest atomic parts can give developers little “slam dunks” wherein the reward is baked into the process of moving the story along the agile board.

It’s important not to create additional negative addictions during this process—and equally important to keep the aDLS on its “toes.” Give yourself a huge reward for doing very little. Then give yourself a small reward for doing something huge. Sometimes, give no reward. Or flip a coin and if it’s heads, eat the ice cream; tails: Go back to work! This “random” nature of the rewards helps cement the working addiction using ideas from something (anecdotally) more addictive than cocaine: gambling.
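For fun, here’s that variable-ratio reward schedule as a few lines of Ruby (a toy sketch; the rewards are whatever works for you):

# Sometimes a big reward, sometimes a small one, sometimes nothing at all.
REWARDS = ['a bowl of ice cream', 'a quick run', 'one YouTube video', nil]

def reward_for_tiny_task
  prize = REWARDS.sample # nil means no reward this round
  prize ? "Nice. Treat yourself to #{prize}." : 'No reward this time. Back to work!'
end

puts reward_for_tiny_task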

This method for training an addiction might work better for some than others. One study claimed that 47% of the population carried a genetic marker for addiction. Even so, we all have an aDLS and we can all learn to train it to our advantage.

Having trouble exploiting your addictive tendencies to become more productive? What other techniques have you tried when you need to break out of a procrastination rut?

The Quest for Better Tests

Over the past twenty years, I’ve written my fair share of unit tests, mostly just covering the happy path and sending in some bogus inputs to test the edges. Typically following a fat-model-thin-controller method (often recommended by me), I failed to understand the point of integration tests. I tried TDD at the beginning of several greenfield projects, but I was never successful in making it sustainable. Similarly, Selenium worked at first but quickly proved too brittle to keep up with rapidly changing UIs. (In retrospect, bad CSS architecture on those projects probably deserved the blame more than Selenium per se.)

Despite my somewhat lackluster attitude toward testing, my employers and customers knew me as a big advocate for test automation, one who always insisted that we never release anything to QA without at least a layer of unit tests. Oftentimes I was overruled by more senior leadership. As expected, from time to time, we all got burned by bugs that would have been easily caught by more comprehensive tests. We swore we’d write better tests next time. But for a million reasons—“speed to market” being the peskiest of them—testing never became a consistent priority.

In February of 2016, I joined Lab Zero. The very first observation I made after starting on a project—a financial services application three years in the making—was the sheer volume of test code. Nearly everywhere I looked, I found at least a 10:1 ratio of lines of test code to lines of “real” code. Shortly after starting on my first story, it became readily apparent that at least a 10:1 ratio of developer effort was required to continue this pattern. We joked about developers who reported their status during daily standup by saying, “I’m done with the code and just need to write some tests,” because we knew that was a euphemism for being less than 10% done with the story!

It didn’t take long for me to realize how much catching up I needed to do. In fact, the project leader told me it would take me “a year” to learn how to test properly. After first thinking that he sounded condescending, I came to realize that he was just being realistic. Testing is hard; testing effectively is even harder.

Ten months into my Test Quest, here are some important lessons I’ve picked up about automated testing.

Note: I used Ruby, Rspec and Cucumber to create my code samples, but the lessons learned will likely apply to other ecosystems.

The myth of 100% code coverage

Sure, code coverage is an important metric, but one that only tells part of the story. Test coverage is not the same as good test coverage. It’s remarkably easy to write tests that test nothing at all, that test the wrong things or that test the right things—but in ways that never fail.

Consider the following example, wherein the remove_employee method has a glaring error, one that will easily be caught by a unit test. Or will it?
require 'set'

class Company
  def initialize
    @employees = Set.new
  end

  def add_employee(person)
    @employees << person
    @employees.size
  end

  def remove_employee(person)
    @employees.size - 1 # danger: incorrect implementation!
  end
end

RSpec.describe Company, :type => :model do
  let(:subject) { Company.new }

  describe 'managing employees' do
    let(:person) { double('person') }

    it 'removes an employee' do
      employee_count = subject.add_employee(person)
      expect(subject.remove_employee(person)).to eq(employee_count - 1)
    end
  end
end

Because the test for removing employees naively compares only the outputs of the add and remove methods, it passes with flying colors even though the remove_employee method internals are totally wrong.

And this is why it’s a good idea to…

Test internals instead of just inputs and outputs

In most—if not all—programming languages, there are many more ways to produce “outputs” than just the return values of method calls.

C/C++ developers can optionally pass primitives to functions by reference (e.g. int &param1), morphing those inputs into potential outputs. More modern languages restrict everything to pass-by-value, but most of the time what’s being passed “by value” is actually a reference to an instance of an object. As a result, it’s possible—and quite commonplace—to mutate the object instance itself in the context of a method, providing another sneaky way for methods to have unexpected “outputs.”
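Sticking with the Company example above, even a blunt test of internals catches what the input/output comparison missed. Reaching for instance_variable_get is heavy-handed, but it makes the point:

RSpec.describe Company, :type => :model do
  let(:subject) { Company.new }
  let(:person) { double('person') }

  it 'removes the employee from the underlying set' do
    subject.add_employee(person)
    subject.remove_employee(person)
    # Fails against the buggy implementation: the person is still in the set.
    expect(subject.instance_variable_get(:@employees)).not_to include(person)
  end
end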

Unfortunately, testing internals can be challenging, but it doesn’t have to be.

Design and write testable code

A previous version of me believed that only a very limited set of circumstances should trump writing elegant code. I recently relaxed this constraint, adopting the belief that it’s okay to over-decompose code (and make other code design compromises) in order to serve the goal of writing code that’s more testable.

For example, I might replace a simple, elegant expression with a method that wraps it, e.g.:

shape.color == :blue

vs.

def is_blue?(shape)
  shape.color == :blue
end

In the past, code like this would make my eyes bleed. However, it’s really easy now to stub out is_blue? so that it returns a mock object or performs some other test-only behavior.

This is a contrived example, but if figuring out whether a shape is blue required a database read or a call to an underlying service object, then over-decomposition like this is a small price to pay to make the code testable.
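Here’s that seam in action, sketched with a hypothetical ShapeCounter. Once is_blue? exists as its own method, RSpec can stub it, and no database or service ever gets touched:

class ShapeCounter
  def is_blue?(shape)
    shape.color == :blue # imagine a database read or service call here
  end

  def count_blue(shapes)
    shapes.count { |shape| is_blue?(shape) }
  end
end

RSpec.describe ShapeCounter do
  it 'counts blue shapes without hitting any backing store' do
    counter = ShapeCounter.new
    # First shape "is blue," the second is not; no real shapes required.
    allow(counter).to receive(:is_blue?).and_return(true, false)
    expect(counter.count_blue([double, double])).to eq(1)
  end
end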

Test incrementally

I’ve found TDD (specifically a test-first methodology) to be overly prescriptive, usually leading to diminishing returns as the project gets more complex. If it helps clarify the specs and define edges more easily, then by all means, write tests first! However, I’ve found more productivity (and less head-scratching) comes from writing tests not necessarily first, but in short iterative bursts.

Every time I finish an “idea” in code (for lack of a better term), I switch over and edit the test, usually already open in a split-screen view next to the code. If the “idea” is too complex, I take a step back and flesh out more tests to help me clarify what I’m trying to accomplish in the code.

In the past I’ve also worked in a pairing setup where I wrote the code and switched back-and-forth with another developer writing tests. Though I haven’t done this recently, it’s another technique that’s worked well for me.

DRY code, wet lets

Don’t Repeat Yourself (DRY) is a great rule of thumb for writing code, but it can be disastrous when memoizing test data, e.g. through calls to RSpec’s let or let!

With the exception of some truly global concepts (e.g. user_id), all test data should be initialized in close proximity to (read: immediately before) the tests that use it and should not be reused between unrelated tests.

Thinking I was helping, I once tried to DRY up some lets, only to realize soon after that I had no idea what test data was getting passed to which tests. Even if it feels cumbersome to initialize the same data over and over before each test, it’s the right thing to do.
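Sketched with the Company class from earlier, the “wet” version initializes data immediately above the examples that use it, even at the cost of repetition:

RSpec.describe Company, :type => :model do
  describe 'adding employees' do
    let(:person) { double('person') } # initialized right where it's used

    it 'returns the new headcount' do
      expect(Company.new.add_employee(person)).to eq(1)
    end
  end

  describe 'removing employees' do
    let(:person) { double('person') } # deliberately repeated, not shared

    # ...more examples here...
  end
end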

Re-use Cucumbers with Scenario Outlines

Unlike lets, some parts of the test ecosystem are actually designed for reuse. One example: Scenario Outlines. I recommend using these whenever possible.

With Cucumber, Scenario Outlines represent the “functions” in an otherwise functionless DSL. In addition to the obvious reduction in code bulk, thinking about how I can turn several tests into one test “template” helps me write more thoughtful, self-documenting tests.
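Here’s a sketch of the idea (the feature and steps are hypothetical): one outline serves as the template, and the Examples table supplies the concrete cases:

Feature: Employee management

  Scenario Outline: Adding employees changes the headcount
    Given a company with <start> employees
    When I add <added> more employees
    Then the headcount should be <total>

    Examples:
      | start | added | total |
      | 0     | 1     | 1     |
      | 5     | 2     | 7     |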

Vary only what needs to be varied

It’s tempting to cut corners (and make tests run more efficiently) by favoring randomizing test data over creating different tests for different values. Often this practice is harmless, especially if the specific values—as long as they’re in range, e.g. a person’s age—are inconsequential. (If specific values matter, e.g. people 65 and over get medical benefits, they should of course get their own explicit tests.)

Randomizing test data can also be a trap. For example, a test for a get_birth_year method might start to “flicker” or “flap,” meaning that it passes and fails non-deterministically between test runs—all because of the decision to randomize ages.
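Here’s a sketch of how that flap happens (the names are hypothetical). The test randomizes a birth date, then assumes age is simply the current year minus the birth year, an assumption that only holds once the birthday has passed:

require 'date'

def age_of(birth_date, today = Date.today)
  age = today.year - birth_date.year
  birthday_passed = ([today.month, today.day] <=> [birth_date.month, birth_date.day]) >= 0
  birthday_passed ? age : age - 1
end

def get_birth_year(age, today = Date.today)
  today.year - age # off by one for anyone whose birthday hasn't happened yet
end

# Passes on some runs and fails on others, driven only by the random date:
birth_date = Date.new(rand(1950..2000), rand(1..12), rand(1..28))
puts get_birth_year(age_of(birth_date)) == birth_date.year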

To protect against this, it helps to treat each test as a controlled experiment, i.e. by keeping the scientific method in mind. Try to control everything that can be controlled and vary only the specific inputs getting tested. Of course, there are things we can’t control, like the system clock, the speed of the network and the availability and behavior of upstream systems. But whenever things can be controlled, control them.

Write meaningful, descriptive test names

Acknowledging the fact that I just recommended thinking like a scientist, I’m now going to suggest putting on a writer hat. When naming test cases and writing Cucumber steps (which read like prose already), it’s super-important to be descriptive, concise and accurate.

In a place full of smart people like Lab Zero (#humblebrag), developers are not necessarily the only people looking at tests. Recently I had an agile product owner ask me how a certain feature handled different types of inputs. To answer the question, I walked him through my rspecs, reading each test name aloud and describing the expectations.

Writing coaches always say “show, don’t tell.” There is simply no better way to show—and prove—that a feature works than reading through the tests, which serve as the closest link between the specs and the code.

Putting the Science in “Computer Science”

One of my professors in college said that any discipline with the word “science” in its name is actually not a science. This is especially true for computer science, which some schools classify as a fine art (making it possible to get a BA in CS). Writing code is certainly a form of communication, at least with peers and future developers. Of course, they are not the customers. And the best way to “communicate” with customers is to provide something for them that works as designed.

How do we ensure that? With well-written tests.

Tests really put the science in computer science. Think of them as a series of carefully controlled experiments. The hypothesis is that the code implements the spec.

Without tests, there’s really no way to know if it does or not.

* * *

Originally published on Lab Zero’s blog.

Why We Shouldn’t Compare Vault 7 to Snowden’s Leaks

For seven years I worked as a government contractor developing software for CIA. Although I was not briefed into as many compartments as a systems administrator like Snowden, I held a TS/SCI clearance and had the same ability to access classified information as any “govie,” just with a different color badge.

Also unlike Snowden, I didn’t knowingly compromise any classified material. That being said, what Snowden did is ultimately good for civil liberties in this country. Moreover, the courage and bravery of his actions make him a true patriot, an American hero and the mother of all whistleblowers.

This is simply not the case for the anonymous leaker(s) behind Vault 7.

The reason for this lies not in the specific methods of cyberwarfare that were leaked today, but rather in who was the target and by whom were they targeted. In other words, CIA using cyber attacks against foreign nations is very different from NSA violating American citizens’ 4th Amendment rights with wholesale data collection from wireless carriers.

Spying on Americans is simply not in CIA’s charter. We have plenty of ways to fuck with Americans: NSA, FBI, DOJ, IRS, state and local police, meter maids and a million other authorities. But unless you’re communicating with ISIS, CIA couldn’t care less about what’s happening in your living room.

What CIA does care about is gathering intelligence around the world to keep Americans safe at home and abroad. Of course there are boundaries. Sometimes those boundaries get crossed. Cyber attacks, however, do not violate the Geneva Conventions or any other rules of engagement. It’s 2017, ffs. If our country weren’t exploiting hostile nations’ computer networks and systems, I would be disappointed in us. If Alan Turing hadn’t “hacked” the Enigma code during WWII, this post would probably be written in German.

There are two big arguments against this, two reasons why people are saying this release of information is good for America and her freedoms.

The first argument is that CIA did us a disservice by not sharing these exploits with the private sector, thereby leaving the doors open for bad guys.

That is true, but only in part. Hackers would need to independently find these same vulnerabilities and find ways to exploit them. It’s not like they’re gonna call CIA’s helpdesk for virus installation instructions. Furthermore, we in the open source community have a long history of whitehat hacking: finding vulnerabilities and reporting them back to vendors to make the digital world safer and more secure.

The second (and related) argument is that viruses and other malware could fall into the wrong hands. This is also true, just like it’s true for assault weapons, hard drugs and prostitution. They’re all illegal af, yet the bad guys still have ways to get them. This doesn’t mean we should stop cyber espionage, any more than it means we should stop making military assault rifles. Like with all our spying activities—and with spying activities in general—we should just do a better job covering them up, in much the same way we protect the real identities of (human) assets in the field.

In sharp contrast with what Snowden did, this release will have a net negative impact on our intelligence-gathering capabilities, weakening our ability to engage with potentially dangerous foreign powers.


Perhaps the worst part of this disclosure is that it further undermines CIA and erodes confidence in the intelligence community, already under fire from the so-called Trump Administration. It also comes, conveniently, just after Trump claimed he was inappropriately wiretapped.

Technically, this leak has no bearing on wiretapping, but it’s safe to assume that Trump will take it as an opportunity to further belittle CIA and the intelligence community’s claims about Russian interference in the election.

We will probably never know, but I strongly suspect a Russian source provided some if not all of these leaked materials. Let’s not forget: even though Snowden lives in exile in Russia, he’s as American as apple pie.

Good on You, Good Eggs

Ordering is a piece of cake using Good Eggs’ responsive web site or iOS app

Even the most saintly among us have experienced schadenfreude, the act of taking pleasure in someone else’s misfortune. More often than not, however, I find myself seeking a way to empathize with someone’s achievements.

Unfortunately, the American English lexicon falls short in this capacity. We’re equipped only with the phrase “Good for you,” which is as likely to carry authenticity as it is sarcasm, envy or ridicule.

To properly express myself under these circumstances, I must turn to British English and their lovely idiom “Good on you,” which leaves little room for misinterpretation.

This foray into the subtleties of English idiom might seem silly and off-topic, but I assure you it’s the only way I can possibly reflect my feelings about this matter, namely: There is quite literally nothing that isn’t good about Good Eggs, the online grocer that has returned to my daughter’s elementary school for a second joint fundraiser.

As they did in the fall, Good Eggs plans to offer, for a limited time, 10% of gross sales back to participating Bay Area schools. At Hidden Valley in Marin County’s quaint town of San Anselmo, those funds go directly to the school garden. To participate, just sign up and use the code HIDDENVALLEY at checkout. As an added bonus, Good Eggs will also apply a credit of $15 at the outset—and another $15 for customers who place orders before March 15th.

Good on you, Good Eggs. And good on all of us who participate in this amazing program that benefits local farmers/producers and local schools while putting great food on the table with unparalleled convenience.

Good Eggs offers same-day grocery delivery (for orders placed by 1pm) or next-day delivery (for orders placed by midnight). They have a web site and an iOS app that make ordering a breeze. Their extensive catalog of products makes it possible for them to be the sole source of groceries for even the most discerning families of foodies.

A Good Egg carefully inspects some dino kale before packing

I recently had the pleasure of touring the Good Eggs facility in San Francisco. While soothing music played through the warehouse PA, I marveled at the discipline applied to each food product from the four different temperature zones as it gets hand-inspected before packing. They reject any item with even the slightest imperfection and relegate it to the Good Eggs kitchen, where master chefs repurpose it into lunch for fellow staff members. This virtuous cycle results in food waste numbers of about 4%, besting most grocery stores by a factor of ten, according to my host.

Their packaging department demonstrates a comparable concern for Mother Earth by using compostable, reusable and recyclable packaging wherever possible. Customers can leave their packing materials at their door; when the next delivery comes around, they’ll get retrieved and repurposed.

Master chefs at work in the Good Eggs kitchen

As I was treated to a revitalizing turmeric, ginger and almond milk “tea” from the Good Eggs kitchen, I learned that they intend to enter the market for school lunches and pre-packaged meals requiring minimal preparation, and that they plan to start selling alcohol in the near future.

Good Eggs offers pricing similar to a high-end grocer like Whole Foods, with free delivery for orders over $60. They also carry specialty items like Tartine bread and Bi-Rite ice cream, for which they charge a premium.

Small price to pay for not having to queue up for two hours for a loaf of bread or a scoop of ice cream.

There I go with my British English again.

I Made my Wife a Bot for Valentine’s Day

This morning I rolled out Tink, a simple interactive chatbot I wrote for my wife as a gift for Valentine’s Day.

Every few days, Tink will text my sweetie a randomly-selected yes-or-no question from a list of questions I wrote, e.g. Would you like to take hip-hop classes? At different random times, it will also text me random questions from the same list. When we both reply “Y” to the same question, it will notify us of that happy coincidence and suggest that we, say, finally enroll in those hip-hop classes.
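Under the hood, the matching idea is simple. Here’s a minimal Ruby sketch of it (the real Tink is open source, linked below; the names and plumbing here are simplified stand-ins):

QUESTIONS = [
  'Would you like to take hip-hop classes?',
  'Want to try that new ramen place?'
]

# question => { partner => true/false }
answers = Hash.new { |hash, question| hash[question] = {} }

def record_answer(answers, question, partner, reply)
  answers[question][partner] = (reply.strip.upcase == 'Y')
  if answers[question].size == 2 && answers[question].values.all?
    puts "It's a match! Maybe it's finally time: #{question}"
  end
end

record_answer(answers, QUESTIONS.first, :me, 'Y')
record_answer(answers, QUESTIONS.first, :sweetie, 'Y') # triggers the match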

Basically it’s Tinder, but for couples. But not in the way you’re thinking (you dirty dawg).

Instead it’s a fun way for two romantic partners (or just friends?) to discover shared interests they didn’t know they had. I suspect Tink will also become a motivator to actually do the things it suggests. (We’ve been meaning to sign up for hip-hop classes for months, but haven’t yet.)

The questions I wrote for Tink’s inaugural run mostly revolve around ideas for fun dates, outdoor activities, new restaurants we want to try, etc. However, there’s no reason why Tink questions couldn’t cover religion, politics, sex—or even topics actually fit for the dinner table.

With G-rated questions, Tink could serve families or even small friend groups, but right now it’s only a bicycle built for two.

Wanna take a peek under the hood? I made Tink open source under the MIT license.

Five First Impressions of Lab Zero

I recently joined Lab Zero as a software developer. My friend Brien Wankel, one of their Principal Engineers, had been encouraging me to interview here for more than a year. I hesitated because, to put it bluntly: What’s so special about another boutique software development agency? There are hundreds—if not thousands—of them in the Bay Area. Plus, I was still trying to strike gold playing the startup equity game and I had already run my own boutique software development agency for a decade.

At long last I took the plunge, and I’m really glad I did. Ten weeks in, these are my first impressions of Lab Zero.

1. “We pay for every hour worked, no exceptions.” —The CEO

Lab Zero’s culture in three words: “Life, then Work.” Everyone here, myself included, is a W-2 hourly employee. To prevent people from worrying about using PTO when they’re sick (which eats into vacation time), we’ve done away with the concept altogether. We get paid for every hour we work—and we don’t get paid when we’re not working. That also has the side benefit of discouraging people from coming to work when they’re contagious. As a substitute for PTO, we accrue personal/family sick time, bereavement and jury duty time.

Employment here includes all the usual benefits, but without the attached expectation of working 60-80 hours/week (or more) and getting paid for 40. I surf every Wednesday morning (if the weather conditions cooperate) and I volunteer at my daughter’s school in the afternoon. I might only bill for 4-5 hours on a Wednesday. I might put in a few more hours after dinner—or not.

I haven’t put this to the test yet, but I may need to scale back my hours at Lab Zero by 50% or more to run tech for another political campaign or to get more involved in the farm-to-table movement or maybe to start a side business—or not.

2. “We follow software best practices.” —Everybody

So we put life first and work second. But does that mean that we don’t care about what we do? Hells no!

Lab Zero embraces a documented set of methodologies that make great software development possible, if not pleasurable. We have 100% or near-100% test coverage on all our projects; we write unit tests, functional tests, automated UI tests—to the tune of roughly ten lines of test code for every one line of “real” code. We practice continuous integration; we have a stringent pull-request review process and we reject pull requests for even the slightest blemish, e.g. a typo in a commit message.

This culture of doing things right at all costs may sound too onerous to be practical, but what I learned after a couple weeks here is that the effort we put into rigorous testing pays us back in spades, measured by the very small number of issues that slip through the cracks, eventually needing to be caught by QA or found in production. Plus, as long as I can keep the test suites passing, I can refactor without fear that I’m going to break something.

And if I do break something inadvertently, it usually just means I need to write a better test, which in turn will help overall quality in a virtuous cycle.

3. “We do Agile really, really well.” —Our Customers

Agile prides itself on being agile, per se. (How deliciously meta is that?) Take what you want, leave the rest. As a result, there are infinitely many ways to do agile well—and an equally indeterminate number of ways to do it badly.

Last week, I heard a senior executive at one of our customer sites tell us (in front of a room of twenty people) that we were the gold standard for agile projects at their organization. Enough said.

4. “We care about having a beautiful, functional workspace.” (And it shows.)

We have top-shelf coffee, great snacks and drinks, a loaded kegerator, automatic standup/sitdown desks (each with four presets), Apple Cinema Displays, an office sound system, massive TVs, stylin’ chairs and Fluid Stance boards. If you need anything, within reason, it just shows up at the door.

In addition to that, Nicole Andrijauskas just finished painting an amazing mural spanning the entire south wall of the office.

We have catered lunch-and-learn sessions every other Friday. On the alternating Fridays, we descend in a hungry mob to a local restaurant (like Barbacco, this past Friday) and Lab Zero picks up the tab. In addition to Fridays and the regular bevy of snacks and beverages, there are also bagel Wednesdays, eclairs one day, coffee cake another, etc.

As much as I love our office, I also love my half-time Wednesdays working from home (and/or the beach). Which is totally fine, of course. I’ve even found a leftover bagel or two waiting for me on Thursday mornings.

5. “Diversity is woven into the very fabric of our culture.” —Me

The notion of full-time employment does not preclude hiring people who rawk at things besides their profession, but employers don’t explicitly benefit from it either.

At Lab Zero, where life comes first—and turnover is near nil—we’ve built an eclectic mix of developers, designers, writers, agile product owners and bizdev folks who double as parents, recovering chemists, musicians, surfers, teachers, artists, marathoners, photographers, LGBTQ folks, future real estate moguls and one of the world’s leading experts on tiki.

There’s no better testament to Lab Zero’s people than this: I could do my job almost exclusively at home. I could also bill an extra two hours instead of commuting to downtown SF from the North Bay. But I actually want to come to the office.

Ten weeks in. Zero regrets. Can you say this about your job? If not, maybe you should join us for lunch.