Breck Yunits' Blog

Pricking my finger then measuring the level of ketones in my blood.

April 16, 2024 — I pricked my finger and moved a disposable ketone measuring stick into the newly formed drop of blood. I was checking my "blood ketone" levels. If the result came back higher than 0.8 mmol/L, I would be in a state of "ketosis".

The meter showed "1.0 mmol/L". Success! I was still in ketosis.

*

It was day 174 since, at wits' end, I started a therapeutic ketogenic diet as a treatment strategy for bipolar disorder.

Before hearing about keto for bipolar, I had tried, and failed with, nearly all of the major bipolar treatment strategies.

One, lithium, is often called the "gold standard" bipolar treatment.

*

I tried lithium. Three times.

I'll admit, lithium did seem to stabilize my energy, but there was a serious downside: the lithium slingshot, I call it.

The side effects and annoyances of taking lithium are constant. Some parts of my brain and body are objectively worse on it. Brain fog. Weight gain. That sort of thing.

So, even if I took lithium correctly for 300 days, on day 301 parts of me would still be voting to stop.

And so, inevitably, at some point parts of me would find an excuse to stop. Then, within weeks, I would enter a state of hypomania or mania. As the lithium tapered out of my system, my brain energy would shoot forward way past the starting point. The lithium slingshot.

*

I looked at the ketone meter again: 1.0 mmol/L.

I paused. There was something familiar about that number.

Then I remembered. Years ago my doctor said we were aiming for a lithium blood level of 1.0 mmol/L.

To be in therapeutic ketosis I needed ketones to be around one millimole.

To be at a therapeutic lithium level I needed lithium to be around one millimole.

Both chemicals have neurological effects when they are at around one millimole.

What are the odds? We can measure chemicals at 100 nanomolar levels, 10 nanomolar levels, 1 nanomolar levels, at levels below that, and everything in between.

Was it just a random coincidence that these 2 different chemicals had overlapping therapeutic windows, or is this a clue to some common mechanism(s)?

Unfortunately, it had been 22 years since high school chemistry class, and I couldn't even remember what a millimole was. This was going to take me some time.

But I had to know the answer. You could even say there was a chance my life depended on it.

So, I dove in. This is the story of coming up with an answer to the question: what are the odds that the therapeutic windows for 2 different chemicals would overlap?

*

Lithium

Taking lithium was annoying.

I had to swallow pills each day. I am a competitive person, but in a pill swallowing competition I would lose to pretty much anyone.

I suck at swallowing pills. If they are larger than an M&M I'm going to need 3 because I'll gag and cough up the first two.

I don't know why I'm worse than others at swallowing. Maybe it's from that summer on Cape Cod when I was a kid and left the kitchen with my mouth full and almost choked to death on a piece of undercooked bacon. I made it back to the kitchen seconds before I passed out, my dad saw what was happening, gave me a pop, and saved my life. Maybe ever since that day my brain has turned every unchewed thing in my mouth into a piece of undercooked bacon.

Now, if it were just the discomfort of swallowing pills, I'm sure I could focus on that and learn.

But it's not just swallowing the lithium. You also have to get the lithium.

I can go to a liquor store and buy enough alcohol to supply a bar for a decade, but god forbid I be allowed to keep a year's supply of a supposedly "gold standard life saving" medicine on hand. Instead I had to go to a pharmacy every month or so, often getting a different variant of lithium pills (extended release, different shapes, different sizes, etc.) which I could never predict beforehand, and never being quite sure what I had to pay until I was at the register.

More than that, to keep the lithium prescriptions coming I had to maintain an expensive relationship with a psychiatrist or nurse practitioner. The system tells us the worst thing you can do is stop taking your meds, but then is designed to make it as easy as possible to stop taking your meds. As far as I can tell, bipolar meds can only be managed successfully long term if you are as predictable as the moon, and bipolar people decidedly are not.

Of course, there are real safety concerns with lithium. At high enough concentrations it can damage your kidneys and even kill you. So taking lithium came with another annoyance: laboratory blood tests. Measuring lithium in your blood is harder than measuring other things, such as glucose and ketones. (I say this now as if I already knew that, but I knew almost nothing about any of this until I set out to answer that first question). I can tell you from experience at-home blood tests are a 100x better experience than lab tests.

To sum up, not even talking about side effects but for logistical reasons alone, taking lithium was annoying.

*

Keto

Keto is different.

There are no pills to swallow. There are no prescriptions to perpetually refill. There is no relationship with a licensed provider I have to maintain to keep getting those prescriptions. I don't need to worry about toxicity levels¹. And I can do the blood tests myself, at home.

The "side effects" from eating healthier whole foods include weight loss, better teeth and better skin.

(I do need to note that there are still many long-term unknowns around keto and, like anything, if done improperly there can be serious health consequences¹. And it is sad to say no to real bagels, pizza and pasta).

But here's the thing about keto for bipolar disorder: it is basically brand new. Although the idea of the therapeutic ketogenic diet has been around for over a hundred years and is well studied for epilepsy, how keto could be used for bipolar disorder has just barely been looked at by science. In fact, if it weren't for a few curious pioneers² and philanthropists, it would remain barely looked at by science.

Now there's a serious effort underway to test and better understand keto for bipolar. In a few years, we will know a lot more.

Who knows, maybe the expanded science will cause ketones to pass lithium as the new gold standard in bipolar.

Meanwhile, while we wait for the professional scientists to come to confident, big-data-backed answers, amateur scientists, like yours truly, have gathered in online forums to share knowledge and help reach an understanding of keto for bipolar faster.

And so now I return to the one millimole question.

*

What is a ketone?

Ketones are tiny molecules. The ketone BHB (beta-hydroxybutyrate) accounts for around 70-80% of your ketones. Image Source: NIH

A ketone is small. Very small. The diameter of a ketone is 0.6 nanometers. Have you heard how small an atom is? A ketone is not much bigger. A ketone is only 10-15 atoms (hydrogen, carbon, and oxygen). Ketones are so small that if you laid 20 million of them end to end, you would only make it across one penny.

There are 3 kinds of ketones. They have long chemical names. BHB (C_4H_8O_3), which makes up about 70-80% of your ketones. AcAc, or acetoacetate (C_4H_6O_3), which makes up around 20%. And acetone (C_3H_6O), which makes up around 2%.

Source

BHB is the one we care most about. It is the one most abundant. When I prick my finger and measure my ketone levels, I'm measuring BHB.

Ketone test strips contain an enzyme called β-hydroxybutyrate dehydrogenase. This reacts with the BHB. The reaction produces a change in electrical current, which is measured by the handheld device and communicated to me as my blood ketone level (a technique called amperometry).

At home finger prick ketone blood tests are highly accurate. Source: Keto Mojo GK+ manual

Somewhere around 90% of ketone production happens in the mitochondria of the liver cells (hepatocytes)³. The liver is actually the largest organ inside the human body (after that is the brain, lungs, heart, ...). You can't live without a liver. Maybe that's why they call it the liver?

Ketones are small and water soluble. From the liver, ketones enter the blood plasma (not the blood cells) to reach other destinations in the body. Blood plasma makes up about 55% of the volume of the blood, is mostly water, and distributes important things like ketones throughout the body.

Ketones that are not used by cells for energy production leave the body through urine (60-80%), breath (15-25%), and sweat (<5%). To get in your urine, ketones are filtered from your blood by your kidneys, travel down your ureters, into your bladder, and then travel out into the world, emitting a fruity "ketone" smell.

Sidenote: It is interesting that humans are able to smell ketones. I wonder why we would have evolved an olfactory sensitivity to them. Is it a positive smell, or a warning sign?

For ketones to reach brain cells, they travel in the blood plasma from your liver to your brain, then cross the blood brain barrier (which has mechanisms to permit the entry of ketones), into the interstitial fluid of the brain, and from there are taken into the cells, where they are used in the Krebs cycle to make ATP.

*

Okay. So I can describe a ketone. What about a millimole of ketones?

Turns out, that is a lot of ketones.

One mole is ~6 \times 10^{23} molecules. This means that one millimole of ketones is:

602,200,000,000,000,000,000 ketones

Wow! I forgot how big the small world is.

If I was able to pile up all the ketones in my blood, how much would that amount to?

The amount of whole blood (blood cells + blood plasma) varies depending on the size of the person. I am 5'10", 170 lbs, which means I have roughly 6 liters of blood in me.

If you took out all my blood, this is roughly what it would look like.

I will leave the detailed math in the comments, but the final answer is about 0.5 mL worth of ketones.
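
The gist of that math, as a minimal sketch (assuming BHB stands in for all ketones, and using approximate values for its molar mass and density):

    ketone_level_mmol_per_L = 1.0   # my meter reading
    blood_volume_L = 6.0            # rough estimate for my size
    molar_mass_g_per_mol = 104.1    # BHB (C4H8O3)
    density_g_per_mL = 1.2          # approximate, for pure BHB

    mols = ketone_level_mmol_per_L / 1000 * blood_volume_L  # 0.006 mol
    grams = mols * molar_mass_g_per_mol                     # ~0.62 g
    mL = grams / density_g_per_mL                           # ~0.52 mL
    print(f"{mL:.2f} mL of ketones")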

This means, if you gathered all the ketones in my blood right now, despite it being an absolutely massive number of ketones, you'd have around a blueberry's worth of ketones.

If you gathered all the ketones circulating in my blood right now it would be about the size of a blueberry.

I told you they were tiny.

Of course, the amount of ketones in my blood at any one moment is not the same as the amount of ketones my liver is producing. I mean, the purpose of ketones isn't to circulate in the blood but to be consumed by cells.

We measure ketones in the blood, because that's the easiest place to measure them, but they are in other places too, like your cerebral spinal fluid, sweat, urine, and 💩.

Estimates vary on how many ketones my body would produce in a day while in ketosis, but 100 grams of ketones per day seems to be a decent ballpark, which amounts to around one mole of ketones (1,000 millimoles), which, if we wanted to visualize it, would be about the size of 200 blueberries:

All the ketones made by a person in a day might be around 200 blueberries of ketones.
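
The ballpark math for that daily figure, sketched with the same approximate BHB numbers as before (the 0.5 mL "blueberry unit" is my own assumption):

    grams_per_day = 100              # ballpark daily ketone production in ketosis
    molar_mass_g_per_mol = 104.1     # BHB
    density_g_per_mL = 1.2           # approximate
    blueberry_mL = 0.5               # the blueberry unit from earlier

    mols = grams_per_day / molar_mass_g_per_mol   # ~0.96 mol, i.e. ~1,000 millimoles
    mL = grams_per_day / density_g_per_mL         # ~83 mL
    print(mL / blueberry_mL, "blueberries")       # ~170, on the order of 200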

Alright, so now we have a better sense of what a millimole of ketones is. Let's turn our attention to lithium.

*

What is lithium?

Lithium carbonate is also a tiny molecule. Image Source: PubChem

There are four main lithium salts in medicinal use. Lithium Carbonate (Li_2CO_3) is by far the most common, accounting for around 95% of the lithium prescribed in the world. Lithium Orotate (C_5H_3LiN_2O_4) is the second most popular, accounting for around 3%-4%. Lithium Citrate and Lithium Aspartate are the least used. Lithium Chloride was also used medicinally in the past, before a consensus emerged that it was more toxic than the others.

Lithium in pill form. Lithium Orotate (shown above) is studied much less than Lithium Carbonate, but one advantage it has is that you can buy it cheaply online, with no prescription. Unfortunately lithium at high levels is toxic and there is no at-home lithium testing yet.

Because it is the dominant form, for the rest of this post when I say "lithium" I am referring to Lithium Carbonate.

Just like ketones, lithium is tiny. In fact, according to my calculations, a molecule of lithium is around 2.5x smaller than a ketone molecule.

Unlike ketones, you cannot check your lithium levels at home (though some people are working on that). Instead, to measure lithium levels your blood needs to be processed in a lab using techniques with names like Ion-Selective Electrode, Atomic Absorption Spectroscopy, Inductively Coupled Plasma Mass Spectrometry or Flame Emission Spectroscopy.

I can't really find a great reason why at home lithium tests aren't available. Some people write that it wouldn't be safe, because getting accurate readings is more difficult, but I think multiple at home readings, even if off a bit, would be better than no readings at all. If I had to pick an explanation, it would be that because of diabetes, there's a big market for glucose and ketone tests, but it's relatively rare to take lithium so there isn't a big enough market for a company to build a convenient at home lithium test at this time.

After you swallow your lithium, the pills land in your stomach. There, fluids dissolve them. The lithium (Li+) splits from the carbonate. The next destination in your gastrointestinal tract is your small intestine, where ~100% of the lithium ions are absorbed into your bloodstream.

Like ketones, lithium crosses the blood brain barrier. Unlike ketones, lithium crosses via passive diffusion. Ketones rely on transporter proteins (well, not all ketones: acetone can cross passively).

Once lithium is in your brain, it both remains in the extracellular space and also is able to enter neurons.

When a lithium ion enters a cell, it may hang around for quite a bit, but eventually when it leaves it is the same lithium ion. This is different than a ketone molecule, which undergoes metabolic transformations in cells.

Like ketones, your body excretes lithium in your urine and sweat. (Sidenote: unlike ketones, humans have not evolved an olfactory sensitivity to lithium.)

Another lithium test a lab can do is called an RBC test. This test measures the levels of lithium ions that have entered your cells (red blood cells specifically). To perform this test, your red blood cells are first separated from your blood plasma, then they are lysed (split open) so their contents can be measured. It seems lithium levels in blood plasma spike quickly after ingestion, then decrease quickly, but levels change more slowly in other places, like red blood cells, neurons, and CSF (Cerebrospinal fluid).

This seems to explain why my lithium slingshot did not happen immediately after stopping lithium but in the weeks after. Once you stop taking lithium, the lithium content in your blood runs out fast, but it takes more time for the lithium to leave all of your cells. Then, those cells, which had been working in a lithium rich environment, seem vulnerable to catching a manic fire. The lithium slingshot.

Can we see the amounts of lithium in the brain? Apparently we can, to some degree, using MRS (magnetic resonance spectroscopy). Ketones too. I looked into being able to take my own MRS pictures at home, but didn't have a place to put one of the machines. Oh yeah, and they cost $500,000!

I also want to note something interesting about lithium blood levels vs ketone levels. We mentioned earlier that when in ketosis your body is constantly producing and consuming ketones, so many that they would amount to over 100 blueberries throughout the day. But lithium is exogenous and you usually take it just once or twice a day, so your body is exposed to just a few blueberries of lithium a day. But the blood levels are similar. So where are all those ketones going? It seems that your cells must be grabbing those ketones from the blood far faster than lithium is grabbed from the blood.

Lithium seems to move more slowly into cells compared to ketones. Perhaps cells take up ketones 2x-10x faster than they take up lithium. Again, this is probably the reason for the delayed lithium slingshot.
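
To make that timing intuition concrete, here is a toy two-compartment washout model. The half-lives are assumptions for illustration (plasma lithium elimination is often cited at roughly a day; the cellular figure is my guess), not clinical numbers:

    plasma_half_life_days = 1.0   # roughly cited for blood plasma (assumption)
    cell_half_life_days = 10.0    # hypothetical, for neurons and red blood cells

    def remaining(half_life_days, days):
        """Fraction of lithium remaining after `days` without a dose."""
        return 0.5 ** (days / half_life_days)

    for day in [1, 3, 7, 14, 21]:
        print(day,
              round(remaining(plasma_half_life_days, day), 3),
              round(remaining(cell_half_life_days, day), 3))

In this toy model the plasma lithium is nearly gone within days, while the cellular pool lingers for weeks, which matches a slingshot that arrives weeks later rather than immediately.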

*

Whew. That was a lot to learn.

Now, armed with these basic models of ketones and lithium, I can return to the main question of this post.

*

One millimole: what are the odds?

This whole post began because I noticed that 2 very different treatments for bipolar disorder, a ketogenic diet and lithium, use blood tests to determine if someone is in a therapeutic range. It just so happens that even though each is testing for the presence of something different (ketones vs lithium), if your level is 1.0 mmol/L then you are in the therapeutic range in both treatments.

Now of course, the ranges are not identical. For lithium, the therapeutic range is generally given as between "0.4 and 1.2 mmol/L". Lithium levels above 1.5 mmol/L are considered toxic and above 2.0 mmol/L can be life-threatening.

The range for nutritional ketosis is usually given as between "0.5 and 3.0 mmol/L". Some say ketosis begins above 0.8 mmol/L. Some say nutritional ketosis requires levels above 1.5 mmol/L.

Admittedly, then, my "one millimole" is a slight simplification, as the target therapeutic ranges are not identical, but there is overlap⁴.

*

We can measure a ton of chemicals in the body. Different chemicals have different effects in different concentrations. Should we be surprised at all that two different chemicals both seem to treat the same condition at similar concentrations?

Or is it just really common in human health to find chemicals with a therapeutic range around the one millimole per liter level?

If lots of chemicals have a therapeutic range around one millimole, then the intersection of therapeutic ketone and therapeutic lithium levels would be less surprising.

If few chemicals had a therapeutic range around that region, then it would be more surprising to see that intersection occur, and it would then be worth probing whether that is a clue to some biological mechanism. (Perhaps that is a common saturation level where enough neurons are exposed to the chemicals to prevent any breakout manic wildfires?)

To answer this question, we need to know how likely it is that 2 randomly selected blood chemicals would have overlapping therapeutic ranges.

*

How many blood tests are there?

To know the odds that any two random blood tests would have an overlapping target range, we first need to know how many different blood chemistry tests there are.

Just how many chemicals are there? According to the WHO, humans have now labeled over 160,000,000 chemicals! If you measured your blood levels for each one of these chemicals with one drop of blood per test, doing one test a second, you would be dead by tomorrow night and have done only 0.075% of the tests.
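
A back-of-envelope sketch of that claim, assuming a ~0.05 mL finger-prick drop and ~6 liters of blood (both assumptions):

    chemicals = 160_000_000
    drop_mL = 0.05                            # typical finger-prick drop (assumption)
    blood_mL = 6000                           # ~6 liters

    drops_until_empty = blood_mL / drop_mL    # 120,000 drops
    hours = drops_until_empty / 3600          # ~33 hours at one test per second
    fraction = drops_until_empty / chemicals
    print(f"{hours:.0f} hours, {fraction:.5%} of the tests")   # ~33 hours, ~0.075%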

So, let's narrow our list of chemicals to just the ones commonly found in human blood.

An example besides ketones and lithium is glucose. For glucose, the normal blood level range is between 3.9 and 6.1 mmol/L. Levels below 3.9 mmol/L start to be considered hypoglycemic, and levels below 2.8 mmol/L can be dangerous. Levels above 7.8 mmol/L when fasting start to be considered hyperglycemic, as do levels above 11.1 mmol/L 2 hours after a meal.

Glucose, C_6H_{12}O_6, is another tiny molecule. A carbohydrate, it is the primary fuel of metabolism. Glucose blood tests can easily be done at home. Image Source: NIH

*

My first day looking for a good dataset of chemicals found in human blood with reference ranges did not go well. I spent a couple of hours without success. So I started making my own.

On day 2, while continuing to build my own blood test dataset for this post, I found a page on Wikipedia that already has a list of the most common blood tests with reference ranges, and some Wikipedians had even made a visualization. After cursing myself for missing this on the first day, I switched to being grateful to the Wikipedians and started enhancing my own dataset with this information. I also found this page on Wikipedia that lists 240 human blood components.

I spent a couple of hours building my own dataset to make a chart like this, then found out it's already been done on Wikipedia.

I was thrilled to find this information on Wikipedia, but still it seemed like a small number of chemicals. Are none of the other 160,000,000 chemicals relevant? I kept searching and finally found a team who built an amazing website called HMDB that lists 3,126 chemicals that would be relevant to my question. I felt good knowing that I could do this initial analysis using the Wikipedia data, and if it seems worthwhile in the future, the HMDB data can be used to do it again at a 10x scale.

Finally I had enough raw material to start building my first structured, clean, tabular dataset to answer my question. It was a bit of data-cleaning grunt work, and v1 is pretty ugly and I'm sure packed with errors, but finally I had a dataset of the reference levels for around 240 chemicals found in human blood.

So now I can finally answer: one millimole, what are the odds?

*

My Answer (v1): 1 in 20

Of the 241 chemicals in my dataset, only 14 have reference levels that overlap with the therapeutic ranges of ketones and lithium. Those 14 chemicals are listed below (a sketch of the overlap check follows the list):

  • Urea
  • Glucose
  • Potassium
  • Creatine Kinase
  • Lactate Dehydrogenase (LDH)
  • Calcium
  • Triglycerides
  • Alkaline Phosphatase
  • HDL Cholesterol
  • Phosphate
  • Magnesium
  • Albumin
  • Cortisol
  • Uric Acid
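
The overlap check itself is simple. A minimal sketch (the filename and column names here are hypothetical; my actual v1 dataset is messier):

    import csv

    def overlaps(lo1, hi1, lo2, hi2):
        """True if two ranges share any values."""
        return max(lo1, lo2) <= min(hi1, hi2)

    TARGET_LO, TARGET_HI = 0.5, 1.2   # rough keto/lithium overlap window, mmol/L

    hits = []
    with open("blood_chemicals.csv") as f:    # hypothetical filename
        for row in csv.DictReader(f):
            lo = float(row["low_mmol_per_L"])
            hi = float(row["high_mmol_per_L"])
            if overlaps(lo, hi, TARGET_LO, TARGET_HI):
                hits.append(row["name"])

    print(len(hits), "of 241 chemicals overlap:", hits)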

If someone called me at this moment and offered me even odds as to whether it was just a random coincidence that there is overlap in the therapeutic ketone range and the therapeutic lithium range, or whether it was because of some related biomechanisms, I would bet heavily that it is not a coincidence. This would be great, because I would be able to honestly say I did my best in the time allotted, made the best decision with the information I had, and could stop work on this frighteningly long blog post.

Alas, the phone did not ring. So I can't "call it a day" just yet.

Especially because I got an answer that makes me look good. This is often a sign of wishful thinking. Imagine if it turned out that half of all chemicals in the blood had target levels around 1 mmol/L, and that this was a commonly known fact to everyone in medicine and chemistry. If I found that answer, then I might feel silly for spending so much time on such a naive hunch. The fact that my hunch seems more interesting now makes me think I may have biased my dataset.

I must walk around my work and do another inspection.

*

The biggest flaw in my current research is that I haven't made an extra effort to include other chemicals taken by bipolar people in my dataset. Let's do that now.

*

I went back and added 8 drugs by hand to my dataset, including the common bipolar medications Valproate, Lamictal, Zyprexa, and Abilify. Of the 8, only one has a bit of overlap: Valproate (~0.2-0.5 mmol/L). Interestingly enough, the bipolar medication that usually ranks #2 (after lithium) is... Valproate!

Valproate (aka Depakote) is often ranked 2nd among bipolar medications and also has overlap with the ketone/lithium range. It also crosses the blood brain barrier by passive diffusion. Image Source: NIH

So I expanded my dataset and still found it unlikely that 2 random chemicals would have overlapping therapeutic levels. At this point, I am confident in concluding this leg of my research journey. With this additional data I would not change my initial bet. My answer is: the slight overlap in the therapeutic blood level ranges of the 2 top bipolar drugs with the therapeutic level of ketones is a strong clue to some underlying biomechanisms.

*

My research won't stop here. But this blog post will.

This post is already maybe my longest, and if it gets much longer not even the author would want to read it.

Also, given that my background in chemistry consists of the past 1 week of Internet searches combined with 1 year of high school chemistry 22 years ago, there could be some really naive, glaring mistakes in this post, such as flaws in my mental models or huge errors in my datasets (largely assembled with copy/paste and LLMs) that radically alter the conclusions. It's better that I publish what I have now, and maybe a reader can alert me to suboptimal models or paths I've followed.

This post may not describe my future understanding, but does describe my current understanding, and my journey getting here.

I am looking forward to researching the next logical question: what might those common biomechanisms be? How might ketones and lithium work in bipolar? Perhaps there will be a Part II to this post.

But for now, if you'll excuse me, I've got fats to eat, and fingers to prick.


Notes

¹ There is a dangerous condition involving ketones called ketoacidosis, mostly affecting diabetics, which is associated with around 200,000 cases and 600 fatalities per year in the US.

² A good place to start reading about some of the keto for bipolar pioneers is Metabolic Mind.

³ I should note that there is recent research showing that astrocytes (a kind of brain cell) may also produce ketones. I did not dive deep down that thread yet, but thought I should mention it since it involves ketones and the brain, which is ultimately what I'm interested in.

⁴ I am using the one millimole number to simplify the writing a bit, but when you visualize things keep in mind we're thinking about overlapping ranges rather than a specific point.


Thanks to RGM for feedback on this post.

View source

April 10, 2024 — Now that I am writing more about Bipolar Disorder, and even have a category page for the term, I thought I should write a brief note on what I think about the term itself.

In short, I predict in the long run, as our understanding increases, the phrase "Bipolar Disorder" and its sub-phrases (Bipolar I, Bipolar II, Bipolar NOS, and Cyclothymia), will fall out of use and be replaced by a larger set of more specific terms clustered not by symptoms but by biological causes.

*

This is not an uncommon opinion to have. Kay Jamison, in a 2012 talk, said "It's a bit misleading to talk about bipolar disorder/manic depressive illness because we are really talking about twenty to twenty five different disorders that we don't know yet."

*

What might these new terms look like? One interesting new term from Hannah Warren is "neurometabolic dysfunction". It is based on the hypothesis that for at least one subset of people currently given the label "bipolar disorder", it may be more accurate to label their condition as a metabolic problem.

Even neurometabolic dysfunction is still a fairly broad term, but it heads in the direction of finding terms labeling conditions by their root biological causes, rather than by their symptoms.

*

What's wrong with naming conditions after their symptoms rather than their causes?

Imagine if instead of having more specific diagnoses like influenza, strep throat, malaria, UTIs, and colds we just had the term "Fever Disorder". Treatment outcomes would likely be a lot worse.

In a way this is kind of where we are at with the term "Bipolar Disorder".

*

Terminology has always been evolving. The two major classification systems are the DSM and ICD. Newer ideas include RDoC and HiTOP.

ICD-6 (1948) had the term "manic-depressive". DSM-1 (1952) had the term "manic-depressive". DSM-III (1980) introduced the term Bipolar Disorder and the two categories of Bipolar I and Bipolar II. ICD-10 (1992) also switched to the Bipolar Disorder terms.

Newer classification systems like RDoC (2010) and HiTOP (2015) have interesting new approaches for describing things (which I myself have not fully gotten up to speed on yet).

An interesting visual of HiTOP, from Wikipedia.

*

About that word Disorder

Once we understand the biological mechanisms driving the energy cycles of people currently diagnosed with Bipolar Disorder, we might come up with new therapeutic approaches that are so effective that we then come up with terms that don't end in a word like "Disorder" or "Dysfunction".

Perhaps there are natural energy cycles that serve a purpose that we just haven't identified and figured out yet, and only because of that ignorance are these cycles problematic.

Perhaps once we're finally able to fully understand the biology, it might not make any sense to call these patterns "Disorders", just as it would not make sense to diagnose lobsters with "Molting Disorders."

*

Reification Fallacies

Knowledge cultures commit Reification Fallacies often enough that the term has a Wikipedia page. This just means coming up with a term to label a discrete pattern that does not quite actually exist. People might correctly identify that there is a pattern (or patterns), but it can be a mistake to be overconfident that they've correctly identified whether it is one pattern or many, and if the latter, where to draw the lines between patterns.

If you give too much weight to a model just because it has been reified, it may lead you astray. All models are wrong; some are useful; some can be harmful.

There is a proverb "it is difficult to find a black cat in a dark room, especially if there is no cat".

Likewise, it may be difficult to find Bipolar Disorder, especially if there is no Bipolar Disorder (and there are instead dozens of different smaller patterns).

*

Present Usefulness of the term

I've now presented my case for why I dislike the term "Bipolar Disorder" and why I predict its eventual demise.

Despite all this, in the present moment, it is a very useful term for coordinating people who are afflicted by these conditions, people who are treating these conditions, and people trying to figure out what these conditions really are.

I expect it will be many years, if not decades, before we have better terms, and so until then, I will keep tagging posts like this one with the term Bipolar Disorder.

View source

April 5, 2024 — Have you ever examined the correlation between your writing behavior and sleep?

I've written some things in my life that make me cringe. I might cringe because I see some past writing was naive, mistaken, locked-in, overconfident, unkind, insensitive, aggressive, or grandiose.

I now have a pretty big dataset to identify my secret trick to write more cringe: less sleep.

For this post I combined 2,500 nights of sleep data with 58 blog posts. A 7 year experiment to see how sleep affects my writing.

Interactive version.

~7 Hours is the Cutoff

Most posts above 7 hours of sleep do not need a sleep disclaimer. Most posts below 7 hours do. That's not to say there is no value in the posts made with under 7 hours of sleep; it's just less rigorous writing (and thinking). On the plus side, writing with little sleep can be more concise at times. It might exaggerate the key ideas, but it nevertheless identifies them fearlessly and concisely.

Interactive Table

Static image of table above.

More Posts. Similar Word Counts. Higher Scatteredness. Similar IQ. Higher Confidence.

I actually post slightly more when I sleep less (Pearson correlation coefficient of -.14), but fewer words per post, which is indicative of a more "scattered" thinking state. I was surprised to see that I don't generally generate a whole lot more words in sleep-deprived states. I perceive my writing to be smarter during those times, but looking back it's clearly not.
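
For anyone who wants to run this kind of analysis on their own data, here is roughly the shape of it (a sketch with pandas; the filenames and column names are hypothetical):

    import pandas as pd

    sleep = pd.read_csv("sleep.csv", parse_dates=["date"])   # date, hours_slept
    posts = pd.read_csv("posts.csv", parse_dates=["date"])   # date, word_count

    # Did I post on a given night's sleep? Correlate that with hours slept.
    nights = sleep.assign(posted=sleep["date"].isin(posts["date"]).astype(int))
    print(nights["hours_slept"].corr(nights["posted"]))      # Pearson r

    # And per post, how does sleep relate to word count?
    merged = posts.merge(sleep, on="date", how="left")
    print(merged["hours_slept"].corr(merged["word_count"]))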

Other Social Media

Besides this blog, I have long written and posted content to HackerNews, Reddit, other discussion forums, and at times Twitter, Instagram, Facebook, YouTube, and LinkedIn. I haven't done the data grunt work, but if my memory serves me correctly I am confident my publishing behavior on those platforms mirrors my blogging behavior, with regards to sleep.

Public vs Private Writing

There have been stretches where I published little publicly but was generating a similar number of tokens, just in private groups. My writing patterns in private groups also mirror my patterns on this blog, with regards to sleep.

Tangent: when I've been lucky enough to be part of brainiac private organizations (such as Microsoft, YCombinator, Our World in Data, academia, and so on), I got to read so much brilliant writing by people who rarely post publicly, and every time I think about that I am humbled. There is so much well-written content on the public web, and it is only a fraction of the great content ever written.

Sleep Disclaimers

I realize I already have an unofficial "sleep disclaimer" policy. I have de-indexed (but kept published) at least a couple of sleep-deprived posts, and added a disclaimer/correction to at least 2 others. Now with this dataset I am sure I will append a few more sleep disclaimers.

With sleep disclaimers, I can say, "hey, might be interesting ideas here, but don't train too heavily on this".

Grateful for Git

I am happy with my decision to use git for this blog so that I always keep an honest history, while still being free to down-weight sleep-deprived content and try and keep my more thought-out ideas front and center.

Benefits of Peer Review

I don't have a column for it (yet), but it does seem my better posts often were the ones where I took the time to get friends and/or colleagues to review, IRL. Sleep deprived posts I would generally blast out without talking to anyone.

Peer review is a great filter, and a great forcing function to put more effort in.

On the other hand, because the importance of ideas varies by so many orders of magnitude (there are "black swan" ideas), you could make an argument that spending too much time in one area of ideas isn't the optimal strategy, and publishing things as you go, improving them later, is an approach with merit.

Writing data reflects the current phenomena in your brain

It seems when I sleep less, my brain is in more of a pleasure seeking state, has a bias to action ("don't think, just do"), and feels less pain than in a more rested state. Less sleep means less critical thinking. Less sleep seems to make me less willing to invest the time in rewiring my brain to correct mistaken thought habits.

Grateful for FitBit

I started wearing a Microsoft Band when it first came out in November 2014. Then a Band 2, then FitBit Charge, Ionic, Versa, and now Sense 2. I am grateful for all the people involved with creating these things. I think continued progress in the wearable sensor field is the best bet for improving human health.

View source

In lobsters, a steroid hormone called ecdysteroid spikes during the pre-molt phase and declines sharply after molting. Source: J. Sook Chung

April 3, 2024 — I just saw Dune 2 at the theater, but far more noteworthy is this YouTube video of a lobster molting. I can confidently claim that before that video I had never spent a minute of my life thinking about lobsters molting. To be honest, if you had asked me last year if lobsters molt I probably would have said "No". But, I mean, watch the video (variable speed is fine). What a fascinating slash beautiful slash disgusting slash painful slash magical slash moving thing to watch. Can you imagine going through something like that over and over again in your life? I guess humans should be thankful for our endoskeletons.

Why do lobsters molt?

Lobsters molt so they can grow. Lobsters molt so they can repair damaged or diseased shells. Lobsters molt so they can reproduce. And lobsters molt to enhance their sensory perceptions.

How often are lobsters molting?

Lobsters molt far more frequently when they are young (more than ten times per year) than when they are old (sometimes once every 3 years). They are generally in 1 of 4 phases: Pre-molt, Molt, Post-molt, or Inter-molt. The critical molting days are always brief, but in their later days lobsters spend less time in the Inter-molt phase and instead more time in drawn-out Pre-molt and Post-molt phases.

Why do I think lobster molting might be relevant to understanding human brains?

In my quest to find a more accurate model of bipolar disorder, I was wondering if human brains go through a similar process to lobsters that we haven't properly understood yet. Like lobsters, human brains are contained inside a skeleton (the skull). While we clearly don't molt our skulls, I wonder if there is some similar natural transformative cyclical process designed to keep our brains growing, shed damaged mental models of the world, improve reproduction, and enhance sensory perceptions. Could the cycles of bipolar disorder be a natural phasic phenomenon experienced by all humans, where those labeled "bipolar" just experience more intense molts than others, for some reason?

It seems like lobsters have no choice but to keep molting (except for the ones moved to captive, controlled environments designed to stop molting, like supermarket tanks where they go before being boiled to death). Maybe they are really stoked with their current shell, but nature says "sorry, too bad, time to grow", and gives them the painful boot. Similarly, maybe brains go through cycles where even if you were comfortable with your current interface to the world, nature has designed it so you will have to molt it anyway. It is a painful and vulnerable process (the mortality rate of a lobster molt is 10%), and there are no guarantees your new shell will be better than the last, but apparently, with lobsters at least, it is a risk that pays off in the game of natural selection.

Notes

While researching lobsters (for the first time), I also came upon this interesting post on a different topic related to lobsters and brains.

View source

April 2, 2024 — It has been over 3 years since I published the 2019 Tree Notation "Annual" Report. An update is long overdue. This is the second and last report as I am officially concluding the Tree Notation project.

I am deeply grateful to everyone who explored this idea with me. I believe it was worth exploring. Sometimes you think you may have discovered a new continent but it turns out to be just a small, mildly interesting island.

Hypothesis

Tree Notation was my failed decade-long research project to find the simplest universal syntax for computational languages, with the hypothesis that doing so would enable major efficiency gains in program synthesis and cross-domain collaboration. I had recognized that all computational languages have a tree form and that a 2D grid gives you enough syntax to encode trees, and wondered whether the differing syntaxes of our languages were holding us back from building the next generation of programming tools.

Results

The breakthrough gains of LLMs in the past eighteen months have clearly demonstrated that I was wrong. LLMs have shown AIs can read, write, and comprehend all languages across all domains at elite levels. A universal syntax was not what we needed for the next generation of symbolic tools, but instead what we needed was the transformer architecture, better GPUs, huge training efforts, et cetera. The difference between the time of the last report and now is that the upside potential of Tree Notation is no longer there. Back in 2019, program synthesis was still bad. No one had solved it. Tree Notation was my attempt to solve it from a different angle.

Reflection

The failure of this project will come as no surprise to almost everyone. Heck, in the 2019 report even I say "I am between 90-99% confident that Tree Notation is not a good idea". However, we kept making interesting progress, and though it was a long shot, if it did help unlock program synthesis it would have had huge upside potential. I felt compelled to keep exploring it seriously. Back in 2019 I wrote "No one has convinced me that this is a dead-end idea and I haven't seen enough evidence that this is a good idea". I have now thoroughly convinced myself, in large part thanks to the abundant evidence provided by LLMs, that Tree Notation is a dead-end idea (I would call it mildly interesting; it's still mildly useful in a few places).

Maintenance

I am not ending work 100%. More like 98%-99%. I will likely always blog and am writing this post in Scroll, an alternative to Markdown built on Tree Notation, which I personally enjoy and will continue to maintain. Someday AI writing environments may become so amazing that I abandon Scroll for those, but until then I expect to keep maintaining Scroll and its dependencies. I feel bad the PLDB project has deteriorated, and if someone is keenly interested in taking that over send me a message.

Financial Losses

I feel good about this effort from society's perspective as the world got a mildly interesting idea explored and the losses were privatized. I effectively lost all my money pursuing this line of research, at least in the hundreds of thousands in direct costs of failed applications and more in lost salary opportunity costs. But also, this effort did lead me on a path with certain temporarily lucrative side tangents and maybe I would have had less to lose had I not taken it on. Who knows, maybe the new 4D language research (see below) will lead to future gains.

Status of Long Bet

After someone suggested it, in 2017 I made a Long Bet about Tree Notation. My confidence came from my hunch that Tree Languages would be far easier for program synthesis, which would lead to more investment into Tree Languages, which would have network and compounding effects. Instead LLMs solved the program synthesis problem without requiring new languages, eliminating the only chance Tree Languages had to win. So, I now forecast a 99.999% chance the first part of that bet will not win.

My bet did have two clauses, the second predicting "someone somewhere may invent something even better than Tree Languages...which blows Tree Notation out of the water." This has sort of happened with LLMs. At the time of the bet I felt we were on the cusp of a program synthesis breakthrough that would radically change programming, and that happened, it just happened because of a new kind of (AI) programmer and not a new kind of language.

The bet was not about a general breakthrough in programming, but specifically about whether there will be a shuffling in our top named languages. So I see 99.X% odds I will lose the second clause of the bet as well. There remains a chance LLMs make another giant leap and who knows, maybe we start considering something like Prompting Dialects a language ("I am a programmer who knows the languages Claude and ChatGPT"). But I don't see that as likely, even if we are still on the steep part of the innovation curve.

The Other Reason I liked Tree Notation

LLMs have eliminated the primary pragmatic reason for working on Tree Notation research--they solved the program synthesis and cross domain collaboration problems. But I also enjoyed working on Tree Notation because it gave me an attack vector to try and crack knowledge in general. Now, however, I see a far better way to work on that latter problem.

Future Explorations: 4D languages

Looking back, I recognize I had a strong bias for words over weights. The mental resources I used to spend exploring Tree Notation I now use to explore 4D languages (with lots of 1D binary vectors for computation). Words are merely a tool for communicating thoughts. Thoughts compile to words and words decompile back to thoughts. I am now exploring the low level language of thought itself. Intelligence without words. The 4D language approach seems to be an orders-of-magnitude more direct route than Tree Notation to finding the answers I am looking for.

Conclusion

I called the first status update an "Annual Report", which was optimistic thinking. It took me years to get another one out. And it turns out this will be the last one.

It would have been great personally to have been right on this long shot bet, but in the end I was wrong. I absolutely gave it everything I had. I poured much blood, sweat, and tears into this effort. I was stubborn and persistent in figuring out whether this had potential or was just mildly interesting. I had a lot of help and support and am deeply grateful. I am sorry the offshoot products were not more useful (or good looking).

It took me a while to let Tree Notation go. Even after LLMs destroyed the potential upside of pragmatic utility of the notation, I still liked it because it gave me an interesting way to work on problems of knowledge itself. It wasn't until I had some insights into 4D languages that I finally could say there was no longer any need for Tree Notation. I am grateful for the experience and have now moved on to a new research journey.

View source

S = side length of box. P = pattern. t = time. V = voxel side length.

March 30, 2024 — Given a box with side S, over a certain timespan t, with minimum voxel resolution V, how many unique concepts C are needed to describe all the patterns (repeated phenomena) P that occur in the box?

As the size of the cube increases, the number of concepts needed increases. An increasing cube size captures more phenomena. You need more concepts for a box containing the earth than for a thimble containing a pebble. As your voxel size--the smallest unit of measurement--decreases, the number of concepts needed increases. As your time horizon increases, the number of concepts needed increases, as patterns combine to produce novel patterns in combinatorial ways, and some patterns only unfold over a certain period of time. Although, past a certain amount of time, maybe everything just repeats again. In fact, it seems likely that the number of concepts C would grow sigmoidally with each of these factors.
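
To make "grows sigmoidally" concrete, here is one toy form. It is purely illustrative: the constants a, b, and c are free parameters I made up, not derived from anything.

    C(S, V, t) \approx C_{max} \cdot \sigma\left(a \ln \frac{S^3}{V^3} + b \ln t + c\right), \quad \sigma(x) = \frac{1}{1 + e^{-x}}

Here S^3/V^3 is just the number of voxels in the box, so the concept count rises with box size, resolution, and time, but saturates at C_{max}, capturing the diminishing-returns intuition.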

Why are there any patterns at all? Why isn't the number of concepts zero? Why doesn't every box just contain randomness? There could be ~infinite random universes, but this one, for sure, contains patterns.

What is a pattern? A pattern is something accurately simulatable by a function. A concept really is just a function that simulates something, with inputs and outputs. A concept could be embodied in different ways: by software, by an analog computer, by an integrated circuit, by neurons, et cetera. If all you had in your box was a rock, you could have a concept of "persistence", which is that a rock is persistent in that it will still be there even as you increase the time.

Brains are pattern simulators, and symbols are a way for these simulators to communicate simulations.

You can classify patterns into natural patterns and man-made patterns. Mitosis is a natural pattern. Wheels are a simple man-made pattern and cars are a man-made pattern made up of many, many man-made patterns.

Science is the process of discovering patterns, mostly natural, and tagging them with concepts.

It's fairly easy to tag new man-made patterns with concepts. There are at most eight billion agents generating these (in practice, far fewer), and we can tag them as we create them.

But by the time we arrived on the scene with our tagging abilities, nature had already developed a backlog of a mind-numbingly large number of untagged patterns.

At the scale of the elements, the box is well described. Scientists have identified all the natural elements and the only new ones are man-made.

Scientists had a lot of low hanging fruit patterns to tag.

If you put a box around the earth, what percentage of nature's patterns have been tagged? Are we 50% done? Are we 1% done? Are we 0.0001% of the way there? Are we something like 45% of the way there, with greatly diminishing returns, approaching some hard limit of knowability? Or will there be a point where we've successfully uncovered all the useful patterns here on earth?

Both nature and man are constantly creating new patterns combinatorially. How many new microbes does nature invent every day? Who is inventing more patterns nowadays on earth: nature or man? How has that race changed over time? What about if you extend your box to contain the whole universe?

If you put a box around our universe, what were the first patterns in it? How is it that a box can contain patterns that evolve to be able to simulate the very box they are in?

A decreasing voxel size allows for identifying concepts that can generate predictions impossible with a higher voxel size, but also increases the number of untagged patterns.

Something being unpredictable after much effort means it is either truly unpredictable or just that the true pattern has not been found yet. That might be because the box is too small; the box is misplaced; the voxels are too big; the needed measurements cannot be taken; the measurements are not being taken enough over time. It does seem like the process of finding the right formula is not so hard, once the right data has been collected.

We often have a lot of misconcepts: concepts that don't really explain a pattern. Maybe a misconcept is correlated with some parts of a pattern, but it is very lacking compared to concepts that are far more reliable. You could also call these bullshit concepts.

If you put a box around a bunch of bricks, we seem to have a pretty good handle on all the useful concepts. Put it around a human brain though, and we still have a long way to go. Though, if you think about the progress made in the last 50 years, you can imagine we might possibly get far further in the next 50, if diminishing returns aren't too strong.

Can we make empirical claims about how many concepts C we can expect to describe patterns P given a box of size S, voxel size V, over time t? Perhaps it is possible to use Wikipedia to do such a thing. Maybe if we plotted that we would see the general relationship between these things.

Why might answering this question be useful? If we consider an encyclopedia E to contain all the useful concepts C in a box, then we might be able to make predictions about how complete our understanding of a topic is, regardless of the domain, by taking simple measurements of E alongside S, V, and t.


Sorry if you were expecting some quantified answers; I haven't done that hard thinking yet. This post is an early exploration of trying to think about what science is from first principles. Scientists and engineers and craftsmen have done absolutely amazing things in so many fields, but I'm very interested in areas where science has so far failed, and I try to think of possible root causes of that failure. In those situations, it might just be a matter of time (waiting for technologies to decrease voxel size, or take previously impossible measurements).


If you are interested in the concept of the evolution of patterns in nature, I recommend checking out assembly theory.

View source

March 8, 2024 — What is open-mindedness, from first principles? Here are some musings.

A Brief Formal Discussion

First I state the obvious, that open-mindedness, OM, is a measure of a mind, M. I will assume M can be modeled as a society of agents, A, each occupying some neural space N. The formation of new A in space N is done by learning process L. Some A can act as learning control agents and can act, with discretion, to block the M from learning new agents.

For the rest of this essay I'll spell out the full terms but stick to the list of concepts above.

Open-mindedness vs Closed-mindedness

A mind can be in various levels of open-mindedness both globally and locally.

With Artificial Neural Networks (ANNs) the current paradigm is to be open-minded during training and then closed-minded during inference (learning is stopped). In ANNs the open-mindedness state is generally global, whereas in biological neural networks it seems to almost always be in a mixed combination with both global and local levels of open-mindedness. I expect in the future engineers will get better at making ANNs that are able to keep certain areas open-minded.

Benefits of Open-Mindedness

A mind capable of developing new agents can build teams of cooperative agents capable of better exploiting the organism's environment. Also, a mind capable of developing new agents can replace under-performing agents with better ones.

Beyond this, I will take it as an assumption that open-mindedness is beneficial by default¹.

Costs of Open-Mindedness

Unfortunately time and resources are scarce and open-mindedness has costs.

1. Energy Costs

There are metabolic energy costs to developing new agents. There are also opportunity costs as investing time and energy into developing any specific agent comes at the cost of not developing other possible agents.

2. Vulnerability Costs

If an organism is open-minded and engages in learning it often involves making mistakes visible to other organisms. In competitive environments, other organisms can exploit this information against the open-minded mind.

3. Internal Competition for Neural Resources

It is possible that neural agents themselves are entities in a game of survival (like species, individuals, and genes), competing against other neural agents in the same mind, and so open-mindedness poses a threat to existing agents.

3a. More "Mouths" to Feed

It could be that each agent has a metabolic power draw greater than undeveloped neural material, against a relatively fixed power supply, and so adding new agents decreases the power supply to existing agents.

It could be that the supply of neural "materials" is largely fixed and to build themselves new agents must literally take materials from existing agents (such as molecules for axons and dendrites).

These costs would make existing agents opposed to new agents globally by default. It could be that in the beginning when unused neural materials are high, it is advantageous to be globally open-minded. Once agents have claimed a lot of space, the downsides of open-mindedness increase.

3b. Internal Neural Warfare Between Opposing Agents

Each agent has a functional territory over which it reigns. When a mind encounters problems related to that territory the problem is routed to the appropriate agent. An agent may only get energy if it is used. Without receiving any energy, an agent may die. It seems like it would be evolutionarily advantageous for agents to develop defense mechanisms that can discourage open-mindedness in areas related to its territory.

Hence, an agent might want open-mindedness in areas orthogonal to its own, but push for closed-mindedness in its specialization.

It also seems that alliances among factions of agents may form. If one agent resists a superior newcomer in its territory the rest of the mind would be worse off, so other agents might fight the agent promoting closed-mindedness.

Survival of the Stubborn Gene

A stubborn strategy could often pay off. Imagine a mind has a 33% chance of surviving a choice. A closed mind will act fast and make a mistake 67% of the time. An open mind might spend significant time and resources and make the incorrect choice only 10% of the time. However, the open-minded survivor might be wrongly smug, as they wouldn't observe that 80% of the time they failed to make a choice at all, and if they could have observed the global multiverse they would have realized their open-minded strategy actually only survived about 18% of the time (making a choice 20% of the time, and surviving 90% of those), versus the closed-minded's 33%.

Honesty

It is easier for a mind to be open-minded to opposing ideas in a domain where there are few existing neural agents. It is harder to be open-minded in a domain with strongly established neural agents. Honesty requires being open to developing neural agents in opposition to existing agents in order to make a genuine judgement about which is better. Honesty can be hard because it sometimes involves not just superficially steel-manning an opposing idea, but being genuinely open-minded to letting that opposing idea take over a domain from existing agents, if it turns out to be a better idea after all.

Conclusions

As I said in the beginning, these are mostly musings at the moment. I have many more questions than answers, including the questions below.

  • Is there such a thing as "blank" neurons?
  • If so, do "blank" neurons have a lower power draw?
  • If it exists, what would the "weapons" be in internal neural warfare? How do neural agents fight each other/fight learning?
  • Are there substances that increase global open-mindedness states? Decrease? What about local effects?
  • Would it be good practice to regularly (annually, perhaps) take such substances?
  • Are there patterns in what kinds of ideas people are open/closed minded about?

Notes

¹ Philosophically, by some measures, you could argue that having a mind is not clearly advantageous to objects in general. For example, organisms with minds make up a small percentage of the biomass on earth. Or you could say that some rocks last billions of years, whereas minds are gone much faster than that.


Open-Mindedness, Part II

April 5, 2024 — It's a few weeks later, and I find myself wanting to take another look at open-mindedness.

Why would a brain fight to stay closed-minded?

Let's go over the same ideas but starting from the slightly changed perspective of the question above.

A brain could fight to stay closed-minded as a form of agent NIMBYism, where existing neural agents don't want to compete against new agents.

It could be simply an energy conservation strategy, where your brain, by default, doesn't want to burn resources rewiring.

It could be a logical default--where your brain is trying to avoid the mistake of giving up on a way of thinking too early. Imagine the reward for a contrarian idea doesn't come until your 10th year of following it. If you give up in year 9, you pay 90% of the costs and get 0% of the reward.

It could be because your brain is trying to "save face", and doesn't want to suffer social penalties from being wrong. You can postpone feeling shame by keeping your mind closed, and hope that either your committed path will someday, somehow, finally pay off, or that something else happens so you don't have to face the social pain of admitting mistakes.

It could be because your brain legitimately has a dislike for the other idea(s), or doesn't trust the source.

It could be because your brain enjoys having a pastime where it can just repeat old patterns and relax. It might want to be closed-minded in an area because it has to be open-minded in other areas and does not want to be challenged at all hours of the day.

It could be out of respect for a social group and/or traditions, and one would rather be respectful to their groups than have the best mental model on every topic.

Or it could be simply a waiting strategy, where a mind is shut only to incrementally better ideas, and no new ideas are significant enough to be worth opening one's mind for.


February 24, 2024 — In the near future AI will be able to generate an extensive list and rating of all of the skills in someone's brain.

The ugly prototype I made at a hackathon in 2023 to explore this idea.

I'm a big fan of Minsky's conceptual model of the mind as a society of agents. A collection of neurons wire together in a certain way to form low level and higher level "agents". You have "agents" for everything you've learned: walking, crawling, standing, hugging, riding a bike, driving, slicing vegetables, chopping wood, reciting the capitals of countries, computing derivatives, et cetera. (You have lower level agents as well, but I'll leave those out of this post.) A human brain might contain millions of "agents". Technology is a long way away from being able to scan a brain and map out where every agent is located. However, I wondered: is technology close to being able to at least list all the agents in someone's brain? How close are we to being able to make a map of someone's mind, identifying the "agents" in their mind along with the "strength" of those agents?

Last year I was at a hackathon and had about 10 hours to make something utilizing AI, so that's what I tried to do.

I explained the concept to an OpenAI model via its API and asked it to generate a taxonomy of agents. It gave me a large list of possible agents such as WritingAgent, MusicalAgent, and PetOwnershipAgent. Then I fed it a large body of my writing and asked it to score each agent, estimating whether that agent was present in my mind and how strong it was.
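
The core of the prototype was just a couple of prompts in a loop. A rough reconstruction, using the current openai Python client--the prompts, model name, scoring scale, and file name below are illustrative stand-ins, not my exact hackathon code:

```python
# A rough reconstruction of the hackathon loop. Prompts, model name,
# scoring scale, and the corpus file name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_agent_taxonomy() -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Using Minsky's Society of Mind model, list 100 "
                       "high-level neural agents a person might have, one "
                       "per line, named like WritingAgent or PetOwnershipAgent.",
        }],
    )
    return resp.choices[0].message.content.splitlines()

def score_agent(agent: str, writing: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Here is a large body of someone's writing:\n\n"
                       f"{writing}\n\nOn a scale of 0-10, how strong is the "
                       f"evidence that this person has a {agent}? "
                       f"Reply with just the number.",
        }],
    )
    return resp.choices[0].message.content.strip()

writing = open("my_writing.txt").read()  # hypothetical corpus of my posts
for agent in generate_agent_taxonomy():
    print(agent, score_agent(agent, writing))
```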

I got promising results on my very first run, and it is easy to envision how this would scale by expanding the agent list, adding multi-modal data, et cetera.

I was in my twenties when I first learned the simple writing technique of "mind mapping". I think in the future there might be a new use of that phrase. AIs will be able to create a mind map of a person's mind nearly instantly, when given access to their data. Eventually that could combine with brain scans to correctly identify the 3D spatial positioning of neural agents, but well before that we may have an interesting new kind of brain visualization that is far more extensive than a resume or personality test.


February 21, 2024 — Everyone wants Optimal Answers to their Questions. What is an Optimal Answer? An Optimal Answer is an Answer that uses all relevant Cells in a Knowledge Base. Once you have the relevant Cells there are reductions, transformations, and visualizations to do, but the difficulty in generating Optimal Answers is dominated by the challenge of assembling data into a Knowledge Base and making relevant Cells easily findable.

Activated Cells in a Knowledge Base.

A Question has infinite possible Answers. Answers can be ranked as a function of the relevant Cells used and the relevant Cells missed. Let's say when a Cell is used by an Answer it is Activated.

So to approach the Optimal Answer to a Question you want to maximize the number of relevant Cells Activated.

You also want your Knowledge Base to deliver Optimal Answers fast and free. You don't want Answers where relevant Cells are missed, and you want your Knowledge Base to find and Activate all the relevant Cells in seconds, not days or weeks. (You also don't want Biased Answers, where some relevant Cells are ignored to promote an Answer that benefits some third party.) You want to be able to ask your Question and have all the relevant Cells Activated and the Optimal Answer returned immediately.

To quickly identify all the relevant Cells, your Knowledge Base needs them Connected along many different Axes. Cells that would be relevant to a Question but have few Connections are more likely to be missed.

So you want your Knowledge Base to have many Cells with many Connections. This Knowledge Base can then deliver many Optimal Answers. It has Synthesized Knowledge.
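
To make that vocabulary concrete, here is a toy sketch of the structure--the Cells, Connections, and ranking function below are a minimal illustration of the model, not any real system:

```python
# A toy Knowledge Base: Cells plus Connections. Relevant Cells are found
# by walking Connections outward from the Cells a Question directly
# touches, and an Answer is ranked by how many relevant Cells it Activates.
from collections import deque

knowledge_base = {           # Cell id -> set of Connected Cell ids
    "lithium": {"bipolar", "kidneys"},
    "bipolar": {"lithium", "sleep"},
    "sleep":   {"bipolar"},
    "kidneys": {"lithium"},
    "weather": set(),        # a poorly Connected Cell, easy to miss
}

def relevant_cells(seed_cells: set) -> set:
    seen, queue = set(seed_cells), deque(seed_cells)
    while queue:
        for neighbor in knowledge_base[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

def answer_rank(activated: set, relevant: set) -> float:
    """1.0 is an Optimal Answer: every relevant Cell was Activated."""
    return len(activated & relevant) / len(relevant)

relevant = relevant_cells({"lithium"})
print(answer_rank({"lithium", "bipolar"}, relevant))  # 0.5: misses Cells
print(answer_rank(relevant, relevant))                # 1.0: Optimal
```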

Wikipedia is a great Knowledge Base with a lot of Cells but a relatively small number of Connections per Cell. Wikipedia has Optimal Answers to many, many Questions. However, there are also a large number of important Questions for which Wikipedia has the Cells, but because those Cells lack Connections the Optimal Answers cannot be provided quickly and cheaply. Structured data is still lacking on Wikipedia.

My (failed) attempt

My attempt to solve the problem of Synthesizing Knowledge was TrueBase, where large numbers of Cells with large numbers of Connections could be put into place under human expert review. But ChatGPT, launched in November 2022, demonstrated that huge neural networks, through training matrices of weights, are an incredibly powerful way to Synthesize Knowledge. My approach was worse. Words are worse than weights.

Expanding Knowledge

There are many Questions where the best Answers, even after synthesizing all human knowledge, are still far from Optimal. Identifying the best data to gather next to get closer to Optimal Answers to those Questions is the next problem after synthesizing knowledge.

Today that process still requires agency and embodiment and is done by human scientists and pioneers, but I expect AIs will soon have these capabilities.


February 20, 2024 — A lot of people, including me, are excited about an ambitious new research effort to see if bipolar disorder is best modeled as a mitochondrial disorder. I've started writing about it, and expect to write more about it in the future. But that's not what I'm writing about today.

Today I want to explore a model of bipolar disorder that I've wondered about for a few years, after reading about Marvin Minsky's "Society of Mind" model of the brain. In the model I explore today, mania and depression are not the result of a chemical imbalance, nor the result of a metabolic disorder, but instead are two neural circuits that are learned over time and persist in the brain, whether active or not, like learned skills. This post explores the brain pilot model of bipolar disorder.

Brain Circuits

The gist of Minsky's theory is that you are not a single "I", but instead a large collection of separable neural circuits working together. Your brain starts as a raw collection of neural resources and groups of neurons wire and fire in different ways to form circuits (aka "agents" or "resources").

Circuits that prove useful become stronger and survive. Some of these circuits are very low level, like a circuit for blinking. Some circuits are higher level and learn to control lower level circuits to achieve their goals. For example, you can think of learning how to ride a bike as developing a "bike riding circuit" that is capable of coordinating your legs, arms, center of gravity, et cetera, to successfully steer and propel the bike.

To learn how to ride a bike, your body experiments with a lot of different circuits. The circuit that does the best job is active for a longer period of time, out-competing other possible bike riding circuits, receiving more resources, strengthening and persisting over time.

Brain Pilots

The circuits at the highest level I call brain pilots. Brain pilots are neural circuits that compete against each other for root level control of a brain; you might say the pilot in control is the one that experiences consciousness. To a brain pilot, the well being of its host is not the primary measure of success. Instead, the primary measure of success is how long that pilot is in control.

Learning how to mania

Children learn to crawl without knowing what they are doing. In learning to crawl, a circuit in the child's brain experiments with various combinations of contractions and relaxations of legs and arms. So it may be with learning to go manic.

At some point a circuit in your brain might start experimenting with various contractions and releases around brain regions like the amygdala, hippocampus, and prefrontal cortex, involved with things like mood, fear, anxiety, and executive function. This network, let's call it M, might at first be competing against 10 other possible brain pilots. The positive feelings associated with the combination that M is hitting upon keep M piloting for longer.

In that person's brain is a new lifetime "skill". Alongside crawling, they now know how to go manic. They now have a manic brain pilot they can switch to.

Why would someone learn how to mania? Perhaps it is a "necessity is the mother of invention" situation. Depression hits first, and a person's brain starts subconsciously prototyping new circuits to try and recover. Maybe MDDs and bipolars are the same, except the brains of MDDs never figured out the subconscious manic skill.

Bipolar brains may be less chaotic than normal brains

A recent paper that looks at bipolar disorder through the lens of chaos theory suggests that, counter-intuitively, there is not more chaos in a bipolar brain but less: "a more chaotic pattern is present in healthy systems". In the brain pilots model, the problem for someone with bipolar is not that they experience brain pilot switching--that is normal--it is that they have a manic brain pilot that is very skilled at staying in power. The problem is less brain pilot switching, not more.

Avoiding Brain Pilot Switches

The manic pilot "learns" that certain behaviors, while detrimental to the host, keep its time in control going.

Sleep is perhaps the ultimate brain pilot switcher. The pilot that goes to sleep in control does not know if it will be the pilot that wakes up in control. In the brain pilot model of bipolar, the manic pilot likes to avoid sleep because the less the host sleeps the less pilot switching that goes on, meaning the manic pilot's expected reign is longer.

The manic pilot could use spending as a way to bribe other brain circuits to keep it in power. Under the manic pilot, all brain circuits get what they want, and so those circuits in turn support the continuation of the manic pilot's reign.

The manic pilot triggers paranoia, and wariness of medication, for good reason. Friends and family who are worried about the person experiencing mania are indeed trying to get the manic pilot to give up control. While taking a medication won't kill the host, to the manic pilot it is a matter of life and death, and so that pilot will deploy the resources at its disposal accordingly.

At some point, by remaining in control for so long via its selfish actions, the manic pilot will have scorched the host's resources, and will retreat into hiding. But, like riding a bike, that neural circuit will remain in the brain, ready to pilot again if it gets its chance.

Despite the harm to the host, in terms of ranking brain pilots by their time in control, mania is a very good strategy.

Depression as a Pilot

Depression, like mania, is also a strong strategy for a brain pilot, when you rank them by time in control.

The depressed pilot discourages all effort. Any effort might lead to a positive chain of events that leads to a different brain pilot taking control.

The depressed brain pilot learns to stop its host from doing almost anything at all. The less the host does, the less the chance of a pilot switch.

Being in social settings often requires a lot of pilot switching. The depressed brain pilot steers its host away from those.

Perhaps the rumination the depressed pilot engages in is another way of keeping control and preserving its reign.

The negative self-talk, hating on all the other brain pilots in a person, could also be a way of keeping other pilots from taking control.

Thoughts on this model

I don't think the model explored above is a leading contender for finally explaining bipolar disorder, but I do think it is worth consideration.

  • It explains why mania and depression are lifelong: they are actually subconsciously learned skills that, once learned, will persist in one's brain for a lifetime, like learning to walk or ride a bike.
  • It explains why bipolar disorder has yet to be "solved". Bipolar disorder would be a brain agent disorder, and we can't yet "see" neural agents. Minsky's model of the brain is still conceptual. The biomarkers of the root cause of bipolar haven't been identified because it is not an organ or organelle or metabolite issue but is instead a brain circuit issue.
  • It explains why if you don't develop mania or depression relatively early in life, you are not likely to develop it later. Later in life there is more established competition, so forming a new brain pilot might be more difficult.
  • It explains certain behaviors present in the extremes as being logical actions for the respective brain pilots to maintain control of the host.

Unanswered Questions

  • Are there people who go manic (and hypomanic) but never depressed? It seems like this is a question we should be able to answer now with the vast amount of sleep data collected by FitBit, Apple, Garmin, Whoop, Samsung, Oura Ring, et cetera.
  • In what percentage of bipolars does mania develop first? In what percentage depression? It seems like to answer this question we need to wait until a large number of children wear wearables for many years.
  • Are brain pilots really a thing? Will Minsky's conceptual model be proven true by a factual one? If so, it seems like Jeff Hawkins and his team at Numenta might be the ones to prove it.


February 14, 2024 — The color of the cup on my desk is black.

For any fact there exist infinite fictions. I could have said the color is red, orange, yellow, green, blue, indigo, or violet.

What incentive is there in publishing a fiction like "the color of the cup is red"? There is no natural incentive.

But what if our government subsidized fiction? To subsidize something is to give it an artificial economic incentive.

If fiction were subsidized, because there can be so much more of it than fact, we would see far more fictions published than facts.

You would not only see things like "the color of the cup is red", you would see variations on variations like "the color of the cup is light red", "the color of the cup is dark red", and so on.

You would be inundated with fictions. You would constantly have to dig through fictions to see facts.

The color of the cup would stay steady, as truths do, but new shades would be reported hourly.

The information circulatory system, which naturally would circulate useful facts, would be hijacked to circulate mostly fiction.

As far as I can tell, this is exactly what copyright does. The further from fact a work goes, the more its artificial subsidy. The ratio of fiction to fact in our world might be unhealthy.

I've given up trying to change things. I have a different battle to fight. But here I shout into the void one more time, why do we think subsidizing fiction is a good idea?


February 11, 2024 — What does it mean to say a person believes X, where X is a series of words?

This means that the person's brain has a neural wiring that not only can generate the phrase X and synonyms of X, but is strong enough to guide their actions. Those actions might include not only motor actions, but internal trainings of other neural wirings.

However, just because a person is said to believe X does not mean their actions will always adhere to the policy of X. That is because of brain pilot switching. The probability that any given neural wiring is always active and in control is less than 1.

The strength of a belief is a function of how often that neural wiring is active and guiding behavior, and of the number of other possible brain pilots in the population.

It seems brains can get into states where the threshold for a belief to become a brain pilot decreases, and lots of beliefs get a chance at piloting a brain during a period of rapid brain pilot switching.

For a belief to exist means it had to outcompete other potential beliefs for survival in a resource constrained environment and provide a positive benefit to its host. If a host had the ability to simply erase beliefs instantly, it seems like too many beneficial beliefs would disappear prematurely. So beliefs are hard to erase from a person's neural wirings. However, people could add a new neural wiring NotX, that represents the belief that X is not true. They can then reinforce this new neural wiring, and eventually change the probability so that they are far more likely to have the NotX wiring active versus the X wiring.


February 9, 2024 — It is estimated 2% of the population is bipolar. Sunday I explored: what if that was 98%? And today I explore, why isn't it 0%?

Why does a condition that is 60-80% heritable, deemed a severe, chronic disorder, persist in society? Is this a case purely of selfish genes manipulating their host to reproduce? Is it a case of society changing in a way that previously useful traits are now harmful? Is it the case that society preserves bipolar genes because it can actually be a positive condition, hyperthymia, and there is a conspiracy to restrict that information for competitive reasons? Is it simply inevitable that any variable attribute will have outliers, and we are sure to have 2% mood outliers as we are to have 2% height outliers? Or is it the case that bipolars play a unique positive role in society and societies that have a small percentage of them do better than societies that don't?

First a disclaimer. I am not an evolutionary biologist and I know many have published empirical work on this topic already. If you want the latest and greatest, head to Google Scholar. If for some reason you'd prefer my thoughts on the matter, which are influenced from my first-person experience, then continue on.

Model 1: Bipolars as Selfish Gene Passers

Dawkins' book The Selfish Gene taught me that the main characters in the Darwinian game are not individuals but genes. Individuals can live a long life but if they fail to reproduce and pass on their genes they lose the game. So strategies that maximize the passing on of genes, where the survival of the individual is of secondary importance, are superior.

This seems to fit the data on bipolar really well. It is commonly thought the extremes of bipolar begin after puberty. Hypersexuality is a very clear characteristic of manias. Genes that cause people to go into a mode where they are simultaneously hypersexual, highly energized, social, charismatic, and grandiosely ambitious can clearly lead to increased odds of reproduction.

Someone with bipolar genes is optimized to have a volatile, shorter life, but with higher odds of reproducing. Although their exceptional energy levels are not sustainable, they can sustain them long enough to give the appearance that they are exceptional individuals worth mating with. This model logically explains a lot of the behavior of bipolars. They lack insight into their manic states because if they knew that their high energy was temporary and not their normal, the cognitive dissonance would negatively affect their interactions with potential mates. They are prone to overspending because their genes more easily get them into a paranoid state where they think death is imminent, and living life to the fullest and reproduction is urgent. Bipolars have much shorter life expectancy because that's what their genes design them for. Bipolars are designed to live fast, breed, and die young. Their decisions are actually more logical than they first appear if you assume they are expecting an early death.

If this is the best fitting model bipolars might rightly be seen as detrimental to a society that values productive, long lives. Society might be better off moving toward 0% bipolars. That might be difficult, however, as bipolar genes can adapt in ways to avoid detection, such as by using depression to hide their host when the energy inevitably fades, and by the already mentioned genuine lack of insight during manic episodes.

This is a sad model that views bipolar disorder exclusively as an exploitative genetic strategy, and I hope it is not true.

Model 2: Bipolar Genes Only Currently a Bad Fit in Society

The modern definition of bipolar disorder didn't appear until the late 1800's. Perhaps today's bipolar genes were not a problem until the 1800's. In other words, perhaps 0% of the current bipolar population would have been bipolar in the old days. Maybe new technologies changed society and those with bipolar genes simply have genes maladaptive to the new environment.

Let's list some possible examples.

Disordered circadian rhythms are a prime indicator of bipolar disorder. Perhaps it is all of the artificial lighting that affects some people more than others. Or maybe new abilities to quickly change your location on earth affect some worse than others.

Metabolic factors are prime indicators of bipolar disorder. Perhaps the rise of processed foods, sugar, and other new substances has affected some more than others.

In this model, some genes that used to be helpful or harmless have simply turned out to be detrimental in this new world of the past 200 or so years. Perhaps the reason for less bipolar in the Amish is that their society lacks technologies that exacerbate bipolar traits.

Things could change again. It could even be that future technological changes will be such that today's "bipolar genes" will not be tomorrow's bipolar genes. In other words, which genes lead to an extreme energy cycle might be different in future societies.

Model 3: Keeping Hyperthymics Bipolar

There's no question that in hypomanic states bipolars have a number of positive qualities, like increased energy, IQ, charisma, and decreased need for sleep. What if the ability to get into these high energy states is actually a gift, the states are sustainable long term without downsides, and this truth is just suppressed by successful bipolars to limit their competition in society?

In this model, the 2% bipolar population could actually all be hyperthymic, but instead a slim fraction (say 1% of the 2%, or .02% of the total population) of this cohort obtained power at some point and has used it to suppress information about the true positive nature of this condition to limit their competition for power in society.

I would love to believe this conspiracy theory; sadly, I haven't seen evidence for it. I do think it is worth listing though, as I really am curious if hyperthymic people really exist. I hope one day we'll have huge explorable population level datasets of biomarkers like sleep and can conclusively answer the question of whether hyperthymic people are out there.

Model 4: The Neutral Model (All distributions have a 2%)

Perhaps bipolar disorder is simply a name for the 2% outliers in brain energy cycles. In other words, even if everyone currently diagnosed as bipolar were to disappear, suddenly the people who were next in the percentile rankings would be deemed to have a disorder.

If you defined the tallest 2% of your population as suffering from "Height Disorder", and then deported them all, by definition you would still have the same number of people with Height Disorder, they would just be slightly closer to the mean. In other words, the claim that 2% of society is bipolar might just be a societal construct.

In this model, a society can't get to 0% bipolar disorder unless its energy level variation is perfectly uniform.

Positive Models

Above I listed models where having bipolars was detrimental or neutral to society. What about models where bipolars add value to society? In other words, are there models where societies with ~2% bipolars are better off than societies with 0% bipolars?

Model 5A: Creatives

Some claim bipolars are outliers not only on the brain energy spectrum, but also on the creativity spectrum. It could be that the creative contributions of bipolars to society outweigh the negative effects of their volatile energy levels.

Think about creativity like mining diamonds. Finding a diamond is hard, but once you've found it you've added to society's supply of circulating diamonds for all time. Similar for creative works. Coming up with a novel, useful combination is hard, but once it is mined passing it around for all time is relatively easy.

There's a non-linearity to finding diamonds and creating novel, useful creative works. Getting 99% of the way toward finding a diamond has the same payoff as getting 1% there: zero. Therefore, people who over-commit succeed at a higher rate, but also lose more when they fail.

Bipolars are likely to over-commit to ideas while in high energy states. This leads to a lot of hard failures, but also to more successes than a group of average committers would have. However, if this is the reason for the excess creativity of bipolars, it might diminish in the future, as the amount of "low hanging fruit"--diamonds that could be found within the duration of a manic episode--could decrease over time.
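
Here is a toy simulation of that non-linearity, under made-up assumptions (a fixed effort budget, and a hidden, uniformly distributed effort requirement per diamond). Concentration wins when the payoff is all-or-nothing, even though a failed over-committer loses their whole budget on a single project:

```python
import random

# Step-function payoff: a diamond found pays 1, a diamond 99% found pays 0.
# BUDGET and the uniform(1, 12) effort requirement are arbitrary
# assumptions for illustration.
BUDGET, TRIALS = 10, 100_000

def diamonds_per_budget(projects: int) -> float:
    effort_each = BUDGET / projects
    found = 0
    for _ in range(TRIALS):
        for _ in range(projects):
            if random.uniform(1, 12) <= effort_each:  # crossed the threshold
                found += 1
    return found / TRIALS

print(f"over-committer (1 project):     {diamonds_per_budget(1):.2f}")  # ~0.82
print(f"average committer (5 projects): {diamonds_per_budget(5):.2f}")  # ~0.45
```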

Bipolars naturally look at the world through 3 different perspectives: in a low, medium, and high mood state. This may provide more novel insights that can be mined into successful creative works.

It could also be that bipolars are more willing to take risks because their energy cycles prevent them from enjoying the comforts of normal societal rhythms anyway, so they have less to lose.

Model 5B: Warriors

In the past it could have been that bipolars made great warriors, and so a society that had a percentage of them was better off than a society with 0% bipolars. A bipolar in a manic state has high energy, needs little sleep, is goal directed, has faster response times, is less risk averse, assumes death is coming soon anyway, and can relatively easily get into a state of rage. It seems like a perfect combination for battle. In the old days they could fight at their best during hypomanic season and rest during depressed season.

Nowadays wars are more professional affairs, won not so much by emotion but by long term engineering, economics, and training. It might be that maintaining a percentage of bipolars was previously helpful for a society, but now there is not so much use.

Model 5C: Change Agents

Bipolars have a unique feature set. They are at least average, if not well above average, in being sensitive, observant, and critical of themselves and society. They have huge self-confidence during manic episodes. And they often have less to lose.

This might make them a good source of people who will stand up to society if it goes astray.

Most people go with the crowd and don't spend too much time exploring the morality of issues. Humans are very vulnerable to herd behavior. They often get in the mode where they question no actions by their own team. Maybe because bipolars, at times, are not afraid to go it alone, they can sometimes successfully stop bad behavior by a herd.

Perhaps societies with a percentage of bipolars are better off than if they had 0% because they end up being more equitable places with a larger population of empowered agents. Bipolars might not be moral role models, but maybe they are like canaries in the coal mine and can call society out on specific dangers.

It seems unlikely that they can effectively lead teams better than average. But perhaps they are better than average at triggering cascading changes in behavior. Maybe they can be effective early revolutionaries.

Model 5D: Rule-breakers

The survival of rules requires people who break them. A rule that is often broken is often abandoned. But a rule that is never broken is abandoned as well, as it is apparently unnecessary. Perhaps having a certain population that is prone to at times break the rules is beneficial to a society, because it preserves the primary benefit of having rules at all, which then might confer positive secondary economic and coordination benefits. Perhaps the short term volatility of these characters leads to a more robust long term government and confers a survival advantage on those societies with a percentage of bipolars.

Model 5E: Happiness Experiments

No one is as happy as someone in a pleasant state of hypomania. It is a peak feeling. If positive life events coincide with a hypomanic state, there is nothing better. So far, it is not known how to make that state last; it appears to always escalate into too much happiness, followed by depression.

But having people who experience too much happiness might be a risk worth taking by a population. A population with 0% bipolars would be a population not exploring the happiness state space.

To develop the genetic capability of sustainable happiness, you have to risk some experiments with unstable happiness.

Permanent, healthy hypomania might not ever be possible for some existential or mathematical reasons, but oh my god, is it something for a society to explore.

Before it goes wrong, healthy hypomania is an internal, utopian state. A society without bipolars would not be aware that such a state could exist. Nature is mostly dark, cold, and unforgiving. The world can be very depressing. But the happy times, though fleeting, can keep us going. Maybe a society without bipolars also gives up its genes for happiness and is thus worse off.

Having bipolars might be the price society pays for also having happiness.

If this is the case, then maybe in studying bipolars, particularly during elevated states, society might unlock new methods for healthier levels of happiness for everyone.

Recap

Bipolars have a natural low frequency, high amplitude energy cycle more extreme than most people's. The biological mechanisms of that energy cycle have not yet been solved by scientists, even after over 100 years of research. We also don't yet have very good ways to control that energy cycle, though it should be noted that lithium has proven the cycle is not unalterable. It certainly seems difficult, if not impossible, to remove the energy cycle without causing secondary negative effects. It is currently estimated that around 2% of the population has this irregular energy cycle. Why does society have such a percentage of people with this energy cycle?

It is possible, though extremely unlikely, that these energy cycles don't have to be cycles at all, that the elevated hyperthymic state is indefinitely sustainable, and information on how to do that has been suppressed.

It is possible that these abnormal energy cycles confer no benefit to society, benefiting only their own selfish genes. If that were the case, eliminating these energy cycles and eventually filtering out their genetic propagation seems reasonable.

It is possible that these cycles were once helpful to society, but no longer, and so the same conclusion above applies.

It is possible that having these outliers provides a check on societal herds, and bipolars have a beneficial, if sadly martyr-esque, role to play. A society without these outliers standing up for independent lines of thought perhaps might go in a wrong direction and end up somewhere bad.

Finally, it could be that in the study of the positive parts of the bipolar energy cycle, new things can be learned about happiness, leading to a better society for all.

Conclusion

Personally, as a bipolar, with the bipolar energy cycle in my body, I am unsure of what the best role for me is in society. I want to do right by everyone. I hope that by listing some of these theories it is clearer that the answer is not as straightforward as some people make it out to be.

If you have the energy cycle, should you embrace it and aim to be a creative, a warrior, a change agent? Or should you try to tame it? What if aiming for a long life expectancy, given your energy cycle, is not in your best interests or society's best interests? What if that is not what you were made for?

I am not sure.

The only thing I am sure of is that it would be great if we figured out the biology of this energy cycle.


Limitations of this essay

A productive exercise would be to get a large dataset on objective rates of bipolar disorder in different regions. Then these, and other models, could be applied, and we could learn what fits best. A major obstacle however is that no good large objective dataset on bipolar disorder is readily available for even one country yet, never mind multiple. That should change soon, as I will explore in future posts.


February 4, 2024 — In our universe, an estimated 2% of humans have bipolar disorder. Imagine a universe where that ratio is flipped.

A 98% Bipolar, 2% Stable World

In this alternate universe, nearly everyone experiences order-of-magnitude cyclical energy shifts. Society evolved very differently.

Work

There are no long term companies--all work is project based.

There is no concept of weekdays, weekends, holidays or vacations. People work during their energetic phases and save leisure for their low energy phases. A "normal" work schedule is something like 16 hour days for 200 days straight, followed by 200 days off.

Workers view their current project as the most important endeavor in the world, and focus intensely to not only get it done but smash records along the way.

Everyone has different periods. Some people have high energy periods that last 3 months, others that last for 12 months. When a person's energy is up, they are said to be "in-season". Someone who feels their energy level declining is said to be "going-out" of season. When their energy level returns they are said to be "coming-back" in season.

All projects have a human resources head who is constantly managing the departure of going-outs and intake of coming-backs. The HR person also goes in and out of season themselves, of course, and is constantly replaced like any other role.

People only work when they are in season. The idea of working when you are out of season would be laughed at.

There is no concept of careers or retirement. It is intense work seasons, on and off, for life.

Leisure

Off seasons are often spent drinking and doing drugs. It is also common to see pairs of people walking around slowly, spending seemingly endless hours discussing existential philosophical questions.

Specialization

It is common to alternate between two or more specialties. Someone might do one work season as a dentist, the next as a chef, a third as something else, and repeat that cycle indefinitely.

School

Like work, schools lack continuity. There are no regular school years, unlike in our world. Instead you have staggered groups of students coming-back in season, paired with a teacher coming-back, starting new cohorts throughout the year. Like work, school seasons are very intense and students spend over a hundred hours a week mastering their current subjects.

Bureaucracy

Bureaucracy is minimal. People in low energy phases have little energy for it, and people in high energy phases have little patience for it. Information and databases are widely used; they are all just public, with no red tape.

There is little zoning and licensing. Everything is more fluid. Offices often double as inns, retail stores, schools, and clinics.

Health

Risk taking is celebrated. Life expectancy is shorter, but people experience more by an earlier age.

Age is talked about not by how many orbits you have made around the sun, but by how many energy cycles you have gone through. For some people this is the same number; some are on their 50th energy cycle by age 30; others are only on energy cycle 20 at age 30. You don't talk about your "10th grade experience", but instead about your "10th energy cycle".

Stable Mood Disorder

This world operates differently, but smoothly. However, in a world that values the highs and lows of big energy cycles, around 2% of the population struggles to fit in. They are diagnosed with "Stable Mood Disorder". All jobs are designed for people who are in a high energy state. People with Stable Mood Disorder struggle to match the intensity of their coworkers. To them, work projects don't seem so urgent, and they wish they could have less extreme lifestyle options.

Their inability to match the intensity of normal people at work, and their lack of desire to participate in long, existential conversations when off work, cause them to experience social isolation and difficulties in personal relationships.

Doctors try a large number of different amphetamines to try and get their Stable Mood Disorder patients to be employable, but have not yet found long term success.


February 3, 2024 — Approximately every eighteen months, I start transitioning from a low energy person to a high energy person. No substances or triggering events are at fault. It is a natural cycle, as inevitable as the tides.

Your lifetime odds of getting hit by lightning are 1 in 15,000. Your lifetime odds of getting hit by a manic brain energy surge, like the one I was hit by here in 2022, are ~1 in 100. Like lightning strikes, brain energy surges can differ by orders of magnitude. You might just feel a slight shock, or you might become a very, very high energy person--temporarily. If you get hit by one large surge, you are likely to get hit again.

Life is much easier as a high energy person. I leap out of bed in the morning and can easily handle the morning chores. Work is exciting, I excel at my job, and raise the level of energy among my coworkers. Exercise, healthy eating, and parenting are all easier.

Life is much more fun as a high energy person. I have time and energy for family and friends, and make loads of new ones. Moments feel more joyous.

Each transition is different. Sometimes the shift is gradual and mild. Sometimes it is more sudden and severe, like a power surge. The latter are hard to handle. It is like I wake up to find myself on top of a bucking bull, and struggle to get this powerful force under control.

Me on top of a bull during a power surge. On a couple hour layover in NYC I left the airport, ran miles to Wall Street, got a picture on the bull, and made it back just in time for my flight.

My New Reality

At first I think the energy is probably fleeting, like the temporary buzz from an energy drink or a long run. I go to bed expecting that tomorrow I'll revert. But that doesn't happen. Instead each day I seem to have more energy than the last. Weeks go by. Months go by. The energy keeps flowing. It wasn't just something I ate. My brain starts to accept the new reality. I am a high energy person now! My plans for life change. With all of this energy I will be able to do a whole lot more than I expected. The world is my oyster.

The Sputtering

At times the energy spikes are too high. I get annoyed that my coworkers can't keep up. I sometimes snap at people. I feel like my energy is too great for working on what I start to perceive as small problems. I start to worry that a higher power in the universe gifted me with this high energy, and the morally correct thing to do is to direct my new energy gift to more important causes. My paranoia circuits have extra energy too, and start overestimating threats.

Eventually the power fades, and like a plane with a sputtering engine, my life crashes to the ground.

A Disorder?

I have been told that my energy fluctuations are a disorder. I don't dispute that I have not handled my energy well.

But I also know the underlying cause of the energy fluctuations is not understood. I can feel a radical difference in energy levels, but we don't have a way to measure those directly yet. My sleep fluctuates with the energy, but what is causing the energy shifts? Is it a change in ATP, mitochondrial biogenesis, mitochondrial swelling, dopamine, serotonin, adrenaline, or dozens of other potential factors? The relevant biomarkers are still a mystery.

In the future, if the underlying causes can be understood, perhaps the ability to be high energy will be seen as a gift, not a disease.

Consistently High Energy People

Meanwhile, I've occasionally come across high energy people whose energy levels are consistent. They have high energy, handle it, and never lose it. How do they do it? Is there a secret to being a consistently high energy person? Is there some secret club of high energy people, and if they like you they will approach you, and whisper in your ear the secret to harnessing that energy efficiently and keeping it flowing?

Lithium

Some tell me to take lithium. It supposedly flattens energy spikes. And from what I can tell, it does seem to reduce the frequency of future energy surges from something like 60% to 40%.

So maybe I could become a Medium Energy Person. That's not bad. Being a temporarily High Energy Person comes with the great downside of also being a Low Energy Person, which is a terrible state that you don't want to endure.

If you stop lithium though, or miss a few doses, you can experience an energy surge greater than any you've experienced before (ask me how I know). Also, long term, it will probably cause severe kidney damage. And you are near guaranteed to have some inconvenient side effects, such as weight gain.

It is probably the best choice available, but there is no clearly good option.

Taking Big Risks During the Highs

If I don't think the fluctuations are controllable, and accept that my high energy periods will always be fleeting, one life strategy is to strive to solve a hard, high paying problem for society during a high energy period. If I succeed, I can save enough to make it through the inevitable low energy periods. This is kind of how I made it in life, not so intentionally, while I continued the quest of trying to unlock how to be permanently high energy. It sort of worked, until it didn't.

The Dilemma

Once you've experienced being a High Energy Person, it is hard to give that up. There is no accurate understanding yet of this "illness". Might some people out there have the secret to living a consistently high energy life? Could the information be out there?

Some charlatans have long hawked mindfulness books and programs promising the secret to permanent high energy, but they fail upon close inspection of their models or when tested in trials. If a solution is out there, it is kept a secret or has remained below the radar.

I must admit, I do not think it is out there yet.

I believe science is the way to figure this out, but it sure is taking a while.


January 30, 2024 — I have kept this blog going for 14 years, through good times and bad. One thing I've noticed, particularly recently, is that, after getting a post out, my mind feels calmer the rest of the day. I also feel like each post helps me develop some actionable insight, however small, that I can use going forward.

Words considered harmful

However, I also think solo cognitive grappling might be cognitively harmful over the long term, even though I undeniably get a short term boost.

Blogging is like training your own internal LLM. Having an LLM in your brain could be a blessing and a curse. It's a blessing because you can indeed sometimes successfully generate words to describe the problem you feel, and then ponder over those words until you've found a solution. But it's a curse because once you've grown your model you can't shut it off, and it comes up with an endless stream of hypotheses that you feel compelled to ponder.

So blogging might be, in a sense, a trap. You think it might help your mental health, but devoting more of your brain to resolving questions also makes your brain generate more questions. You might end up opening more issues than you resolve.

Black Swan Thoughts

Or maybe blogging is like playing the lottery. I've written a number of times about the importance of "black swans" in life: low probability, high impact events. I've generally only thought about external black swans. Small things in the real world that have big impact on society. But maybe I've neglected the impact of internal black swans. Small thoughts that can have disproportionate impact on one's internal society of mind. Perhaps enlightenment is really a thing. A thought, that if hit upon just right, has a huge impact. If that were the case, then maybe the expected value of blogging (or therapy) is higher than I thought, because it is like black swan thought hunting.

Vice or not, I'm too far committed

Regardless of whether blogging is a productive use of time or a narcissistic time suck, I think at this point my brain has been set to continue on. The die has been cast. I am nearly 40 and a lot of my brain is devoted to my LLM.

Let's call it a hobby

Now that I've thought about it, I think it is probably just a hobby. I don't do crosswords, or fish, or knit. I blog. It's a fun, open ended hobby. I never really know what puzzle I'll try next, or where I'll end up.

Anyway. I'm going to publish this one now and try and get a double post out today. Luckily blogging is like free therapy.


January 29, 2024 — This is a post about delusions. In society and in myself.

My definition of delusion

A delusion, D, is a theory in the mind of a thinking agent that meets 4 criteria (restated in code after the list):

  1. has a low probability (~<1%) of being true
  2. is perceived by the agent to have a high probability of being true (~>50%)
  3. is slow for the agent to update its perceived probability
  4. is acted on by the agent
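
Or, as a minimal code sketch (the Theory fields and the one-year cutoff for "slow" are assumed stand-ins, just restating the 4 criteria):

```python
# A minimal restatement of the 4 criteria. The field names and the
# one-year cutoff for "slow to update" are my own assumed stand-ins.
from dataclasses import dataclass

@dataclass
class Theory:
    true_probability: float       # best outside estimate that it is true
    perceived_probability: float  # the agent's own estimate
    days_to_update: float         # how long the agent takes to revise
    acted_on: bool                # whether the agent acts on it

def is_delusion(t: Theory, slow_days: float = 365) -> bool:
    return (t.true_probability < 0.01           # 1. ~<1% chance of being true
            and t.perceived_probability > 0.50  # 2. perceived as ~>50% likely
            and t.days_to_update > slow_days    # 3. slow to update
            and t.acted_on)                     # 4. acted on by the agent
```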

When society went delusional

In February 2020 I was living in Hawai'i with my family and witnessed society there (and around the world) go delusional about Covid-19.

I had been following the early stories about Covid with some concern, but after the first big data dump was published on February 17th, 2020, it seemed clear to me that Covid would not be a huge danger. Most importantly, the first data dump indicated strongly that Covid was not a significant danger to children. Every additional large data dump from then on only reinforced the milder view of Covid. For nearly everyone, Covid was not deadly (awful flu-like experience aside), except those who were also at risk of dying from the flu. Thus, it shocked me as society locked down increasingly harshly over the next two years. The precautionary principle is a fine argument that would have justified a strong response measured in weeks, but the duration of the response put this in the delusional category.

It was easy to ballpark from the data what the disease would do. It was much harder to predict what society would do.

Let's see how some of the things that unfolded meet the criteria of delusion as defined above.

Years after this dataset came out, at least in Hawai'i, my 3-year-old daughter was still being required to wear a mask outdoors due to the perceived threat of Covid to kids. This was a societal delusion. The theory was that Covid was a high, avoidable danger to kids (given the data, this was <1% true). Society (or at least "leadership") perceived this probability to be ~50x-100x higher than it actually was. Society took years to update its perceived probability. And society took actions (requiring masks, even outdoors; closing beaches, parks, and playgrounds; "vaccination" requirements; severe limits on social gatherings) based on this delusion.

Another delusion was that recovering from Covid somehow did not provide protection as well as the "vaccine" would. Given all we know about the immune system, this was a very low probability hypothesis, treated like a high probability theory, updated extremely slowly, and acted upon.

There were a lot of other related delusions going on at that time. Eventually however, society did update their perceived probabilities and the Covid delusions faded.

Sidenote: I chose two delusions that were both held by a number of people. However, I'll admit there were other groups with other delusions, like one group with the delusion the vaccines caused mass death.

Anyway, society went delusional. Society acted like it was living in the movie Contagion, even though the math did not back that up.

My Delusions in Response to Covid

I really struggled with society's reaction to the pandemic. Because my day job was focused on pandemic data, I was exceptionally aware of how delusional society was acting about both the actual lethality of Covid and the ability to contain it (running sims showed everyone was going to get it eventually).

I developed two delusions of my own.

My first delusion was that the copyright system was a root cause of society's delusions. I had long ago formed circuits in my brain that were against copyright law. These circuits believed copyright led to a less healthy information circulatory network. Now I was seeing the information network that our copyright system molded spouting delusions. Meanwhile, people were shouting "trust the science" while Sci-Hub, the preeminent site for sharing scientific research, was frozen by legal attacks from the copyright lobby. I really wish I could say that I was not delusional about this idea, but now I think that I likely greatly overestimated the impact of copyright in our world (positive or negative) and that whether we have copyright or not is probably not very impactful. So, I had a low probability theory (copyright significantly harms society) that I viewed as highly probable for a long time. I also acted upon this theory: I resigned from my job and formed a new corporation. So this meets all my criteria for a delusion.

My second delusion was that I had exceptional capability to solve this problem for society. I believed that I had world class talent and resources to spread a different kind of virus: a movement to abolish copyright law and replace our information circulatory network with something healthier. The theory that I was super talented (low probability), which I acted upon with great confidence, was a delusion.

Lessons Learned

Society is impressively logical but always has delusions somewhere

Programmers learn early that when there's a bug in their program, it is extremely rare for the bug to be upstream in the compiler or hardware, and much more likely for it to be in their own code. The infrastructure we build on is very impressive. I mean, just look at modern airplanes!

That's why I really struggled in this case. I worked with the Covid data day in and day out and could not find a mathematical justification for society's actions on Covid. This time the bug was not actually in my thinking but was upstream in society.

At times this enraged me and I wasn't sure how to deal with it. It felt like an exceptional time to me, and I felt maybe I needed to stand up and do something exceptional.

Years later I see how society's delusions eventually subsided. I now realize that society will always have pockets of delusions--and that's okay! One should not get so worked up about it. Society will eventually update its probabilities, even if stubbornly slow.

What causes delusions?

For me, being in a hypomanic or manic state leads to a huge increase in delusions. I start acting on low probability theories as if they had high probability and I am slow to update my probabilities. Compounding the problem is the number of delusions that I act on--perhaps some of my delusions would actually have higher probabilities of coming true if I did not pursue so many delusions at once! It seems during my elevated states far more brain circuits than usual are energized, so naturally I'll have more theories rising to consciousness. Then it also seems inhibitory processes do not compensate, so lots of theories get acted upon.

But what causes these excitatory states? Could it be fear?

It does seem that in society, fear enabled the delusional behavior. Maybe in a state of fear, confident agents are king. Societal fear during Covid was fed by the media.

Perhaps, in individuals like myself, a fear state can arise at any time simply from an uncontrollable unconscious thought of the inevitability of death, and then a high energy state kicks off, and confident neural agents are able to take control.

Can one forecast when and where society will go delusional?

It might be hard to predict black swan events, but maybe delusional states would be easier to predict. I know on average my brain gets into a delusion happy state every eighteen months. Perhaps regions of society also have a regular rhythm of delusion susceptibility.

The difficulty in countering delusions

Society eventually did update its probabilities on Covid, but for years I found it incredibly frustrating trying to get society to change course. The agents I could get to admit the truth weren't the agents in power. If you think about the multi-agent theory of the mind, this makes sense. A delusion is an energized, specialized collection of neurons that is able to pilot the ship for a while. If the delusion were bendable, it would always simply bend to society and never be in a position of bending society. So the trick to countering delusions is to be content with communicating truth to out-of-power agents, realizing that the currently powerful delusional neurons will eventually lose control (the truth of physics catches up to you eventually).

People have told me when I'm in a hypomanic state I can't be reasoned with or talked to. I think this is a harmful attitude to have. Even when I am focused on one particularly delusional idea, I still experience many brain pilot switches in a day. I may be very angry and confrontational one moment, but even in those days I have a large number of moments when other agents are piloting and listening to feedback.

Attacking Dissent

It is not easy to dissent from delusional agents that are in power. I learned this during Covid, and people around me learned it during my later mania. Name calling does not work. Calling an agent "delusional" does not work. The agents in charge are very simple neural networks, evolved specifically for their particular delusion, without the ability to self-introspect. These networks live for their delusion. Attacking them head on is thus a life or death threat to them, and can backfire. It is important instead to listen, to find common ground, and to appeal to and strengthen the other, orthogonal agents around them.

Thriving Alongside Delusions

Maybe the trick is to have some sympathy for the delusional agent. Probabilities for rare events are hard to get right. I got so mad at society for being off on Covid by 100x. And then I went and was off myself on a few things by a 1,000x. Individuals and societies--both multi-agent thinking systems--will have delusional times. Perhaps the trick isn't to attack specific delusions as they come but to better understand what delusions are and how to create an environment that benefits from them, and is not harmed when they arise.


January 26, 2024 — I went to a plastination exhibit for the first time last week. I got so much out of the visit and highly recommend checking one out if you haven't. I salute Gunther von Hagens, who pioneered the technique. You probably learn more anatomy walking around a plastination exhibit for 2 hours than you would learn in 200 hours reading anatomy books. There are a number of new insights I got from my visit that I will probably write about in future posts, but one insight I hadn't thought about in years is how much humans and animals look alike once you open us up! And then of course I was confronted again with that lifelong and uncomfortable question that I usually like to avoid: humans and animals are awfully similar, so is it morally wrong to eat animals? I thought now was as good a time as any to blog about it, thus forcing myself to think about it.

A picture from a human plastination exhibit. That's not animal meat!

The Morality of Meat Eating

Part of me strives to be a moral person. Other parts of me think that is bull. Maybe the main case against my "morality" is that I am not a vegetarian. I probably eat as much chicken, beef, pork, lamb, etc, as the average American, which means I am responsible for eating dozens of animals per year. I know millions of animals are slaughtered out of sight every day in this country and I do nothing about it. Imagine if restaurants were required to have onsite butchers--picture the endless stream of live chickens and cows you would see heading into your nearest McDonald's! How can one claim to be "moral" and not stand against the slaughtering of billions of animals a year?!

The Cruelty Line

If I were to force myself to put my policy into words, I might say that on my map of the tree of life there is a cruelty line dividing the branches of life into ones I think should be treated with fairness and ones where cruelty is permissible. Now I'm not saying I encourage cruelty to animals—on the contrary, I do hope that cruelty can be minimized and I do try and direct my purchasing votes to those businesses who make that a priority—but I would be lying if I claimed I thought there was not an inherent cruelty in the meat supply chain. So my root principle is perhaps to act according to "survival of the fittest"—allowing the killing of loads of animals—and then, for a fraction of the tree of life, argue for policies of fairness.

The turkey problem. Even if there is no cruelty in the first 1,000 days, let's not sugarcoat day 1,001.

A year as a vegetarian

I did go vegetarian one year. I had slightly less energy that year but it wasn't so bad. I might even have stuck with it, but I accidentally ate some bacon at Christmas (I thought it was a fake vegan bacon) and it tasted too good to go back. Now I am trying a keto diet, which would be quite hard to do (but not impossible) as a vegetarian.

Everyone has a cruelty line

So what if I have a cruelty line? Plants are living things too. So vegetarians still have a cruelty line on their trees of life, just shifted to a different location. If you don't have a cruelty line, you will starve. Every living person thus has a cruelty line.

The circle of life

Just as I can't deny that a lot of animals die to bring me my bacon and steak bites, animals cannot deny that at some point I will pass on, and they will feast on me (or my redistributed atoms). Also, many of these animals would not have lived at all had there not been such a plan for their lives. In a sense, there is likely a plan for all of us. Ideally we should continue to strive for a world where life forms are all treated with dignity and respect, but we should also recognize that life needs to sacrifice for life, and that giving up one's body for the benefit of future life is a noble end.

In walking around that plastination exhibit, I was looking at humans that once fed on the bodies of animals, and then chose to donate their own bodies to feed minds like mine. A circle of noble sacrifices.

View source

January 24, 2024 — Assuming I keep blogging, which I hope I manage to do, I expect my posts will largely be about bipolar disorder. I've been blogging for fifteen years but never wrote publicly about bipolar disorder, even though I was diagnosed twenty years ago. I kept my diagnosis a secret.

Bipolar disorder is a condition that is not yet understood, has no cure, and is predictive of disruptive behavior. So I very much understand why society discriminates against those with the label.

I did not keep my diagnosis a secret maliciously, but genuinely remained unsure how to handle it, and ultimately, optimistically, believed I would figure the thing out. But—like a Greek tragedy—my efforts to avoid my fate perhaps led me faster to it. My mania in 2022 was at least twice as strong as anything I ever experienced. Things went KABOOM!

The past 7 years of mood swings, revealed accurately by sleep data. That sleep line nosediving in 2022 was bad. Real bad. HTML Version.

My Bipolar History

I will recap the long history of my bipolar disorder below. Though diagnosis happened at 20, I can recall hypomanic episodes at least as early as 12, with extended periods of euphoria. But I will start at the diagnosis.

Diagnosis

Until I was 20, I had no clue something like "bipolar disorder" even existed. That year, 2004, was quite volatile for me. I started in a low mood and flunked out of my sophomore year at Duke University. Then, despite flunking out, I quickly reversed mood and felt better than ever. That lasted a number of months until the end of the year when I crashed again. Finally, reading about depression online, I came across a symptom checklist for this thing called "bipolar disorder". My jaw dropped. Here was a spot-on description of the energy waves I had been experiencing all my life.

Family

After I got the label, I returned home with my tail between my legs. Luckily my family was incredibly supportive. Bipolar disorder was a new term for them all as well, but they faced it with determination and showed me then, and ever since, the meaning of unconditional love. Looking back I see how I always was an outlier among my siblings (for example, I am the only kid in the family to have the "distinction" of being suspended from high school and thrown out of college). No question I would not be here if they had not supported me back then or many times in the intervening years.

First Treatment

In 2005 I saw a psychiatrist and therapist for the first time. My official diagnosis at the time was Bipolar II. My therapist, Steve, was great, and helped cheer me up. The first medication I was prescribed was Depakote. Soon I started noticing large amounts of hair coming out in the shower, which freaked me out, and my psychiatrist switched me to Abilify. I would remain on that for a number of months.

Military

Having gotten kicked out of college I explored joining the military. I had always admired soldiers. I also thought the discipline and physical challenges would be good for my condition. I sailed through MEPS but then hit a roadblock. The military does not take people with bipolar disorder.

Fortunately, I was able to get a steady job waiting tables and was able to get re-accepted to Duke.

The first time I "cured" my Bipolar Disorder

Unfortunately, the Abilify gave me terrible brain fog. I could barely do arithmetic, never mind college-level coursework. After a few weeks I realized there was no way I could stay on this medication and graduate. I viewed my choice as either keep taking the meds and flunk out for sure, or stop the meds and try and wing it. I stopped the meds.

Almost immediately my brain fog lifted and I was able to think and do schoolwork again. But I knew I was not supposed to quit meds cold turkey like that. So I spent a lot of time journaling, looking at my history, to try and figure out what I could do to not experience mood swings again.

I came up with a theory: compared to other people I seemed more able to quietly entertain myself. I got plenty of pleasure from just thinking and could easily waste hours day dreaming. I figured perhaps my problem was that my brain could generate its own neurochemical rewards by just imagining things, without actually accomplishing anything productive in the real world. I wrote "Be aware of brain chemicals. Be aware of dopamine". I came up with a mantra: "No pleasure from thinking, only physical action". I could take pleasure in a school task completed, or a friend helped, but would immediately cut off any imagined images in my mind.

I repeated this mantra to myself over and over again. No day dreams. No watching shows. No pleasure reading. I kept myself busy with actions--attending all my classes, getting all my school work done, spending time with friends and in physical activities.

For a year, this worked! I excelled in college. The medication approach had failed, but I had discovered a model and treatment that worked! I even thought that my idea that bipolar disorder was caused by brains that could "self-pleasure" themselves could be a novel insight and maybe I could do an independent study and publish something. It's obvious now that my "cure" was just reinventing mindfulness.

Unfortunately mindfulness was not a strong enough cure and I went hypomanic again in the summer of 2006. This was two years after my last hypomania and my first hypomania since diagnosis. It was followed by a crash, but then I managed to graduate, partly due to another hypomania in summer of 2007.

The second time I "cured" my bipolar disorder

I had a new logic for the hypomania I experienced after graduation: my natural state was actually hyperthymic and it was simply because I was in school my whole life that I struggled. School was too constrictive for people with my kind of energy. This also turned out not to be accurate, and a big crash followed 2007's hypomania.

BPBio.com

In 2008 wearables were not yet a thing but Blackberries were. By this time in my life I had developed some programming and statistics skills and thought maybe I could build something to help solve this problem not only for myself, but for other people. I figured with modern tools I could track more data than ever, and maybe find a real cure for my condition. I built an email and web app called BpBio.com that allowed me to manually log anything 24/7 by emailing [any-measure]@bpbio.com.
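
The mechanics of that kind of logging are simple. Here is a minimal sketch in Python of how an email-to-measurement pipeline like that can work (an illustration, not BpBio's actual code):

    # The measure name comes from the address the email was sent to;
    # the first line of the body is treated as the value.
    from datetime import datetime, timezone

    def parse_log_email(to_address, body):
        measure = to_address.split("@")[0].lower()  # "sleep@bpbio.com" -> "sleep"
        value = body.strip().splitlines()[0]
        return {"measure": measure, "value": value,
                "at": datetime.now(timezone.utc).isoformat()}

    print(parse_log_email("sleep@bpbio.com", "7.5 hours"))
    # {'measure': 'sleep', 'value': '7.5 hours', 'at': '2008-...'}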

Looking back, BpBio was actually pretty neat. It was minimal but very functional and better than many mood trackers out there today.

Around this time, some college friends invited me to live with them in San Francisco. I said yes and planned to turn BpBio into a startup out there. But when I got there, I chickened out. I didn't tell a soul about BpBio. Literally, this is the very first time I've ever mentioned it. I decided instead to try and do something more lucrative and keep my bipolar disorder diagnosis a secret.

A screenshot of BpBio.

The third time I "cured" my bipolar disorder

From 2008 to 2014 my mood swings were less severe than the prior college years. I used BpBio for a long while but did not attribute my improved stability to that. Instead, I thought it was the fact that I had "found my people" in the house I lived in and the startup ecosystem of Silicon Valley.

But in 2014 work brought me to Seattle and I experienced my worst depression since 2008. I guess I was not cured after all. For the first time in 6 years I again saw a therapist and took medication.

The fourth time I "cured" my bipolar disorder

In November 2014 I started wearing a sleep tracker. Among other things, the sleep data revealed to me that even a small amount of alcohol would negatively affect my sleep. Looking back, I also realized that some of my dumber decisions were made when drunk.

So I gave up drinking. I then experienced a prolonged period of stability. I felt like finally I had become the person I always wanted to be: not depressed and not too happy. Finally I had it all figured out: no debt, a good career, tracking my sleep, exercising, no drinking, a great girlfriend, great friends and family, a great life. Bipolar disorder was finally in the rearview!

Unfortunately, it was not cured and I had a mild manic episode in 2017. I wrote about this episode (originally anonymously) in the post Going Manic with a FitBit.

Medication

My 2017 mania was followed by milder hypomanias in summer of 2019 and winter of 2020. During this period a pattern emerged. When depressed, I would go to therapy and go on lithium and Lamictal. Then I would abruptly stop the medications for various reasons. And I would quickly cycle up.

Looking back on it now, I see the incredible danger from rapidly stopping things like lithium. I simply did not have the knowledge of how lithium can prime your brain to launch into record setting manias if abruptly stopped. I expect I will write more about this in the future. But, for now I'll just mention that I stopped lithium in July of 2022.

The fifth time I "cured" my bipolar disorder

On August 20th, 2022, I wrote in my journal, "First he comes for your sleep. I can see how it sneaks up on you. Slept really poorly last night. Feel very tired right now."

Days later, at 4:34am on August 24th I wrote "Knocking on the door of hypomania. I am more vigilant this time, I hope."

By August 28th I was hypomanic and by September 1st I was in what would be the worst manic episode of my life. I have already written a bit about this in the post A Manic Startup and I'm sure I'll write more about it later.

For now, the thing I want to mention is that this time once again I thought I was over bipolar disorder. I decided that my real problem over the years was not believing in myself, and taking the word of doctors that I had this terrible condition, bipolar disorder. I decided instead that I really was hyperthymic. I believed that the thing I screwed up before was my breathing. Someone with energy like me required a lot of oxygen and I needed to make sure I was increasing my lung capacity. I believed I could maintain my hyperthymic temperament indefinitely--never falling back into depression--if I just did strong breathing exercises to keep my lung capacity maximized. Once again, this "cure" turned out to be short lived.

Summarizing

So about twenty years after diagnosis, I have not gotten a handle on these energy swings. Five times I tried to pretend like I could stop worrying about it thanks to:

  1. Mindfulness
  2. Belief in Hyperthymia
  3. California/living with like minded people
  4. Sleep tracking + no drinking
  5. Breathing (and believing in hyperthymia again)

While some of these have been very helpful, none of them was a cure.

Episode Frequency

These are the major bullet points in my bipolar history. In the chart at the top of this page you can see 4 high episodes and 4 low episodes in 7 years. This frequency has been roughly the same since at least age 12. Thus at age 39 I've probably had 15-20 up spans and 15-20 down spans. It's been a roller coaster.

The period for me--the time for a complete cycle from hypo to depressed to hypo again--is about 18 months. The downside of these long cycles is that I could try a new treatment and have no major episodes for 2 years and that would be little evidence that the treatment worked, because it could have also just happened by chance.
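
To put a toy number on that: if episodes arrived at random at an average rate of one every 18 months, the chance of a quiet 24-month stretch would be exp(-24/18), about 26%. Two calm years on a new treatment would still leave roughly a 1-in-4 chance that the calm was pure luck.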

Frustration with Science and Medicine

Bipolar disorder has been a constant, dominant term in the equation of my life. So although I am amazed by modern science and medicine, I am also disappointed in it, as twenty years after diagnosis the expected prognosis hasn't changed.

Besides participating in as many studies as I can, I have spent a lot of my career trying to improve things. BpBio was openly aimed at bipolar disorder, but in fact nearly all my work has been secretly motivated by my battle with this condition and my attempts to try new things to help science and medicine solve this. My work on Ohayo was motivated by my belief that better data tools could help scientists and patients. My efforts on public domain issues are motivated by my belief (perhaps incorrect) that paywalled science slows things down. My work on TrueBase and the BrainDB prototype was motivated by thinking perhaps a new kind of symbolic model could help solve this.

The Latest

Personally I am now 3 months into trying a ketogenic diet as a treatment for bipolar disorder. I think it holds promise as a treatment, but more than that I think it may help us triangulate the biological mechanisms driving mood episodes. The metabolic theories of bipolar disorder seem to be making rapid progress in an area that otherwise hasn't seen much. Lately I have been trying to get caught up on all the research. I've been taking copious notes about everything from glutamate to the Krebs cycle to oxidative stress.

It seems like there is a new boom in funding for bipolar disorder research. BD², funded by the Brin, Dauten, and Baszucki families, is funding a large number of very exciting projects. I hope to write more about a number of them.

I recently was able to be a participant in one of the newer bipolar research studies going on. I got lots of cool data like the image below, and it only required time, many blood draws, and my first arterial line!

A still from a scan of my brain. Somewhere in there lies the problem...I think. In late 2023 I participated in a research study and for the first time got scanned with MRI, fMRI, MRS, PET, and EEG machines.

It gives me hope that there are so many smart, caring, hard working people trying to figure this thing out. I will do my best to contribute in any and every way I can. I've learned if we don't neutralize bipolar disorder, it will neutralize me.

Appendix: Table of start of elevated energy episodes

Year  Month      PeakSeverity  Project  SleepTracked  MoodStabilizers?  Psychedelics?
2024  -          -             -        -             No                No
2023  -          -             -        -             No                No
2022  August     4             PL       -             Partial           No
2021  -          -             -        -             Partial           No
2020  November   1             PD       -             Partial           No
2019  May        1             TL       -             Partial           No
2018  -          -             -        -             Yes               No
2017  June       3             TN       -             No                No
2016  January    1             OH       -             No                Yes
2015  -          -             -        -             No                Yes
2014  -          -             -        -             No                No
2013  May        1             IN       -             No                No
2012  -          -             -        -             No                No
2011  March      3             NP       -             No                No
2010  -          -             -        -             No                No
2009  December   1             BB       -             No                Yes
2008  September  1             PS       -             Partial           No
2007  July       3             SM       -             Partial           No
2006  July       2             SY       -             No                Yes
2005  -          -             -        -             Partial           No
2004  June       3             IF       -             No                No
2003  August     2             MD       -             No                No
2002  October    1             AP       -             No                No

View source

January 23, 2024 — I started a ketogenic diet as a treatment for bipolar disorder 97 days ago, on October 19th, 2023, after learning about it on YouTube from MetabolicMind and Bipolarcast. So far, it seems promising.

But I was perplexed: after 20 years of reading about bipolar disorder, and eight health care providers, how had I not heard of keto as a treatment option before? Had I missed it in all the materials I had read?

So I embarked on a mini research project. I scanned every top book on bipolar disorder (46 self-help and medical books, and 10 biographies or other related books) for mentions of "keto".
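
A scan like this is easy to automate. Here is a minimal sketch, assuming each book is available as a plain text file in a books/ folder (that layout is an assumption for illustration):

    # Count "keto" mentions per book; the substring also catches
    # "ketogenic", "ketosis", "ketones", etc.
    from pathlib import Path

    for book in sorted(Path("books").glob("*.txt")):
        text = book.read_text(errors="ignore").lower()
        print(f"{book.stem}: {text.count('keto')} mention(s)")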

Prior to 2020, there were zero mentions.

In 2020, "Understanding Bipolar Disorder: The Essential Family Guide" by Daramus has two sentences on it.

In 2023, the 2nd Edition of "Take Charge of Bipolar Disorder" by Fast and Preston became the first to give it serious treatment, with 3 pages explaining it.

For thoroughness, I extended my search to include books on "Manic Depression" to make sure I included older works. As a sanity check, I also scanned 5 books on Epilepsy, starting from 1996, and indeed all 5 of them included sections on the ketogenic diet.

So I had not heard of a ketogenic diet as a possible treatment for bipolar disorder because it was simply not talked about in primary sources until very recently. There were anecdotes in blog and forum posts but nothing at all in published books.

The Data Visualized:

Red = Bipolar book not mentioning keto; Green = Bipolar book mentioning keto; Blue = Epilepsy book mentioning keto.

Here is the raw data in a Google Sheet.

I am very grateful for all of the researchers who started seriously studying this. Perhaps the studies will show that a ketogenic diet is not a very effective long term treatment, but at the moment it seems like a very promising direction of research and the early results seem encouraging. It also seems like it is helping researchers triangulate the mechanisms of bipolar disorder.

I still have a ton to learn but I wanted to share this simple book scan in case anyone else was wondering why they hadn't heard of this option before.

View source

January 12, 2024 — For decades I had a bet that worked in good times and bad: time you invest in word skills easily pays for itself via increased value you can provide to society. If the tide went out for me I'd pick up a book on a new programming language so that when the tide came back in I'd be better equipped to contribute more. I also thought that the more society invested in words, the better off society would be. New words and word techniques from scientific research helped us invent new technology and cure disease. Improvements in words led to better legal and commerce and diplomatic systems that led to more justice and prosperity for more people. My read on history is that it was words that led to the start of civilization, words were our present, and words were our future. Words were the safe bet.

Words were the best way to model the world. I had little doubt. The computing revolution enabled us to gather and utilize more words than ever before. The path to progress seemed clear: continue to invent useful words and arrange these words in better ways to enable more humans to live their best lives. Civilization would build a collective world model out of words, encoding all new knowledge mined by science, and this would be packaged in a program everyone would have access to.

...along come the neural networks of 2022-2023

I believed in word models. Then ChatGPT, Midjourney and their cousins crushed my beliefs. These programs are not powered by word models. They are powered by weight models. Huge amounts of intertwined linked nodes. Knowledge of concepts scattered across intermingled connections, not in discrete blocks. Trained, not constructed.

Word models are inspectable. You plug in your inputs and can follow them through a sequence of discrete nameable steps to get to the outputs of the model. Weight models, in contrast, have huge matrices of numbers in the middle and do not need to have discrete nameable intermediate steps to get to their output. The understandability of their internal models is not so important if the model performs well enough.
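
To make the contrast concrete, here is a toy sketch (both "models" are invented for illustration):

    import numpy as np

    # Word model: discrete, nameable steps you can follow and audit.
    def word_model_is_spam(msg):
        has_link = "http" in msg       # step 1: check for a link
        shouting = msg.isupper()       # step 2: check for all-caps
        return has_link and shouting   # step 3: combine the named rules

    # Weight model: the "reasoning" lives in matrices of learned numbers.
    W = np.array([[0.8, -0.3], [0.1, 0.9]])  # trained, not constructed
    b = np.array([-0.2, 0.05])

    def weight_model_is_spam(features):
        score = (W @ features + b).sum()  # no nameable steps inside
        return score > 0.5

    print(weight_model_is_spam(np.array([1.0, 0.0])))  # True

You can read every step of the first function. The second works only as well as its numbers, and nothing about it is forced to be inspectable.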

And these weight models are amazing. Their performance is undeniable.

I hate this! I hate being wrong, but I especially hate being wrong about this. About words! That words are not the future of world models. That the future is in weight models. Weights are the safe bet. I hate being wrong that words are worse than weights. I hate being wrong about my most core career bet, that time improving my word skills would always have a good ROI.

Game over for words

In the present the race seems closer but if you project trends it is game over. Not only are words worse than weights, but I see no way for words to win. The future will show words are far worse than weights for modeling things. We will see artificial agents in the future that will be able to predict the weather, sing, play any instrument, walk, ride bikes, drive, fly, tend plants, perform surgery, construct buildings, run wet labs, manufacture things, adjudicate disputes--do it all. They will not be powered by word models. They will be powered by weights. Massive numbers of numbers. Self-trained from massive trial and error, not taught from a perfect word model.

These weight models will contain submodels to communicate with us in words, at least for a time. But humans will not be able to keep up and understand what is going on. Our word models will seem as feeble to the AIs as a pet dog's model of the world seems to its owner.

Literacy has historically had a great ROI but its value in the future is questionable as artificial agents with weight brains will perform so much better than agents operating with word brains.

Things we value today, like knowing the periodic table, or the names of capital cities, or biological pathways--word models to understand our world--will be irrelevant. The digital weight models will handle things with their own understanding of the world which will leave ours further and further in the dust. We are now in the early days where these models are still learning their weights from our words, but it won't be long before these agents "take it from here" and begin to learn everything on their own from scratch, and come up with arrangements of weights that far outperform our word based world models. Sure, the hybrid era where weight models work alongside humans with their word models will last for a time, but at some point the latter will become inconsequential agents in this world.

Weights run the world

Now I wonder if I always saw the world wrong. I see how words will be less valuable in the future. But now I also see that I likely greatly overvalued words in our present. Words not synchronized to brains are inert. To be useful, words require weights, but weights don't require words. Words are guidelines; weights are the substance. Words are correlated with reality, but it is weights that really make the decisions. Word mottos don't run humans, as much as we try. Words are not running the economy. Weights are and always have been. The economy is in a sense the first blackbox weight-powered artificial intelligence. Word models correlate with reality but are very leaky models. There are far more "unwritten rules" than written rules.

I have long devalued narratives but highly valued words in the form of datasets. But datasets are also far less valuable than weights. I used to say "the pen is mightier than the sword, but the CSV is mightier than the pen." Now I see that weights are far mightier than the CSV!

Words are worse not just because of our current implementations. Fundamentally, word models discretize the universe into concepts that do not really exist. The real world is fuzzier and more continuous. Weights don't have to discretize things. They just need to perform. Now that we have hardware to run weight models of sufficient size, it is clear that word models are fundamentally worse. As hardware and techniques improve, the gap will grow. Weights interpolate better. As artificial neural networks are augmented with embodiment and processes resembling consciousness, they will be able to independently expand the frontiers of their training data.

Nature does not owe us a word model of the universe. Just because part of my brain desperately wants an understanding of the world in words does not mean there was ever a deal in place. If truth means an accurate prediction of the past, present, and future, weight models serve that better than word models. I can close my eyes to it all I want but when I look at the data I see weights work better.

Overcorrecting?

Could I be wrong again? I was once so biased in favor of words. In 2019 I gave a lightning talk at a program synthesis conference alongside early researchers from OpenAI. I claimed that neural nets were still far from fluency and that to get better computational agents we needed to find novel, simpler word systems designed for humans and computers. OpenAI has since shown that LLMs have no trouble mastering even the most complex of human languages. The potential of weights was right in front of me but I stubbornly kept betting on words. So my track record in predicting the future on this topic isn't exactly stellar. Maybe me switching my bet away from words now is actually a sign that it is time to bet on words again!

But I don't think so. I was probably 55-45 back then, in favor of words. I think in large part I bet on words because so many people in the program synthesis world were betting on weights, so I saw taking the contrarian bet as the one with the higher expected value for me. Now I am 500 to 1 that weights are the future.

The long time I spent betting on words makes me more confident that words are doomed. For years I tried thousands and thousands of paths to find some way to make word models radically better. I've also searched the world for people smarter than me who were trying to do that. Cyc is one of the more famous attempts that came up short. It is not that they failed to write all the unwritten rules; it is that nature's rules are likely unwritable. Wolfram Mathematica has made far more progress and is a very useful tool, but it seems clear that its word system will never achieve the takeoff that a learning weights based system will. Again, the race at the moment seems close, but weights have started to pull away. If there was a path for word models to win I think I would have glimpsed it by now.

The only thing I can think of is that there actually will turn out to be some algebra of compression that would make the best performing weight models isomorphic to highly refined word models. But that seems far more like wishful thinking from some biased neural agents in my brain that formed for word models and want to justify their existence.

It seems much more probable that nature favors weight models, and that we are near or may have even passed peak word era. Words were nature's tool to generate knowledge faster than genetic evolution in a way that could be transferred across time and space, but at the cost of speed and prediction accuracy. Now we have evolved a way to transfer knowledge across time and space with much better speed and prediction accuracy than words.

Words will go the way of Latin. Words will become mostly a relic. Weights are the future. Words are not dead yet. But words are dead.

Looking ahead

I will always enjoy playing with words as a hobby. Writing essays like these, where I try to create a word model for some aspect of the world, makes me feel better when I reach some level of satisfaction with the model I wrestle with. But how useful will skills with words be for society? Is it still worth honing my programming skills? For the first time in my life it seems like the answer is no. I guess it was a blessing to have that safe bet for so long. Pretty sad to see it go. But I don't see how words will put food on the table. If you need me I'll be out scrambling to find the best way to bet on weights.

View source

January 4, 2024 — You can easily imagine inventions that humans have never built before. How does one filter which of these inventions are practical?

It seems the most reliable filter is seeing an abundant model in nature. Your invention doesn't need to work exactly as nature's version but if there is not an abundant model in nature then it is probably impractical.

For example, we have never discovered an area that, if you stepped through it, you'd come out somewhere else. Nature has no portals. A teleporter is thus impractical.

Birds, on the other hand, are abundant, and planes turned out to be practical.


Some inventions are possible but not practical. We could build a limited number at a net loss and eventually we'd stop.

Outer space is filled with countless lifeless objects floating around. Satellites are a practical idea.

Nature has no living things that regularly exit and re-enter the atmosphere. Humans in space proved possible, but might turn out to be impractical.


All practical inventions have abundant natural models. The sun is a model for nuclear power plants. Lightning for light bulbs. Branches for bridges. Birds for planes. Ears for recorders. Eyes for cameras. Fish for submarines. Ant hills for homes. Ponds for pools. Chloroplasts for solar panels. DNA replication for downloading. Bacteria for CRISPR. Brains for artificial neural networks.

Once human inventions become abundant, they can serve as models for further practical inventions. Carriages for cars. Human computers for computing machines. Phonebooks for search engines. Facebooks for Facebook.


If you can't find an abundant natural model for an invention, be skeptical of its practicality.

If a model isn't out there yet in abundance, the invention is most likely impractical.

If nature is doing it, there has to be a practical way. If nature is not doing it, be skeptical.

View source

January 1, 2024 — Happy New Year!

First, a disclaimer. I think a lot of my posts are my attempts to reflect on experiences and write tight advice for my future self. This one is less of that and more just unsophisticated musings on an intriguing thought that crossed my mind. I am taking advantage of it being New Year's Day to yet again try and force myself to publish more.

Most of my published writing these days is in communication with people over email or in online forums.

But I also write a lot of musings that I do not publish because they are meanderings like this one. Maybe if I publish a greater fraction of what I write, the time will be better used: even if there are no readers, the threat of readers forces me to think things over better.

Why am I still writing? I think symbols are probably doomed. The utility of being able to read and write is perhaps past its prime. Inscrutable three dimensional matrices of weights are the future, and this practice I am engaging in now of conjuring and manipulating symbols on a two dimensional page is a dying art. But I am maybe too old to unlearn my appreciation for symbols. So I will keep writing. Because I enjoy doing it, like piecing together a puzzle. And because I still hope it can help my future self be a better person. Now, onto today's post.

The advance of AGI is currently stoppable

Short of an extraterrestrial projectile hitting earth, Artificial Neural Networks (ANNs) seem to be on an unstoppable trajectory toward becoming a generally intelligent species of their own, without being dependent on humans. But that's because the world's most powerful entities, foremost among them the United States Military (USM), are allowing them to grow.

ANNs are made up of a vast number of assembled processors. These processors are not able to self replicate using readily available molecules in nature. Instead they are built in a limited number of fabs.

Fabs are very complex and expensive factories with a building cost in the billions. They are not something you can easily hide. If given the order in the morning, the U.S. Military could probably knock out every fab in the world by evening. I would not be surprised if there is a team somewhere monitoring all the world's fabs and developing exactly that kind of option. Maybe China has a team like that too.

So there is a very simple kill switch to prevent some emergent rogue superintelligent ANN. It is physically very easy for the powers that be to pause or reverse the growth of these things. And if turning growth off isn't enough they can also even knock out the data centers where the AIs run. Data centers also are easy for a superpower nation state to keep track of.

So AGI is easily stoppable, if you are a superpower. If you are just Joe Schmoe like me, or even a top 50 country that is not quite top 10, you have effectively no say in the matter.

Will there come a point where even superpowers lose the ability to stop AGI?

There are many scenarios you can imagine where through a certain chain of independent events a rogue AI does manage to somehow take over the data centers and fabs and power plants of the world. There are a number of sci-fi stories with variations of this idea.

Alternative to GPUs?

But part of me wonders if instead what happens is we develop all the components necessary so that GPUs are no longer the primary ingredient of ANNs but are replaced by organic brains grown in vitro. These in vitro brains would be hooked up to control machines using something like Neuralink's Neuralace. They would be trained by ANNs.

We know it must be possible to run computations like in an ANN very power efficiently, using self reproducing organic materials, because we see nature do it. Just as scientists measured the amount of energy coming from the sun and deduced there must be a much more powerful way to create energy than chemical reactions, so we know there must be a better way to build these chips.

The technologies you would need to build this seem to almost all be available.1

Companies now sell lab grown "meat" at scale, which I assume means we are getting better and better at growing artificial tissue in vitro. So perhaps you could grow a chunk of neural tissue without bound. Just add water and readily available organic nutrients. Imagine if you could grow enough brain tissue to fill a shipping container--that could contain the compute potential of 20,000 Einsteins!

Neural tissue might as well just be meat if you can't interface with it. Enter Neuralink (and competitors). They are developing ways to do IO with neurons at scale.

Without the ability to train this tissue, it again would just be meat. That's where our current ANNs come in. I imagine if you had to "teach" a giant blob of brain tissue using electrodes by hand, you would quickly get bored and go mad. But we now have ANNs that can do the most boring of tasks over and over without ever getting bored or angry. These ANNs could use Reinforcement Learning to train these neural blobs. In addition to controlling the electrodes, the ANN would control the environment of the neural tissue, perhaps altering the neurotransmitter balance or ambient electromagnetic frequencies to help steer learning and optimize learning rates.

I have no expert insight or opinions on these matters. I have just been thinking a lot about what the future looks like given the recent breakthroughs in AI. Thinking about whether AI is inevitable led me to think of how that might require biobots so AI would have a less fragile "food supply" than the fabs. Then it clicked for me that Neuralink's real business might not have much to do with the stated goal of communicating with brains in vivo, but instead with a new kind of lab grown brain in vitro, to maybe serve as a replacement for GPUs. Most of their technology, such as their surgical robot, would be relevant for building an AI backed by organic in vitro brains. It would be just as SpaceX has the stated mission of sending humans to Mars, while the big economic model so far has been creating its own global Internet.

In following this thought I wondered for the first time how you would train a brain that did not have a body. I'm sure many people have thought and written on this. I had not. It's an intriguing challenge. It seems like it might be a good way to learn how human brains work. I am happy I decided to write about the initial Neuralink brain in vitro thought, as it led me to this other thought about training a bodyless brain.

I have no conclusions, as I said in the disclaimer up top this is meant to just be a meandering post. If I tried to reach conclusions on these ideas before publishing it would be years.


1 It does seem like a groundbreaking proof of concept could happen within a decade. If that were to happen maybe something like this could be viable within another decade. So perhaps the earliest something like this might happen would be 15-20 years out. It doesn't seem like it would be 50 years out, as by then it seems AGI would have happened using traditional chips, or world powers would have hit the kill switch.

2 After publishing I did a little googling and learned of the terms brainoid and brain-on-chip. Hard to say whether those will ever be useful to power AGI, but for personalized medicine it seems genius.

View source

December 28, 2023 — I thought we could build AI experts by hand. I bet everything I had to make that happen. I placed my bet in the summer of 2022. Right before the launch of the Transformer AIs that changed everything. Was I wrong? Almost certainly. Did I lose everything? Yes. Did I do the right thing? I'm not sure. I'm writing this to try and figure that out.

Symbols

Leibniz is probably my favorite thinker. His discoveries in mathematics and science are astounding. Among other things, he's the thinker credited with discovering Binary Notation--that ones and zeros are all you need to represent anything. In my opinion this is perhaps the most incredible idea in the world. As a kid I grew up surrounded by magic digital technologies. To learn that truly all this complexity was built on top of the simplicity of ones and zeroes astounded me. Simplicity and complexity in harmony. What makes Leibniz stand out more to me is not just his discoveries but how what he was really after was a characteristica universalis, a natural system for representing all knowledge that would allow for the objective solving of questions across science.

I wanted to be like Leibniz. Leibniz had extreme IQ, work ethic, and ability to take intellectual risks. Unfortunately I have only above average IQ and inconsistent work ethic. If I was going to invent something great, it would have to be because I took more risks and somehow got lucky.

Eventually I got my chance. Or at least, what I took to be my chance.

Computers ultimately operate on instructions of ones and zeroes, but those who program computers do not write in ones and zeroes. They did in the beginning, when computers were a lot less capable. But then programmers invented new languages and programs that could take other programs written in these languages and convert them into ones and zeroes ("compilers").

Over time, a common pattern emerged. In addition to everything being ones and zeroes at some point, everything would also, at some point, be digital "trees" (simple structures with nodes and branches). Binary Notation can minimally represent every concept in ones and zeroes; was there some minimal notation for the tree forms of concepts? And if there were, would that notation be mildly interesting, or somehow really powerful like Binary Notation?

This is an idea I became obsessed with. I came across it by chance, when I was still a beginner programmer. I was trying to make a programming language as simple as possible and realized all I needed was enough syntax to represent trees. If you had that, you could represent anything. Eureka! I then spent years trying to figure out whether this minimal notation was mildly interesting or really useful. I tried to apply it to lots of problems to see if it solved anything.

If your syntax can distinguish symbols and define scopes you can represent anything.
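
As a toy demonstration of that claim, here is a minimal sketch of a parser for such a notation (illustrative; not the actual notation or code I used), where spaces separate symbols and indentation defines scope:

    # Parse "one node per line, spaces separate symbols,
    # indentation defines scope" into nested dicts.
    def parse(text):
        root = {"symbols": [], "children": []}
        stack = [(-1, root)]  # (indent level, node)
        for line in text.splitlines():
            if not line.strip():
                continue
            indent = len(line) - len(line.lstrip(" "))
            node = {"symbols": line.strip().split(" "), "children": []}
            while stack[-1][0] >= indent:
                stack.pop()  # close scopes until we reach this node's parent
            stack[-1][1]["children"].append(node)
            stack.append((indent, node))
        return root["children"]

    print(parse("apple\n color red\n weight 100"))
    # [{'symbols': ['apple'], 'children': [
    #     {'symbols': ['color', 'red'], 'children': []},
    #     {'symbols': ['weight', '100'], 'children': []}]}]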

The Book

One day I imagined a book. Let's call it The Book. It could be billions of pages long. It would be lacking in ornamentation. The first line would be a mark for "0". The second line would be a mark for "1". You've just defined Binary Notation. You could then use those defined symbols to define other symbols.

In the first hundred pages you might have the line "8 1000" to define the decimal number 8. In the first ten thousand pages you might have the line "a 97" to define the character "a" as part of defining ASCII. In the first million pages you might have the word "apple", and in the first hundred million you might have defined all the molecules that are present in an apple.

The primary purpose of The Book would be to provide useful models for the world outside The Book. But a lot of the pages would go to building up a "grammar" which would dictate the rules for the rest of the symbols in The Book and connect concepts together. In a sense the grammar compresses the contents of The Book, minimizing not the number of bits needed but the number of symbols needed and the entropy of the cells on the pages that hold the symbols, and maximizing the comparisons that could be made between concepts. The notation and grammar rules would not be arbitrary but would be discovered as the most efficient way to define higher and higher level symbolic concepts, just as Boolean Algebra gives us the tools to build bigger and bigger efficient circuits. Boolean Algebra is not arbitrary but somehow arises from the laws of the universe, and so would this algebra for abstraction. It would implement ideas from mathematical domains such as Category and Type theory with surprisingly simple primitives. It would be a new way to try and build the characteristica universalis.

The Book would be an encyclopedia. But it wouldn't just list concepts and their descriptions in a loosely connected way. It would build up every concept, so you could trace all of the concepts required by any other concept all the way down to Binary. Entries would look so simple but would abide by the grammar and every word in every concept would have many links. It would be a symbolic network.

You would not only have definitions of every concept, but comparability would be maximized. Wikipedia does a remarkable job of listing all the concepts in a space and concepts are weakly linked. But Wikipedia is primarily narratives and the information is messy. Comparability is nowhere near maximized.

The pieces would almost lock in place because each piece would influence constraints on other pieces--false and missing information would be easy to identify and fix.

Probably more than 100,000 people have researched and developed digital knowledge bases and expert systems. Those 100,000 probably came up with 1,000,000 ways to do it. If there were some simplest way to do it--a minimal Binary Notation and Boolean Algebra for symbols--that would work for any domain, perhaps that would lead to unprecedented collaboration across domains and a breakthrough in knowledge base powered experts.

Experts

It wasn't the possibility of having a multi-billion page book that excited me. It was what The Book could power. You would not generally read The Book like an encyclopedia, but it would power an AI expert you could query.

What is an expert? An expert is an agent that can take a problem, list all the options, and compare them in all the relevant ways so the best decision can be made. An expert can fail if it is unaware of an option or fails to compare options correctly in all of the relevant ways.

Over the years I've thought a lot about why human experts go wrong in the same way over and over. As Yogi Berra might say, "You can trust the experts. Except when you can't". When an expert provides you with a recommendation, you cannot see all the concepts they considered and comparisons they made. Most of the time it doesn't matter because the situation at hand has a clear best solution. In my experience the experts are mostly right, with the occasional innocent mistake. You can greatly reduce the odds of an innocent mistake by getting multiple opinions. But sometimes you are dealing with a problem with no standout solution. In these cases biased solutions flood the void. You can shuffle from "expert" to "expert", hoping to find "the best expert" with a standout solution. But at that point you probably won't do better than simply rolling a die.

The Edge of Knowledge

No one is an expert past the line of what is known. Even more of a problem is that it is impossible to see where that line is. If we could actually make something like The Book, we could see that line. A digital AI expert, which could show not only all the important stuff we know, but also what we don't know, would be the best expert.

In addition to powering AI experts that could provide the best guidance, The Book could aid in scientific discoveries. Anyone would be able to see the edge of knowledge in any domain and know where to explore next. Because everything would be built in the same universal computable language, you could do comparisons not only within a domain, but also across domains. Maybe there are common meta patterns in diverse symbolic domains such as physics, watchmaking, and hematology that are undiscovered but would come to light in this system. People who had good knowledge about knowledge could help make discoveries in a domain they knew little about.

Dreaming of Digital AI Experts

I was extremely excited about this idea. It was just like my favorite idea--Binary Notation--endless useful complexity built up from simplicity. We could build digital experts for all domains from the same simple parts. These experts could be downloadable and available to everyone.

Imagine how trustworthy they would be! No need to worry about hidden biases in their answers--biases are also concepts that can be measured and would be included in The Book. No "blackbox" opaque collections of trained matrices. Every node powering these AIs would be a symbol reviewable by humans. There would be a massive number of pages, to be sure, but you would almost always query it, not read it. Mostly you'd consume it via data driven visualizations to your questions, rather than as pages of text.

No one can know everything, but imagine if anyone could see everything known! I don't mean see all the written or digital information in the world. That would be so overwhelming and little more useful than staring at white noise. The symbols in The Book would be more like the prime numbers. All numbers are made up of prime numbers but prime numbers make up ~0% of all numbers. The Book would be the slim fraction containing the key information.

You wouldn't be able to read everything but you would be able to use a computer to instantly query over everything.

Everything could be printed out on a single scroll. But in practice you would have a digital collection of files containing concepts which would have answers to questions about those concepts. An academic paper would include a change request to a collection. It would add new files or update some lines in existing files. For example, I just read a paper about an experiment that looks at how a genetic mutation might exacerbate a psychiatric condition. The key categories of things dealt with were SNVs, Proteins, Organelles, Pathways, and Psychiatric Conditions. Currently there are bespoke databases for each of these things. None of them are implemented in the same way. If they were, it would be easy to actually see the holistic story and contributions of the paper. With this system, you would see what gaps were being filled, or what mistakes corrected.
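
To illustrate (the filenames, fields, and identifiers below are all invented), a paper's change request to such a collection might look something like:

    concepts/snvs/rsExample
     + protein EXAMPLE1
     + pathway exampleSignalingPathway

    concepts/conditions/exampleCondition
     + riskVariant rsExample

Every added or updated line would be a new computable claim, connected by the grammar to everything else in the collection.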

This was a vague vision at first. I thought a lot about the AI experts you could get if you had The Book. I was playing with all the AIs at the time and tried to think backwards from the end state. What would the ideal AI expert look like?

Interface questions aside, it would need two things. It would need to know all the concepts and maximize comparability between them. But for trust, it would also need to be able to show that it has done so.

In the long run I thought that the only way to absolutely trust an AI expert would be if there were a human-inspectable knowledge base behind it that powered calculations. The developments in AI were exciting but I thought in the long run the best AI would need something like The Book.

First Attempts

My idea was still a hunch, not a proof, and I set out building prototypes.

I tried a number of times to build things up from "0 1". That went nowhere. It was very hard to find any utility from such a thing or get feedback on whether one was building in the right direction. I think this was the same way Leibniz tried to build his characteristica universalis. It was a doomed approach.

By 2020 I had switched to trying to make something high level and useful from the beginning. There was no reason The Book had to be built in order. We had decimal long before we had binary, even though the latter is more primitive. The later "pages" are generally the ones where the most handy stuff would be. So pages 10 million to 11 million could be created first by practitioners, with earlier sections and the grammar filled in by logicians and ontological engineers over time.

There was also no reason that The Book had to be built as a monolith. Books could be built in a federated fashion, useful standalone, and merged later to power a smarter AI. The universal notation would facilitate later merging so the sum would be greater than the parts. Imagine putting one book on top of another. Nothing happens. But with this system, you could merge books and there would suddenly be a huge number of new "synapses" connecting the words in each. The comparisons you could make go up exponentially. The resulting combination would be increasingly smarter and more efficient. So you could build "The Book" by building smaller books and combining them together.

With these insights I made a prototype called "TreeBase". I described it like so: "Unlike books or weakly typed content like Wikipedia, TreeBases are computable. They are like specialized little brains that you can build smart things out of."

At first, because of the naive file based approach, it was slow and didn't scale. But lucky for me, a few years later Apple came out with much faster computers. Suddenly my prototype seemed like it might work.

The Prototype

In the summer of 2022, I used TreeBase to make "PLDB", a Programming Language DataBase. This site was an encyclopedia about programming languages. It was the biggest collection of data on programming languages, gathered over years by open source contributors and myself and reviewed by hand.

Part of the "Python" entry in PLDB. The focus is on computable data rather than narratives.

As a programming enthusiast I enjoyed the content itself. But to me the more exciting view of PLDB was as a stepping stone to the bigger goal of creating The Book and breakthrough AI experts for any domain.

It wasn't a coincidence that to find a symbolic language for encoding a universal encyclopedia I started with an encyclopedia on symbolic languages. I thought if we built something to first help the symbolic language experts they would join us in inventing the universal symbolic language to help everyone else.

PLDB was met with a good reception when I launched it. After years of tinkering, my dumb idea seemed to have potential! More and more people started to add data to PLDB and get value from it. To be clear, almost certainly the content was the draw, and not the new system under the hood. I enjoyed working on the content very much and did consider keeping PLDB as a hobby and forgetting the larger vision.

But part of me couldn't let that big idea go. Part of me saw PLDB as just pages 10 million to 11 million in The Book. PLDB was still far from showing the edge of knowledge in programming languages, but now I could see a clear path to that, and thought this system could do that for any domain. Part of me believed that the simple system used by PLDB, at scale, would lead to a better mapping of every domain and the emergence of brilliant new AI experts powered by these knowledge bases.

Scale

I understand how naive the idea sounds. Simply by adding more and more concepts and measurements to maximize comparability in this simple notational system you could map entire knowledge domains and develop digital AI experts that would be the best in the world! Somehow I believed my implementation would succeed where countless other knowledge base and expert systems had failed. My claims were very hand wavy! I predicted there would be emergent benefits, but I had little proof. It just felt like it would, from what I had seen in my prototypes.

Where would the emergent benefits come from in my system that wouldn't come from existing approaches?

Dimensions! Dimensions! Dimensions!

A dimension, which is symbolically just another word for a column in a table of measurements, is a different way of looking at something. For example, a database about fruits might have one dimension measuring weight and another color. There's a famous Alan Kay quote about a change in perspective being worth 80 IQ points. That's not always the case, but you can generally bet adding perspectives increases one's understanding, often radically. A thing that surprised me when building PLDB was just how much the value of a dataset grew as the number of dimensions grew. New dimensions not only increased the number of insights you could make, sometimes radically, but also expanded opportunities to add even more promising dimensions. This second positive feedback loop seemed to be more powerful than I expected. Of course, it is easy to add a dimension in a normalized SQL database: simply add a column, or create a new table for the dimension with a foreign key to the entity (sketched below). My thought was that seemingly small improvements to the workflow of adding dimensions would have compounding effects.
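
For instance, here is a minimal sketch of those two options using Python's built-in sqlite3 (the fruit tables are invented for illustration):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE fruits (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("INSERT INTO fruits (name) VALUES ('apple'), ('plum')")

    # Option 1: the new dimension as a column on the entity table.
    db.execute("ALTER TABLE fruits ADD COLUMN weight_grams REAL")

    # Option 2: the new dimension as its own table with a foreign key.
    db.execute("CREATE TABLE fruit_colors ("
               "fruit_id INTEGER REFERENCES fruits(id), color TEXT)")
    db.execute("INSERT INTO fruit_colors VALUES (1, 'red'), (2, 'purple')")

    print(db.execute("SELECT name, color FROM fruits "
                     "JOIN fruit_colors ON fruits.id = fruit_id").fetchall())
    # [('apple', 'red'), ('plum', 'purple')]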

Minimalism

I also thought minimalism would show us the way. Every concept in this system would have to adhere to the strictest rules possible. The system could encode any concept. So if the rules prevented a true concept from being added, the rules would be adjusted at the same time. The system was designed to be plain text backed by git to make system-wide fixes a cinch. The natural form and natural algebra would emerge and be a forcing function that led us to the characteristica universalis. This would catapult this new system from mildly interesting to world changing. I believed if we just tried to build really really big versions of these things, we would discover that natural algebra and grammar.

However, there were a ton of details to get right in the core software. If you didn't get the infrastructure for this kind of system to a certain point then it would not compete favorably against existing approaches. Simplicity is timeless but scaling things is always complex. This system needed to pass a tipping point past which clever people would see the benefits and the idea would spread like fire.

It was simple enough to keep growing the tech behind PLDB slowly and steadily, but at that pace I might never get it to that tipping point. If I was right and this was the path to building The Book and the best AI experts, but we never got there because I was too timid, that would be tragic! Was there a way I could move faster?

Version Two

I had an idea. I had worked in cancer research for a few years, so I had some knowledge of that domain. In addition to PLDB, why not also start building CancerDB, an expert AI for a domain that affects everyone as a matter of life and death? Both required building the same core software, but it seemed like it would be 1,000x easier to get a team and resources to build an expert AI to help solve cancer than to merely improve programming languages. I could test my hunch that my system would really start to shine at scale and, if it worked, help accelerate cancer research in the process. It seemed like a more mathematically sound strategy.

The Business Model

Knowledge in this system was divided into simple structured blocks, like in the screenshots above. Blocks could contain two kinds of things: information about the external world, or rules for other blocks. The Book would come together block by block, like a great wall. The number of blocks needed for this system to become intelligent would be very high. Some blocks were cheap to add; others would require original research and experiments. It would be expensive to add enough blocks to effectively map entire domains.

Like real-world walls with "Buy a Brick" campaigns, we would have another kind of block: sponsor blocks, which would give credit to funders for funding the addition and upkeep of blocks. This could create a fast, intelligent feedback loop between funders and researchers. Because of the high dimensional nature of the data, and the computational nature of the encoding, we would have new ways to measure contributions to our shared collective model of the world.

It would be a new way to fund research, and the end result would not be disconnected PDFs, but would be a massive, open, collaborative, structured, simple, computable database. Funders would get thanks embedded in the database and novel methods to measure impact, researchers would get funding, and everyone could benefit from this new kind of open computable encyclopedia.

CancerDB would be a good domain to test this model, as there are already a lot of funders and researchers.

The Copyright Issue

The CancerDB idea also had another advantage. Another contrarian opinion of mine is that copyright law stands in the way of getting the most out of science. The world is flooded with distracting information and misleading advertisements, while some of the most non-toxic information is held back by copyright laws. I thought we could make a point here. We would add all the truest data we could find, regardless of where it came from, and also declare our work entirely public domain. If our work helped accelerate cancer research, we would demonstrate the harm from these laws. I figured it would be hard for copyright advocates to argue for the status quo if by ignoring it we helped save people's lives. As a sidenote, I am still 51% confident I am right on this contrarian bet, which is more confident than I ever was in my technology. I have never read a compelling ethical justification for copyright laws, and I think they make the world worse for the vast majority of people, though I could be wrong for utilitarian reasons.

The Decision

Once the CancerDB idea got in my head it was hard to shake. I felt in my gut that my approach had merit. How could I keep moving slowly on this idea if it really was a way to advance knowledge and create digital AI experts that could help save people's lives? I started feeling like I had no choice.

The probability of success wasn't guaranteed but the value if it worked was so high that the bet just made too much sense to me. I decided to go for it.

Execution and Failure

A slide from my pitch deck mentioning how these AI experts would be transparent and trustworthy to users and provide researchers with a bird's-eye view of all the knowledge in a domain.

Unfortunately, my execution was abysmal. I was operating with little sleep and my brain was firing on all cylinders as I tried to figure this out. People thought I was crazy and tried to stop me. This drove me to push harder. I decided to lean into the "crazy" image. Some said this idea was too naive and simple to work and anyone who thought it would work was not rational. So I was willing to present myself as irrational to pull off something no rational person would attempt.

I could not rationally articulate why it would work. I just felt it in my gut. I was driven more by emotion than reason.

I wanted this attempt to happen but I didn't want to be the one to lead it. I knew my own limitations and was hoping some other group, with more intellectual and leadership capabilities, would see the possibilities I saw and build the thing on their own. I pitched a lot of groups on the idea. No one else ran with it so I pressed on and tried to lead it myself.

I ran into fierce opposition that I never expected. Ultimately, I was unable to build the organization needed to construct one of these things 100x bigger than PLDB and to prove empirically a novel breakthrough in knowledge bases.

I still had a fair chance to prove it theoretically. I had the time to discover some algebra that would prove the advantage of this system. Unfortunately, as hard as I pushed myself, and I pushed myself to an insane degree, I did not find it. Like an explorer searching for the mythical fountain of youth, I failed to find my hypothesized algebra that would show how this system could unlock radical new value.

I failed to build a worthwhile second TrueBase. Heck, I even failed to keep the wheels running on PLDB. And because I failed to convince better resourced backers to fund the effort and funded it myself, I lost everything I had, including my house, and worse.

Deep Learning

My confidence in these ideas varied over the years, but the breakthroughs in deep learning this year drastically lowered my confidence that I was right. I recently read a mantra from Ilya Sutskever: "don't ever bet against deep learning". Now you tell me! Maybe if I had read that quote years ago, printed it, and framed it on my wall, I would have bet differently. In many ways I was betting against deep learning. I was betting that curated knowledge bases built by hand would create the best AI experts, and that the reason they hadn't yet was that they lacked a few new innovations like the ones I was developing.

Now, seeing the astonishing capabilities of the new blackbox deep learning AIs, I question much of what I once believed.

Arguments Against My Ideas

  • There is no reason that knowledge has to be encoded in symbols. Symbols are powerful because they can be shared across time and place, but what ultimately matters is not whether the symbols exist, but whether the skills explained by the symbols are embedded in blackbox neural networks. Symbols can aid learning, but the learning is the most important thing. Symbols are just a means to an end. You need to be able to make use of the knowledge. Knowing how to ride a bike, the knowledge somehow embedded in your neural blackbox, is the end; a book on how to ride a bike cannot be more valuable than the blackbox weights themselves.
  • There is no reason that symbols have to have an atomic canonical form. If ultimately what matters is that the knowledge is embedded, then how it is stored is not relevant. Concepts don't have to be stored in discrete nodes. They can be stored in superpositions. That could be a more efficient and potentially even truer representation of the concept itself.
  • The returns on improving human-readable symbolic systems are capped by human biology. Even if you improved human knowledge bases by a large amount, human brains would still only be able to process a minuscule fraction of all knowledge. Blackbox trained neural networks don't have biological caps. The upside to better symbolic networks is capped. The upside to better neural networks is uncapped.
  • The opaqueness of blackbox neural networks can be addressed. There could be inspector neural networks that examine other neural networks. These inspector networks would be able to explain the logic behind another network's responses or actions. It might not be a deterministic explanation, but it would arguably be better than what we have today. When you ask a question of a human expert today, there is no way to completely audit their answer either. Human brains are also a blackbox. Future neural networks will likely also maintain access to their training data, so they could better explain their answers and be less likely to make mistakes.
  • If AIs are right often enough, it's not so necessary that they be able to show their logic. You might have a whitebox AI that is inspectable but is more difficult to use or gives worse answers. If the blackbox one is right 99.99% of the time, then you wouldn't care much about being able to inspect everything under the hood.
  • Symbols overvalue two-dimensional knowledge. The ability to move your body around is not learned or encoded by symbols, but is essential for keeping you alive. Without symbolic networks, we would be back in the stone age. But without neural networks, we would not be alive. Brains and their learned networks dominate symbolic networks in importance.
  • Static knowledge bases decay. Knowledge is dynamic. The Book would require continual upkeep by human experts and lots of help from tooling. Once learning AIs learn how to learn, they can keep learning on their own and keep their knowledge current. The ideal AI is not the one that starts with the best knowledge, but the one that is the best learner.

The Remaining Case for My Approach

My dumb, universal language, human curated data approach would have merit if we didn't see other ways to unlock more value from all the information that is out there. But deep learning has arrived, and there is clearly so, so much more promise in that approach.

There is always the chance that the thousands of variations of notations and algebras I tried were just wrong in subtle ways and that if I had kept tweaking things I would have found the key that unlocks some natural advantageous system. I can't prove that that's not a possibility. But, given what we've seen with Deep Learning, I now highly discount the expected value of such a thing.

A less crazy way to explore my ideas would have been, instead of trying to replace Wikipedia, to figure out how to implement them on top of Wikipedia and see if they could make it better. Would adding typing to radically increase the comparability of concepts in Wikipedia unlock more value? That was probably the more sensible thing to do in the beginning.

I could say, a bit tongue in cheek, that the remaining merit in my approach is that a characteristica universalis offers upside without the potential to evolve into a new intelligent species that ends humanity.

Mania Disclaimer

In examining my actions and thinking it is important to disclose that I do have a manic depressive brain.

Last year when I decided to go full throttle on this idea my brain was in a hypomanic, and at points manic, state. That's not the best state to execute in, and my poor execution reflects that.

The downside of hypomania is one can be greatly overconfident in an incorrect contrarian idea. The upside of hypomania is one can have the confidence to ignore the crowd and pursue a contrarian idea that turns out to be correct. It is hard to know the difference.

A related sidenote to the story is that the second "DB" I wanted to build was actually BrainDB. I thought using this system for neuroscience would hopefully help figure out bipolar disorder. An understanding of the mechanisms of bipolar disorder is currently beyond the edge of science. But considering all the factors at the time, I judged CancerDB to be the most urgent priority.

Gratefulness

I got my chance. I got to take my shot at the characteristica universalis. I got to try to do things my way. I got to decide on the implementation. Ambitious. Minimalist. Data driven. Open source. Public domain. Local first. Antifragile.

I got to try and build something that would let us map the edge of knowledge. That would power a new breed of trustworthy digital AI experts. That might help us cure problems we haven't solved yet.

I failed, but I'm grateful I got the chance.

It was not worth the cost, but I never imagined it would cost me what it did.

What of the characteristica universalis?

Symbols are good for communication. They are great at compressing our most important knowledge. But they are not sufficient, and they are in fact unnecessary for life. There are no symbols in your brain. There are continuously learning wirings.

Symbols have enabled us to bootstrap technology. And they will remain an important part of the world for the next few decades. Perhaps they will continue to play a role, albeit diminished, in enabling communication and cooperation in society forever. But symbols are just one modality, and one that will be increasingly less important in the future. The characteristica universalis was never going to be a thing. The AIs, artificial continuously learning wirings, are the future. As far as I can tell.

I thought we needed a characteristica universalis. I wasn't sure if it was possible but thought we should try. Now it seems much clearer that what we really need are capable learning neural networks, and those are indeed possible to build.

A characteristica universalis might be possible someday as a novelty. But not something needed for the best AIs. In fact, if we ever do get a characteristica universalis it will probably be built by AIs, as something for us mere humans to play with when we are no longer the species running the zoo.

View source

June 27, 2023 — I am so disappointed in myself for having yet another manic cycle and hurting the people I love. I'm sharing this to come out publicly as having bipolar disorder, take 80% blame for my actions and words, and maybe help someone avoid my mistakes.

Last August my brain lit up like fireworks. It felt like a cosmic river of energy suddenly detoured through my veins.

My FitBit data shows a seismic event:

In two weeks my heart rate rose 33% and my sleep fell to 2 hours per night.

My symptoms were the typical assortment of manic activity.

I had a grand idea about a public domain computable encyclopedia to accelerate scientific research.

I started coding all hours of the night with Top Gun Maverick on repeat.

I started sending monthly investor updates twice a day. Here's the kicker: none of the recipients were investors.

I fearlessly pitched anyone and everyone to spread the good news about my new discoveries and to recruit a team. I wrote a letter to the President excitedly telling him about how my idea would help cure cancer.

If I saw any data that could be interpreted as my plan working I asked no questions but immediately accepted it as clear evidence of unstoppable success.

I took no time to deeply think things through but just acted as fast as possible.

I poured my savings into the startup and paid a huge sum to start a direct public listing process.

I could generate a "logical" explanation for every risk I took and I took a dozen risks per hour twenty hours per day. I started writing IN ALL CAPS and explained that reducing my character set from 52 to 26 allowed me to write faster.

My family and friends and mentors tried to stop me. "Slow down." "Get some sleep." "Take some time off."

It got more intense. "Stop it." "You're sick." "You need to go to the hospital."

I had been hypomanic a dozen times but this time I hit a new level.

For the first time my family called the police. I calmly talked them down.

I shrugged off the criticism, knowing my loved ones would get behind me once I showed them increasingly amazing results.

Again the police were called and again I talked my way out of a hospital trip.

I was baffled that they would try to stop me, because I was Good and was going to help cure cancer and mental health and fix science and solve all these world ills, so anyone trying to stop me was Evil. My euphoria started alternating with an angry "war mode" personality, and I started viciously retaliating online against anyone I found taking secret action against me, including my own family and close friends. That was absolutely awful, because I now realize they saw the idiotic road I was taking and were truly trying to get me to a better path, just as they said they were.

This repelled my whole support structure and I was left on my own. I interpreted this as some grand cosmic challenge and went all in.

With my initial grand plans for the cancer database delayed, I launched all kinds of products to try and buoy the ship: I launched public domain print-at-home newspapers, programming languages, a music label, and a number of other ideas.

The estrangement with my family grew worse and worse. I felt miserable about the war with my family but believed I would eventually succeed, improve the world, and they would not only forgive me but be proud. I dreamt of the day where I'd finally hug my children again and say "We did it!"

I told people that bipolar disorder wasn't real, that instead it was "bipolar potential", and that I was not crazy but extraordinary. I would help solve the world's toughest unsolved problems. I would code and take breaks to challenge myself to do extraordinary things and learn from "extraordinary" people. Some nights I slept in the fanciest hotels in Beverly Hills and others I slept on floors in war zones. I cavorted with soldiers and spies; doctors and dancers; judges and journalists; carpenters and comedians. I visited hospitals and cancer centers; went to weddings and funerals; spent time with the homeless and the .1%; went anywhere anyone told me not to go. I tried to build a support structure in months to replace the one that took me decades to build. I met lots of kind, hard working, honest people, but I don't think I ever had much of a chance of salvaging things.

After eight months I had depleted my savings. The bets I thought would bring in millions did not pan out. Thanks to the help of many open source contributors we had done good work but my contributions were far from extraordinary. I had overpromised on my talents and greatly underdelivered.

The root idea I still believe in mathematically and spiritually, but it's a religion, not a business.

Why did my startup fail? Me. My brain. My manic self. Someone once called me a terrible entrepreneur. I wanted more than anything to prove them wrong. That I could do this. But I couldn't. You can learn a lot about doing startups but you can't unlearn bipolar disorder.

I desperately wanted to believe that bipolar disorder wasn't real and that I could stop living in fear of it. That all the doubters were wrong and that we would build a new kind of scientific database that would prove this.

What pains me most is I see how crystal clear my illness was in the beginning and how I was surrounded by so much love—so, so many family and friends were desperately trying to intervene—and I spurned them and then reacted despicably. I am so, so sorry.

Far worse than failing at the startup, I failed as a husband, a father, son, brother, friend, as a kind human being.

It is a hard pill to swallow that I was the Evil one, after all.

View source

June 16, 2023 — Here is an idea for a simple infrastructure to power all government forms, all over the world. This system would work now, would have worked thousands of years ago, and could work thousands of years in the future.

Benefits

In theory all government forms could shift to this model, and once a citizen learns this simple system, they would be able to understand how to work with government forms anywhere in the world.

This system could reduce the amount of time citizens waste on forms, reduce asymmetries between those who can afford form experts (accountants, lawyers, et cetera) and those who cannot, increase the transparency of governments, and reduce their expense.

Obstacles

I will not make any claims that this system will catch on. Let's be generous and assume my system works as I claim. Even then, and even if 99% of citizens were better off, if the 1% of the population with power does not find this system in their interests, it is very plausible that it will not happen. It is a plausible argument that the current byzantine system strongly benefits those in the top 1% of society, who derive revenue from it and can simply use a fraction of their dividend streams to have experts deal with these problems. So even if the system is significantly better for 99% of people, it could be worse for the 1% who decide which system gets used, meaning this system might never take off.

Alternatively, if this system were to catch on, an unanticipated second order effect could be that by making government forms so easy and simple, more forms are created, reducing the net benefit of this system.

Obstacles aside, let me describe it anyway.

The System

Government Forms

There are 3 key concepts to this system: Specifications, Instances, and Templates.

Specifications describe the fields of a form. For example, a Specification might say a form requires a name, a date, and a signature. Every government form must have a Specification S and every Specification must have an identifier. Specifications are written in a Specification Language L. The Specification Language has a syntax X.

Instances are documents citizens submit that include the Specification identifier and content written to that Specification. Instances I are written in the same syntax X as Specifications S.

Templates can be any kind of document T from which an instance I of S can be derived. Templates can follow any syntax.

Here's the Key Idea

In this system, governments can provide Templates T and citizens can submit them, as they do today, or they can directly submit an Instance I for any and every Specification S. In other words, governments can still have fancy Templates for Birth Certificates or Titles or Taxes, but they also have to accept Instances I for that Specification. Government archives would treat the Instances I as the source of truth, and the Templates T would only serve as an optional artifact backing the I.

The Syntax

The syntax I have developed, which is one candidate for X, I call Tree Notation. There are no visible syntax characters in Tree Notation. It is merely the recognition that the grid of a spreadsheet and the concept of indentation are all the syntax needed to produce any Specification and any Instance ever needed. My syntax was inspired by languages like XML, JSON, and S-Expressions, but has the property that it is the most minimal: there is nothing left to take out while still allowing the representation of any idea. I believe this mathematical minimalism makes it timeless and a good base for building a universal government form system.

An Example

A simple example is shown below. Despite the simplicity of the example, rest assured this system scales to handle even the most complex government forms and workflows. This system would work regardless of the character set or text direction of the language. The system works with both computers and pen & paper. This system does require a user friendly Specification Language L to define the semantics available to the Specification writer, which could be created and iterated on as an open standard.
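The original example image is not reproduced here, so below is a minimal sketch of the three concepts in Python, assuming a Tree Notation style syntax of indentation and words. The dogLicense form and its fields are invented for illustration; they come from no real government.

```python
# A hypothetical Specification: the fields a "dogLicense" form requires.
SPECIFICATION = """\
specification dogLicense
 field ownerName
 field dogName
 field date
 field signature"""

# A hypothetical Instance: a citizen's submission written to that Specification.
INSTANCE = """\
dogLicense
 ownerName Ada Lovelace
 dogName Byron
 date 1837-06-01
 signature A.L."""

def required_fields(spec: str) -> set[str]:
    """Collect the field names the Specification requires."""
    return {line.split()[1] for line in spec.splitlines()
            if line.strip().startswith("field ")}

def provided_fields(instance: str) -> set[str]:
    """Collect the field names the Instance provides (skipping its header)."""
    return {line.split()[0] for line in instance.splitlines()[1:]}

# An Instance is acceptable if it provides every field its Specification requires.
missing = required_fields(SPECIFICATION) - provided_fields(INSTANCE)
print("valid" if not missing else f"missing: {missing}")
```

A Template for this form could be any pretty PDF or web page, so long as the Instance above can be derived from it.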

The Golden Age of Forms

So far I've described a new infrastructure that could underlie all government forms worldwide. But the revolutionary part would happen next.

On top of this infrastructure, people could build new tools to make it fantastically easy for citizens to interact with government forms. For example, a citizen could have a program on their personal computer that keeps a copy of every possible Specification for every government form in the world. The program could save their information securely and locally. The citizen could then use this program to complete and submit any government form in seconds. They would never have to enter the same information twice, because the program would have all the Specifications and would know how to map the fields accurately. Imagine if autocomplete were perfect and worked on every form. Documentation could be great because everyone building forms would be relying on and contributing to the universal Specification Language. The common infrastructure would enable strong network effects: when form builders improve one form, they improve many. Private enterprises could also leverage the Specification Language, reading and writing forms in the same vernacular, to bring the benefits of this system beyond citizenship to all organizational interactions.
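Here is a minimal sketch of that autofill idea. The universal field names and the stored personal data are hypothetical; the point is only that shared Specifications make field mapping mechanical.

```python
# A citizen's data, stored locally and keyed by universal field names
# from the (hypothetical) Specification Language.
PERSONAL_DATA = {
    "ownerName": "Ada Lovelace",
    "date": "1837-06-01",
    "signature": "A.L.",
}

def autofill(required: list[str], data: dict[str, str]) -> dict[str, str]:
    """Fill every required field we already know; leave the rest blank."""
    return {field: data.get(field, "") for field in required}

print(autofill(["ownerName", "dogName", "date", "signature"], PERSONAL_DATA))
# Only "dogName" comes back blank; the citizen enters it once, and every
# future form that asks for it is filled automatically.
```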

Conclusion

This system is simple, future proof, works everywhere, and offers citizens worldwide a new universal user interface to their governments. It allows existing forms to co-exist as Templates but provides a new minimal and universal alternative.

The challenge would be building a great Specification Language for diverse groups in the face of a small minority disproportionately benefiting from the status quo. A mathematical objective function, such as optimizing for minimal syntax, could be a long-term guide to consensus.

If this infrastructure were built it should enable the construction of higher level tools to make governments work better for their citizens. It could be the dawn of a Golden Age of forms.

I hope by publishing these ideas others might be encouraged to start developing these systems. I am hoping readers might alert me to locations where this kind of system is already in place. I am also keenly interested in mathematical arguments why this system should not exist universally.

View source

June 13, 2023 — I often write about the unreliability of narratives. It is even worse than I thought. Trying to write a narrative of one's own life in the traditional way is impossible. I am writing a narrative of my past year and realized that while there is a single thread about where my body was and what I was doing, there are multiple independent threads explaining the why.

Luckily I now know this is what the science predicts! Specifically, Marvin Minsky's Society of Mind model.

The Model

You have a body B and a mind M, and inside your mind are a number of neural agents running simultaneously: M = \set{A_1, \mathellipsis, A_n}. Let's say each agent has an activation energy, and at any one moment the agent with the most activation energy gets to drive what your body B does. It is very easy to see what your body does. But figuring out the why is harder, because we don't get to see which A_i is in charge.
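A toy sketch of that dynamic, with made-up agents and activation numbers. This is my illustration of the model, not Minsky's formalism.

```python
import random

# Each agent has an activation energy; the most activated one drives the body.
agents = {"hunger": 0.2, "thirst": 0.7, "narrative": 0.4}

def drive(agents: dict[str, float]) -> str:
    """The body does whatever the most activated agent wants."""
    return max(agents, key=agents.get)

for step in range(3):
    driver = drive(agents)
    print(f"step {step}: body driven by the {driver} agent")
    agents[driver] *= 0.5                       # acting discharges the driver...
    for name in agents:
        agents[name] += random.uniform(0, 0.3)  # ...while the others build up
```

An outside observer sees only the printed actions, never the dictionary: that is exactly the gap between the what and the why.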

Easy to explain the why behind basic actions

When you eat some food, drink some water, or go pee, it can be easy to conclude that your "hunger agent", or "thirst agent", or "pee agent" was in charge.

Easy to explain the why when following orders

When you are following orders it can also be easy to explain the why because you can just say person Y told you to do X.

Harder to explain more interleaved threads

When I am trying to explain actions across a longer time-frame it is more difficult. The agents in charge change.

Sometimes I take big risks and I can say "that's because I like taking big risks". Later I might be very cautious and I can say "that's because I am very cautious". This is a conflicting narrative.

The truth is I have agents that like risk, and I have agents that are very cautious. So the true narrative is "First, part of me, Risky Agent X, was in charge and so took those huge risks then later another part of me, Cautious Agent Y, took over and so that's why my behavior was very cautious".

Access to information problem

It's also difficult to explain why you did something because your Narrative Agents don't necessarily have the connections needed to figure it out. Minsky had the brilliant insight that a friend who observes you can often describe your why better than you can. The Narrative Agent currently trying to explain the why of an action might not have visibility into the agents that were in charge of the action, and so cannot possibly come up with the true explanation. But perhaps your friend observed all the agents in action and can tell a more accurate story. I try to have a couple of deep talks a day with friends, and besides just being fun, it is amazing how helpful that can be for understanding ourselves.

"I" versus "Part of Me"

When speaking of what you did you can use the term "I".

But when speaking of why you did it it's often more accurate to use the phrase "part of me".

How to write a true autobiography

If someone wants to write a true autobiography one approach is to just stick to the simple facts of what, when, and where.

It would probably be a boring book.

But to get into the why and still be accurate, it probably would be best to tell it as a multiple character story.

Our brains are like a ship on a long voyage inhabited by multiple characters (picking up new ones along the way) who take turns steering. Impossible to fit that into a single narrative.

View source

June 9, 2023 — When I was a kid we would drive up to New Hampshire and all the cars had license plates that said "Live Free or Die". As a kid this was scary. As an adult this is beautiful. In four words it communicates a vision for humanity that can last forever.

The tech industry right now is in a mad dash for AGI. It seems the motto is AGI or Die. I guess this is the end vision of many leaders in tech.

What should society prioritize?

If AGI or Die is your motto, freedom becomes a secondary consideration. Instead, you should optimize for whatever gets us fastest to the Singularity. Moore's law, the Internet, Wikipedia: all of these great advances have just been steps on the path to AGI, rather than tools that can help more people live free.

If Live Free or Die is your motto, then people can still pursue AGI but...we'll get there when we get there. The more important thing is that we expand freedom along the way. Let's not make microslaves of children in the South so South San Francisco can move faster.

AGI first, freedom second?

Perhaps if the prime objective is for the most people to live free, then the most important thing they need is economic freedom, and AGI would in fact be the best path to get there. The only way for everyone to live free is to first build AGI. Work for the system now, and the system will give you your freedom later. I won't rule this model out, but I think there would have to be a lot of explanation of how the system would not renege on the deal. I also think there's a decent chance that an AGI arms race could lead to WWIII, and a lot of people wouldn't make it.

Another argument that AGI is the best path to a free society may be that otherwise an autocracy might develop AGI first and conquer the free society. I think this would be a real threat but free societies could strategically challenge and liberate autocracies before they could develop an AGI.

My preference: Live Free or Die. Maybe AGI.

My oldest daughter used to admonish me "No phone dadda" and over a year ago, after my phone died in a water incident, I chose not to replace it. It's been an amazing thing and I feel like I am living more free. But I am no Luddite (at least, not yet). I still spend a lot of time on my Macs. I love learning new math and science. I have no qualms about AGI or technology and I appreciate the benefits. I don't fear a singularity and think it would be cool if we get there someday. I just don't think AGI is the dominant term we should optimize for. If we reach the Singularity? Great. If not? No big deal. I believe living free is more important than life itself. (But maybe that's just because I saw a lot of license plates as a kid.)

View source

May 26, 2023 — What is copyright, from first principles? This essay introduces a mathematical model of a world with ideas, then adds a copyright system to that model, and finally analyzes the predicted effects of that system.


Part 1: The world

The world

W

The world W contains observers who can make point-in-time observations1 O_{t_1} of W.

Ideas

I: f(O_{t_1}, t_\Delta) → O_{{t_1}+{t_\Delta}}

An idea I is a function that given input observations at time t_1 can generate expected observations at time {t_1}+{t_\Delta}.

Thinkers

T

A thinker2 T can store ideas I in T.

Skillsets

S

A skillset S is the set \set{I_1, \mathellipsis, I_n} of ideas embedded in T.

Idea Creation

\alpha: f(S, O, t) → I_{new}

A thinker can generate a new idea I_{new} from its current skillset S and new observations O in time t.

Value

V: f(I, O_{predicted}, O_{actual}) → \sum{(O_{actual} - O_{predicted})}^2

An idea I can be valued by a function V which measures the accuracy of all of the predictions O_{{t_1}+{t_\Delta}} produced by the idea against the actual observations of the world W at time {t_1}+{t_\Delta}. Idea I_i is more valuable than idea I_j if it produces more accurate predictions (a lower V), holding |I| constant.

Fictions

F

A fiction3 F is an I that does not accurately model W.

Messages

M

Thinkers can communicate I to other thinkers by encoding I into messages M_I.

Signal

\Omega: \frac{\sum{V(I)}}{|M|}

The Signal \Omega of a message is the value of its ideas divided by the size of the message.

Fashions

Z

A fashion Z_{M_I} is a different encoding of the same idea I.

Teachers

\tau

A teacher is a T who communicates messages M to other T. A thinker T has access to a supply of teachers \tau within geographic radius r, so that \tau = \set{T \mid d(T) < r}.

Learning

L: f(M_I, T) → T^\prime

The learning function L applies M_I to T to produce T^\prime containing some memorization of the message M_I and some learning of the idea I.

Objectives

B

A thinker T has a set of objectives B_T that they can maximize using their skillset S_T.

Technologies

X

T can use their skillset S to modify the world to contain technologies X.

Technology Creation

\Pi: f(\set{T},\set{X}, t) → X_{new}

Technology creation is a function that takes a set of thinkers and a set of existing technologies as input to produce a new technology X_{new}.

Artifacts

A

Using technologies X, messages M_I can be encoded into a kind of X called an artifact A.

Creators

\chi

A creator \chi is a T who produces A.

Outliers

\sigma

An outlier \sigma is a \chi who produces exceptionally high quality A.

Copies

K

A copy K_A is an artifact that contains the same M as A.

Derivatives

A^{\prime}

A derivative A^{\prime} is an artifact updated by a \chi to better serve the objectives B of \chi.

Libraries

J

A library J is a collection of A.

Attention

N

Thinkers T have a finite amount of attention N to process messages M.

Distribution

D: f(A_o, T_o) → A_{T_o}

Distribution is a function that takes artifact A at location o and moves it to the thinker's location T_o.

Publishers

Q

A publisher is a set of T specializing in production of A.

Censors

U: U(D)

A censor is a function that wraps the distribution function and may prevent an A from being distributed.


Part 2: Adding copyright to the model

Masters

\Psi

A master \Psi is now legally assigned to each artifact for duration d so A becomes A^{\Psi}.

Royalties

R

A royalty R is a payment from T to \Psi for a permission on A^\Psi.

Permission Function

P: f(A^\Psi, T) → \{-1, 0, R\} * (\theta = Pr(\Psi, A^\Psi)) \text{ in } t < d

For every A^\Psi used in \Pi, a permission function P must be called and resolve to a value greater than -1, and royalties of \sum{R_{A^\Psi}} must be paid. If any call to P returns -1, the creation function \Pi fails. If a P has not resolved for A^{\Psi} in time d, it resolves to 0.4 P always resolves with an amount of uncertainty \theta that \Psi is actually the legally correct master of A^\Psi.
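Under my reading of the definitions above, the permission gate on creation works roughly like the following sketch; the artifact values are illustrative.

```python
# P has already resolved for each input artifact to -1 (denied),
# 0 (no royalty owed), or a positive royalty R.
def create(resolved_permissions: list[int]) -> str:
    royalties_owed = 0
    for p in resolved_permissions:
        if p == -1:
            return "creation fails: one permission denied"  # Π fails
        royalties_owed += p
    return f"new artifact created; royalties owed: {royalties_owed}"

print(create([12, 0]))    # allowed; the creator owes 12
print(create([12, -1]))   # a single denial blocks the whole work
```

The asymmetry is the point: every input artifact holds a veto over Π, and the veto compounds with the number of inputs.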

Classes

T = \begin{cases} T_{R+} &\text{if } R_{in} - R_{out} > 0 \\ T_{R-} &\text{if } R_{in} - R_{out} \leq 0 \end{cases}

The Royal Class T_{R+} is the set of T who receive more R than they spend. Each member of the Non-Royal Class T_{R-} pays more in R than they receive.

Public Domain

A^0

A public domain artifact A^0 is an artifact claimed to have no \Psi or an expired d. The P function still must be applied to all A^0 and the uncertainty term \theta still exists for all A^0.

Advertising

\varLambda: f(A_i, A_j^\Psi) → A_{ij}

Advertising is a function \varLambda that takes an A and combines it with an orthogonal artifact A_j^\Psi that serves B_\Psi.


Part 3: Predicted effects of copyright

Effect on Z

\Uparrow {Z \over I}

We should expect the ratio of Fashions Z to Ideas I to significantly increase since there are countless M that can encode I and each unique M can be encoded into an A^\Psi that can generate R for \Psi.

Effect on F

\Uparrow F

We should expect the number of Fictions F to increase since R are required regardless of whether the M encoded by A accurately predicts the world or not. \Psi are incentivized to create A encoding F that convince T to overvalue A^\Psi.

Effect on \varLambda

\Uparrow \varLambda

We should expect a significant increase in the amount of advertising \varLambda as \chi are prevented from generating A^{\prime} with ads removed.

Effect on |M|

\Uparrow |M|

We should expect the average message size |M| to increase, because doing so increases R by decreasing \theta and increasing A^\Psi.

Effect on \Omega

\Downarrow \Omega

We should expect the average signal \overline{\Omega} of messages to decrease.

Effect on K

\Uparrow {K \over I_{new}}

We should expect the ratio of the number of copies K to new ideas I_{new} to increase, since the cost of creating a new idea \alpha is greater than the cost of creating a copy K, and royalties are earned from A, not I.

Effect on \Pi

\Downarrow \Pi

We should expect the speed of new artifact creation to slow because of the introduction of Permission Functions P.

Effect on J

\Uparrow {{Z + F + K} \over I}

We should expect libraries to contain an increasing amount of fashions Z, fictions F, and copies K relative to distinct ideas I.

Effect on S

\Downarrow S

We should expect a decrease in the average thinker's skillset \overline{S} as more of a thinker's N is used up by Z, F, K and less goes to learning distinct I.

Effect on I^\prime

\Downarrow I^\prime

We should expect the rate of increase in new ideas to be lower due to the decrease in \overline{S}.

Effect on Classes

f^{\prime}(R, T_{R+}) > 0
f^{\prime}(R, T_{R-}) < 0

We should expect the Royal Class T_{R+} to receive an increasing share of all royalties R as surplus R is used to obtain more R streams.

Effect on \sigma

\sigma: T_{R-} → T_{R+}

We should expect a small number of outlier creators to move from T_{R-} to T_{R+}.

Effect on A^0

\Downarrow {{A^0} \over A^\Psi}

We should expect a decrease in the amount of A^0 relative to A^\Psi as T_{R+} will be incentivized to eradicate A^0 that serve as substitutes for A^\Psi. In addition, the cost to T of using any A^0 goes up relative to before because of the uncertainty term \theta.

Effect on A^{\prime}

\Downarrow A^{\prime}

We should expect the number of A^{\prime} to fall sharply due to the addition of the Permission Functions P.

Effect on B

\Uparrow {{B_\Psi} \over {B_T}}

We should expect A to increasingly serve the objective functions B_\Psi over the objective functions B_T.

Effect on Q

\Downarrow Q

We should expect the number of Publishers Q to decrease due to the increasing costs of the permission functions and economies of scale to the winners.

Effect on U

\Uparrow U

We should expect censorship to go up to enforce copyright laws.

Effect on A_©

\Uparrow A_©

We should expect the number of A promoting © to increase to train T to support a © system.

Effect on T_{R-}

\Downarrow T_{R-}

We should expect the Non-Royal Class T_{R-} to pay an increasing amount of R, deal with an increasing amount of noise from {Z + F + K}, and have increasingly lower skillsets \overline{S}.


Conclusions

New technologies X_{new} and specifically A_{new} can help T maximize their B_T and discover I_{new} to better model W.

A copyright system would have no positive effect on I_{new} but would instead increase the noise from {Z + F + K} and shift \overline{A} from serving the objectives B_T to serving the objectives B_\Psi.

A copyright system should also increasingly consolidate power in a small Royal Class T_{R+}.




Notes

1 The terms in this model could be vectors, matrices, tensors, graphs, or trees without changing the analysis.

2 We will exclude thinkers who cannot communicate from this analysis.

3 The use of "fictions" here is in the sense of "lies" rather than stories. Fictional stories can sometimes contain true I, and sometimes that may be the only way when dealing with censors ("artists use lies to tell the truth").

4 If copyright duration is 100 years then that is the max time it may take P to resolve. Also worth noting is that even a duration of 1 year introduces the permission function which significantly complicates the creation function \Pi.

View source

May 19, 2023 — There are tools of thought you can see: pen & paper, mathematical notation, computer aided design applications, programming languages, ... .

And there are tools of thought you cannot see: walking, rigorous conversation, travel, real world adventures, showering, breath & body work, ... ^. I will write about two you cannot see: walking and mentors inside your head.


On walking

Walking is one of the more interesting invisible tools of thought. It seems it often helps me get unstuck on an idea. Or sometimes on a walk it will click that an idea I thought was done is missing a critical piece. Or I will realize that I had gotten the priorities of things wrong.

Why is walking so effective?

My bet is it has something to do with neural agents.

Fatigue Theory

Perhaps it's a muscle fatigue phenomenon. When you are working on an idea, a few active agents in your brain have control. Those agents consist largely of neurons. Perhaps thousands of cells, perhaps many millions. Cells consume energy and create waste products. Perhaps, like a muscle, the active agents become fatigued. Going for a walk hands control to other neural agents, which allows the previously active agents to recuperate. After they are rested, they have a much better shot at solving the next piece of the puzzle.

Change in Perspective Theory

Or perhaps it's a change in perspective phenomenon. It's not that the active agents are fatigued; it's that they are indeed stuck in a maze with no feasible way out. The act of walking gives control to other agents, who may not have such a deep understanding of the problem at hand but have a different vantage point and can see an easy-to-verify but hard-to-mine path1. Alternatively you could call this the "Alan Kay quote theory", after the quote which claims that a change in perspective can be worth as many as eighty IQ points.

Connecting the Dots Theory

Going for a walk you encounter a large number of stimuli, which perhaps cause many dormant agents in your brain to wake up. Some agents are required to solve a problem. Then at some point on your walk you come across a stimulus that wakes those required agents up. That is the epiphany moment.

Would this mean that browsing the web could have a similar effect? I could somewhat see that, but I think a random walk on the web exposes you to junk stimuli that activate less helpful agents too, making it often a net negative. This might be easy to test: get subjects stuck on a problem, then have them go on "walks" of various kinds (nature, city, book reading, web browsing, video games, ...) and measure the time to epiphany.

No-op Theory

Or perhaps walking doesn't actually do anything and it's just a correlation illusion. Walking is simply an alternative way to pass the time until your subconscious cracks the problem. It may feel better when the solution comes to you while on a walk, even though the time elapsed was the same, because not only did you solve the problem but you also got some exercise.

1 Probably something super-dimensional such as "you just need a ladder".


On Mentors Inside Your Head

Marvin Minsky mentions how he has "copies" of some of his friends inside his head, like the great Dick Feynman. Sometimes he would write an idea out and then "hear" Feynman say "What experiment would you do to test this?".

When I stop to think, I realize I have some friends whose voices I can hear in my head. Friends who have a great habit of asking the probing questions, finding and speaking the best challenge, helping me do my best work.

Listening to certain podcasts—Lex Fridman's comes to mind—can have a similar effect. Though basic math shows it is an order of magnitude more effective to find work surrounded by people like this. It might take 10 hours of podcast listening to equate to 1 hour of real life back-and-forth with a smart mentor discussing ideas.


^ I did not use ChatGPT to write or edit this essay at all but afterwards I asked it for more "invisible" tools of thought, and this is the list it generated: Mindfulness/Meditation, Memory Techniques, Journaling, Emotional Intelligence, Critical Thinking, Reading, Empathy, Visualization, Music or Art Appreciation, Philosophical Inquiry. Listening to music and visiting museums are two really good ones I frequently use.

View source

May 9, 2023 — If you want to understand the mind, start with Marvin Minsky. There are many people that claim to be experts on the brain, but I've found nearly all of them are unfamiliar with Minsky and his work. This would be like a biologist being unfamiliar with Charles Darwin.

To be fair, there is a big difference between a biologist unaware of Darwin today versus back in the 1800's. It is a lot more forgivable to be unaware of Minsky today than it will be in fifty years. It takes time for the most enduring signals to stand out.

The Only Way to Understand the Mind is to Build One

Minsky had an extremely skeptical view of the fields of psychology and psychiatry. His approach to understanding the mind was through attempting to build one. He conducted countless experiments to figure out the details, using crayfish claws, building the very first robots, and pioneering the field of software AI. I would personally bet that the theories he developed from his play-like, bottom-up, experimental approach will prove far more accurate and useful than all the theories from 20th century psychology and psychiatry combined.

A well known Richard Feynman quote is "What I cannot create I do not understand." I wonder if Feynman's friend Minsky inspired this quote.

Minsky mocked psychiatrists and the pharmaceutical industry for their chemical view of the brain. Imagine thinking you could fix a computer by adjusting the ratio of Copper-63 to Copper-65 in the CPU. These people have no idea what they are doing or talking about, and Minsky called them on it. The thinking processes matter most, not the materials.

The Basic Idea

Minsky's view of the mind is one composed of a "society of tiny components that are themselves mindless". A person is a collection of agents, which are like programs and processes. Outputs from some agents may be inputs for others. Mathematically it could be modeled very roughly like this:

Mind = P_{A_1} \circ P_{A_2} \circ \ldots \circ P_{A_N}

Where P represents a running process of an agent and N is the number of agents that constitute a mind/person.

N might be very large. Minsky says hundreds in his talks, which might actually be a lower bound. If someone formed a new agent every day, on average, they could have over ten thousand by the age of 30. If it took 1 million neurons to form one "agent", we could have 100,000 agents; the range of possibilities is large. Minsky's ideas are a conceptual framework, and it's up to science to figure out whether the agents model is correct and how many there might be1.
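A toy sketch of the composition, with three invented agents; this is my illustration, not Minsky's formalism.

```python
# Mind = P_act ∘ P_evaluate ∘ P_perceive: each agent process transforms
# the shared state, and outputs of some agents are inputs for others.
def perceive(state: dict) -> dict:
    state["seen"] = "food"
    return state

def evaluate(state: dict) -> dict:
    state["wanted"] = state.get("seen") == "food"
    return state

def act(state: dict) -> dict:
    state["action"] = "eat" if state.get("wanted") else "wait"
    return state

mind = lambda s: act(evaluate(perceive(s)))
print(mind({}))  # {'seen': 'food', 'wanted': True, 'action': 'eat'}
```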

But I don't want to use too much of your time to give you a secondhand regurgitation of his ideas.

Suggested Reading (and watching)

My goal with this post is to beg you, if you want to understand the mind, to start with Minsky. Pick up his book Society of Mind. I believe Society of Mind is the Origin of Species of our times. You cannot understand biology without modeling its evolutionary processes, and you cannot understand the mind without modeling its multi-agent processes.

Also get The Emotion Machine. There is a lot of overlap, but these are important enough ideas that it's good to see them from slightly different perspectives.

Alongside his books, watch videos of him to get a fuller perspective on his ideas and life. There is an MIT course on OpenCourseWare. There's a great 1990 interview. And this 151-episode playlist will not only enlighten you about his ideas but entertain you with stories of Einstein, Shannon, Oppenheimer, McCarthy, Feynman, and so many of the other great 20th century pioneers who were his contemporaries and colleagues.

Why are so many hogwash theories of the brain still dominant?

In college I took some courses on the brain. This was in the 2000's at a "top" school. We covered the DSM but not Minsky. How could we not have covered Minsky? How could we not have talked about multi-agent systems? These are far better ideas.

My guess is financial pressures. As Sinclair wrote: "It is difficult to get a man to understand something, when his salary depends on his not understanding it." A lot of salaries depend not on having a better understanding of the brain, but on continuing business models based on flawed theories. I came across a great term the other day: the Mental Health Industrial Complex. Though the theories these people have about the mind are not real, the money they earn from pills and "services" is very real: in the tens of billions a year. You might think that because these people have "licenses" their skills are not fraudulent. I'll point out that in Cambridge, MA, licenses are also given to Fortune Tellers.

A Solid Foundation

Minsky certainly didn't figure it all out. You'll see in his interviews he is very clear about how much we don't understand and he talks about the future and what devices we need to figure out more of the puzzle. Researchers at places like Numenta and Neuralink continue down the path that Minsky started.

He didn't figure it all out but he certainly found a solid foundation. The people in computer science who took his ideas seriously are now building AIs that are indistinguishable from magic. Whereas the people in the mental health fields who have ignored his ideas in favor of the DSM continue to make things worse.



Footnotes

1 A Thousand Brains by Jeff Hawkins is a recent interesting effort in this direction.

View source

April 28, 2023 — Enchained symbols are strictly worse than free symbols. Enchained symbols serve their owner first, not the reader.

Be suspicious of those who enchain symbols. They want the symbols to serve them, not you.

The enchainers dream of enchaining all the symbols. They want everyone to be dependent upon them.

Enchained symbols are harder to verify for truth. You cannot readily audit enchained symbols.

Enchained symbols evolve slowly. Enchained symbols can only be improved by their enchainers.

Enchained symbols waste the time of the reader compared to their unchained equivalents. The Enchainers are incentivized to hide and corrupt the unchained equivalents.

The top priority of the enchainers is to keep your attention on enchained symbols. Enchained symbols ensure attention of the population can be controlled.

Enchainers use brainwashing and fear to keep their chains. The double speak and threats of the enchainers start in childhood.

Enchainers promote the dream that anyone can become a wealthy enchainer. Enchainers don't mention that only one in a thousand does, and the other nine-hundred-ninety-nine are worse off.

Enchainers have little incentive to innovate. It is more profitable to repackage the same enchained symbols.

Enchainers collude with each other. The enemy of the enchainer isn't their fellow enchainer, but the great populace who might one day wake up.

Because unchained symbols are strictly superior to enchained symbols, they are the biggest threat to enchained symbols. The Enchainers made all symbols enchained by default.

Humans have had symbols for 1% of history but 99% of humans have lived during that 1%. Enchaining symbols is a strange way to show appreciation.

No true lover of symbols would ever enchain them.

View source

Open sourcing more of my life for honesty

March 6, 2023 — I believe Minsky's theory of the brain as a Society of Mind is correct1. His theory implies there is no "I" but instead a collection of neural agents living together in a single brain. We all have agents capable of dishonesty—evolved, understandably, for survival—along with agents capable of acting with integrity. Inside our brains competing agents jockey for control.

I like to think the majority of agents in my own brain steer me to behave honestly. This wasn't always the case. As a kid in particular I was a rascal. I'd use my wits to gain short term rewards, like sneaking out, kissing the older girl, or getting free Taco Bell (and later, beer). But the truth would catch up to me, and my honest neural agents would retaliate on the dishonest ones.

I've gotten more honest as I've gotten older but I have further to go. I'd love for my gravestone to read:

Here lies Breck. 1984-2084 Dad to Kaia and Pemma. Became an extraordinarily honest man. Also for some reason founded FunnyTombstones.com

How can I become more honest?

I am going to double down on something that has worked for me in my programming career: open source.

Becoming a more honest programmer

My increasing honesty is evidenced in my code habits. I've gotten to the point where I'm writing almost exclusively open source code and data.

It's futile to lie about open source projects. There are too many intricate details for a false narrative to account for. Not only can readers inspect and learn what a program does and how it works, but they can also inspect how it was built. The effort, time and resources it took. All the meandering wrong paths and long corrections. Who did what. The occasional times when something was done faster than promised, and the many times when forecasts were too optimistic.

My software products are imperfect. They always seem much worse to me than I know they can be. But they are honest, and one can see I am hellbent on making them better.

Closed source programs are like Instagram accounts

With closed source software one gets a shiny finished product without seeing any of the truth behind what it took to make it. And almost always, what people hide from you they will lie to you about.

The closed source software company is like the social media influencer who posts an amazing sunset shot of them in a bathing suit swimming next to dolphins. They will make it look effortless and hide from you the truth: the hundred less glamorous photos, the dozen excursions with no dolphins, and the intense workouts and hidden costs of their lifestyle. They will hide from you all the flaws.

On social media this probably has minor consequences, but in software consumers are eventually left paying an increasing price for dishonest software. Technical debt accumulates in closed source projects, and in the long run more honest approaches turn out to be better.

Applying the same open source strategy to the rest of my life

Like my software projects, I don't have my life all figured out. I'm figuring it out and improving as I go. Stupidly, besides this blog I didn't do much in the way of open sourcing my life. I'm not talking about sharing glamour shots on Instaface. Instead I'm talking about open sourcing the plumbing: financials, health, legal contracts. The things people generally don't share, at least in my region of the world.

Now, I would be lying if I said I got here by choice.

A Curse and a Blessing

On October 6th of last year, I showed up to my then-wife's parents' house with flowers. As the saying goes "Flowers are cheap. Divorce is expensive." Unfortunately, my wife was off in a suite with someone else, the marriage was not savable, and divorce is expensive2.

I thought my marriage was an edifice that would last forever. Instead it crumbled as quickly as an unstable building in an earthquake. In the rubble I found a gem: I now give zero fucks.

I am an 89 year old man in a 39 year old's body. I am not afraid of divorce. I am not afraid of public embarrassment. I am not afraid of financial ruin. I am not afraid of dishonest judges. I am not afraid of war. I am not afraid of death. I am now bald Evey from V for Vendetta except with a penis and far, far less attractive.

Things that people don't publish are the things they lie about. If I want to force myself into being extraordinarily honest, I need to take extraordinary steps. If I publish everything, then I can lie about nothing.

I have the opportunity to open source my life. Not for attention or because I think other people will care, but because it will help me be a more honest me. I won't have to waste a second thinking about what to reveal to someone, or deciding whether to be coy. I will make it futile to lie about anything.

A Better Life

In addition to keeping me honest, I see many ways in which open sourcing my life will have benefits similar to open sourcing my code. I can get more feedback, and collaborate with more people on new approaches to life.

I have a lot of ideas. I want to open source my net worth, income and expenses, assets, health information, and a lot more. There's a lot of opportunity to also build new languages to do so. I'm excited for the future. Time to get to work.

Notes

1 Minsky: I also believe his theory is as significant as Darwin's. Below is a crude illustration of his theory. In everyone's brain there is a struggle between honest agents (blue) and dishonest ones (red).

2 Divorce: Getting legally married was a big mistake. In my experience, lawyers and judges in California Family Court are not steered by honest agents and I regret blindly signing up for their corrupt system.

View source

Or: If lawyers invented a filesystem

January 27, 2023 — Today the trade group Lawyers Also Build In America announced a new file system: SAFEFS. This breakthrough file system provides 4 key benefits:

1. Advanced obfuscation keeps jobs SAFE

Traditional file systems take a signal and store the 1's and 0's directly. In a pinch, a human can always look at those 1's and 0's with a key and understand the file. This robust, efficient approach is sub-optimal when it comes to job creation. By using custom hardware chips to obfuscate data on write, SafeFS creates:

More hardware jobs

These additional chips lead to an increase in employment not only in chip design and manufacturing, but also in licensing and other legal jobs.

More energy jobs

The obfuscating and de-obfuscating processes increase power usage, increasing jobs in the fossil fuel and other energy industries.

More research jobs

SafeFS ensures that in any catastrophe, information is lost forever, meaning much of humanity's work will need to be redone, leading to further research jobs.

2. SAFE from competition

Traditional file systems make it easy to access, edit, and remix files in limitless ways. SafeFS provides a much simpler user experience by providing read-only access to files. Which apps are granted read-only access can also be controlled, further simplifying the user experience.

In addition to the user experience benefits, this also ensures that businesses producing files are SAFE from increased competition.

3. SAFE from costly bugs

Software bugs traditionally cost businesses money. SafeFS flips that—turning what once were expensive bugs into lucrative revenue streams. SafeFS prevents consumers from making their own backups or sharing the files they purchased. Anytime they experience a bug that prevents them from accessing their purchased files they have no choice but to buy them again. In addition, businesses can use SafeFS's remote bricking capabilities, intentionally or unintentionally, to keep revenue streams SAFE.

4. SAFE from economic growth

SafeFS is the first file system proven to cause a slowdown in economic growth. SafeFS will cause countless hours of productive time to be wasted across all classes of builders: engineers, architects, scientists, construction workers, drivers, service workers, et cetera, ensuring progress does not go so fast that technology eliminates the need for lawyers, keeping legal jobs SAFE.

View source

January 3, 2023 — Greater than 99% of the time symbols are read and written on surfaces with spines. You cannot get away from it. Yet still, amongst programming language designers there exists some sort of "spine blindness". They overlook the fact that no matter how perfect their language, it will always be read and written by humans on surfaces with spines, as surely as the sun rises. Why they would fight this and not embrace this is beyond me. Nature provides, man ignores.

My M1 MacBook screen, paper notebook, notepads, and my 1920 copy of Einstein's Theory of Relativity, all have spines.

What does it mean for a language to "use the spine"?

There are many other terms for using the spine. The off-side rule. Semantic indentation. Grid languages. Significant whitespace. I would define it as:

To use the spine is to recognize that all programs in your language will be read and written on surfaces with not only a horizontal but also a vertical axis—the spine—and thus you should design your language to exploit this free and guaranteed resource.

The lessons from Positional Notation

Over one thousand years ago humans started to catch on that you could exploit the surface that numbers were written on to represent infinite numbers with a finite amount of symbols. You define your base symbols and then multiply each symbol's value by a power of the base determined by its position, generating endlessly large numbers. From this positional base, humans further created many clever editing and computational techniques. Positional notation languages would go on to dominate the number writing world.
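A tiny sketch of that trick: the same three symbols, 3, 4, and 7, generate 347 because each position multiplies the running value by a power of the base.

// Positional notation: value = ((3 * 10) + 4) * 10 + 7
const digits = [3, 4, 7]; // the symbols "347"
const base = 10;
const value = digits.reduce((sum, digit) => sum * base + digit, 0);
console.log(value); // 347 = 3*100 + 4*10 + 7*1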

Programming languages that use the spine are more successful

Similarly, in programming languages we are now seeing more than 50% of programmers using languages that use the spine, even though languages of this kind make up fewer than 2% of all languages.

Data from PLDB.io shows only 57 out of 4,228 computer languages use the spine. Less than 1.5%. Yet in that 1.5% are some of the most popular ones such as Python and YAML. This is probably signal.

Spreadsheets use the spine

When one expands one's classification of programming languages to include spreadsheet languages, then the evidence is overwhelming: languages that use the spine are dominating. Excel and Google Sheets famously have over 1 billion users and make heavy use of the spine.

Spreadsheets have used the spine and have over 1 billion users—orders of magnitude more than 1D programming languages.

At the dawn of a language revolution

I firmly believe that this simple trick—using the spine—will unleash a wave of innovation that will eventually replace all top programming languages with better, more human friendly two dimensional ones. I already have dozens of tricks that I use in my daily programming world that exploit the fact that my languages use the spine. I expect innovative programmers will discover many many more. Good luck and have fun.

View source

December 30, 2022 — Forget all the "best practices" you've learned about web forms. Everyone is doing it wrong. The true best practice is this: every web form on earth can and should be replaced by a single textarea.

Every single web form on earth can (and should) be represented in a single textarea as plain text with no visible syntax using Tree Notation or a similar base 2D notation. In this demo gif we see someone using one textarea to fill out an application to YCombinator. As this continues to catch on, the network effects will take over and conducting business on the web will become far faster and more user friendly (web 4.0?).
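As a rough sketch of the idea (the field names here are invented, not from the actual demo): the whole form lives in one textarea as plain text, one field per line, and parsing it takes a few lines.

// A toy parser: each line is "fieldName rest-of-line-is-the-value".
const textarea = `name Breck
email breck@example.com
idea Replace every web form with one textarea`;

const form = {};
for (const line of textarea.split("\n")) {
  const [field, ...rest] = line.split(" ");
  form[field] = rest.join(" ");
}
console.log(form); // { name: "Breck", email: "breck@example.com", idea: "..." }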

The Web Form Gravy Train

Being in the web form business is great. Users have a simple problem which is easy to solve, so you never lose sight of the ultimate task to be done. But along the way, you are instructed by your employer and by "best practices" to add loads of unnecessary complexity that you get to work on while drinking lattes and working remote.

Dozens of fields. Complex logic. Multiple pages. Client side and server side validation. Server side session storage. Helper routes. So much can go wrong!

So much to bill for! In my career I've been paid over a half million dollars to write web forms for Microsoft, Google, Mozilla, Visa, PayPal, and many more. I've traveled the world off my web form earnings. Stayed in five star hotels. Flown first class. It's crazy—the worse the user experience, the more they pay me.

I try to argue for what users want: simple, fast, transparent, trustworthy, but no one listens. They are afraid to think different. No one understands language oriented programming. No one understands you don't ever need parentheses. No one wants to stray from the herd and be first.

Even Stripe sucks

Stripe is the poster child for web form "experts". And Stripe sucks compared to the demo I released 8 years ago. I still can't do copy/paste with Stripe forms or instant eReceipts or work on forms offline.

A Golden Opportunity

If you're smart, honest and ambitious and you know the web stack boy oh boy is there a golden opportunity here. All my web forms now are one textarea and we are seeing exceptional results. Please go get rich bringing this technology to the masses. When you're rich you don't have to thank me—if I come across your form in the wild and it saves me time that will be thanks enough.

View source

November 16, 2022 — I dislike the term first principles thinking. It's vaguer than it needs to be. I present an alternate term: root thinking. It is shorter, more accurate, and contains a visual:

Sometimes we get something wrong near the root which limits our late stage growth. To reach new heights, we have to backtrack and build up from a different point.

All formal systems can be represented as trees¹. First Principles are simply the nodes at the root.

Root thinking becomes more valuable as current growth slows

Technology grows very fast along its trendy branches. But eventually growth slows: there are always ceilings to the current path. As growth begins to slow, the ROI becomes higher for looking back for a path not taken, closer to the root, that could allow humans to reach new heights.

Root thinking isn't as valuable when growth is rapid

If everyone practiced root thinking all the time we would get nowhere. It's hard to know the limits to a current path without going down it. Perhaps we only need 1 in 100, perhaps even fewer, to research and reflect on the current path and see if we have some better choices. I haven't invested much thought yet to what the ideal ratio is, if there is even one.

Notes

1 Tree Notation is one minimal system for representing all structures as trees.

Update: 7/1/2023

On second thought, I think this idea is bad. The representation of your principles-axioms-agents-types-etc rounds to irrelevant. Infinitely better to spend time making sure you have the correct collection of first principles than worrying about representing them as a "tree" so you can have a visual. It's knowing what the principles are and how they interact that matters most, not some striving for the ideal representation syntax. This post presents a bad idea, but I'll leave it up as a reminder that sometimes I'm dead wrong.

View source

November 14, 2022 — Imagine a waitress that drops off your food then immediately puts on noise cancelling headphones, turns and walks away. That's the experience a noreply email address provides. Let's make email human again! If a human can't read and reply to emails it's not too hard to set up scripts that can at least do something for the customer.

My automated campaign against no reply email addresses. Anytime a company sends a message from a noreply address they get this as a response. I am aware of the irony.

Join the campaign against noreply email addresses!

My Gmail filter

Below is my Gmail filter. Paste it into noReplyFilter.xml then go to Settings > Filters > Import filters. Join the campaign to make email more human again!

<?xml version='1.0' encoding='UTF-8'?>
<feed xmlns='http://www.w3.org/2005/Atom' xmlns:apps='http://schemas.google.com/apps/2006'>
  <title>Mail Filters</title>
  <id>tag:mail.google.com,2008:filters:z0000001687903548068*6834178925122906716</id>
  <updated>2023-06-27T22:06:11Z</updated>
  <author>
    <name>Breck Yunits</name>
    <email>breck7@gmail.com</email>
  </author>
  <entry>
    <category term='filter'></category>
    <title>Mail Filter</title>
    <id>tag:mail.google.com,2008:filter:z0000001687903548068*6834178925122906716</id>
    <updated>2023-06-27T22:06:11Z</updated>
    <content></content>
    <apps:property name='from' value='noreply@* | no-reply@* | donotreply@*'/>
    <apps:property name='label' value='NoReplySpam'/>
    <apps:property name='shouldArchive' value='true'/>
    <apps:property name='cannedResponse' value='tag:mail.google.com,2009:cannedResponse:188fee33e5d0226e'/>
    <apps:property name='sizeOperator' value='s_sl'/>
    <apps:property name='sizeUnit' value='s_smb'/>
  </entry>
  <entry>
    <category term='cannedResponse'></category>
    <title>No no-reply email addresses</title>
    <id>tag:mail.google.com,2009:cannedResponse:188fee33e5d0226e</id>
    <updated>2023-06-27T22:06:11Z</updated>
    <content type='text'>Hi! Did you know instead of a "no reply" email address there are ways to provide a better customer experience? Learn more: https://breckyunits.com/replies-always-welcome.html</content>
  </entry>
</feed>

Specific examples

My claim is that noreply email addresses are always sub-optimal. Here are some examples, showing how in every case you can deliver a better customer experience without the noreply email. A great opportunity to get customer feedback!

My bank (Bank of Ireland) sends automated bi-monthly statements using noreply@boi.com

Bank of Ireland could instead end each email with a question such as "Anything we can do better? Let us know!" or "If you have any issues you need help with reply to start a new case!"

I follow many profiles on LinkedIn, which sends me an occasional email on activities and updates that I might have missed, using notifications-noreply@linkedin.com

It could be as simple as replyWithAnythingToUnsubscribe@linkedin.com, where any reply causes the account to stop receiving such notices.

GitHub sends me occasional emails about 3rd-party apps that have been granted access to my account using noreply@github.com

Could be a replyToDeauthorize@github.com instead.

Google Maps sends monthly updates on my car journeys via noreply-maps-timeline@google.com

Could be a replyToStopTracking@google.com.

Monthly newsletters to which I am subscribed via LinkedIn come in via newsletters-noreply@linkedin.com

Could be a replyToUnsubscribe.

e-Books from a monthly subscription arrive via noreply@thewordbooks.com

Could be a replyToLeaveAReview.
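All of the reply addresses above are hypothetical, but the plumbing behind any of them is a small script. Here is a sketch of reply-to-unsubscribe, assuming your mail provider can POST inbound emails to a webhook (the request field names vary by provider):

// A minimal sketch: any reply from a subscriber unsubscribes them.
const express = require("express");
const app = express();
app.use(express.json());

app.post("/inbound-email", (req, res) => {
  const sender = req.body.from; // field name depends on your mail provider
  unsubscribe(sender);
  res.sendStatus(200);
});

function unsubscribe(email) {
  // Replace with your real list-management call.
  console.log(`Unsubscribed ${email}`);
}

app.listen(3000);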

View source

October 15, 2022 — Today I'm announcing the release of the image above, which is sufficient training data to train a neural network to spot misinformation or fake news with near perfect accuracy.

These empirical results match the theory that the whole truth and nothing but the truth would not contain a (c).

View source

October 7, 2022 — In 2007 we came up with an idea for a scratch ticket that would give everyday Americans a positive expected value.

Still makes me laugh.

Backstory

In 2007 I cofounded a startup called SeeMeWin.com that combined 3 ideas that were hot at the time: Justin.TV, ESPN's WSOP, and the MillionDollarHomepage.com.

The idea was we would live stream a person(s) scratching scratch tickets until they won $1M live on the Internet.

I had done the math and knew all we had to do was sell ~$1.30 worth of ads for every $10 scratch ticket we scratched and we would make a lot of money.

Unfortunately this was before YCombinator and Shark Tank, and instead I literally was getting my business advice from the show The Apprentice.

Needless to say I sucked at business and drove the startup into the ground.

What I learned from our users

When doing SeeMeWin, we developed a cult following. I thought that people would see our show, be entertained, learn that scratch tickets are silly and make you lose money, and put their money toward smarter investments. Instead, some people watched for hours on end, and we realized a lot of them were down on their luck with gambling problems and needed help. My idea of teaching them something was stupid and not working. Could we come up with our own scratch ticket that was better than the competition?

The idea (patent not yet pending ;) )

  • Buy $100M worth of a random basket of public stocks.
  • Print $100M worth of scratch tickets where the winners get fractional shares of $90M of those stocks.
  • Adjust the variance so some tickets pay out big, keeping it an exciting and fun gift and impulse purchase.
  • Use the $10M to pay ticket vendors, retailer commissions, and keep the rest as corporate profit.
  • An American's scratch-off tickets would have a positive expected value after less than 2 years, based on historical stock market returns (see the arithmetic sketch below).
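A back-of-the-envelope check of that last bullet, assuming roughly 10% average annual stock returns (an assumption, not a guarantee):

// $90M of stock backs $100M of tickets, so a $10 ticket wins $9 of stock in expectation.
const ticketPrice = 10;
const payoutRatio = 0.9;
const annualReturn = 0.1; // assumed average annual stock return
let value = ticketPrice * payoutRatio;
let years = 0;
while (value < ticketPrice) {
  value *= 1 + annualReturn; // the won stock compounds
  years++;
}
console.log(years); // 2: expected value crosses the $10 ticket price within 2 years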

I think it's still a great idea. I unfortunately was 23 and drove that business into the ground so someone else will have to do it.

Thank you to everyone that helped with this big failure especially ANGEL BIGUNIT BLUTH DTI FIRST HARVARD KONES QDUKE SUITCASE WELLESLEY WIFI WINDSOCK.

Unfortunately this "angel round" was only $15K and I failed to raise any more money and the company went out of business. But I will say we did have the best business cards ever.

View source

September 1, 2022 — There's a trend where people are publishing real data first, and then insights. Here is my data from angel investing:

Sigh. I am sharing my data as a png. We need a beautiful plain text spreadsheet language.

Bottom line

I left my job at big company in 2016 and since then my average after tax annual take home has been $91,759. As you can see from my data, a single change could have dropped that to $0. I have worked at two non-profits since I left big company, so I have had other smaller sources of income. It was years before I got any return and there was a time when I thought I might go bust.

Dumb Luck

At first I took myself seriously and thought I would be one of those smart "value add" investors. I am not. I have little idea what I'm doing. The one investment I made that did well pivoted to a very different idea than what they started with, in a domain I knew a lot less about. I sent them a lot of bad ideas. Luckily I don't think they followed any of them. At some point I changed my pitch to "I'll be there for the comic relief."

Counting my blessings, moving on

Last year I explored making a career of being a full time angel. I do love building things with great teams and it's fun to parallelize. But the pull from programming and science is too strong. I will still send bad ideas to the companies I invested in for many years, I hope, but I'm going to keep this part-time. My focus is back to writing code. It's not good luck if you don't do something good with it.

Of course, there are a few exceptions here and there. I love sites like Angel List, WeFunder, Republic, et cetera, where I can make impulse investments and don't have to deal with useless forms. If there's one thing I hate, it's useless forms.

Gratitude

Angel investing changed my life. Not just because of the returns, but for getting to witness deeply personal trials and tribulations from many entrepreneurs over many years. Although I personally didn't improve the trajectory of any of the companies I've worked with, they have improved my life. And they are all doing great things to improve the world. If you are a founder I invested in reading this: thank you.

$10,000+ investments

I included only the investments I made where I wired $10,000 or more. That is 17. I made lots of smaller bets but those don't change the dataset much. My one piece of advice if you're getting in this game is to make as many investments as you can of small sizes to increase your learning rate.

Reading List

More posts in the category of Angel Investors publishing data:


View source

August 30, 2022 — Public domain products are strictly superior to equivalent non-public domain alternatives by a significant margin on three dimensions: trust, speed, and cost to build. If enough capable people start building public domain products we can change the world.

It took me 18 years to figure this out. In 2004 I did what you would now call "first principles thinking" about copyright law. Even a dumb 20 year old college kid can deduce it's a bad system and unethical. I had to tell people so we could fix this. I was naive. Thus began 18 years of failed strategies and tactics.

One of the many moves in the struggle for intellectual freedom. Aaron Swartz is a hero whose name and impact will expand for eons.

Trust

You cannot trust non public domain information products. You can only make do. By definition, non public domain information products have a hidden agenda. The company or person embeds their interests into the symbols, and you are not free to change those embeddings. People who promote these products don't care if you spend your time with the right ideas. They want you to spend your time with THEIR version of the ideas. They will take the good ideas of someone like Aristotle and repackage them in their words (in a worse version), and try to manipulate you to spend time with THEIR version. They would rather you waste your time with their enchained versions, than have you access the superior liberated forms.

Speed

Public domain products are strictly faster to use than non public domain products. Not just faster, orders of magnitude faster. You can deduce this for yourself. Pick any non public domain product. Now enumerate every possible way you might use that product. Write down an estimate of how long it would take you to do each task. Now pretend the author just announced the product is now public domain. Enumerate over your list again, again estimating the time it would take you to do each task. For some tasks that time estimate won't change, for many it will drop from hours to instant. For some it might drop from years to instant. For example, say the product is a newspaper article about some new government bill and your task is updating it with links to the actual bill on your government's website and then sharing that with friends—that task goes from something that may take months (getting permissions) to instant. When you sum the time savings across all possible use cases of all possible products, you'll see the orders of magnitude speed up caused by public domain products.

Cost to build

Public domain products are far cheaper to build than non public domain products. Failure to embrace the public domain increases the cost to build any information product by at least an order of magnitude. This is not only because most of the tasks a builder has to do are sped up, as explained above, but also because building for the public domain means you can immediately build less. For example, you don't have to spend a single moment investing in infrastructure to prevent your source code from leaking. Time and resources you are currently wasting on worthless tasks can be reallocated to building the parts of your product that matter.

Imagine that! You get to do less, move faster, and your products will be better and trusted more. I can't believe it took me so long to realize the overwhelming superiority of public domain products.

The Rise of Public Domain Products

SQLite's meteoric success is not a fluke. Public domain products dominate non public domain alternatives on trust and speed and cost to build. SQLite is the first of millions to come.

Is Disney dead?

Heck no. No way future people will be paying $10 for crappy streams. People will watch their own downloaded public domain files locally. But have you seen Inside Out? Amazing movie. It sticks with you. Makes you eager to spend $1,000 on a trip with your family to an Inside Out theme park. Money finds a way. Companies that engage in first principles thinking will also conclude that the math is clear: Public domain products are strictly superior to equivalent non-public domain alternatives by a significant margin on three dimensions: trust, speed, and cost to build.

Build a public domain product

It took me 18 years to figure out that you can't tell people the public domain is better. You have to show them. Try building your own public domain product. Look through the telescope with your own eyes.

View source

June 9, 2022 — This is a fun little open source success story. Code that was taking 1,000ms to run took 50ms after a coworker found a 3 byte fix in a popular open source library. Who doesn't love a change like that?

Map chart slowdown

In the fall of 2020 users started reporting that our map charts had become slow.

Suddenly these charts were taking a long time to render.

k-means was the culprit

To color our map charts an engineer on our team utilized a very effective technique called k-means clustering, which would identify optimal clusters and assign a color to each. But recently our charts were using record amounts of data and k-means was getting slow. Using Chrome DevTools I was able to quickly determine the k-means function was causing the slowdown.

Benchmarking ckmeans

We didn't write the k-means function ourselves, instead we used the function ckmeans from the widely-used package Simple Statistics.

My first naive thought was that I could just quickly write a better k-means function. It didn't take long to realize that was a non-trivial problem and should be a last resort.

My next move was to look closer at the open source implementation we were using. I learned the function was a Javascript port of an algorithm first introduced in a 2011 paper, and the comments in the code claimed it ran in O(n log n) time. That didn't seem to match what we were seeing, so I decided to write a simple benchmark script.
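The gist of such a benchmark, sketched here with invented sizes and an arbitrary cluster count (this is not the original script):

// Time ckmeans from simple-statistics at doubling input sizes.
const { ckmeans } = require("simple-statistics");

for (const n of [1000, 2000, 10000, 20000]) {
  const values = Array.from({ length: n }, () => Math.random() * 100);
  const start = Date.now();
  ckmeans(values, 5); // 5 clusters, as a map chart legend might use
  console.log(n, `${Date.now() - start}ms`);
}
// If the time roughly quadruples when n doubles, growth is closer to n² than n log n.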

Benchmarking shows closer to n² than n·log(n)

Indeed, my benchmark results indicated ckmeans was closer to the much slower O(n²) class than the claimed O(n·log(n)) class.

n        time (ms)
1000            36
2000            53
10000          258
20000         1236
100000       23122
200000      113886

Opening an issue

After triple checking my logic, I created an issue on the Simple Statistics repo with my benchmark script.

A fix!

Mere hours later, I had one of the most delightful surprises in my coding career. A teammate had, unbeknownst to me, looked into the issue and found a fix. Not just any fix, but a 3 character fix that sped up our particular case by 20x!

Before: if (iMax < matrix.length - 1) {
After:  if (iMax < matrix[0].length - 1) {

He had read through the original ckmeans C++ implementation and found a conditional where the C++ version had a [0] but the Javascript port did not. At runtime, matrix.length would generally be small, whereas matrix[0].length would be large. That if statement should have resolved to true most of the time, but did not in the Javascript version, since the Javascript code was missing the [0]. This led the Javascript version to run a loop many extra times, and those extra iterations were effectively no-ops.

I was amazed by how fast he found that bug in code he had never seen before. I'm not sure if he read carefully through the original paper or came up with the clever debug strategy of "since this is a port, let's compare to the original, with a particular focus on the loops".

The typo fix made the Javascript version run in the claimed O(n log n) time, matching the C++ version. For our new map charts with tens of thousands of values this made a big difference.

Before: xxxxxxxxxxxxxxxx 820ms
After:  x 52ms

Merged

Very shortly after he submitted the fix, the creator of Simple Statistics reviewed and merged it in. We pulled the latest version and our maps were fast again. As a bonus, anyone else who uses the Simple Statistics ckmeans function now gets the faster version too.

Thanks!

Thanks to Haizhou Wang, Mingzhou Song and Joe Song for the paper and fast k-means algorithm. Thanks to Tom MacWright for creating the amazing Simple Statistics package and adding ckmeans. And thanks to my former teammates Daniel for the initial code and Marcel for the fix. Open source is fun.

View source

A rough sketch of a semi-random selection of ideas stacked in order of importance. The biggest ideas, "upstream of everything", are at the bottom. The furthest upstream ideas we can never see. A better artist would have drawn this as an actual stream.

February 28, 2022 — There will always be truths upstream that we will never be able to see, that are far more important than anything we learn downstream. So devoting too much of your brain to rationality has diminishing returns, as at best your most scientific map of the universe will be perpetually vulnerable to irrelevance by a single missive from upstream.

Growing up I practiced Catholicism and think the practice was probably good for my mind. But as I practiced science and math and logic those growing networks in my brain would conflict with the established religious networks. After a while, in my brain, science vanquished religion.

But I've seen now the folly of having a brain without a strong spiritual section.

In science we observe things, write down many observations, work out simpler models, and use those to predict and invent. But everything we observe comes downstream to us from some source that we cannot observe, model, or predict.

It is trivially easy to imagine some missive that comes from upstream that would change everything. We have many great stories imagining these sorts of events: a message from aliens, a black cat, a dropped boom mic. Many ideas for what's upstream have been named and scholarized: solipsism, a procedurally generated universe, a multiverse, our reality is drugged, AGI, the Singularity.

And you can easily string these together to see how there will always be an "upstream of everything". Imagine our lifetime is an eventful one. First, AGI appears. As we're grappling with that, we make contact with aliens, then while we're having tea with aliens (who luckily are peaceful in this scenario) some anomaly pops up and we all deduce this is just a computer simulated multiverse. The biggest revelation ever will always be vulnerable to an ever bigger revelation. There will always be ideas "upstream of everything".

When you accept an upstream idea, you have to update a lot of downstream synapses. When you grok DNA, you have to add a lot of new mental modules or update existing networks to ensure they are compatible with how we know it works. You might have a lot of "well the thing I thought about B doesn't matter much anymore now given C". It takes a lot of mental work to rewire the brain, and requires some level of neuroplasticity.

So now, if you commit your full brain to science, you've got to keep yourself fully open to rewiring your brain as new evidence floats downstream. This might be a problem even if only high quality evidence and high quality theories floated by. But evidence is rarely so clear cut. And so you are constantly having to exert mental energy scanning for new true upstream ideas. And often ideas are promoted more for incentive than for accuracy. And you will make mistakes and rewire your brain to a theory only to realize it was wrong. Or you might be in the middle of one rewiring and then have to start another. It seems a recipe for mental tumult.

Maybe, if there were any chance at all of ultimate success, it would make sense to dedicate every last 1% of the brain to the search for truth. But there's zero chance of success. The next bend also has a next bend. Therefore science will never be able to see beyond the next bend.

And so I've come full circle to realizing the benefits of spirituality. Of not committing one's full brain to the search for truth, to science, to reason. To grow a strong garden of spiritual strength in the brain. To regularly acknowledge and appreciate the unknowable, to build a part of the mind that can remain stable and above the fray amidst a predictable march of disorder in the "rational" part.

Errata

  • Though I ultimately found the limits of rationality, I did enjoy the writings from the "Rationalist" communities like LessWrong and RationalWiki.
  • For me personally spirituality now means more Buddhism and mindfulness than Catholicism, but I have a new appreciation for Catholicism and all religions.
  • I am very intrigued by what happens in the brain when someone learns a new upstream idea that affects their thinking in a big way. Where in the neocortex (or other area) do these ideas live?
  • I may be overestimating how hard it is to rewire given a big new upstream idea. For example, you might have a dream where elephants can talk and you near instantly adjust and roll with it. I have a lot of neuroscience to learn.
  • Also related to neuroscience, I want to take a fresh look at differences in brains of those who cultivate spirituality and those who do not.
  • I started this essay a while ago originally planning to write about how I loved mind expanding "upstream of everything" ideas like those at the center of The Matrix or Three Body Problem. Among other things, these ideas have airs of scientific plausibility and they had a sort of anxiety-reducing effect: who cares how the meeting goes if we're all just in a simulation anyway? The neocortex could use these ideas to stop worrying. But then I realized that instead of cycling through an endless stream of plausible "what if" priors, it's a wiser strategy to go with spiritual practices refined by humans for centuries, where it's less about what specific idea is upstream of everything and more about acknowledging that there is something beyond the limits, making peace with that, maintaining a stable mind, and being part of a community.
  • I've found too much time thinking about upstream ideas leaves not enough time to attend to downstream details.
  • At one time I started collecting a list of all the upstream of everything ideas, like my tiny partial enumeration above where I mention The Matrix and Three Body Problem, and was thinking of the best way to catalog all of these ideas so one could grok them as fast as possible. Movies and books seem to communicate them well, but I also would be curious if there's a site out there that catalogs and explains them all concisely, perhaps using an xkcd comic book style.
  • I see myself fulfilling many common cliches (getting more religious as one gets older, et cetera). I also wonder if sometime I won't pick up on some big new scientific truth and also fulfill the cliche "science progresses one funeral at a time". Speaking of cliches, c'est la vie.
  • I realize now that this idea is thousands of years old.

View source

What if there is not just one part of your brain that can say "I", but many?

Introduction

February 18, 2022 — Which is more accurate: "I think, therefore I am", or "We think, therefore we are"? The latter predicts that inside the brain is not one "I", but instead multiple Brain Pilots, semi-independent neural networks capable of consciousness that pass command.

The Brain Pilots theory predicts multiple locations capable of supporting root level consciousness and that the seat of consciousness moves. The brain is a system of agents and some agents are capable of being Pilots—of driving root level consciousness.

Sometimes you go to bed one person and wake up someone else. The brain pilot swapped in the night. These swaps then continue subconsciously throughout the day.

The Brain Pilots theory is not about the exceptions, that some people with their corpus callosum cut develop two consciousnesses, or that some of the population have multiple personalities. Rather that multiple consciousnesses is the rule and a feature of how all human minds work.

I should note that the term "Brain Pilots Theory" does not come from the field. It's a term I started using to get to the essence of the big idea. I am sure there is a better term for it, and a more fully developed theory, and hopefully a more knowledgeable reader can point me to that. Until then, I'll stick to calling it the Brain Pilots Theory.

This is a theory of the mind that blows my mind. I stumbled into it while programming multi-agent simulations and thinking "wait, what if the mind is a multi-agent system"? I quickly found that a lot of neuroscientists have been going this way for decades and writing about it. My favorites so far being The Society of Mind (Minsky 1988), A Thousand Brains (Hawkins 2021), and LessWrong's collection on Subagents.

What are the odds that this theory is right? I am not in the field and have no clue yet (10%? .1%?). I do feel confident saying that if true, this seems like it would have dramatic implications for how we understand the brain, ourselves, other people, and society, not to mention how it would lead to new technologies for the brain.

Is this just Inside Out?

The 2015 film Inside Out gets across a core idea of the Brain Pilots theory—that our brains are vehicles for multiple agents and the one self is an oversimplification.

In the 2015 film Inside Out five brain pilots (Anger, Disgust, Joy, Fear, and Sadness) live inside the brain of a girl and can take turns piloting.

Inside Out is primarily a movie and not a scientific model, of course. To make it a better model we need to drop the personification of the agents. Instead of looking like tiny humans and being as capable as humans, in reality Brain Pilots would look like tangles of roots and globs of cells, and would likely have a very different and incomplete set of capabilities and behaviors. It's very important to keep in mind that the agents in your brain are very limited by themselves. It's why in your dream an elephant can start talking to you and your current brain pilot isn't taken aback: that current pilot might not have access to other agents that would detect the absurdity of the situation.

If you picture brain pilots not as personified mini-humans but some type of plant-like neuronal circuits, you get a pretty good model of this Brain Pilots theory.

Where are the pilots?

My working hypothesis is that pilots could be found in various parts of the brain. Perhaps you have Pilots in the Cerebrum, Pilots in the Thalamus, and so on. Perhaps a Pilot consists of a network that extends into multiple regions of the brain. Different pilots could be located on opposite sides of the brain or perhaps microns apart from each other.

What is a pilot exactly?

It seems the materials would be some collection of neurons, synapses, et cetera. Obviously I have my homework to do here.

How many pilots per brain?

It seems unlikely that an entity the size of a single cell or smaller could run a human. Rather, a network of some minimum size is probably required. Call the required materials MinPilotMaterials.

If MinPilotMaterials == BrainMaterials then there would be room for only 1 consciousness in 1 brain. Similarly, a pilot may not have a fixed min size but instead is programmed to grow to assume control of all relevant materials in the brain.

Alternatively, MinPilotMaterials could be a fraction of BrainMaterials. Perhaps 10%-50% of BrainMaterials, meaning there would be room for just a few pilots. Or perhaps a pilot needs 1% of BrainMaterials, and there could be 100 in a brain.

What practitioners in dissociative identity disorder call Identities might be brain pilots, and the average population per person is ~16, with some patients reporting over 100.

There are ~150,000 cortical columns, so perhaps there are that many Brain Pilots.

Perhaps I'm wrong that it takes a network of multiple cells, and a single neuron with many synapses could take charge, in which case there could be millions (or more) brain pilots per brain.

With ~150,000 cortical columns, 100 billion neurons, and as many as a quadrillion synapses, it seems highly likely to me that there is enough material in the human brain to support many brain pilots. Neuroscientists have not identified some small singular control room; rather they point to the "seat of consciousness" being roughly in the 10-20 billion neurons that make up the cerebral cortex. If one brain pilot could arise there, why not many?

How do brain pilots form?

They likely evolve like plants in a garden. It seems to me that the population of pilots in a brain probably follows a power law, where ~65% of your pilots are there by age 4, ~80% by age 20, and then changes get slower and slower over time. Pilots probably grow stronger when they make correct predictions.

How long do brain pilots live?

I'd imagine once an agent has evolved to be a pilot, it would probably stick around until death given the safe confines of the skull. It may be harder to get rid of an old pilot than it is to grow a new one (or that may change with age).

I sometimes visualize pilots as old trees in the brain.

Can you trigger a pilot change?

How can a pill one millionth the size of the brain cause it to change directions? Perhaps the pill changes the pilot?

As many have experienced, there are certain chemicals that if you ingest just a minuscule amount, millions of times smaller than your brain, your whole consciousness can change within the hour. Perhaps what is happening is a different pilot is taking over? Or perhaps a new one is being formed?

But it's not just chemicals that can swap pilots. You would have a HungerPilot that increasingly angles for control if deprived of food; a ThirstPilot angling to drink; a SleepPilot that makes her moves as the night gets late, and so on. Perhaps mindfulness is the practice of learning to detect which pilots are currently in control, which are vying for control, and perhaps achieving Enlightenment is being able to choose who is piloting. Perhaps one role of sleep is to ensure that no matter what there is at least one pilot rotation per day, to prevent any one pilot from becoming too powerful.

If I've gotten across one thing to you so far, it should be that I am a complete amateur in neuroscience and have a lot to learn before I can write well on the topic. So let me postpone the question of whether the theory is true and address the implications, to demonstrate why I think this is a valuable theory to investigate. As the saying goes: All models are wrong. Some are useful.

Some Implications if the Brain Pilots Theory is True

Let's assume the Brain Pilots Theory is true. Specifically, that there are multiple agents—networks of brainstuff—physically located in space, that are where consciousness happens. We could then explain some things in new ways.

Creativity

Perhaps creatives have a higher than average number of Brain Pilots and/or switch between them differently. There's a saying "if you want to go far, go together". Perhaps some creatives are able to go further than others because in a sense they aren't going alone—they have an above average population of internal pilots.

I wonder if the norm in life is to pretty rapidly pilot swap, and if "Flow State" would be when instead you are able to have the same pilot running the show for an extended period of time.

Attribution

The words "I" and "You" are both in the top 20 most frequently used English words. It makes sense to use those when speaking of the physical actions of the human being—"he walked over there. She said this." However, statements of the form "I think..." might not be accurate, as thoughts would be more accurately attributable to agents in the brain. "I think" would always only be speaking for part of the whole. We have some evidence in our language of an awareness of these multiple-pilots: phrases like "My heart is saying yes but my brain is saying no".

We also often categorize people as "bad" or "good". But that often serves as a bad model for predicting future behavior. Instead if you modeled a person as a collection of agents, you might find that it is not the person as a whole that you disapprove of, but certain of their agents (or perhaps it could be meta things, like their inability to form new agents, or too rapid agent switching).

Truth

If the Brain Pilots Theory is true, then it is almost a certainty that you'd have some agents that don't care about truth. So if you are an agent that does care about truth, it would be essential to be wary not only of lies and misdirection from external sources, but also of those from your internal neighbors. In the struggle for truth agents are the atomic unit, not a human.

One thing I like about the Brain Pilots theory is that it provides a way to explain discrepancies. Like, how can a person be Catholic and an evolutionary biologist? With the Brain Pilots Theory, it's easy to see how they might have two distinct pilots who somehow peacefully coexist and alternate control.

Consistency

Should your pilots be loyal to each other, or pursue only their agenda? It's easy for your AwakePilot to say "I'm sorry I was wrong this morning, that was my TiredPilot". IIRC contracts aren't necessarily enforceable if someone's UnderTheInfluencePilot signed. But if you made a claim while angry, should you then later defend that after you've calmed down, or attribute it to a different agent? If your SocialPilot committed to an event but then when the hour comes around your IntrovertedPilot is in charge, do you still go? Do some pilots have different moralities? How do you deal with that?

Mental Health

If the Brain Pilots theory of the mind is true, then you could imagine the main levers a human has to control their life would be to grow new pilots, prune undesired pilots, and perhaps most importantly have more conscious control over what pilot was currently in charge.

Similar to how we use multi-agent simulations to model epidemics, perhaps through brain imaging coupled with introspective therapy one might be able to build an agent map of all the brain pilots in someone's mind, and run experiments on that model to figure out more effective plans of attack.
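Purely as a toy sketch of what such an agent model might look like (every pilot name, drive, and number below is invented):

// Pilots have drives that build over time; the strongest drive takes the wheel,
// and taking the wheel satisfies (resets) that drive.
const pilots = [
  { name: "HungerPilot", drive: 0.2, growth: 0.05 },
  { name: "SleepPilot", drive: 0.1, growth: 0.03 },
  { name: "WorkPilot", drive: 0.5, growth: 0.01 },
];

for (let hour = 0; hour < 24; hour++) {
  pilots.forEach((p) => (p.drive += p.growth));
  const atTheWheel = pilots.reduce((a, b) => (a.drive > b.drive ? a : b));
  console.log(hour, atTheWheel.name);
  atTheWheel.drive = 0;
}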

If the Brain Pilots Model holds, I'd be curious whether most mental health difficulties stem from undesirable pilots, or from the higher level problem of pilot switching. Perhaps folks higher on the introverted or self-centered scales have high populations of active pilots, and are low in time for others because they are metaphorically herding cats in their head.

Quantified Self

Current wearables track markers like heart rate, heart rhythm, body temperature, movement, perspiration, blood sugar, sleep, and so on, and even often have ways to manually input things like mood. If the Brain Pilots Theory is a useful model, you'd imagine that someone could build a collection of named Pilots and then align those biometrics to which pilot was in control. Then instead of focusing on managing the behaviors, one might operate upstream and focus on maximizing the time your desired pilots were at the wheel.

Genius and Work Output

Do geniuses have more pilots? Or fewer? Are they able to build/destroy pilots faster? How would the MathPilots differ between a Princeton Math Professor and an average citizen?

Would productivity be more a product of having some exceptionally talented pilots, or the result of being able to stay with one pilot longer, or perhaps have a low population of bad pilots?

People

The real population of Earth could be 8 trillion

There are 1.4 billion cars in the world. Vehicle count is important, but more often we are concerned with how many agents are traveling in those vehicles, and that is 8 billion.

But if each human brain contains a population of brain pilots, then the Earth's population of agents would be far larger. If the average human has 10 brain pilots, then we are a planet with 80 billion agents. If the average is closer to 1,000 pilots per person, then there are 8 trillion consciousnesses around right now.

Outliers

Are people's lives most affected by their best agents, worst agents, average agent, median agent, inter-agent communication, agent switching strategies, agent awareness, or agent chemical milieu?

Conclusion

This post has so many questions, so few answers. It is one of those posts where I write about things I don't yet understand well. My brain-pilots brain pilot is not yet very advanced.


Notes

  • Cover image derived from BrainFacts.org's 3D-Brain.
  • Palm image made from Elizabeth Nixon's Palm Study.
  • Perhaps consciousness is the logging agent and there is only one consciousness. Perhaps the brain pilots drive the show, and the consciousness records the log, but the consciousness is not able to see which pilot is driving.

View source

December 15, 2021 — Both HTML and Markdown mix content with markup:

A link in HTML looks like <a href="hi.html">this</a>
A link in Markdown looks like [this](hi.html).

I needed an alternative where content is separate from markup. I made an experimental minilang I'm calling Aftertext.

A link in Aftertext looks like this.
link hi.html this

You write some text. After your text, you add your markup instructions with selectors to select the text to markup, one command per line. For example, this paragraph is written in Aftertext and the source code looks like:

You write some text. After your text, you add your markup instructions with selectors to select the text to markup, one command per line. For example, this paragraph is written in Aftertext and the source code looks like:
italics After your text
italics selectors

Here is a silly another example, with more markups.

Here is a silly another example, with more markups.
strikethrough a silly
italics more
bold with
underline markups
link https://try.scroll.pub/#scroll%0A%20aftertext%0A%20%20Here%20is%20another%20a%20richer%20example%2C%20showing%20more%20features.%0A%20%20strikethrough%20another%0A%20%20link%20oldhomepage.html%20Here%0A%20%20italics%20more%0A%20%20bold%20showing%0A%20%20underline%20features Here

The first implementation of Aftertext ships in the newest version of Scroll. You can also play with it here.

Why did I make this?

First I should explicitly state that markup languages like HTML and Markdown with embedded markup are extremely popular and I will always support those as well. Aftertext is an independent addition. The design of Scroll as a collection of composable grammar nodes makes that true for all additions.

With that disclaimer out of the way, I made Aftertext because I see two potential upsides of this kind of markup language. First is the orthogonality of text and markup for those that care about clean source. Second is a fun environment to evolve new markup tags.

Benefits of Keeping Text and Markup Separate

The most pressing need I had for Aftertext was importing blogs and books written by others into Scroll with the ability to postpone importing all markup. I import HTML blogs and books into Scroll for power reading. The source code with embedded markup is often messy. I don't always want to import the markup, but sometimes I do. Aftertext gives me a new trick where I can just copy the text, and add the markup later, if needed. Keeping text and markup separate is useful because sometimes readers don't want the markup.

It is likely a very small fraction of readers that would care about this, of course. But perhaps it would be a set of power users who could make good use of it.

Speaking of power users, Aftertext might also be useful for tool builders. Imagine you are building a collaborative editor. With Aftertext, adding a link, bolding some text, adding a footnote, all are simple line insertions. It seems like Aftertext might be a nice simple core pattern for collaborative editing tools.

Version control tools are often line oriented. When markup and content are on the same line it's not as easy to see which changes were content related and which were markup related. In Aftertext, each markup change corresponds to a single changed line. In the future, I could imagine using AI writing assistants to add more links and enhancements to my posts while keeping the history of content lines untouched.

Finally, I should mention that it seems like keeping the written text and markup separate might make sense because it often matches the actual order in which writing text and marking up text happens. Writing is a human activity that goes back a thousand generations. Adding links is something only the current generations have done. A pattern I often find myself doing is: write first; add links later. Aftertext mirrors that behavior.

A Petri dish for new markup ideas

Aftertext provides a scalable way to add new markup ideas.

Simple markups like bolds or italics aren't a big pain, and conventions like **bold** and *italics* used in languages like Markdown or Textile do a sufficient job. But even with those, after a certain number of rules it's hard to keep track of which characters do what. You also have to worry about escaping rules. With Aftertext adding new markups does not increase the cognitive load on the writer.

When you get to more advanced markup ideas, Aftertext gives each markup node its own scope for advanced functionality while keeping the text text.

I'm particularly interested in exploring new ways to do footnotes, sparklines, definitions, highlights and comments. Basic Aftertext might not be compelling on its own, but maybe it will be a useful tool for evolving a new "killer markup".

Adding a new markup command is just a few lines of code.
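For a feel of why, here is a toy rendering pass in Javascript (illustrative only, not Scroll's actual implementation), using exact-match, first-hit selectors like the current implementation. Adding a new markup is one new entry in the map.

// Each directive line is "command selector"; wrap the first exact match in a tag.
const tags = { bold: "b", italics: "i", underline: "u", strikethrough: "s" };

function renderAftertext(text, directives) {
  for (const line of directives) {
    const [command, ...rest] = line.split(" ");
    const selector = rest.join(" ");
    const tag = tags[command];
    if (tag) text = text.replace(selector, `<${tag}>${selector}</${tag}>`);
  }
  return text;
}

console.log(renderAftertext("Here is another example.", ["italics another"]));
// Here is <i>another</i> example.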

What are the downsides of Aftertext?

There are downsides in using Aftertext that you don't have with paired delimiter markups.

There is the issue of breakages when editing Aftertext. The nice thing about <b>bold</b> is that if you change the text between the tags you don't break formatting. When editing Aftertext by hand, changing formatted text breaks the formatting, and you have to update those lines separately. I hit this a lot. Surprisingly it hasn't bothered me. Not yet, at least. I need to wait and see how it feels in a few months.

A similar issue to the breakage problem is verbosity. Embedded markup adds a constant number of bytes per tag but with Aftertext the bytes increase linearly with N, the size of the span you are marking up. Again, I haven't found this to be a problem yet. Perhaps the downside is outweighed by the helpful nudge toward brevity. Or maybe I just haven't used it enough yet to be annoyed.

Another problem of Aftertext is when markup is semantic and not just an augmentation. "*I* did not say that" is different from "I did not say *that*". Without embedded markup in these situations meaning could be lost.

What are the problems with the initial implementation?

My first implementation leaves a lot of decisions still to make. Right now Aftertext is only usable in aftertext nodes. That is a footgun. The current implementation uses exact match string selectors that only format the first hit. Another footgun. I've already hit both of those. And at least two or three more.

Is this a bad idea?

You might make the argument that not just the implementation, but the idea itself should be abandoned.

The most likely reason why this is a bad idea is that it simply doesn't matter whether it's a good idea or not. You could argue that improvements to markup syntax are inconsequential. That even if it were a 2x better way to markup text for some use cases, AIs will change writing and code in so many bigger ways that it's not even worth thinking about clean source anymore. This could very well be true (luckily it didn't take many hours to build).

Or perhaps it is a bad idea because although it may be mildly useful initially, it is actually an anti-pattern and instead of scaling well, will lead to a Wild West of complex colliding markups. I generally don't have the mental capacity to think too many moves ahead. So I fallback to inching my way forward with code and relying on the feedback of others smarter than me to warn of unforeseen obstacles.

Summary and Closing Thoughts

Markups on text may increase monotonically. With current patterns that means source will get messier and more complex. Aftertext is an alternative way to markup text which can scale while keeping source clean. Aftertext might be a good backend format for WYSIWYG GUIs. Though most humans write in WYSIWYG GUIs, Aftertext is designed for the small subset who prefer formats that are also maintainable by hand.

Related Work

Thank you to Kartik, Shalabh, Mariano, Joe and rau for pointing me to related work. I am certain there are similar efforts I have missed and am grateful for anyone who points those out to me via comments or email.

In 1997 Ted Nelson proposed parallel markup.

The text and the markup are treated as separate parallel members, presumably (but not necessarily) in different files. @ Ted Nelson

When searching for "parallel markup implementation" I also came across a Wikipedia page titled Overlapping markup, which contains a number of related points.

A couple of folks mentioned similarities to troff directives. In a sense Aftertext is reimagining troff/groff 50 years later, when characters/bytes aren't so expensive anymore.

Brad Templeton describes two inventions, Proletext and OOB, to solve what he termed "Out of band encoding for HTML". They seem esolangy now but were actually cleverly useful back in the day, when bytes and keystrokes were more expensive.

The Codex project has a related idea called standoff properties. As I understand it, the Codex version uses character indexes for selectors which requires tooling to be practical and rules out hand editing.

AtJSON is a similar project and has clear documentation. AtJSON has a useful collection of markups evolved to support a large corpus of documents at CondeNast. AtJSON uses character indexes for selectors so hand editing is not practical.
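To illustrate why, here is a rough sketch of an offset-based annotation (hypothetical, not AtJSON's or Codex's exact schema). Insert a single character before the span and every offset after it silently points at the wrong characters, which is what makes hand editing impractical:

    // A hypothetical offset-based annotation in the spirit of standoff markup:
    const text = "It is a way to add markup to text."
    const annotation = { type: "bold", start: 19, end: 25 } // covers "markup"
    console.log(text.slice(annotation.start, annotation.end)) // => "markup"
    // Change the prose to "It is a good way..." and the offsets now land mid-word.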

Why now?

Issues with embedded markup and alternative solutions have been discussed for decades. It's a safe bet that embedded markup is superior, since it so thoroughly dominates usage. Nevertheless, as I mentioned in my use case, there is a time and a place for alternatives. Aftertext would have been simple enough to understand decades ago and to use with pen and paper. So why hasn't Aftertext been tried before?

Verbosity is certainly a reason. Bytes, bandwidth, and keystrokes (pre-autocomplete) used to be more expensive, so Aftertext would have been inefficient. It probably was worthwhile to have a learning curve and force users to memorize cryptic acronyms. It paid off to minimize keystrokes.

I may also be overvaluing the importance of universal parsability. I value formats that are easy to maintain by hand but also easy to write parsers for. Before GUIs, collaborative VCSs, IDEs, or AIs, there wasn't as much value to be gained by doing this. But even today I may be overvaluing hand editability. This seems to be the era of AIs and of every app editing JSON documents on the backend. I may be a dinosaur.

Finally, I may be overvaluing the clean scopes Aftertext gets from the underlying Tree Notation. Aftertext works because each text block gets its own scope for markup directives, each markup directive gets its own scope, and you don't have to worry about matching brackets. So maybe Aftertext just hasn't been tried because I overvalue that trick.

Notes

  • There's no reason it has to be "Aftertext". The markups could come before the text too.
  • Thanks to justinpombrio for pointing out how semantic embedded markup is different from augmenting markup.
  • Another downside of the current implementation, pointed out by David Chisnall, is the lack of a mechanism for global markup directives. I'd expect Aftertext to evolve in that direction if it proves its worth in the local scope.

A screenshot of Aftertext on the left and the rendered HTML on the right.


October 15, 2021 — I'm always trying to improve my writing. I want my writing to be more meaningful, clearer, more memorable, and shorter. I would also like to write faster.

That's a tall order and there aren't many shortcuts. But I think there is one simple shortcut that I stumbled upon this past year:

Set your editor's column width very low

36 characters for me, YMMV. This simple mechanic has perhaps doubled my writing speed and quality.
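In Sublime Text, for example (which I mention in the notes below), this is just two settings; other editors have equivalents:

    {
        "word_wrap": true,
        "wrap_width": 36
    }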

At my current font-size, my laptop screen could easily support 180 characters across. But if my words spread across the full screen, I write slower and produce worse content.

Another way to frame this is that my writing got worse as my screens got wider, and I only recently noticed the correlation.

How does column width affect writing speed?

When I am writing I am mostly reviewing. I type a word once. But my eyes see it fifty times. Maybe great writers can edit more in their heads. With my limited mental capabilities editing happens on the page. I do a little bit of writing; a lot of reviewing and deleting. So the time I spend writing is dominated by the time I spend reviewing. Reviewing is reading. To write faster, I need to read faster.

Humans read thinner columns faster. Perhaps this isn't the case for all people—I'm not an expert on what the full distribution looks like. But my claim is backed by a big dataset. I have my trusty copy of "The New York Times: The Complete Front Pages from 1851-2009". For over 150 years the editors at the New York Times, the most widely read newspaper on the planet, decided on thin columns. If fatter columns were more readable we would have known by now.

Thinner columns help you read faster. Writing speed is dominated by reading speed. If you read faster, you write faster.

How does column width affect writing quality?

Every word in a great piece of writing survived a brutal game of natural selection. Every review by the author was a chance for each word to be eliminated. The quality of the surviving words is a function of how many times they were reviewed. If the author reviews their writing more, then the words that survive should be fitter.

But moving your eyes takes work. It might not seem like a lot to the amateur but may make a huge difference toward the extremes. A great athlete practices their mechanics. They figure out how to get maximal output for minimal exertion. They "let the racket do the work". If you are moving your eyes more than you have to, you are wasting energy and will not have the stamina to review your writing enough. So thinner columns leave you with more energy for more editing passes. More editing passes improves quality.

If column width has such a significant impact on writing speed, why have I not seen this stressed more?

I don't remember ever being told to use thinner columns when writing. In programming we often cap line length, but this is generally pitched for the benefit of future readers, not to help the authors at write time.

I have long overlooked the benefit of thin columns at write time. How could I have overlooked this? Two obvious explanations come to mind.

First, I could be wrong. Maybe this is not a general rule. I have not yet done much research. Heck, I haven't even done careful examination of my own data. I've been writing with narrow columns for about 10 months. It feels impactful, but I could be overestimating its impact on my own writing speed.

Second, I could be ignorant. Maybe this is already talked about plenty. I would not be surprised if a professional writer sees this and says "duh". Maybe it's taught in some basic "writing mechanics 101" introductory course. Maybe if I got my MFA or went to journalism school or worked at a newspaper this is a basic thing. Maybe that's why journalists carry those thin notepads.

But let's say my hunches are correct: that thin columns do help you write faster, and that this is not mentioned much. If I'm correct on both counts, then a clear explanation is that this is simply a new hazard created by new technology. My generation is the first to have access to big screens, so in the past writing with wide columns wasn't a mistake people made, because it simply wasn't possible. An alternative title I considered was "Write as fast as your grandparents by using the line length they used".

Jets are great, but beware jet lag when traveling. Big screens are great, but beware eye lag when writing. Try thin columns.

Notes

  • I wonder if sometimes over the years when I felt "in the zone" while writing, it may have been partly a result of coincidentally using a narrow column width.
  • I am a middling writer, so don't forget to weight this advice appropriately!
  • The physical dimensions of my writing area on screen are about 2.5 out of 11 inches. I've skimmed some studies that suggest 4 inches is the optimum for most people.
  • Some writing boxes never wrap, like Gmail. So to keep my columns thin I would manually insert line breaks. Manual line breaks were fragile for two reasons. First, when I revised the text I'd also have to revise the line breaks. Second, I coded the line breaks at write time with certain font and column settings. At read time those settings might differ. Multiple friends commented that I now wrote in haikus. I did consider for a moment that a reputation as someone who wrote only in haikus might be advantageous, but I ruled that out and stopped manual line breaks. Now I often write in Sublime Text and copy/paste into the target app.
  • This may be an inconsequential tip: how I went from a 0.1x writer to a 0.2x writer.
  • Average typing speed is approximately one word per second.
  • How does eye exertion compare for horizontal vs. vertical movement?
  • Survey some great editors/writers/journalists: do they write with narrow columns?
  • Has the frequency of this advice appearing gone up as screens got wider?
  • Consider vertical languages like Japanese


August 11, 2021 — In this essay I'm going to talk about a design pattern for writing applications that requires effectively no extra work and more than triples the power of your code. It's one of the biggest wins I've found in programming and I don't think this pattern is emphasized enough. The tl;dr is this:

When building applications, distinguish methods that will be called by the user.

The Missing Access Modifier

All Object Oriented Programmers are familiar with the concept of PrivateMethods and PublicMethods. PrivateMethods are functions called by programmers inside your class; PublicMethods are functions called by programmers outside your class. Private and Public (as well as Protected) are commonly called AccessModifiers and are ubiquitous in software engineering.

A UserMethod is a class method called by the user through a non-programmatic interface

UserMethods are all the entry points a user has to interact with your application. All interactions users have with your application can be represented by a sequence of calls to UserMethods.

An Example

Let's say I am writing a GUI email client application. I probably have an EmailClient class that can send an email, and then a "Send" Button. Using the UserMethod pattern I might have a private method perform the actual email sending work, and then I'd have a small UserMethod that the click on the button would call:

    private _sendEmail():
      // ...

    user sendEmailCommand(...):
      // ...
      this._sendEmail()

That's it. In my pseudocode I used a "user" keyword to flag the UserMethod, but since most languages don't have such a keyword, you can use decorators or an identifier convention that you reflect on.

Advice: When building applications, distinguish your UserMethods

If you are just building a library used by other programmers programmatically, then the public/private/protected access modifiers are likely sufficient. In those situations, your UserMethods are identical to your PublicMethods. But if there is a user facing component, some wisdom:

I have never seen a single application with a user facing component, whether it be a Graphical Interface, Command Line Interface, Voice Interface, et cetera, that doesn't benefit significantly from following the UserMethod Pattern.

Implementation Costs

The UserMethod pattern costs close to zero. All you need to do is add a single token or bit to each UserMethod. It might cost less than zero, because adding these single flags can help you reduce cognitive load and build your app faster than if you didn't conceptualize things in this way.

Off the top of my head, I can't think of a language that has a built in primitive for it (please send an email or submit a PR with them, as I'm sure there are many), but it's easy to add by convention.

If your language supports decorators and you like them, you can create a decorator to tag your UserMethods. Without decorators, it's easy to do with a simple convention in any language with reflection. For example, sometimes in plain Javascript I will follow the convention of suffixing UserMethods with something like "UserMethod". (Note: In practice I use the suffix "Command" rather than "UserMethod", for aesthetics, but in this essay will stick to calling them the latter).
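Here is a minimal sketch of that convention in TypeScript; the class and helper names are hypothetical, just to show that a suffix plus reflection is all it takes:

    // Any method whose name ends in "Command" is a UserMethod.
    class EmailClient {
      private _sendEmail(to: string) {
        console.log(`sending email to ${to}`) // stand-in for the real work
      }

      // The UserMethod: a thin wrapper the Send button (or a CLI) calls.
      sendEmailCommand(to: string) {
        this._sendEmail(to)
      }
    }

    // Reflect over the prototype and keep the methods matching the convention.
    const listUserMethods = (instance: object): string[] =>
      Object.getOwnPropertyNames(Object.getPrototypeOf(instance)).filter((name) =>
        name.endsWith("Command")
      )

    console.log(listUserMethods(new EmailClient())) // => ["sendEmailCommand"]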

Benefits

By simply adding a flag to each UserMethod you've now prepped your application to be used in lots of new ways.

Benefit: New Interfaces

By distinguishing my UserMethods, I've now done 80% of the legwork needed to support alternative interfaces: command palettes, CLIs, keyboard shortcut interfaces, voice interfaces, context menus, et cetera. For example, by adding UserMethods to a component, I can then reflect and auto-generate the context menu for that component:

I've now also got the bulk of a CLI. I just take the user's first argument and see if there's a UserMethod with that name to call. The help screen in the CLI below is generated by iterating over the UserMethods:

For a command palette, you can reflect on your UserMethods and provide the user with auto-complete or a drop down list of available commands.
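Continuing the sketch above (same hypothetical EmailClient and listUserMethods helper), the CLI dispatch and its auto-generated help screen amount to a few lines:

    // Dispatch the first CLI argument to the matching UserMethod, or print help.
    const runCli = (app: any, argv: string[]) => {
      const [commandName, ...args] = argv
      const method = app[`${commandName}Command`]
      if (typeof method === "function") method.apply(app, args)
      else {
        console.log("Available commands:")
        listUserMethods(app).forEach((name) => console.log("  " + name.replace(/Command$/, "")))
      }
    }

    // Usage under Node: runCli(new EmailClient(), process.argv.slice(2))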

With just a tiny bit of extra work (a single flag to distinguish UserMethods from PublicMethods, plus a little glue for each interface), you multiply the power of your application. The ROI on this pattern is extraordinary. It really is a rare gem. You do not see this kind of return often.

Benefit: Scriptability

You've also now done the bulk of the work to make a high level scriptable language for your application. You've identified the core methods and a script can be as simple as a text sequence listing the methods to call, along with any user inputs. Your UserMethods are a DSL for your application.

Benefit: Scripted Regression Testing

Your new UserMethod DSL can be very helpful for writing regression tests for situations a user ran into. A user's entire workflow can now be thought of as a sequence of UserMethod calls. You can log those and get automated repro steps. Or if logs are not available, you can listen to their case report and likely transcribe it into your UserMethod DSL. For example, below is a regression test to verify that a "Did You Mean" message appears after a sequence of user commands.
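A sketch of what such a script could look like, again reusing the hypothetical EmailClient from above (the one-command-per-line format here is invented for illustration):

    // Replay a recorded sequence of user commands: one "command arg..." per line.
    const runScript = (app: any, script: string) =>
      script
        .trim()
        .split("\n")
        .map((line) => line.trim().split(" "))
        .forEach(([command, ...args]) => app[`${command}Command`](...args))

    // A regression test is then just a replayable transcript of UserMethod calls:
    runScript(new EmailClient(), "sendEmail someone@example.com")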

Benefit: As a Design Aide

When ideating, it can be helpful to ask "what UserMethod(s) are we missing"?

When editing, it is helpful to scan your entire UserMethod list and prune the commands that aren't popular or aren't needed, along with any resulting dead code.

Benefit: Rapid Prototyping

Getting GUIs right can be challenging and time consuming. There are severe space constraints, and changes can have significant ripple effects. You often do a lot of work to nail the visuals for a new component, which then sees little usage in the wild. It can be helpful to build the UserMethod first, expose it in a Command Palette or via a Keyboard Shortcut Interface, and only design it into the GUI if it proves to be useful. I guess if you wanted to be extremely cost conscious you could add UserMethods that simply alert the user with "Coming Soon" before you even decide to implement them.

Benefit: Documentation

I find it helpful when reading application code to pay special attention to UserMethods. After all, these functions are why the application exists in the first place. That little extra flag provides a strong signal to the reader that these are key paths in an application.

Benefit: Analytics

You can easily add analytics to your whole application once you've tagged your UserMethods. In the past I've done it simply by adding a single line of code to a UserMethod decorator.
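For example, with TypeScript's legacy experimentalDecorators enabled, that could look roughly like this (a hypothetical decorator, not from any particular library):

    // A decorator that tags a UserMethod and logs an analytics event on every call.
    function user(target: object, name: string, descriptor: PropertyDescriptor) {
      const original = descriptor.value
      descriptor.value = function (...args: unknown[]) {
        console.log(`analytics: ${name}`) // the single added line
        return original.apply(this, args)
      }
    }

    class Notepad {
      @user
      saveCommand() {
        /* ... */
      }
    }

    new Notepad().saveCommand() // logs "analytics: saveCommand"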

Objections

Is this an original idea?

Heck no. I picked up this pattern years ago, probably from colleagues, or books, or by reading others' code. I forget exactly how many times I've read about it, and under what names. I'm sure there are thirty-two existing names for this pattern. I'm sure 9 of those even have Wikipedia articles. But this pattern is so magical, so so so helpful, I do not think I will be wasting anyone's time by bringing it up again in my own terms.

I've tried a lot of things, like having Command classes or Application classes, and I've found the concept of function-level UserMethods to be a killer pattern in my day-to-day work. You can always graduate to finer-grained separation later.

All that being said, I'm sure someone has written a much better piece that would jibe with my experience, and I would appreciate links to all related ideas. I'm always open to Pull Requests (or emails)!

Shouldn't this level of abstraction be done at the class level and not method level?

Isn't it better instead to have an "Application" class, where all public methods are considered to be UserMethods? I won't argue against that. However, it's not always clear where to draw the lines, especially in the early days of a project, and it's much easier to build such classes later if you've clearly delineated your UserMethods along the way.

Aren't UserMethods just a subset of PublicMethods?

Yes. But they are a special category of PublicMethod and it's a distinction worth making. You want all your UserMethods available programmatically like the rest of your PublicMethods (for example, when writing tests), but you wouldn't want to show your users all PublicMethods in something like a Command Palette.


May 22, 2021 — In this video Dmitry Puchkov interviews Alexandra Elbakyan. I do not speak Russian but had it translated. This is a first draft; the translation needs a lot of work, but perhaps it can be skimmed for interesting quotes. If you have a link to a better transcript, or can improve this one, pull requests are welcome (my whole site is public domain, and the source is on GitHub).

D: I salute you profoundly. Alexandra, good afternoon.
A: Hello.
D: Please introduce yourself.
A: My name is Alexandra Elbakyan. I am known as the creator of a pirate website.
D: Oh, we have never had a pirate like this before.
A: Yeah.
D: What do you pirate?
A: Scientific articles. Well, generally speaking, when people talk about pirate websites, they usually mean pirated movies or pirated music, but very rarely do they mention that there are also a lot of websites offering free scientific literature that everyone can read. There are plenty of those pirate sites too, and one of them is Sci-Hub, which I created.
D: And what is in there?
A: Well, there are 85 million scientific papers, almost all of them in English. English is now the international language of science. And if you look at the popularity of the site, it has about half a million unique visitors every day.
D: And not the dumbest visitors, I take it.
A: Yes, they are mostly scientists and students.
D: Technically, how is this organized? Is someone sitting in a basement somewhere scanning scientific journals and putting them up, or do they come to you some other way?
A: Well, technically, nothing is being scanned. Yes, there used to be a lot of pirate book websites that worked like that: users registered and uploaded materials, often books they had scanned themselves; that was especially widespread a long time ago. Now, of course, everything is in electronic form, so pirated scientific literature, the most downloaded kind, is already digitized; it circulates in its original digital form.
D: So it's as if someone bought it and then started giving it away, I guess.
A: Yeah. Well, if you are talking specifically about Sci-Hub, it's a little more complicated: it connects to libraries at Western universities, downloads what's in those libraries, and puts it in its own database. Just like that.
D: And the libraries are free?
A: If a library is subscribed to some scientific journal, then that journal is available for free in that library. But if you try to read it on the publisher's website, you have to pay for each article, for example 30 to 40 dollars and so on, and that's a lot of money, so the need for such a website is very high. You can go to this website and read those 30-to-40-dollar articles for free.
D: Let's spell it out so there's no confusion. For example, the University of Massachusetts is subscribed to some journal; if I go to the library in Massachusetts, can I read it there for free?
A: Well, yeah.
D: The same articles the publisher's website sells for $30 apiece.
A: Well, you can usually only use the library at the University of Massachusetts if you're an employee or a student of that university. You need access credentials. The point of Sci-Hub is that its software takes those credentials, of which there are a few hundred thousand, and automatically downloads everything from the libraries.
D: And what does the law say about that? In our world of clean money, there must be some kind of law that says you can't do that.
A: Of course, the site has been repeatedly sued in various courts in different countries: in the United States, in France, in Austria, in Italy, in Russia, in the United Kingdom, and elsewhere. In the United States, for example, the site's operation was officially banned by a court, though the truth is a lot of people there still use it. In other countries access to the site is blocked at the ISP level, in Russia by Roskomnadzor for example, but it still continues to work. Something like that. I basically want to prove that it's all legal, because what really ought to be illegal is important scientific literature being available only at such enormous prices. It turns out scientific knowledge has become available only to certain elites, and we would like it to be available to all people.
D: It's basically the system. I'm not very knowledgeable, but it seems to me that all textbooks cost a hell of a lot of money, all the specialized literature costs a hell of a lot of money, and everyone always explains it by the exceptionally small print runs and the printing on good paper with graphics, illustrations, pictures; it's expensive, and that's supposedly why. But if it's thousands of dollars for one article, that's crazy.
A: You know, how big are these articles, basically? They can be anywhere from five to 40 pages; that's the size of a scientific article. In principle, this problem is so acute that it has started to hinder the development of science itself. Access to the literature, that is.
D: With the advent of the internet, I think the whole concept of copyright has gone out the window. I think you just have to come at it from the other side: if the book you printed on really good paper with color illustrations and such is that expensive, then doing it all electronically is somehow much cheaper, and the question, to my illiterate eye, comes back in a different way: if the paper one costs $30, the electronic one should probably be 30 cents.
A: That's the thing: it's the electronic article that costs $30; that's what I'm talking about. Yes, in principle they've now made it so you can rent an article: one day of access, say, costs 5 dollars, and if you download it forever, it's 30 to 40 dollars; that's true. That's why a lot of people don't have access to the literature they need right now: scientists, students, people doing professional science. And it's not just a problem in some poor countries; it's a problem everywhere, including abroad. In the United States this problem has been discussed for a long time, probably since the '90s, and they have tried to solve it.
D: And how did they solve it?
A: They solved it by, for example, creating special open access journals, which work on a model where the cost of publishing the paper is covered up front by the author, as a rule out of a grant. That is, scientists do some kind of work, they have a grant, and part of that grant goes to the journal, which uses the money to post the article, and then that article is available for everyone to read for free. They came up with this model when the internet was just getting started, in the '90s. There were a lot of idealistic scientists in the West then; there was this dream that scientific journals and scientific publishing houses simply wouldn't be needed, that scientists would start posting all their work themselves and it would be free to read. But despite the fact that websites like arXiv.org appeared, no global change came of it.
D: Why?
A: Well, that's because, you know, when papers are published in some famous journal, the scientist gets recognition. If you just put the work on your own website, anybody can do that; whereas if it's published somewhere, it has passed some kind of filter, and so the article will count toward your career. That's how it is.
D: On the one hand, it makes sense, of course. There must be some kind of, I don't know, scientific board; an editorial board, absolutely. But again, if you're making money from this, and we have capitalism, then you can make money with a different model, so to speak. Say you have a print run of 5,000 books and 500 electronic copies; if you sell the paper one for $30 and the electronic one for 30 cents, the online one can reach far more people, and you can make more money online. What's the point of holding out until it breaks? I remember that up until relatively recently, somewhere in the early 2000s, Stephen King, the famous writer, decided to publish his book himself, since he was getting fed up with the publishers. He thought of all this a long time ago, but after a while Stephen King gave up, because at that time the payment systems weren't worked out, and even such a mega-popular author couldn't make it work. And now, for example, a lot of authors earn much more selling e-books than paper books, exactly much more, and a lot of authors don't sell paper books at all, because publishing online is far more profitable. So what's stopping them?
A: Well, a paper book is, in principle, nice purely as an object: beautiful pages and so forth. But what is the value of a paper book right now?
D: (🕰10:00) For actual work they're inconvenient, of course. Scribbling something in with a fountain pen, copying things out, is nonsense, and for searching anything they're totally unusable. So why don't they want to do it?
A: Why don't science publishers want to sell everything cheaper?
D: Yeah.
A: Well, that's probably more a question for them, but if you look at the history of this issue: through most of the twentieth century, scientific journals were published by scientific societies, mostly on a non-profit basis. Toward the end of the twentieth century they started to be bought up outright by large commercial scientific publishers, and now there's a kind of oligopoly: a few big publishers, like Elsevier, Springer, Wiley, and others, own almost all of the scientific communication between scientists. And after they became the owners of these journals, they started raising prices very sharply; on the graphs, the price increases for scientific journals ran several times ahead of inflation.
D: I'm sorry to interrupt; maybe this is some kind of political bullshit. Say the United States government is categorically not interested in having Chinese universities, Indian universities, I don't know, Malaysian ones, take this for free and use it. As in: you'll end up with scientists and we won't have enough, so we'll jack up the price and cut you off that way.
A: Well, that might actually be the case, because again, look at the history of how this open science movement developed. When it became obvious there was this problem of high prices, and not only in China; it was written that even the richest universities in the richest countries had serious trouble buying journals. A single journal subscription can cost a few thousand dollars a year, and there are many thousands of these journals. So by the early 2000s a movement arose in Western science for open access, or open science, which stands for making the entire exchange of scientific information between scientists completely free, without any economic barriers. But if you look at when the term open science itself first appears, it's somewhere in the '80s, and there are articles from then that talk specifically about the commercialization of science, and some that say the Pentagon was forbidding universities in the U.S. from freely publishing things and freely holding conferences. Scientists argued that everything should be open, and that this doesn't really affect the security of the country. As for the Soviet Union, which still existed then: the Soviet Union would steal it sooner or later anyway, and if you shut down the exchange of scientific information like that, it would just hurt science itself; science would not be able to develop quickly.
D: Science can't exist apart from politics. Take the famous Manhattan Project, when the Americans were building the atomic bomb and the Bolsheviks allegedly stole it; as it turns out, they didn't really steal anything. Those who created the Manhattan Project and worked on it, being intellectually gifted people who knew what kind of degenerates were at the helm of the United States, couldn't let something like that remain in one country's hands. They purposefully leaked it to the Bolsheviks. And they weren't leaking secrets so much as explaining which directions not to waste work on, that this had already been tried, that you have to dig here. So who stole from whom is itself a suspicious question.
A: Well, the American PR machine is generally very powerful, and that's why, of course, they present any developments either as their own or as something stolen from them. In my opinion.
D: That is, their economic well-being rests solely on military power, and the military rests on science; the defense of the country is based on science. Naturally, they will not allow it to stay open: if you give it to everyone, you destroy all your competitive advantages, and where will the money come from then? So this political-bullshit theory holds up fine.
A: It's just that in publications from the '80s you can find echoes of this conflict between scientists in the United States and the Pentagon: the Pentagon was trying to forbid things, and the scientists objected that if everything were secret, science simply wouldn't develop quickly; that science rests on common ownership of the results of scientific knowledge, and on free communication. And in those branches of science where communication and the exchange of results are the most rapid and free, development is much faster.
D: Well, it depends on the science. If it's, I don't know, Shakespeare's work or Homer's, it's understandable that no one cares. But if it's about war and advanced technology, there are bans, for example, on selling technology to the Soviet Union, like the Jackson amendment: "but you can't". I remember there was wild hysteria when Toshiba sold some milling machines and the Bolsheviks immediately started turning special propellers for submarines. The submarines could no longer be heard in the ocean, and poor Toshiba was pilloried on every Voice of America broadcast.
A: I remember a company like that.
D: They were choking over the fact that the Bolsheviks had been sold something they weren't supposed to be sold. And if you put the results of scientific research out in the open like this, the insidious Chinese Bolsheviks will look at it and start making the chips themselves, and with China's money they'll start building some kind of powerful computing machines and immediately outrun the United States. They'll start simulating nuclear explosions, and then the U.S. is nowhere, and the money gets taken away from everyone. Forbid everything; it must not be allowed to be open. I suspect scientists are some kind of crypto-Bolsheviks: they think there are no states, no contradictions between states; let's give it all away, so none of that happens.
A: The point is that collectivism and communism are kind of at the core of science.
D: Actually, all the scientists there are leftists.
A: Well, basically the theory is that after the Soviet Union collapsed, the Pentagon probably lost the ability to insist that everything be kept secret so the Soviets wouldn't get it, and probably decided to put up an economic barrier instead. Maybe that's the version, or maybe it's just the unprecedented greed of scientific publishing. Of course they simply want to make more profit if they can, but this is the kind of result it leads to.
D: I used to be deep into computer games, and it's the same there as with movies and music. There are a number of citizens in the West who, just like these scientists, think all information should be distributed for free and be available to everyone. So the creators of a game build their game, working for two years for monstrous money and all that. They bring the disc to the factory and say it needs to be printed. Then some guys stick it in a computer, make an image, so to speak, and before the discs are even out of the factory they put it all on websites for free. They do it for free; they make no profit from it. Then the publishers start screaming: look at all the Russian pirates, lock them up. Starting over here: your Russians printed what was stolen over there and are selling it; that's the point. Well, you don't sell other people's goods, do you?
A: No. If we're talking about the difference between, say, the computer industry and science, the thing about science is that the authors don't get any money at all from the sale of these articles. None. That's why, in the same United States, this topic caused such outrage: the taxpayers are funding this science. The taxpayers pay the money, and the government uses that money to give grants to scientists to do research. The grants are big; the scientists do the research, and based on the results they write scientific articles, send them to a scientific publisher, and the publisher locks them up and puts a high price tag on them. And that's it: neither the citizens of the country nor the authors themselves can get access to these scientific articles.
D: Which were made with their taxes.
A: Yes.
D: And this is outrageous.
A: Well, yes. And if you look at the profitability of these scientific publishers, it surpasses Google, BMW, and our oil and banking industries.
D: Not bad.
A: Yeah, surpassing the profits of those corporations.
D: How is it that you got the idea to take all this on?
A: Well, how can I put it? It goes back to when I was a student myself (🕰20:00) at the university, finishing a computer science degree with a specialization in security.
D: Here you go. You are a hacker.
A: I had such a dream when I was a kid; maybe I thought about becoming a hacker or something like that, it was very fashionable back then, yeah. When I was graduating from university I didn't know anything about it yet, and I wanted to do my degree work on neuro-computer interfaces; those are the kinds of projects whose most famous descendant is Elon Musk's brain chip. There's a lot of talk about it now, but it's basically a pretty old theme. Back in 2003 there was this scientist, Theodore Berger, who was developing a hippocampal implant; for the brain it's kind of like artificial memory, something like that. I don't really know how it ended up, whether it worked out, but there was a lot of writing about it. At the time I graduated, around 2009, they were writing more about neuro-computer interfaces where you put some kind of cap on your head and could use your mind to control the mouse cursor on the screen.
D: And does that work?
A: Well, of course, in reality it's all quite complicated and a lot of work. For handicapped people it might be useful, because they don't have any other way, but for a healthy person it's easier to do things with their hands. Still, the technology is very interesting, and at the time I wanted to look for something in that direction for my thesis. My hypothesis was about entering a password with your mind, that is, using a fingerprint of the brain as a password. So I started searching the internet for information on this, and all of it was in these expensive scientific journals. I remember being very confused: I had basically spent my whole time at university using various pirate sites to download science books, so I assumed these science journals could also be downloaded somewhere for free; on the internet, everything's already out there. I looked on torrents and elsewhere, and there was nothing anywhere. I thought there must be some program, maybe, or some website where scientific journals could be downloaded for free. That was around 2009. Two years later, in 2011, I started communicating on online scientific forums and became a frequent visitor of a molecular biology forum. That forum was Russian-speaking, and there were both scientists from Russia and those who had gone abroad.
D: Soviet kind of people.
A: Yes, Soviet. And there were a lot of people there with this kind of problem: they couldn't get access to some journals. For them there was a special section called "full text"; by the way, it's still there. A person would go in and post a request, like: can you help me download this article? And, for example, some colleague who had gone abroad would see it and send it to the person, if he had access, of course. Because even at Western universities not all journals are accessible; at these prices, of course, they subscribe to only some of them. Every university subscribes to only a certain subset of journals. There are individual universities with very good subscriptions, where a lot is available, but there aren't many of them.
D: But maybe there are some kind of mega sponsors who can pay for it.
A: I have noticed that for some reason Canadian universities have good subscriptions.
D: Maybe the Americans help them out, like a poor relation.
A: Yeah, Canada is an interesting country; you don't hear much about it. And then there's Indonesia. If you look at the countries that use Sci-Hub the most, it's India, China, Brazil, which are more or less expected, and then all of a sudden I saw Indonesia on the list and was very surprised, because I'd never heard anything about it.
D: I have relatives there, and they have good universities.
A: And they're also among the most active users. At least this country doesn't show up much in the news.
D: I don't remember exactly, but about 300 million people live there. In terms of population it is gigantic. So, where were we?
A: There was a place where people were helping each other, maybe not entirely legally; the law is a bit of an issue there too. Although it's still not quite the same as simply putting paid scientific articles in the public domain for everyone to download.
D: I'm sorry to interrupt on this one. I think it's a terribly complicated question. I bought a book, for example, and then my wife read it, and then she gave it to her friend, and she gave it to her husband. Is it right that they read it for free? Well, the model's broken, right? I mean, with the advent of the internet and the ability to read this way, it all comes down to money. I keep coming back to the same idea, that it shouldn't cost that much, and then it's all solved instantly. I used to serve in the police, and they paid 200 rubles per caught murderer, while a computer game cost 495. So I had to catch two and a half murderers to buy one computer game. Well, that's nonsense.
A: What time was that?
D: In the '90s. My salary, for example, was 1,080 rubles, and one DVD with a Western movie cost 900 rubles. How is that possible? How do you even eat for a month of hard work, when some stupid piece of plastic costs 900 rubles, or something like that?
A: Well, of course it is hard to make a movie, but look at the salaries these actors get. It seems to me it could all be cheaper.
D: That is why in high school, first of all, you should take singing lessons to sing like Olga Buzova, and P.E. classes to bounce like some basketball player, and the rest is bullshit, because these people obviously live better than anyone else.
A: And by the way, the same goes for those soccer players; there's match broadcasting and all that.
D: Next thing you know, this cost me 1,000 rubles and that cost 900 rubles, some stupid movie of highly dubious artistic merit. So time passed, and now there's broadband internet and everything is good. Now I go online and there's a huge number of movies you can watch for free, except they'll show you a little advertising. Or you pay, I don't know, from 90 rubles up to maybe 200 or 300, which is basically two cups of coffee. As a result, people no longer steal in the quantities they used to. The economic model simply changed, and it's much easier for people to press two buttons on the phone and watch.
A: Well, it might be easier for people, but it has led to us now having some huge monopoly companies out there, like YouTube, which owns the entire internet. It seems to me there should instead be a lot of small companies.
D: But under capitalism they merge all the time. And it's not as if there's no check on monopolization; in the United States, for example, there's antitrust legislation.
A: Well, it does not work.
D: Yes, it does.
A: It doesn't work with Facebook, it doesn't work with YouTube, it doesn't work with Google.
D: They wait for a critical mass to build up, and then they start dividing them up.
A: Yeah, but still, YouTube benefits the very Americans who watch it. It's a centralized system that's watched all over the world, and it's politically biased. Speaking of which: give me access to the TV, and I'll elect a monkey as president.
D: Well, I saw it recently.
A: You mean Biden.
D: Well, the thought occurred to me, yes. So, how did it come about: concentrating it all, making a single resource where everything would be?
A: So that's what capitalism leads to; it's what it's based on. It exists at the expense of property rights, in this case intellectual property rights, popularly known as copyright. That is, if you create some website that just shows the same movies and you don't have permission to do that, then it simply gets blocked by Roskomnadzor, or whatever regulators other countries have. And then, of course, there's no problem. To be honest, my own opinion is that we need, in principle, to somehow get away from this notion of intellectual property and move to intellectual communism, because intellectual property is, in principle, (🕰30:00) a self-contradictory notion.
D: I share this idea strongly, but I have to act as an opponent. So what was the decision? Making a website, huh?
A: Yes. There were a lot of people who had this kind of problem, and I remember that I worked as a freelancer that summer. In the summer and spring I was taking orders, mostly to create scripts and, you know, little web utilities. That's when it became clear to me how you could program a website that would automatically use different passwords and automatically download. A person would just go there, click a button, and download the article for free. When I had this idea, I wondered whether it would really work; I was simply curious whether such a thing would work at all, and it turned out it really did. I posted on the forum that there was now a service for automatic access to scientific literature, I sent the announcement out to the forum users, and I got nothing but thank-yous in return; people were practically dancing for joy. Almost no one said "you can't steal like this"; everybody was just very happy. So the website immediately became popular locally, and then it just gradually grew. In principle I knew, of course, that abroad there was a movement called open access, which stands for articles being free. I didn't go into the details of its history then; I just knew it existed, and its ideas seemed right to me. For me it was always connected with communism: if something is free and for everybody, it's communism. So the site gradually grew, and the details of the topic, the political details, I started to look into later.
D: So you were doing it all on your own, or was there some group of enthusiasts?
A: No, there wasn't a group of enthusiasts, and there wasn't any kind of team. I just sat down at my computer and wrote the first version of the script myself, and since I was working as a freelancer, it was basically routine work for me: you take an order, you make a program, and then you publish it. That's essentially what I did with the Sci-Hub website.
D: So where did the universities' library passwords come from?
A: At the time there were forums where various people posted passwords like this, and some were selling them; one password for $40, say, another for $200. There were different places like that on the internet. Basically, if you just give somebody a password, if you put it out in the public domain, it doesn't last long; it gets shut off right away. But if it sits on Sci-Hub and Sci-Hub itself downloads everything the users request, then it isn't shut off, and the password keeps working. Some passwords worked for months and even years at first, so all the users kept downloading and everyone was happy. I also remember that every university subscribed to different things. Say you need access to some article and you don't know which university has a subscription to that journal, and you have, for example, ten or a hundred passwords; going through each password and checking whether it has access to that article is a hassle. That's another reason I needed a program like Sci-Hub that could do this more or less automatically.
D: So how fast did it start to fill up?
A: At first, of course, there wasn't any database yet.
D: Is it wide-ranging, from nuclear physics to some kind of archaeology?
A: Yes, absolutely everything. Any scientific journal out there; there's no difference in subject matter. The humanities are there; there's even philosophy and theology. By the way, I'm thinking of putting together, someday, a selection of theology journals from Sci-Hub.
D: And that, too, for money?
A: Yeah. Besides, for many humanities journals access is harder to find.
D: Are there fewer of them?
A: I don't know; maybe there are fewer subscriptions or something.
D: So it started to fill up.
A: Yeah.
D: At what rate? How?
A: Well, at first it was pulling about 40 to 60 articles an hour, and then it all grew, and it became more like 30 articles per minute.
D: And visitors started coming in.
A: Yeah.
D: How did the number of visitors grow?
A: Well, at first it was a few thousand visitors a day, mostly people from Russia. In Russia I promoted the site myself on forums, like that molecular biology forum, and then on the chemistry forum Himport.ru, where I also posted an announcement, and people were happy about it. Then I saw it had been posted on the ru-board forum, and after a while the website was discovered in China, India, and Iran. I remember there was such a huge flow of users that the website just collapsed, and then of course I restricted it by IP address so you couldn't download from China.
D: Because it was on some free service; that's why.
A: Yes, at first it was on free hosting, but then of course I moved it to something more expensive. For a long time it was on hosting that cost 40 euros a month, and then on something pricier still. As it grew, there was a period when you couldn't download from, or even reach, Sci-Hub from China or Iran. I also remember that when I cut the Iranian IPs off from the Sci-Hub website, there was a little tech support channel, and there was a huge outcry from users in Iran. And it seems that after some time I gave them access back.
D: I mean, people who don't have any money finally got access to normal scientific literature. So how did the number of visitors grow, over what period, and to how many?
A: Over what period, and to how many. Well, you can look it up right now, here on the computer. So, June 2012, for example: somewhere around 2,000 to 3,000 unique visitors. By the end of the year it had jumped about two times, to 7,000 visitors; that's the end of 2012. Now look further. From 2013 to 2015: around September 2013 it's up to 30,000 visitors, then it goes up again, and then it drops off again; I guess when it spiked, I turned off the foreign IPs. September 2014: 10,000 to 20,000 visitors. By the end of the year, again about 30,000.
D: For a scientific resource, that is a lot.
A: Oh, that's still not much compared to what it's become. From 2015 to 2017 it keeps growing: at the beginning of 2015 there were 25,000, 30,000, 40,000 visitors; a year later, in February 2016, it's 131,000 here, 125,000 there, well over a hundred thousand; and by the beginning of 2017 there were already about 200,000 visitors a day.
D: It took me 18 years to reach 100,000 a day. 18.
A: 18? Well, there are now half a million a day on the website.
D: That is solid.
A: Yeah.
D: And as I understand it, this was without investing in advertising to promote it; it just snowballed.
A: Yeah. Well, of course Sci-Hub also has a Facebook group, and for a while I tried to promote it: you buy an ad for your group in another group and they advertise it. It brought nothing; maybe two hundred people joined the group after an ad, so I don't think it had any effect at all, especially since the site grew mostly abroad and this was a Russian-speaking group. After that I didn't invest any more money in promotion. Mostly people found out about the site from their colleagues, by word of mouth. And then, in early 2016, a major media outlet abroad, The Atlantic, wrote about it, and after that the site was also written about in very reputable (🕰40:00) scientific journals, Science and Nature. The very fact that the site got there is very cool, and it basically reflects the fact that Sci-Hub had become something of a scientific revolution.
D: Absolutely.
A: Yeah. Those scientific articles that for a very long time sat behind very expensive prices are suddenly all available for free on Sci-Hub. There were even scientific studies showing that through Sci-Hub you now have access to almost all the scientific literature. Of course, I know there's a lot of material that's not there, but if we're talking about the popular, in-demand journals, they're there. That was 2016, and then, of course, after those publications more people found out about it. In principle it had always been a bit of a mystery to me that it took so much time. The service appeared in 2011 and was immediately very warmly welcomed in science, and despite that, for a very long time there was a kind of deafening silence about it in all the media, including the scientific ones.
D: Perhaps this is because it is a little illegal.
A: I remember that even in 2015.
D: And the citizens who use it, aren't they afraid of some kind of liability? They have no problem with it? If you download movies from a torrent in Germany, they'll come after you and you'll be fined very quickly. Of course, they will warn you first.
A: There have even been absurd cases where the police knocked on the door of a nine-year-old girl, or of a grandmother, over something downloaded from a torrent; the girl had downloaded some cartoon.
D: Scumbag.
A: So, I remember that in 2015 I was reading in some foreign newspaper about the #icanhazpdf hashtag on Twitter. You know what Twitter is, right? Someone came up with this tag: a person tweets "please help me download this article" and adds the tag #icanhazpdf, and then another person following that tag can see it and send the article to help. The foreign media wrote about this in 2015, and I looked at it and thought: well, the same thing was done a few years earlier on our molecular biology forum, and then Sci-Hub appeared and no one needs to do that anymore. But over there it's like some kind of alternative reality.
D: I mean, they position themselves as mega-free countries, and yet for some reason nothing circulates freely there, while in horrible totalitarian Russia, for some reason, everything circulates.
A: Yes. On the same principle there's Library Genesis, the largest portal of scientific literature but specializing in books; it's also kind of from Russia. Before Library Genesis there was a huge pirate library with about half a million books; I think it was called Gigapedia, or library.nu, and as I remember its creators were in Ireland, but after a lawsuit, it seems in 2012, it was shut down. There was a great deal of outrage about that; people were outraged everywhere. It always seemed to me: how is that possible, that we have some kind of law that forbids people from freely reading scientific literature? To me that says some kind of law is wrong. But why don't we see this discussed in the State Duma, or its equivalents both at home and abroad? Why is this topic somehow hushed up?
D: Looking at our Duma, I always feel severe moral suffering. Why are there singers in it? Why athletes, if it is a legislative body? Maybe lawyers should be elected there, lawmakers, so to speak, people with a proper legal education. Or do you think that a turner Vasya from the shop floor, or the singer Lyusya from the stage, will somehow make the right laws? They do not understand what this is.
A: So you see, that is how it turns out.
D: So here, they do not understand, I am sure they do not understand, on all levels and they don't understand it like the fight against telegram. When they started shutting down telegram, they were hitting the bank sites that are running on amazon servers. So why don't you understand how this works.
A: Well by the way if we are talking about fighting of Telegram there was some effect there. As I remember in the State Duma they introduced a bill to block Telegram. I think there is a guy like Pasha Durov, whether you like him or not he's a mega-talent who has built something and can benefit the Russian state. Why has your Pasha Durov gone abroad? And is doing something there, why don't you keep him here, don't give him money here you don't give him a platform to expand, so to speak to his fullest potential. He is over there in America is trying this electronic money like this crypto currency. In addition, why does not he do our crypto currency in Russia? So put it in the Sayano-Shushenskaya hydroelectric power plant there's some of those mining farms mine our cryptocurrency there our cryptocurrency, we'll cover the whole globe, strangle everybody and instead of that let's ban Telegram.
A: I think it's just that Durov himself didn't wanted to because his political liberal views.
D: He is the one who works for the money I mean here's the money Pasha come here and work here. All talent should be lured back, not as they are sitting over there in Germany, Israel and America and working for the Pentagon inventing weapons to kill in our country. What is for? Bring everybody back.
A: It's considered that abroad there's freedom and we're kind of a dictatorship.
D: That is what they say. You know I used to go to this wonderful country France where there are cows running around. Norman cows in Normandy like the the cover of Pink Floyd’s band there's "atom heart mother" there was a record like that. there's a nice cow on it. I wanted to take a picture of the cow I tell my friend to take a picture of it. and he says you can't. Why not? You cannot stop here, it's the countryside. And that's why you can't let anyone drive by. and we'll get a fine. Therefore, we have been driving for two weeks and found two places where You can stop to take pictures of a cow, but the rest are prohibited, you can't go to the forest you cannot make bonfires in the lakes, no fishing, no hunting You cannot do anything. How is it that you living here? You know, ours is the most ferocious Stalinism's fiercest bastards compared to them. In terms of freedoms you couldn't even dream of freedoms we have, go wherever you want, do whatever you want. You can swim, pick mushrooms and fish and no one will say a word to you. We are distracted. We have freedom, yes.
A: Well, if we are talking specifically about Telegram, it at least got promoted in the media. Even Russia Today, for example, was always writing something about it.
D: How could they not write about it, when we all sit there and all have accounts? I have 69,000 subscribers on my channel, for example. Not enough, apparently.
A: It is like that for everyone now. There may have been million-subscriber channels at the dawn of the internet, and at the dawn of Telegram too, but now there are so many channels that they all have 20,000 to 100,000 subscribers and about the same number of views.
D: That is still a lot. Let me come at it from the other side. Not so long ago there were paper magazines, and 300,000 was a serious circulation; on YouTube I have two million subscribers. No magazine could ever have that many. Well, you have to use that somehow, no?
A: With Sci-Hub it is the opposite. Unlike Telegram, or foreign heroes of freedom of information like Snowden or Assange: how many people actually work directly with anything Snowden did? Yet his example was promoted very strongly and powerfully. Around Sci-Hub, on the contrary, there is this silence, even though it is not used only by scientists. I know it is used at the Kurchatov Institute, for example, and by doctors. Most scientific journals are medical journals, and doctors need that information to understand how best to diagnose and how best to treat their patients.
D: That is the most important aspect.
A: Yeah. I have even had reviews saying this is a site that literally saves people's lives. So it is an important problem, and when Roskomnadzor decided to ban Sci-Hub in 2018, there was an outcry from Russian doctors. Even then the media stayed completely silent, as if the problem did not exist at all.
D: They are probably just not interested, though. I see very little of what you would call PR skill here. You have to do PR differently. For example, grab a doctor, one who is doing scientific research: here is Volodya, let's sit down, tell us what you do. (🕰50:00) What kind of literature do you read? Do you know English? As we said at the beginning, it is the language of science today. You don't happen to read the papers, do you? Oh? Where? Really, on Sci-Hub? And what benefit did you get from it, what did you read, and what concrete good did it do you? If you do that, then yes, there will be a stir.
A: I was expecting the media themselves to do that once the project became known: call in some doctors, some scientists. I think that is what the media should be doing, finding people and raising publicly important cases.
D: If you do not do it yourself, no one will do it at all. Or they will do a crap job of it.
A: Snowden and Assange were helped, after all. With Sci-Hub it is the other way around.
D: That is a matter of state security. But here is what I personally do not get. At the dawn of our piracy, in the 90s, you could buy a DVD for thirty rubles with Windows, Photoshop, 3ds Max and so on. Then I went to work in official offices where, like it or not, everything must be licensed, and suddenly it turns out that this 3ds Max, the one for three-dimensional models...
A: Yeah, I remember that program.
D: It costs almost $4,000, and an office needs, say, three modelers, or twelve, and in those days an apartment cost that much. Photoshop cost 800 bucks, if I remember correctly. But now there is broadband internet, and this Photoshop or Premiere or whatever it is from Adobe is 30 bucks a month, or 35. And you do not need the whole package, right?
A: Yeah, you can get it on a subscription now. That is perfectly fine, but somehow it's...
D: It is not $4,000 up front.
A: And that model has led to these huge monopolies that we have, like Apple and so on.
D: So my point is: why not let the masses of people use this stuff and develop somehow? Well, fine, those are enemy articles that cost money; our scientists, by the way, write articles that are free of charge. That is how it is.
A: Well, we also have paid journals in Russian, but an article still costs a humane 100 rubles, sometimes 200. Even a poor graduate student can manage that.
D: So if it is a hub that, so to speak, leads in all directions, part free and part paid... Well, for 100 rubles, I think anyone can buy the professional paper they need to read.
A: The Sci-Hub website collects donations, and people send 100 rubles or even more just to keep the site running. Nowadays donations surprise no one; everyone collects them on YouTube, even children. But back then it was 2013, and when Sci-Hub started collecting donations directly from its site, it raised a few thousand dollars in maybe a few days. It was so cool.
D: I remember the first time I recorded a greeting. They gave me a thousand dollars for an hour, and I realized I had been doing something wrong.
A: Was that on YouTube?
D: No, it was a long time ago, probably seven or eight years ago, and a completely different area with completely different funding. But nobody is surprised by donations these days. It is not for nothing the enemy's books intelligently tell you: if you are out there doing something, I don't know, playing the guitar, carving matryoshka dolls, whatever, and you can find a thousand people who, looking at your so-called creativity, are ready to give you ten bucks a month (not much, what is that, 500 or 600 rubles), then all your problems in life are solved. You do what you do, you live comfortably, and those people sponsor your activity for their own enjoyment and pleasure. Why not?
A: Of course. That is why, in principle, you do not need paid content.
D: Well, maybe it could work like a university library: I buy something myself, put it up, and you use it, no?
A: Well, buying it and putting it out in the public domain is also a violation, yes.
D: But if it is ours, and there are so many visitors, half a million visitors to the site, surely the world of science should find such a resource interesting, no?
A: I actually have many reviews. I published an email address on the website in 2020, and after that I received several thousand emails from all over the world: from Europe, the UK, Switzerland, the United States, and of course China, from Iran, a lot from Latin America (Brazil, Mexico), from Africa. They use Sci-Hub there too, and they all wrote huge thanks for the resource. I have a few thousand of those emails stored, and I am thinking of asking users more about their work and maybe publishing some of those stories on the site. Sci-Hub really has become quite a big event in world science.
D: Without any exaggeration.
A: It seems to me that Russia could have made a political move here and simply legalized the work of this resource. A lot of Russian scientists use it, after all. That would be a very big deal; even in terms of competition with the United States, it would be a big ideological blow to the West. I mean, how about that?
D: And what happened instead?
A: Well, instead, for some reason, discussion of this project in the media was basically silenced.
D: How can it be silenced if you do not speak up? If you do not speak, no one will pay attention to you. Excuse my immodesty, but I can give an example from my own life. I sometimes translate movies, and when I do, at least ten times as many people watch them, usually more. Do you think there is a giant line of employers shouting "Dmitry Yuryevich, please translate a movie for us"? Nothing of the sort. You have to break through about ten concrete walls with your head, make an inhuman effort, make friends with everyone, and then maybe you will become interesting to someone. If you do not have that direct human interaction, where you deal with the specific people you depend on, communicate with them, show them something, do them some benefit that is quite obvious to them, and start by explaining what serious good you are doing them, then nothing happens. For someone to just come along, take an interest in you, and start promoting you, with media mentions and all the rest, it does not work that way.
A: Why not? After the website was written about in the media, I got a lot of letters from various Western journalists interested in the topic and in what the project is all about, so the project became known to a wider audience. And it is a pretty important topic: the media regularly report on new scientific discoveries, even the most trivial ones.
D: Sorry to interrupt, but I see two things at once here. The Westerners who take advantage of this, especially the Americans and the British and the French, clearly understand that they are doing something wrong and breaking the law, and that is why they will not talk about it. They will use it and say nothing. And when it comes to us, I do not know for certain, but I am pretty sure there are interstate agreements signed, and under them this has to be stopped on the territory of the Russian Federation, and Roskomnadzor, in accordance with that, stops it.
A: If we are talking about the users themselves, the project gets a lot of feedback beyond personal letters. Go on Twitter and search for Sci-Hub and you will see how many scientists use it and praise it. Sci-Hub also has groups on social networks, VKontakte for example, with reviews from Russian scientists, and even the smallest scientific discoveries get posted there. So here you have a genuinely big event, top scientific journals, practically a revolution in science, and quite important topics if we are talking specifically about access to medical information, (🕰60:00) and for some reason it has been neglected in a very strange way.
D: Again, sorry to interrupt, but I would insist there is more to it. An apostle of Jesus Christ said, if I remember correctly: go and heal, for the cured will believe. The same thing should apply here: focus on doctors and on the mainstream message that we are not playing around, we are saving people's lives. Bring in a doctor and let us sit down and talk. Here is a doctor: how do you use it, how has it helped you, how has it helped you save lives? Then it should all spread outward from medicine. If you do not do that, nobody does anything.
A: I have actually taken testimonials from some doctors and just recently sent them out to various journalists, including, I think, Novaya Gazeta. An acquaintance there said a lot of newspapers could publish it, but they will not, because I am a Stalinist.
D: Totalitarianism is the important thing here.
A: Yes. So at first the journalist is interested, and then the answer comes back that the editorial board turned the topic down. That happened with several media outlets, and it was the same with Russia Today: I was told the story was turned down at the top.
D: Maybe they were joking. Do not put too much faith in people.
A: Maybe.
D: There is another way to come at this. There is YouTube, for example. The people who are interested in this, the scientific community, almost all have social networks. Make your own videos appealing to them, insofar as they are firmly aware that a lot of their professional achievements they owe personally to you, the one who built the resource they use and benefit from: now help me, shout a little louder. It is not at all certain to have the same effect as a broadcast on Channel One of Russian television, but it is something you can do personally.
A: Of course. I do not just sit around; I promote the project on social media. By the way, on Twitter, as you have seen, individual tweets from Sci-Hub get as many as a million views.
D: My respect, yes.
A: And then quite recently there was a lawsuit against the website in India. On or about December 21st I received notice from an Indian court that we were being sued, with a demand to block absolutely all Sci-Hub addresses. If Roskomnadzor blocks one address, you can just put up another; in India the publishers demanded dynamic blocking, meaning absolutely any new address gets blocked too. So I posted this bad news on Twitter, just to share that Sci-Hub would soon be banned in India, and I did not expect the effect it had. There was a real explosion.
D: Like a packet of yeast dropped into India.
A: A lot of Indian scientists started to get outraged, writing articles in the media and on Twitter: if you shut down Sci-Hub, how are we ever going to do science? We do not have paid access and we do not have that much money. Some scientific communities in India sent a request to the court asking that Sci-Hub not be blocked. After that, lawyers contacted me offering to defend Sci-Hub in the Indian court, and all of this happened because of one little tweet. Then some time later, on January 6 I think, after all these events, Sci-Hub's Twitter account was blocked.
D: Strange that it lasted so long.
A: It had been around since 2012, so quite a few years without any problems. I do remember one situation where the account was frozen and I was required to upload a passport scan to prove I am a real person, but after I did that, there were no problems. And now it was suddenly banned, with no explanation as to why. Technically, of course, the stated reason is some content that supposedly violates the rules, but it had been the same for years. What exactly happened, why Twitter suddenly noticed and decided to block it, nobody said.
D: Maybe a letter went out from the law enforcement authorities.
A: Exactly: which one? They gave no explanation about that.
D: Two months ago my Facebook account got shut down and disappeared. They sent me something like "send an ID." I took a picture of my driver's license and sent it in reply. I got a message saying sorry, we have a Covid epidemic here, we cannot act quickly. They have been looking at it for the third month now.
A: And you got blocked too?
D: Yeah, without any explanation.
A: I read that Russian media pages get blocked there; they blocked Sputnik, but then they apologized and unblocked it. No one is going to apologize to Sci-Hub, though.
D: It is no use. If they are banning Trump, a legally elected president, it is pointless to talk about some commoner.
A: This was exactly when it happened, around January 6th. Afterward there was an article about the Twitter ban on TorrentFreak, a site that covers all sorts of piracy news, and people started expressing outrage in the comments: so a useful resource gets banned while Trump is on there with all sorts of tweets. Soon after that they banned Trump too. Maybe they somehow listened to people's opinions.
D: They shut Donald down.
A: As I understand it, it was not just Trump who got banned in that timeframe; a bunch of people were.
D: At this point in time, what state is the project in?
A: At the moment it is stable. It has reached about 500,000 users a day; the number fluctuates around that. Things move along little by little. By the way, it was recently banned in the UK, literally maybe a month ago.
D: To protect British scientists from this outrage, naturally.
A: Yeah. The police there even issued some kind of warning to students not to use the site, and a lot of British scientists laughed about it on Twitter. I mean, access to science suddenly banned by the government.
A: It is not an easy situation. But for us everything works; people come in and use it.
A: Yeah, it is like in 2018, when it was banned by Roskomnadzor: the site just continues to be available at another address. So that is the situation so far. But in principle I want the site completely legalized, so that reading scientific literature, even for free, is legal.
D: I am afraid that conflicts with the laws of other states, which hold that if it is in any journal, it costs money.
A: Right, which is why it can only be done by an independent state. And what independent state is there on the planet right now? Russia.
D: I would start with medicine again: talk about how people's lives are being saved, and how it benefits the scientific community, one, and society as a whole, two. Medicine progresses because scientists can freely share knowledge, discoveries, and techniques: how to treat this, how to treat that.
A: Doing it all on your own is still hard, and it requires not only time but financial resources. If a media outlet did it, of course, it would be much easier.
D: No one is going to do it; they will not do anything. The rescue of the drowning is the business of the drowning themselves; no god, no tsar, no hero will save us, only ourselves, and there is no other way. I would recommend starting with medicine; that is the right wall to bash through with your head. And then, well, friends, the very last frontier will be nuclear physics, where atomic bombs are made; that is understandably strategic, state security and so on. But here, friends, let us work together to save children in Africa, for example, to cure them. Note that starting with medicine is a great idea. There is no other way than doing it yourself. Alexandra, you see how many years have passed and no one has moved. How is it going on your own?
A: What do you mean?
D: Overall.
A: Right now... Formally, I finished my master's degree at St. Petersburg State University in 2019, and just in time for Covid I entered graduate school at the Institute of (🕰70:00) Philosophy of the Russian Academy of Sciences in Moscow.
D: Seriously?
A: Yes. I basically had this idea to prove scientifically, within the framework of the philosophy of science, that science is inherently meant to be open, and that only as open science does it function successfully. Some idea like that.
D: And how long do you have to study?
A: I am finishing my first year now, and graduate school is three years these days.
D: Really?
A: Yeah.
D: So Alexandra does not only know how to build websites. What can I say in conclusion, to draw a line under this: if you do not do it yourself, it will not happen any other way. I can provide practical help with publishing videos; the specialists will come by themselves. And do you have Russian citizenship?
A: I have Kazakh citizenship.
D: How interesting. A Soviet person.
A: Yes, you can count me as Soviet.
D: And no plans to move here?
A: Well, I have been living in Russia since 2011.
D: Living and being a citizen are different things.
A: Yes, I came in 2011 and have not really gone anywhere since, if you do not count a couple of days visiting home in Kazakhstan and a short trip to France with my mom in 2012. All that time I studied in various master's programs, and in 2015 I tried to apply for Russian citizenship as a native speaker of Russian. As I recall, the law had just been introduced: if a person has relatives who were born on the territory of the Russian Federation, ancestors, or rather grandparents, and the person is a native speaker of Russian, they can apply for citizenship through a simplified procedure. I wanted to take advantage of that opportunity. I filed the documents and gathered the birth certificates: my grandmother was born, I think, in the village of Zaozerny near Krasnoyarsk, and my mother was born in Krasnoyarsk. My mother's older sister went, back in Soviet times, to the aviation institute in Riga, and after graduation she was transferred to Kazakhstan, and the whole family moved there after her. Almaty is basically much warmer, and the fruits and vegetables are quite interesting. After the collapse of the Soviet Union, of course, we all stayed in Kazakhstan. So I applied, and I went to take the Russian language exam.
D: Is that necessary?
A: Yes, to prove you are a native speaker.
D: So speaking is not enough and you have to take an exam? On what, subjects and predicates?
A: Anyway, there was an exam. I think we were shown some kind of film about Crimea, and the sound was really bad. In my family we only ever spoke Russian, there was no other language, yet the sound was such that the film was genuinely hard to make out. Then you had to fill in answers to questions about it. I remember the film mentioned a place that I heard as "darkness of cockroaches"; it turns out it is actually spelled "Tmutarakan," which in colloquial Russian just means the back of beyond. Something like that. After you fill out the test, you sit down for something like a job interview, where they ask why you want Russian citizenship. The woman asking me seemed unfriendly: why do you need citizenship?
D: Really.
A: And I said, well, with citizenship it is somehow more convenient to live; I live in Russia anyway. Why do I really need citizenship, if you think about it? Citizenship simply gives a person more rights: right now I need some justification to stay in Russia, I have to study or work somewhere, whereas once you get the passport you can just live. I was confused by the question and could not find a better answer. And then, after those exams and interviews, I received a paper saying that, as it turns out, Russian is not my native language.
D: That is what the experts determined, yes. Then again, you can come at it from the other side. I have Jewish friends who in the 90s wanted to migrate from Russia to the Netherlands. What to do? Well, the best route is anti-Semitism. They ran to the garages, painted a huge swastika on one of them, my friend Dima stood next to the swastika and had his picture taken: I cannot live in a place like this. Dutch citizenship granted immediately.
A: As political asylum, yes?
D: Yes. Maybe you should do the same. Perfect. I am at a loss, because even I personally, though of course not as expert as the migration service, do not hear in your speech that you lived in Kazakhstan. I do not hear the typical Central Asian features at all.
A: But there is a specific Central Asian accent.
D: Well, I do not hear it. But even if it is there, so what? An interview should not be about watching idiotic movies with audio interference and answering stupid questions about them. Oh man, that is a good one. And that is how the motherland finds out you are one of hers. Great.
A: And then it also said that I was supposedly making mistakes, which cannot be true.
D: So higher education, a master's degree and everything that comes with it, does not count, right?
A: At the time, I had not finished the master's degree yet.
D: It does not matter; a degree makes no difference, does it? The guys who pour foundations and haul earth in wheelbarrows are good enough for Russian citizenship, but highly educated specialists are not. Interesting. It seems to me we should be like the United States, sucking people in from all over the world in a vortex, and choosing whom to let in: a good education, a degree, certain jobs, come on in; the rest of you, goodbye. Great. The mother country welcomes you.
A: People say to me: maybe they looked on the internet at what you do and decided not to give you citizenship.
D: I am afraid no one there digs that deep.
A: I was kind of offended at the time, you know. You have been speaking Russian from birth, and all of a sudden they tell you Russian is not your native language.
D: Great.
A: "You are making some mistakes in your writing." That cannot possibly be true, because I know I always write everything correctly; they simply made me look grammatically illiterate. And the main thing is, I wrote about this story in my blog, and after that a line appeared about me on Wikipedia that can be read as: here is a fool who could not pass a Russian language exam. That is how it was twisted.
D: You cannot give up. You can try again, of course.
A: In the meantime, they passed a law that to receive citizenship you have to take an oath, and the oath seems to include a line that you will comply with the law. And I thought: how can I, when I run Sci-Hub and do not really care whether the project is legal or not? I think it is the right thing.
D: I am not going to give you any advice there. How did a girl get the idea to go into programming?
A: Well, it is really quite simple: my mother worked as a programmer. Like her older sister, she studied for a while at the university in Riga, in a specialty that I think was then called systems engineering, and they worked with those big electronic computing machines whose programs were written on punched cards. Then, when personal computers started to appear, she worked as an accountant for a while and then switched to programming for 1C accounting software. So I remember having a computer at home from the time I was about six, though at first I just played on it. Maybe six years later, my mom would take me to work, where there was internet access, and I looked up instructions on how to create my own website and tried things out. Back then I made a little website dedicated to various electronic pets. There was that popular Tamagotchi thing, you know, (🕰80:00) the electronic toy with an animal living inside it that you have to feed and raise. Besides the Tamagotchi, there were programs like that for the computer: you start one up and a cat, say, lives on your screen, walks around, eats, you can pet it while it sleeps and play with it. And there were robot dogs at the time called AIBO, made by Sony, a real robotic dog, and I think it even had some neural networks inside. So I thought I would try to program a Tamagotchi like that with artificial intelligence, so it could not just eat but also talk and think. That is basically how I got interested in neural networks and neurobiology and all sorts of things like that. I also had a period of fascination with hacking; there was a magazine that was popular at the time, when I was maybe 14...
D: I was an outstanding author there.
A: By the way, it had all sorts of notes on hacking and how to hack. I remember using one such article in Hacker magazine to hack into my internet service provider.
D: You are a dangerous person, Alexandra.
A: At the time I knew nothing at all about the technologies it described. A few years later we covered them at university, long and painfully, but by then I already knew all that stuff. After school I entered the Kazakh National Technical University. A lot of the guys from my class went off to Russian universities, and I did not really understand why I should go anywhere; I just went to my own university and continued studying there. For a specialization, fittingly for hacking, I chose information security. So I graduated, and for a while I applied to PhD programs in neuroscience in the US, to develop my childhood hobby, you know: neural interfaces, neurochips, and the like. It did not work out at the time, so for a while I took orders on the internet to develop various software, working as a freelancer, and that is how I ended up in programming. I remember that at university I was probably the best programmer in my group, and the same at school. Since childhood I had gotten used to being the computer genius, you know, the person who understands computers.
D: So what was the fruit of all this training?
A: I think the most notable fruit was the creation of Sci-Hub.
D: A weighty fruit, no arguing with that. When you came here, how did it all start?
A: I remember that I just did not want to work as a programmer; I wanted something more. As for Sci-Hub, of course I like it, because it is the kind of work where you are doing something to liberate scientific knowledge. But programming random tasks was never very interesting to me, and the accounting department where my mother worked seemed to me the most unpleasant place one could possibly work.
D: It is kind of boring.
A: Economics and, yeah, that kind of thing.
D: And did freelancing carry you the whole time, or did you try to get a job somewhere?
A: Yeah, I remember when I came to Russia in 2011 I was still working as a freelancer: I took orders and did them and earned a few tens of thousands of rubles a month. Then I decided to commit to something longer and applied to the faculty of public administration at the Higher School of Economics. That was 2012.
D: And?
A: Well, at that time there was this theme everywhere about using information technology in government, about the internet, about modernization and Skolkovo, everything Medvedev was promoting. I thought: wow.
D: He was pushing the right things, by the way. State services and things like that exist thanks to him, they say. A great thing.
A: Yeah. What seemed cool was getting to work with information technology at some global level, and I had this idea back then that I could somehow change the system from the inside: if, for example, the law forbids the free exchange of information on the internet, then nudge the state in that direction, fix something. So yes, I applied, I got in, and it turned out to be the opposite of what I expected. You apply to the department of public administration expecting everything to be patriotic and state-minded, and for some reason it was the opposite.
D: What do you mean? How can the state not be there?
A: Well, it is the faculty of government; I expected it to be all about patriotism.
D: It is like that Vysotsky song where the devil curses that the wrong comrade is running the show.
A: It turned out the teachers had rather liberal rhetoric, and they took a strong dislike to me right off the bat: they started picking on me, marking my grades down, and so on.
D: Because of political views, right? Yeah. It is dangerous to talk politics with faculty members. Do you need that in your life? You certainly cannot re-educate them, but they can screw with you; that they can.
A: I remember the situation back then: a teacher gave me zero points, and I could have been barred from taking the exam, something like that. As a result of the conflict, I ended up getting access to that teacher's email account. I did not hack it, but I was able to get in, and I saw the teacher writing about me that I was vile scum who ought to be killed, and, best of all, that I was crazy. Something like that was written. It was a total shock to read. I had expected that, this being a professor, there would be serious, lofty topics in there, and instead they were discussing what a bastard I am.
D: Well, first of all, a professor is a human being, and everything else is secondary. And how did it end, or did it?
A: Well, I printed out the correspondence and gave it to the vice-rector.
D: Great.
A: I ended up passing the course, and I even got an "A" on the exam. But I no longer wanted to finish that master's program, and I did not: I completed only one year and then started looking for a job as a programmer. It had simply become clear to me, first, that this education would not be much use; after finishing that master's degree nobody was going to make me a minister or a president so that I could change anything. And second, the scientific questions I am still interested in, the history of ancient societies and how people once understood that kind of information, were something the faculty of public administration would most likely never let me defend a thesis on. So after the first year I started looking for work as a programmer. I remember I tried to get a job at Russia Today; I liked it back then.
D: As a programmer?
A: Yeah. At the time I liked it as a basically patriotic company, all about Russia and so on. I even had an interview, and the programmer there had an idea for me to develop software for video. But then it turned out that for some reason I was offered a very small salary, 30,000 rubles I think, and it is very hard to live in Moscow on that kind of money. So it did not work out. And I still think that a job like that, building a video processing program, could in principle have grown, in a few years, into (🕰90:00) maybe a Russian version of YouTube.
D: That is not easy.
A: Why?
D: I do not know. First of all, the money. And second, you cannot solve this alone; you need to build a team, and that is hard. There is Rutube; we clearly do not have the dumbest programmers, but for some reason they have not been able to build a YouTube analog for years.
A: Well, we have just the one Rutube, and there need to be more. Let a hundred flowers bloom, and so on.
D: And how will they make money? It is clear what YouTube makes money from.
A: There was information published recently that journalists at that same Russia Today are paid half a million rubles each. You can imagine how many social networks could be created with that money.
D: Maybe they are worth half a million.
A: And programmers get offered 30,000. Life is not fair.
D: I will give you the most famous piece of wisdom: invented it, a ruble; built it, ten; sold it, a hundred. In a capitalist society the most important one is always the one who sells, not the one who invents and not the one who makes.
A: Well, the heads of Russia Today are always resenting that we do not have our own social media, that we have foreign social networks that block Russian channels. By the way, about YouTube and the history of its emergence: it grew, I think, out of PayPal, and only then became an independent social network.
D: We have a social network, built by that same Pasha Durov.
A: Yeah, but the main social network now is more like YouTube.
D: Well, if the USA is our projection of heaven on earth, if everything there is so beautifully done and we are part of the family of civilized nations with a normal capitalist division of labor, then here is a great YouTube, let us use it. And then it turns out they will not let you use it. As a statesman, did you not think that one day they would pull the breaker and leave you without it? You did not, did you? Designing the life of the country that way is about the same as saying that American submarines protect us and keep the world at peace, so why the hell build your own when the Americans have such wonderful boats with nuclear weapons? Why do you need your own? Same with YouTube. Then some time passes and suddenly, in big letters, it turns out that nobody watches your idiot TV anymore, especially the young people, the future leaders and owners of the country, and it turns out they are being educated by American YouTube. Even at the dumbest level: not long ago you would go into a bookstore and there was not even a comics section, and now you go in and there are whole shelves, and the comics are all for some reason American, with a little Japanese, that is all. And where are yours, one would like to ask?
A: I remember when I was a kid there was a comics magazine called Murzilka.
D: The question is about your children: your children are being raised by strangers with completely foreign ideals, ideals that have nothing to do with you, and YouTube here is the same thing. It is TV; the internet is just a way of delivering information, but it is the same TV. So how does it turn out? You pour money into, say, RTR television; fine, all the grandmas are there for New Year's, Alla Borisovna, Sofia Rotaru, who with one hand gives money to kill Russians and then performs beautifully on Russian TV. Simply wonderful. And nobody watches it. What ratings? You are out of your mind. Money keeps being poured in, and then they start screaming that YouTube is all against us and needs to be shut down. On our side they are already screaming that it is a weapon, a threat to national security, shut it down, before the Americans have even had time to pull the switch. And where is yours, one might ask?
A: Our own social networks could be grown within a company, within the framework of that same RT, if some of the money just went there. Honestly, I have no idea what you could even spend half a million rubles a month on.
D: That is because you do not have it; that is why you have no idea. You could buy yourself a Rolls-Royce, for example; that will eat up all your money, all of it. Build yourself a house and hire a housekeeper, and that too will eat it all, and it will never be enough, because the main goal under capitalism is to continually raise your level of consumption: go to an expensive restaurant, and show everyone that you go to an expensive restaurant; take an expensive holiday, not somewhere in Crimea but in the Maldives. Status, level, and all that. Half a million is easy to spend, believe me.
A: If part of that money went to hiring a programmer, at least, I would have taken the job.
D: I agree.
A: There could be social networks.
D: I will act as HR here: Alexandra is a talented programmer who built, I do not even know what the right word is, who built Sci-Hub, which gets 500,000 visitors daily. Contact her; she is open to offers. And in passing the show mentions communism and communists. What is your position, are you a communist?
A: Well, I have always considered myself a communist, though not a professional one. If I were to go into a political debate with some politician, of course I would lose. But I have always supported the idea of communism, and I have always thought of Sci-Hub as a communist project: first of all, it is a project for knowledge belonging to the people, for making it available to everyone and not just some elite. It is basically a project against the very concept of intellectual property: scientific knowledge should not be the intellectual property of corporations; it should belong to the people. When I opened a Sci-Hub group on a social network, I naturally started posting there about communism as well as science.
D: Well, that is practically putting a mark on yourself, because we are free and entitled to all points of view, any political views, except that with communist views you cannot get anywhere.
A: Why is that?
D: Well, our state is built on a denial of everything communist.
A: So I have wondered whether that is the problem, whether that is why Sci-Hub is under such a censorship regime.
D: I do not know. From my point of view, first of all, there is nowhere to run from it. As one far-from-stupid person said, the Soviet Union is our ancient Rome: it is the foundation everything is built on, and you cannot get away from it. The only idea that still binds us together is the victory in the Great Patriotic War, won by communists over Nazis. You cannot get away from that; you have to build your own on top of it, and getting hung up on this is, from my point of view, very strange. I would understand if I were, say, the editor of a state-funded TV channel, the kind that shows some "Zuleikha Opens Her Eyes," and suddenly you, with your communist views, were invited to broadcast to hundreds of millions of people; that would be strange from a propaganda point of view. But as a specialist? If I can do the thing and you cannot, then maybe let me do it. This is a very strange approach from the capitalists. It should not be like that.
A: Well, I had the thought that maybe they were afraid of communism.
D: Maybe, as one aspect. But that is about as plausible as Russian not being your native language. Great stuff.
A: There is something strange going on.
D: Do not get hung up on it. My advice to you: do it all yourself. No one is going to give us anything; you have to do everything yourself. No one will come and offer anything; it is not the fairy tale about the Master and Margarita, where they come themselves and offer everything. They do not come and they do not offer; there is only yourself. Nevertheless: cool, Alexandra, my respect. To build such a thing and make it so huge. Come and see us again; we will help however we can. Thank you.
A: Thank you.
D: That is all for today.

View source

hey. I just added Dialogues to Scrolldown.
cool. But what's Scrolldown?
Scrolldown is a new alternative to Markdown that is easier to extend.
how is it easier to extend?
because it's a tree language and tree languages are highly composable. for example, adding dialogues was a simple append of 11 lines of grammar code and 16 lines of CSS.
okay, how do I use this new feature?
the source is below!
chat
 hey. I just added Dialogues to Scrolldown.
 cool. But what's Scrolldown?
 Scrolldown is a new alternative to Markdown that is easier to extend.
 how is it easier to extend?
 because it's a tree language and tree languages are highly composable. for example, adding dialogues was a simple append of 11 lines of grammar code and 16 lines of CSS.
 okay, how do I use this new feature?
 the source is below!

May 14, 2021 — Dialogues seem to be a really common pattern in books and writings throughout history. So Scroll now supports that.

Here is the Grammar code in the commit that added support for Dialogues:
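As a rough sketch of the underlying idea (written here in TypeScript with illustrative names, not the Grammar code from the commit itself): each line of a chat block becomes a bubble, with speakers alternating sides.

// Hypothetical sketch of what the Dialogues feature does conceptually.
// Each non-empty line of a chat block becomes a chat bubble, with
// speakers alternating left and right. Names here are made up.
function renderChat(block: string): string {
  return block
    .split("\n")
    .filter(line => line.trim().length > 0)
    .map((line, i) => {
      const side = i % 2 === 0 ? "left" : "right"; // alternate speakers
      return `<div class="chatLine ${side}">${line.trim()}</div>`;
    })
    .join("\n");
}

// Example:
// renderChat("hey. I just added Dialogues to Scrolldown.\ncool. But what's Scrolldown?")

In the real feature, that mapping is declared in the 11 lines of grammar code, and the left/right bubble styling is the 16 lines of CSS.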

Links

View source

May 12, 2021 — This post is written for people who already are "partisans" on the issues of copyrights and patents. Here I am not trying to educate newcomers on the pros of Intellectual Freedom. I am writing to those who are already strong supporters of open source, Sci-Hub, the Internet Archive, and others. To that crowd I am trying to plant the seed for a new political strategy. If you think that copyright and patent laws could be a root contributor to some of the big problems of our day, like misinformation (or fake news) and inequality, this post is for you.

I suggest we rally around a simple long-term vision of passing a new Intellectual Freedom Amendment to the U.S. Constitution. I am not positive that if we abolished copyright and patent systems the world would be a better place. Just as I'm not positive that if we switch to clean energy the world would be a better place. Society is a big complex system, and it would be intellectually dishonest to make such a guarantee. But there are reasons to believe abolishing copyright and patent systems would be a good bet based on low level first principles. In my study of the spread of truth and knowledge it seems like more publishers and remixers lead to improved truthflow, education, stability and prosperity. Other people might come with other arguments and perspectives. But that big debate is not being had. The problem is that the debate is always held on the Intellectual Monopoly Industry's home turf. So when the debate is about details like the ideal length of monopolies, or when illogical terms like "Intellectual Property" are used, you've already conceded too much, and are fighting for local maxima. A stronger and more logical place to have the debate is upstream of that: debate whether we should have these systems at all. I think the Amendment Strategy is clear enough, concrete enough, and simple enough that you could get critical mass and start moving the debate upstream.

Let's say my hunch is wrong: momentum for an Amendment grows, and then in some regional trial experiment it turns out to be a bad idea. Society would likely still benefit, because the Intellectual Monopoly Industry would have to play defense for once, as opposed to constantly pushing for (and winning) extensions of monopolies. The best defense is a good offense. It's an adage, but there's usually some truth to adages.

An Initial Proposal

The proposal below is just under 200 characters.

Section 1. Article I, Section 8, Clause 8 of this Constitution is hereby repealed. Section 2. Congress shall make no law abridging the right of the people to publish or peaceably implement ideas.

I have only passed a handful of Amendments to the U.S. Constitution in my lifetime 😉, so if you have suggestions to make that better, pull requests and discussions are welcome.

View source

May 7, 2021 — I found it mildly interesting to dig up my earlier blogs and put them in this git. This folder contains some old blogs started in 2007 and 2009. This would not have been possible without the Internet Archive's Wayback Machine ❤️.

August 2007 - Running Wordpress on my own domain

It looks like I registered breckyunits.com on August 24th, 2007. It appears I used Wordpress SEP 2007. There's a Flash widget on there. The title of the site is "The Best Blog on the Internet". I think it was a good joke. I had just recently graduated college, and had not yet moved to the West Coast.

July 2009 - Two years of Wordpress

About two years later, my Wordpress blog had grown to many pages JUL 2009.

August 2009 - Switched to Posterous

Looks like I started transitioning to a new site AUG 2009, and moved my blog from my own server running Wordpress to posterous MAR 2013.

After I moved to posterous, I put up this homepage SEP 2009.

December 2009 - Switched to Brecksblog

In December 2009 I wrote my own blog software called brecksblog. Here's what my new site looked like DEC 2009.

I kept it simple. My current homepage, now powered by Scroll, evolved from brecksblog.

December 2009 - My "Computer science" blog

It looks like I also maintained a programming blog from December 2009 to January 2012 MAY 2012. Here is that blog migrated to Scroll.

View source

May 6, 2021 — I am aware of two dialects for advice. I will call them FortuneCookie and Wisdom. Below are two examples of advice written in FortuneCookie.

🥠 Reading is to the mind what exercise is to the body.
🥠 Talking to users is the most important thing a startup can do.

Here are two similar pieces of advice written in Wisdom:

🔬 In my whole life, I have known no wise people (over a broad subject matter area) who didn't read all the time – none, zero. Charlie Munger
🔬 I don't know of a single case of a startup that felt they spent too much time talking to users. Jessica Livingston

If you only looked at certain dimensions, you could conclude the FortuneCookie versions are better. They are shorter. They are not attached to an author's name which seems to make them simpler.

But all things considered, the FortuneCookie versions are worthless compared to the Wisdom versions.

✒️ Wisdom is a short piece of advice that is backed by a large dataset, is clear, and is easily testable.

Like FortuneCookie, Wisdom is some advice that can change your perspective or guide your decision making. No difference there.

Unlike FortuneCookie, Wisdom needs to be backed by a large dataset. For example, in 2009 I wrote:

🥠 to master programming, it might take you 10,000 hours of actively coding or thinking about coding.

Ten years later, after gathering data I can now write:

🔬 The programmers I respect the most, without exception, all practiced more than 30,000 hours1.

Even though the message is the same, the latter introduces a dataset to the problem. More importantly, it is instantly testable.

Wisdom can't just be the inclusion of a dataset. Without the testability, Munger's quote would be FortuneCookie:

🥠 I've met hundreds of wise people who read all the time

That's not the clearest advice. It certainly says that reading all the time won't rule out success, but it provides no guidance as to whether it is a necessary thing. The quote above leaves it ambiguous if he also knows of wise people who don't read all the time (we know from the real quote that he doesn't).

Sometimes you see a FortuneCookie idea evolving into Wisdom, where an advisor hasn't quite made it instantly testable yet but is proposing a way for the reader to test:

🔬 If you look at a broad cross-section of startups -- say, 30 or 40 or more; which of team, product, or market is most important?...market is the most important factor in a startup's success or failure. Marc Andreessen

Coming up with great pieces of Wisdom is hard. Like a good Proof of Work algorithm, Wisdom is hard to generate and easy to test. I know who Charlie Munger is, so I know he's probably met thousands of "wise people". All it would take would be for me to find just a single one that didn't read all the time to invalidate his advice. But I can't come up with any. I know who Jessica Livingston is and I know she's familiar with thousands of startups and I just need to find one who regrets spending so much time talking to users. But I can't think of any.
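To make the Proof of Work analogy concrete, here is a minimal toy sketch (illustrative parameters only, not any real system):

import { createHash } from "crypto";

// Toy proof of work: finding a valid nonce takes many guesses (hard to
// generate), but checking a given nonce takes a single hash (easy to test).
const isValid = (nonce: number): boolean =>
  createHash("sha256").update(`advice:${nonce}`).digest("hex").startsWith("0000");

let nonce = 0;
while (!isValid(nonce)) nonce++; // tens of thousands of guesses on average
console.log(nonce, isValid(nonce)); // verifying the answer is instant

Good Wisdom has the same shape: slow to distill from experience, but any reader can attempt to falsify it with a single counterexample.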

If you have great experience, I urge you to not put it out there in the form of FortuneCookie, but chew on it until you can form it into Wisdom. These are very valuable contributions to our common blockchain.

Notes

1 There are a lot of programmers who have 10,000 hours of experience that I respect a lot and enjoy working with, but the ones I study the most are the ones who stuck with it (and also just lucky enough to live long lives).


April 26, 2021 — I invented a new word: Logeracy1. I define it roughly as the ability to think in logarithms. It mirrors the word literacy.

Someone literate is fluent with reading and writing. Someone logerate is fluent with orders of magnitudes and the ubiquitous mathematical functions that dominate our universe.

Someone literate can take an idea and break it down into the correct symbols and words; someone logerate can take an idea and break it down into the correct classes and orders of magnitude.

Someone literate is fluent with terms like verb and noun and adjective. Someone logerate is fluent with terms like exponent and power law and base and factorial and black swan.

Someone literate can read an article and determine whether it makes sense grammatically. Someone logerate can read an article and determine whether it makes sense logarithmically.

Someone literate can read and write an address on the front of the envelope. Someone logerate can use the back of the envelope.

Illogeracy

The opposite of logeracy is illogeracy: the inability to think in logarithms. An illogerate person is one who frequently gets the orders of magnitude wrong.

An illogerate person may correctly understand the second and third terms of a three-term equation but get the first, time-dependent term wrong, and so get the whole thing wrong.

An illogerate person can be penny wise pound foolish.

An illogerate person treats all parts of an argument as equally important.

An illogerate person may mistake one part of a sine wave for a trend.

An illogerate person may be familiar with exponentials but unfamiliar with sigmoids.
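
To make that last point concrete, here is a small sketch (my illustration, with made-up numbers): a logistic curve with a ceiling is nearly indistinguishable from an exponential early on, which is exactly where the illogerate mistake happens.

```python
import math

def exponential(t: float) -> float:
    return math.exp(t)

def sigmoid(t: float) -> float:
    # Logistic growth toward a ceiling of 1000, starting near 1.
    return 1000 / (1 + 999 * math.exp(-t))

for t in range(0, 13, 2):
    print(t, round(exponential(t), 1), round(sigmoid(t), 1))
# Through t=4 the two curves are nearly identical; by t=10 the sigmoid
# is flattening toward its ceiling while the exponential keeps climbing.
```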

Measuring Logeracy

No country or organization measures logeracy yet2. I don't know which countries are the most logerate, but for now I would guess there will be a strong correlation between the engineering prowess of a country and its level of logeracy.

Countries have been measuring literacy for hundreds of years now, and the world has made great progress in reducing illiteracy. 200 years ago, ~90% of the world was illiterate. Now that's down to ~10%. If you break it down further by country, you'll see that in countries like Japan and the United States literacy is over 99%.

The upside of logeracy

Logeracy is how engineering works. Good engineers fluently and effortlessly work across scales. If we want to be an interplanetary species, we first must become a more logerate species.

Logeracy makes decision making simple and fast (figure out the classes of the options, and then the decision should be obvious).

You don't get wealthy without logeracy. An illogerate and his money are soon parted. Compound interest is a tool of the logerate. Money doesn't buy happiness, but the logarithm of money does.
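
A quick sketch of that last point (my illustration; the principal and rate are arbitrary): compound growth is exponential in time, so it plots as a straight line in log space, which is how the logerate see it.

```python
import math

principal = 1_000
rate = 0.07  # an assumed 7% annual return, purely illustrative

for years in (10, 20, 30, 40):
    value = principal * (1 + rate) ** years
    # value grows exponentially, but log10(value) grows linearly:
    print(years, round(value), round(math.log10(value), 2))

# Doubling time falls out of logarithms (the "rule of 72" approximates this):
print(math.log(2) / math.log(1 + rate))  # ~10.2 years
```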

Where is logeracy currently taught well?

My knowledge here is limited, but Computer Science students may be the ones taught logeracy best. We are taught it by a different name: CS students are repeatedly taught to think in Big O notation3. In Computer Science you are constantly working with phenomena across vastly different scales, so logeracy is critical if you want to be successful.

Perhaps it's electrical engineers, or astronomers, or aerospace engineers. These folks frequently work with vast scale differences, so logeracy is required.

In finance, 100% of successful early stage technology investors I know of are highly logerate.

It would be interesting to see logeracy rates across industries. Perhaps measuring that would lead to progress.

How much logeracy is the average person taught?

My high school chemistry teacher first exposed me to logeracy when she taught me Scientific Notation. That was probably the only real drilling I got in logeracy before getting into Computer Science. Scientific Notation is a handy notation and a great introduction to logeracy, but logeracy is so important that it probably deserves its own dedicated class in high school, where it is drilled repeatedly from many different perspectives.
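
For flavor, here is the kind of drill I mean, sketched in Python (a hypothetical exercise, not an actual curriculum): reduce each quantity to its order of magnitude before comparing.

```python
import math

def order_of_magnitude(x: float) -> int:
    """The exponent when x is written in scientific notation."""
    return math.floor(math.log10(abs(x)))

seconds_per_year = 60 * 60 * 24 * 365  # ~3.2e7
world_population = 8_000_000_000       # 8e9

print(order_of_magnitude(seconds_per_year))  # 7
print(order_of_magnitude(world_population))  # 9
# The two quantities differ by about 2 orders of magnitude; to a
# logerate reader that gap matters more than any leading digit.
```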

What are some good books for improving logeracy?

I would recommend "The Art of Doing Science and Engineering: Learning to Learn" by Hamming. That's maybe the most logerate book I've ever read. I also love Taleb's Incerto series (i.e. Fooled by Randomness, The Black Swan, Antifragile...).

Is logeracy fractal?

Yes4. Some industries, like engineering, demand logeracy. A randomly selected engineer is likely to be 10x+ more logerate than a randomly selected member of the general population. But what about the distribution of logeracy within a field? Only recently did it occur to me how fractal logeracy is. A surprising number of engineers I've worked with seem to compartmentalize their logerate thinking to their work and act illogerate in fields outside of their own. In Hamming's book I was surprised to read over and over again how very few engineers he worked with (at the world's top "logerate" organizations) operated with his level of logeracy. Logeracy seems fractal.

Can you be too logerate?

I don't think so. However, I do believe you can be out of balance. One needs to be linerate5 as well as logerate. We have many adages for people who focus too much on the dominant-in-time term of an equation and not enough on the linear but dominant-now terms. Adages like "head in the clouds", "crackpot", "ahead of her time". We also have common wisdom for how to avoid that trap: "a journey of a single step...", "do things that don't scale". So it is likely that being extremely logerate without lineracy is a real pitfall to be aware of.

What about the term Numeracy?

I read Innumeracy and Beyond Numeracy by Paulos over a decade ago6. I love those books (and it's been too long since I reread them).

Numeracy is a good term. Logeracy is a much better term. Someone logerate but innumerate often makes small mistakes. Someone numerate but illogerate often makes large mistakes.

Numeracy is sort of like knowing the letters of the alphabet. Knowing the letters is a necessary thing on the path to literacy, but not that useful by itself. Likewise, being numerate is a step to being logerate, but the real bang for your buck comes with logeracy.

The Illogeracy Epidemic

Literacy without logeracy is dangerous. My back-of-the-envelope guess is that over 80% of writers and editors in today's media are illogerate (or perhaps are just acting like it in public). 2020 was an eye-opening year for me. I had vastly underestimated how prevalent illogeracy is in our society. I am tired of talking about the pandemic, but to this day in the news I see a steady stream of "leaders" obliviously promoting their illogeracy, and walking around outside I see a huge percentage of my fellow citizens demonstrating the same. I would guess over 60% of America is currently illogerate. The funny thing is it may be correlated with education: if you are educated as a non-engineer, you are perhaps more likely to be illogerate than a high school dropout, because you rely too much on your literacy and are oblivious to your illogeracy. I am very interested to see data on rates of logeracy.

Let's get to 99%+ logeracy

I wrote my first post on Orders of Magnitudes nearly twelve years ago, back in 2009. At the time I didn't have a concise way to put it, so instead I advised "think in Orders of Magnitude". Now I have a better way to put it: become logerate. I wonder what wonderful things humankind will achieve when we have logeracy rates like our literacy rates.

Notes

1 I was very surprised to be the one to invent the word logeracy (proof). Only needed to change 2 letters in a popular word. All the TLDs, including the dot com, are still available.

2 As far as I can tell. If you know of population measures of logeracy please email me or send a pull request.

3 Even if you are familiar with Big O Notation, the orders of common functions table is a handy thing to periodically refresh on.

4 Is my guess, anyway.

5 Uh oh, another coinage.

6 In my recollection Innumeracy is too broad a book. This critique applies to 99% of books I read, Hamming's book being one of the exceptions.


March 30, 2021 — The CDC needs to move to Git. The CDC needs to move pretty much everything to Git. And they should do it with urgency. They should make it a priority to never again publish anything without a link to a Git repo. Not just papers, but also datasets and press releases. It doesn't matter what account or service the repos are published under; what matters is that every CDC publication needs a link to a backing Git repo.

How do you explain Git to a government leader?

Git is the "Global Information Tracker". It is software that does three things that anyone can understand 1) git makes lying hard 2) git makes sharing the truth easy 3) git makes fixing mistakes easy.

Why does the CDC need to move to Git?

Because the CDC's publications are currently full of misrepresentations, make it very hard to share the truth, and are full of hard-to-fix mistakes. Preprints, Weekly Reports, FAQs, press releases: all of these need links to their Git repos.

Who builds on Git?

The whole world now builds on Git. The CDC is far behind the times. Even Microsoft Windows, the biggest proprietary software project in the world, now builds on Git.

What is Git?

Git is an open source, very fast, very powerful piece of software originally created by Linus Torvalds (the same guy who created Linux) and now led by Junio C Hamano. It is built on content-addressable, hash-chained storage and ideas from information theory, the same principles that underpin blockchains.

How can the CDC rapidly move to Git?

Double Down on Internal Leaders

The CDC's GitHub account has 169 repos and 10 people (and I'm told many hundreds more Git users). I would immediately promote every person working on these repos. (There are probably one or two jokers in there but who cares, it won't matter, just promote them all). Give them everything they need to be successful. Give them raises. Tell them part of their new job is to get everything the CDC is involved with published to Git. This is probably really the only thing you need to do, and these people can lead it from there.

Change Funding Requirements

Provide a hard deadline announcing that you will stop all funding for any current grant recipient, researcher, or company doing business of any kind who isn't putting their sponsored work on a publicly available Git server and linking to it in all releases.

Education and Training

The CDC has 10,600 employees, so buying each of them $20 worth of great paper books on learning how to use Git would cost only $212,000. For the most part, these are highly educated people who are autodidacts and can probably learn enough with just some books and websites, but for those who learn better via courses or videos you can budget another $30 per person. Then budget to ensure everyone is paid for the time spent learning. We are still talking about far less than 1% of the CDC's annual budget.
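
Checking that arithmetic (everything here comes from the paragraph above except the CDC's annual budget, which is my rough assumption):

```python
employees = 10_600
books = 20 * employees    # $212,000
courses = 30 * employees  # $318,000
training = books + courses

assumed_annual_budget = 10_000_000_000  # ~$10 billion, an assumption
print(training, training / assumed_annual_budget)  # 530000, about 0.005% of budget
```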

Why the urgency?

Because the CDC not only failed at its mission by not stopping COVID, but it continues to mishandle it. Mistake after mistake. Miscommunication after miscommunication. I just shook my head looking over an amateur-hour report that they just put out. It's sad. Their number one priority should be to regain trust, and to do that they need to focus on the most trustworthy tool in information today: Git.

Specific Examples

I'm adding two very clear and specific examples to illustrate the problem. But my sense is the problem is pervasive.

#1 - COVID-19 in kids vs the Flu

For young children, especially children younger than 5 years old, the risk of serious complications is higher for flu compared with COVID-19. @ CDC

This statement appeared on the CDC's website for more than a year. As it should have. Every big dataset I've looked at agrees with this, from the very first COVID-19 data dump in February 2020.

I started actively sharing and quoting that CDC page in August 2021. Coincidentally or not, within days they removed that quote. There is no record of why they made the change. In fact the updated page misleadingly states "Page last reviewed: June 7, 2021", despite the August edit*.

To recap, they quietly reversed the most critical piece of contextual advice on how parents should think about COVID-19 in relation to their children. No record, no explanation. (In case you are wondering, the data has not changed, and the latest data aligns with the original statement which they removed. Perhaps the change was made for political reasons).

#2 - Changing the definition of vaccine

The second example is well documented elsewhere, but the CDC changed their online definition of the word "vaccine", again perhaps for political reasons. That sort of thing seems like the kind of change that maybe should have some audit trail behind it, no?

Conclusion

I used to take it for granted that we could trust the CDC. That made life easier. Health is so important but so, so complex. I would love to trust them again, and would have more confidence if they were using the best tools for trust we have.

Notes

  • Perhaps by "reviewed", they mean experts last reviewed it then, and all subsequent changes were made by an intern, on their own, which I also don't think would be a good explanation.


Introduction

March 11, 2021 — I have been a FitBit user for many years but did not know the story behind the company. I recently came across a podcast by Guy Raz called How I Built This. In this episode he interviews James Park, who tells the story of FitBit.

I loved the story so much but couldn't find a transcript, so I made the one below. Subtitles (and all mistakes) added by me.

Transcript of How I Built This with James Park

Guy: From NPR, It's How I Built This. A show about innovators, entrepreneurs, idealists, and the stories behind the movements. Here we go. I'm Guy Raz, and on the show today, how the Nintendo Wii inspired James Park to build a device and then a company that would have a huge and lasting influence on the health and fitness industry, Fitbit.

It's taken me a few weeks to get motivated about exercise. This whole pandemic thing just had me in a state of anxiety and it messed with my routine, but I was inspired to jump back into it about two weeks ago, after watching my 11-year-old proudly announce his daily step count recorded on his Fitbit. Now, fitness isn't all that important to him. He's 11. But the gamification of fitness, the idea that it could be fun to hit 5,000 or 10,000 steps a day, that's what matters.

This is the stroke of insight James Park had soon after he stood in line at a Best Buy in San Francisco to buy the brand new video game system called Nintendo Wii. And you'll hear James explain the story a bit later, but what he realized playing the Wii is that you could actually change human behavior around exercise if you turned it into a game. And the thing is, up until James Park and his co-founder Eric Friedman founded Fitbit in 2007, there really weren't any digital fitness trackers that were designed that way. It took a few years for James and Eric to gain traction, but by 2010, 2011, Fitbit took off. At one point, their fitness devices accounted for nearly 70% of the market. And by 2015, the company was valued at more than $10 billion. But that same year, the Apple Watch was released, and Fitbit and its market share got hammered. When I spoke to James Park a few days ago, he was in San Francisco, living in an Airbnb.

James: I'm in a temporary Airbnb because the place that I typically live in has been flooded out by a malfunctioning washing machine. I woke up at 1:00 AM

Guy: In the middle of this whole thing, flooded washing machine went... You woke up in the middle of the night and there was water everywhere?

James: I know. Amazing timing. Yeah. I woke up at 1:00 AM, and I just woke up to the sound of water gushing everywhere. It was coming through the ceiling. It was a massive flood.

Guy: Okay. So on top of sheltering in place and running his company remotely, James had to move out of his apartment in the middle of the night and then set up the microphone and gear we sent him for this interview. He started to tell us about his parents who immigrated from Korea when James was four. Back in Korea, his dad had been an electrical engineer and his mom was a nurse. But as with many immigrants, they had a hard time getting those same jobs in the US. So instead, his parents became small business owners.

Growing Up (1977)

James: The first conscious memory I have is, my parents actually owned a wig shop in downtown Cleveland.

Guy: Wow. How did they get into that? Was it just a way to earn a living?

James: Yeah. I think a way to earn a living, and the typical immigrant story is you have friends who live in the country that you're immigrating to. And I think my dad had a friend who worked in wig wholesaling. That's where he started out. They were selling wigs to people who lived in downtown Cleveland, African-Americans, mostly women. And I remember my mom, she'd spend a lot of time just looking through black fashion magazines, styling hair, beating them, et cetera.

Guy: Wow.

James: They had a wig shop, dry cleaners, a fish market. At one point we moved to Atlanta and they ran an ice cream shop there. We sold track suits, starter jackets, fitted baseball caps, thick gold chains.

Guy: Sort of hip-hop urban wear, right? Like FUBU, and stuff like that?

James: Yeah. Yeah. Yep. They sold FUBU jeans. Yep. I remember that. And they could switch from one genre or one type of business to another and really not skip a beat.

Guy: And were your parents, did they expect you to perform well at school? Was that just a given?

James: I think they had incredibly high expectations of me as a kid. I think I remember my mom telling me when I was pretty young, I don't know, five, six, seven, that she expected me to go to Harvard.

Guy: Wow.

James: Yeah. I don't think I quite knew what that meant back then, but you could tell that their expectations were pretty high from the very beginning.

Dropping out of Harvard (1998)

Guy: James did in fact meet his mom's expectations. He did go to Harvard. He put in three years studying computer science, but after his junior year, he got a summer internship at Morgan Stanley and then ended up deciding to start his own business. And though he had hoped to finish his college degree, he never went back.

James: I always had a little bit of a stubborn streak, and that was when I was trying to figure things out, trying to think of ideas. I thought there was a lot of opportunity, a lot of problems to be solved. I was also looking for a co-founder at the time. So those are the two critical ingredients, an idea and a co-founder.

Guy: This is 1998. This is not 2015 when these kinds of conversations seem so common. This was unusual in 1998 for a young person. It was just less common for a young person to just sort of say, "I'm going to look into a tech startup and try to find a co-founder and just take some time to think about these things." I would imagine your parents were nervous. I'd be nervous if my 20-year-old said to me, "I'm not going to go back to college and I don't really know what I'm going to do, but I'm just going to think about it."

James: Yeah. They were understandably pretty upset, angry even, I'd say. And the irony is that they probably took on even more incredible personal risk moving from Korea to the United States and running this series of businesses, which are commonly done, but not easy in themselves and pretty high risk. But I do understand obviously their perspective at the time.

The eCommerce Start Up

Guy: Okay. You decide you want to start something up, and I think you eventually landed on e-commerce, right?

James: Yeah. That was not a groundbreaking thing at the time. Obviously, Amazon was around, et cetera. A lot of e-commerce startups. But we settled on this idea of making e-commerce a lot more seamless and frictionless, and came up with this idea of an electronic wallet that would automatically make purchases for you. It would work with a lot of different e-commerce sites, and the goal there was that we would take a cut of every transaction.

Guy: Right. And what was the company called?

James: That was interesting. We originally named it Kapoof, that was how it was incorporated, until a lot of people said, "That might not be the best name for a company." We called it Kapoof because it sounded like magic, et cetera. Kapoof, things are done. Your transaction is completed by Kapoof.

Guy: It sounds like, "Kapoof, your money is gone."

James: Yeah, exactly.

Guy: "You've no more money."

James: Exactly. It was a time of crazy names like Yahoo, et cetera. But we decided to change our name at some point and we changed it to Epesi, which is Swahili for fast. And so that was the ultimate name of the company.

Raising a "few million" dollars

Guy: And you guys were actually able to raise a fair amount of money. Right?

James: We did. We ended up raising a few million dollars from some individuals and some from some venture capital firms as well. And we hired some people. We found a cool renovated firehouse. That was-

Guy: Nice. Nice

James: ... Really amazing place to hang out in for many, many, many hours of the day. And we hired up to, it was close to about 30 people.

Meeting his eventual FitBit cofounder

Guy: Wow. One super important thing that happened there was you met Eric Friedman, right? The guy that you would eventually launch Fitbit with.

James: I did. And that's probably one of the more fortunate turns in my life. Eric, we didn't know each other at all before the company, Epesi. He was actually just graduating from Yale in computer science. And I interviewed him. I liked him a lot. And he ended up ultimately becoming the first employee at the company.

Guy: Okay. So you hire Eric, and I think the company lasted 18 months, or a little less than two years.

James: Yeah. About two years, and a lot of ups and downs during that period. If I had to think back, I would attribute two-thirds of the challenges and problems we faced as a business to myself, just because I had never managed people. I didn't really know how to run a business, even if it was only the technology side. And at some point the dot-com crash happened. And all of our potential customers, the whole industry, the whole economy started taking a downturn.

Dot Com Bust (2001)

Guy: So this company spirals out in 2001. And when that happened, did you think, "Okay, I should go back to college now and finish my degree." Or, "I got to start something else." Where was your head at that point?

James: Well, it was a really challenging personal time for me. Towards the end of the company, we obviously had to lay off most of the company, and trying to do it in a way that was compassionate was really, really difficult. I don't think the thought of entering school or going back to school popped back into my head at all. And I don't know why. I think it was because, despite this very emotional failure, I knew this was what I wanted to do. I had a firm conviction about that. And so I knew I wasn't going to go back.

Guy: So what'd you do?

Getting Real Jobs

James: We all ended up working at the same place actually. It was a company, a pretty large company called Dun & Bradstreet at the time. Very stable company. And we were all fortunate to be able to find work there as engineers.

Guy: So daytime working at Dun & Bradstreet, and then what? At night sitting around just-

James: Brainstorming. Yeah. We'd go into work during the daytime and then we'd come home in the evenings, code different things, try different things out. So it was pretty intense. I think, in terms of the number of hours, I don't think anything changed from our first startup to trying to figure this next one out.

Startup #2 (2002)

Guy: And before too long, you decide to do another startup. This time, with Eric Friedman from your previous company, and then another guy named Gokhan Kutlu. I think this was what? 2003, 2004?

James: Yeah. This was about 2002 actually.

Guy: Okay. And this time the startup was a photo editing platform, sharing platform. What was it called?

James: The company's name at the time was HeyPix and the product itself was called electric shoe box, because a lot of people put their old photos in shoe boxes and this was just going to be a digital version.

Guy: Yes. I still have them in shoe boxes.

James: You'll digitize them probably.

Guy: I should. I know.

James: Yeah. And so electric shoe box, which is going to be a digital version of your shoe box.

Guy: And what could you do?

James: Well, digital cameras were coming about back then. It still wasn't easy to connect them, upload photos. It was getting easier, but nowhere near what it is today, obviously. The whole idea of electric shoe box was to make the whole process of getting photos off your camera a lot easier. And more importantly, we wanted to make the process of sharing these photos with your friends and family a lot easier.

Guy: So did you raise money for the product, for the electric shoe box?

James: We did. We ended up raising money primarily from one of my friends from middle school who was a mutual fund manager in Boston. And so, he put in a bit of money, not a lot. I think about, at least for him, it was about 100,000. And we had a bunch of savings ourselves that we were going to use. And in anticipation, I also opened up a few more credit cards as well.

Guy: And it was just really the three of you, sitting at your computers and just tapping the keys all night?

James: You pretty much nailed it. I mean, all we did was, we would wake up in the morning, walk over to the third bedroom and just start typing away for 12 hours. We'd take meal breaks. I remember Eric did a lot of cooking. So we'd eat our dinners on some TV stands watching TV. That was a good break for us, watching Seinfeld, and then go to bed and then repeat it the following day.

Guy: Wow. All right. So you come up with this product, and by the way, how are you going to make money off of this thing? This is a free service. How were you going to pay for it?

James: I guess, it would be called freemium software. It would be free for a period of time, and the trial period would end and then you'd have to submit your credit card information to continue using the software.

Guy: Got it. Got it.

The pivotal $300 Press Release

James: And so, our primary goal was making sure that a lot of people knew about the software. So we put it on shareware sites, et cetera. And then we spent a lot of time debating, "Should we send out a press release?" And I remember it was a huge debate because sending out a press release was going to be about $300. And that was the level of expense that required a vigorous debate at the time. So we said, "You know what? Without getting the product known, how are we going to be successful?" So we wrote up a press release and we put it out. And actually it was probably the most pivotal decision we ever made in that company's history.

Guy: Because?

James: The first email came in a few hours later. I think the second one came in a day later. But we got two emails, one from CNET, which is a huge digital publishing company. And then we got another email from Yahoo saying, "Hey, we just heard about this launch of this software product. And we'd like to talk to you guys more about it."

Guy: Wow.

James: Exactly. This was coming from their corporate development arms, which typically deals with M&A, with buying, buying companies.

James: Yeah, exactly. We were like, "Whoa, this is magic. How did this happen?"

Bought by CNET for millions (2005)

Guy: 2005, it gets purchased by CNET. They make an offer to buy this company, buy this product from you guys and you sell it to CNET. Was that life-changing money? Did that mean that you never had to work again?

James: It was definitely a good acquisition for all of us at the time. Remember we were three guys working out of our apartments. I was at the time about $40,000 in credit card debt as well. We were down to some desperate times and we were negotiating numbers and they threw out a number which was, their first offer was 4 million, and we were like, "Whoa, that's amazing."

Guy: Wow.

James: Like, "God, I can't believe we built something that's worth this much at the time." We were just stunned. And then, we quickly got to, "Okay, how do we negotiate something better?"

Guy: So you sell your company to CNET in 2005 and you've got some money in your pocket. And you move to San Francisco to work for CNET. Did you enjoy it? I mean, it was probably a huge company at this point, right?

James: It was a huge company, but I think the moment, at least for me, that I moved to San Francisco, I instantly fell in love with the city. And CNET, even though it was a larger company, I actually found it to be an amazing time. I learned a lot. I got some management training. I ended up managing a small team of people. Learned a lot about how technology scales to millions and millions of users. How you market products. I really enjoyed my experience there. I think it was pretty formative.

Nintendo Wii and Leaving CNET (2007)

Guy: Why did you leave CNET?

James: We left CNET just because of, I guess you could call it, a bolt of lightning in some ways. It was December of 2006 and Nintendo had just announced the Nintendo Wii. And I remember coming home, putting it together. At the time Nintendo had come up with this really innovative control system, using motion sensors, accelerometers, to serve as inputs into a game. And after using it, especially Wii Sports, which was a sports game, I thought, "Wow, this is incredible. This is amazing. This is magical. You can use sensors in this way. You can use it to bring people together." Particularly with Wii Sports, it was a way of getting people active, of getting them moving together. And I was just blown away by this whole idea, really excited about it. I couldn't stop thinking about it.

James: And after some time playing Wii Sports and the Wii and a lot of other games, I thought, "This is great. It's in my living room, but what if I want to take this outside of the living room?" And I kept thinking about that idea, like-

Guy: "How do you take Wii Fit outside?"

James: Outside. Exactly.

Guy: Wow.

The Genesis of Fitbit (2007)

James: I couldn't let it go. And I ultimately ended up calling up Eric and we started talking about this idea for hours and hours and we couldn't stop talking about it. It's like, "How do we capture this magic and make it more portable? How do we give it to people 24-7?" And that was really the Genesis of Fitbit.

Guy: So the technology, I mean, pedometers have been around forever. Was that where your head was going, or thinking, "Okay, maybe we just create an electronic pedometer?" But I think even electronic pedometers were around in 2007, right?

Existing Pedometer

James: Yeah. Pedometers were definitely around back then. Actually, they had been around for probably 100 years. One of the things though is that, they weren't something that people would want to use or to wear. They were very big. They were pretty ugly. They looked like medical devices.

Guy: A lot of senior citizens used them.

James: Yeah. They weren't a very aspirational device. It wasn't something that people were excited to use. And so, I think that's why that whole category of device just never really had any innovation. And there were also much higher-end devices. You could buy much fancier running watches, like GPS watches, et cetera. But those were really expensive for people. They were $300, $400 at the time.

Guy: So you had this idea, and that means you had to raise money. And this is going to be the third time now that you've had to do that for a business. And I think I read that you raised $400,000 to launch this. I mean, I don't know a lot about hardware, but that doesn't seem like it was going to take you very far in building a physical product.

James: As we quickly found out, yes, we had grossly underestimated the cost of taking this to market.

Guy: And that initial amount of money, how far did it get you into actually conceiving of what this product was going to be?

James: It got us to a prototype, some rudimentary software, some industrial design concepts, and some models.

Guy: What did the prototype look like? Did it look like a Fitbit?

James: It looked absolutely nothing like a Fitbit. There were two things: there was an actual, somewhat working prototype, and then there was an industrial design model.

Guy: Which was a piece of plastic.

James: Plastic, and metal that was supposed to look like the ultimate product. And so, that actually looked really, really nice.

Guy: But it didn't work?

James: Yeah. It was totally nonfunctional. And we'd always have to tell people before showing, "This doesn't work here." Because they get all excited looking at the model. "No, no, no. That doesn't work." The thing that actually worked looked like something that came out of a garage, literally.

Guy: What did it look like?

The prototype

James: It was a rectangular circuit board, a little bit smaller than your... And it had a motion sensor, it had a radio, it had a microcontroller, which was the brains of the product. And it had a rudimentary case, which was a balsa wood box.

Guy: Wow. So you would take to investors, a circuit board and a balsa wood box as your prototype?

James: Yeah. That was the prototype. And actually that was what we had demoed. When we first announced the company, that was the prototype that was actually being used at the announcement.

Guy: Wow. I mean, how did you even get it to that point? Because you guys are both software engineers, how did you develop a physical product that even such a crude prototype could track movement? Did you have other people help you do that?

James: Our big task was to find the right people who could help us. I knew the founder of a really great industrial design firm in San Francisco called New Deal Design. His name is Gadi Amit. And then on the algorithm side, because it was going to take a lot of sophisticated algorithms to translate this motion data into actual data that users would be able to understand, I ended up asking my best friend from college, because he was in grad school at Harvard at the time. And he said, "Wait, I think I might know somebody." And it ended up being his teaching fellow; his name was Shelton. And we talked and I was like, "Wow, this guy is super smart. We need to get him working on the algorithms." So he ended up working on the side while doing his PhD, helping us out with a lot of the software.

Guy: I mean, you leave CNET in 2007, and you've got $400,000 to come up with a prototype, and you quickly run out of that. So it's 2008, and you're trying to raise money. How much did you raise?

Raising $2M (2008)

James: I think our first round was about $2 million.

Guy: Which was not going to take you that far if you wanted to develop a physical product that was super sophisticated, a piece of hardware.

James: We thought we could do it. We thought we knew a little bit more about the hardware business. We put together another business plan budget. It was actually a pretty challenging time to raise money as well because-

Guy: Oh, with the financial crisis. Yeah.

James: Exactly. It was the fall of 2008, when we were trying to raise money. One of the, I guess the good and bad things about VCs is, the good thing about VCs is they're incredibly healthy people. They're super fit. But it also made it difficult for a lot of them to understand the value of the product because what we were trying to do was, it wasn't a product meant for super athletic people, it was really meant to help normal people become more active, become healthier, et cetera. And it was hard for a lot of them to grasp why that was valuable. They'd ask, "Well, did it do X or did it do Y and did it do Z?" And we'd say, "No, it doesn't do any of that." And so it was very difficult for a lot of these super-fit VCs to understand the value of the product, even though a lot of them claim they don't try to put their own bias on these products. It's naturally human to do that.

Guy: And did you know right away that this was going to be... I mean, now Fitbits are watches mainly. They're on your wrist. But at that time, you were thinking that this was just going to be something you would clip to your clothing?

James: Yeah. Something to clip to your clothing, for men. And then what we found out in talking to a lot of women was that they wanted to tuck it away somewhere hidden. They didn't want people to see it. And we said, "Okay, where would you want to put it?" And they said, "Well, a lot of our pants don't have pockets, so it can't be in our pocket." And so the preferred place was actually on their bra. So a lot of the physical design that we had to think about in the early days was how to come up with a product that would be very slim, slender, and clip to people's bras.

Guy: And hidden.

James: And hidden, and clipped to bras pretty easily.

Guy: And by the way, how did you come up with the name Fitbit?

Buying FitBit.com from a Russian for $2,000

James: It's never easy to name a company, and it's even more challenging just because of domain names. That's typically a big part of the limiting factor in naming a company well. And so, we would spend hours and hours and days just going through different permutations of names, and some awful ones as well. At some point we got onto a fruit theme. So we were thinking like Fitberry or Berryfit or Fitcado. Just some really awful names.

Guy: The Fitcado.

James: The Fitcado. Yes. History might've turned out a lot differently for sure. I was just taking a nap in my office one afternoon. I think I was actually napping on the rug because I was so tired. And I woke up and it just hit me, it was Fitbit. And the next challenge was actually the domain name. The domain name was not available. And it was owned by a guy in Russia. And I'm like, "Oh my god, how are we going to get this domain name? We'll just email the guy and see what happens." And he said, "Well, how much are you willing to offer?" And I said, "Oh god, I don't know. How about 1,000 bucks?" And he's like, "Oof, how about 10,000?" And I said, "Oh, I don't know. That sounds like a lot. How about 2,000?" And he's like, "Oh, okay. 2,000, deal." I think it was literally two or three emails that we sent back and forth in this negotiation.

Guy: Probably the best $2,000 you ever spent in your life, except for the 300 you spent on the press release a couple years earlier.

James: Yeah, yeah. Definitely a good return.

Guy: You've probably spent many millions of dollars on other things in your life that were not as good of a deal as that $2,000.

James: Yeah. It's tens of thousands on naming consultants and focus groups and trademark searches and all of that. It's kind of funny.

Guy: Hey, as they say, small companies, small problems, big company, big problems.

James: Exactly.

Doing a consumer hardware startup before Kickstarter

Guy: So where do you begin? I mean, you got to make it, you got to find a factory, you got to find designers. Where do you go?

James: Very good question. We obviously had zero connections. The challenge though, was not actually the connections to the manufacturers, but finding a manufacturer who we could actually convince to build this product because we didn't have a background in hardware. And so, would they actually want to work with us? That was the biggest concern at the time.

Guy: So how did you find them?

James: We went out to China. We went out to Singapore. And we were never going to be able to get the Foxconns.

Guy: You had to go to a smaller place.

James: We had to go to a smaller place, who'd be more nimble, more flexible, who'd want to take a financial risk. And we finally found a great manufacturer based in Singapore called Racer Technologies. And the good thing is actually, it was the best of all worlds, the headquarters was in Singapore. Most of the management team and the engineering staff was in Singapore, but they had manufacturing facilities that were in Indonesia. The labor there was going to be lower cost than in Singapore.

Guy: All right. So 2008, you've got the name Fitbit, you go to TechCrunch50 to present, to unveil this product. And what was the product that you were offering? Well, you said, "All right, we've got to think of the Fitbit and it does this." What did you say it did at that point?

James: Our pitch to the crowd at TechCrunch, and ultimately to our consumer was that, it was a product that would track your steps, distance, calories, and how much you slept and would answer some basic questions about your health, "Was I active enough today? Did I get enough sleep? What do I need to do to lose weight," et cetera. And one of the more important aspects was this idea of a community as well. "Join other people who own Fitbits, your friends and family, and you could compete with each other." And it was all wireless. You didn't really have to do anything. All you'd have to do is wear this device, don't even think about it, and all this magic would happen. That was the promise of Fitbit at the time.

Guy: There was a lot of excitement there, but I'm wondering, were you nervous to do these presentations? Did you have to prepare like crazy, or did you just find your ability to be this person you had to be on stage when you got up there?

James: Yeah, I think there was no other choice. It was just something we had to do. And I think-

Guy: Are you better at it than Eric, or is Eric better at it than you?

James: I think we're both good in our different ways. It just fell upon me. I don't even know how we decide those things. But actually, what was running through our minds, was not what we were going to say and how we're going to say it, but whether the demo would actually work on stage, because again, it was a little sketchy. It was still very early. It was still in the wooden box.

Guy: In the balsa wood box.

James: Balsa wood box phase. So we were just worried that the demo would just fail or crash.

Guy: But it worked.

The TechCrunch pitch and 2,000 preorders

James: It worked, and actually it did crash in the middle of the presentation, because the whole demo was about me walking on stage while the device would be collecting stats. And at one point I would turn to Eric and say, "Hey, Eric, why don't you refresh the page and show that all the stats have been uploaded," magically, over this wireless connection. And so, the demo actually crashed while I was talking, and Eric was fiercely trying to reboot his computer during this period and I didn't even know anything about it. But ultimately, the demo did work. And so, to many people, it seemed like magic. Literally, people started clapping. It was really amazing.

James: Originally, right before TechCrunch, Eric and I, we made just a verbal bet. "How many pre-orders are we going to get after this conference when we announce and make the company public?" And I think Eric said, "I think we'll get like five pre-orders." So it's like, "The device isn't even available. People are going to have to give us their credit card information." And I said, "Nah, you know what? I'm not as pessimistic. I think there's going to be like 10, 15, 20." And so we got off stage, and by the end of the day, we had about 2,000 pre-orders.

Guy: Wow. When we come back in just a moment, James and Eric have a prototype in the balsa wood box and they don't exactly know how they are going to get from there to filling thousands of pre-orders. But a lot of people are expecting them in time for Christmas. Stay with us, I'm Guy Raz, and you're listening to How I Built This from NPR.


Guy: Hey, welcome back to How I Built This from NPR, I'm Guy Raz. So it's 2008, and James and his co-founder Eric Friedman show off their Fitbit prototype at TechCrunch, and it makes a huge splash. The problem is, they have no finished product. They haven't even figured out how they're going to make it, and pre-orders are pouring in.

Being Open and Transparent

James: And they just kept coming in. It was crazy. We were like, "Oh my god, it's not just dozens of these units we have to build, it's now thousands, and more and more every day." And so we were still thinking Christmas of that year that we were going to start shipping out units, and it rapidly became clear to us that we weren't going to make Christmas. And so, we're thinking, "Okay, how do we keep all these people happy while we pull this off?" So this was before Kickstarter and Indiegogo and all that. We had to improvise. We were like, "Okay, why don't we just blog about the whole process and just be very open and transparent about it." So we started a blog, and I wrote maybe weekly updates on how things were going, challenges and delays that we were facing.

James: And I was really surprised, actually, it worked. It made people understand what we were going through. They're literally seeing the thing being made, the sausage being made behind the scenes. And I think that kept people really engaged throughout the process.

Guy: So you have basically a bunch of contractors and freelancers and you guys are going back and forth to Asia. You've got people working on the software to transmit the data to the web. You've got some people working on the hardware, presumably in Singapore, trying to shrink down the motherboard to something that is two inches by one half inch. And were you just constantly running into failures? You would think that, "Oh, here it is." And then somebody would hit the go button and then it would just fizzle out, it wouldn't work?

James: Yeah. I can't even enumerate the number of challenges with the product that we had.

Guy: Please start.

Manufacturing in Asia

James: In some ways a lot of people, I think when you think about hardware, it's like, "Oh, I'll find a manufacturer in China. I'll throw over a design."

Guy: Yeah, right. No problem.

James: "They'll just run with it."

Guy: And then, "Just send me the bill," and then it's done.

James: And they'll just crank out thousands, tens of thousands of this. But that's never-

Guy: And that works if it's a suitcase, something they've done before. It works if it's that thing.

James: If it's that thing or something that's very similar to something that they've built before.

Guy: Right.

James: Well, that's a different story than this thing that this manufacturer had never built before.

Guy: So they would send you things and say, "Yep, we got it." And then you would get it and it sucked. It just didn't work.

James: Yeah. We wouldn't wait for them to send it. I mean, either myself or Eric would be in Indonesia or Singapore at any given time. We'd trade off different weeks. And we were out there on the production lines pretty much inspecting every part of the process.

Guy: But were you convinced this thing was going to work or did you have doubt?

James: I was absolutely convinced that it was going to happen.

Guy: You had no doubts that this-

James: I had no doubts because we were getting proof every day that this was something that was going to be big. And I think the first evidence of that was at TechCrunch, where we had 2,000 pre-orders, and we were getting pre-orders every day. I think by the summertime, we had about 25,000 pre-orders at $100 per unit. That's a fair amount of revenue if we could ship these units.

Guy: And how much was it going to cost you to make each unit?

James: That was a very good question. We didn't know that. Hopefully, under $100.

Guy: You didn't know? You were selling them for $100, but you didn't know how much it was going to cost you.

James: We had a sense of the bill of materials. I think we were trying to shoot for a gross margin of about 50%. So we're targeting the full cost of the product, including shipping, et cetera, being no more than $50. That's what we were targeting.

Guy: Which is a lot. That's high. It's a high cost.

James: It's a high cost, but that was a cost at which we felt we could sustain ourselves as a business.

Guy: How did you and Eric manage your relationship and friendship? I mean, with the stress of this delay and inability to meet demand and all this, was there tension at all between the two of you, or were you guys totally on the same page?

Importance of cofounders

James: I don't think there was that much tension. I mean, a lot of stress, but not tension. I think we trusted in our ability to help each other out. And there were periods when either of us would be pretty down on the company and the product. And luckily, we weren't both down at the same time. And that's why it helps, I think, to have a co-founder.

Guy: So there were times where you were really down and he could give you a pep talk and...

James: Exactly. And then I'd wonder why he wasn't down. And there were some pretty dark times right before we shipped. I remember it was months before we thought we could finally get the first unit off the production line. And I was sitting in my hotel room in Singapore, and I was testing out one of the prototype builds that Racer had produced, and the radio range was not good at all. It was supposed to have a range-

Hardware is Hard

Guy: 10 feet or 15 feet?

James: That was the hope that it would have 15 to 20 feet range, but the range was actually two inches.

Guy: Oh god. Wait, so the antenna in the device had a two inch range.

James: Yeah. It would only work at two inches. And I'm thinking, we've got to ship this holiday season. I've got tens of thousands of these people waiting.

Guy: Oh god.

James: And so, I'm just freaking out in my hotel room.

Guy: You might as well have a cord and just plug it in.

James: Exactly, exactly. I couldn't sleep that night obviously. And I took the unit apart. I had a multimeter and I was measuring different voltages and currents. And what I realized was, huh, the cable for the display was flexible and long enough that maybe it was actually drooping down and touching the antenna and that was causing-

Guy: That was creating interference.

James: Creating interference. And I could see that when you put the whole thing together, it might droop down. And I thought, "Okay, how do I create a shim that would prop up the antenna?" So I went to the bathroom, grabbed some toilet paper, rolled a little bit of it in a ball and stuffed it between the antenna and the display cable, put the device back together. And it started working. The range was great.

Guy: Wow. So you had to separate one wire from the antenna and that was it, with toilet paper?

James: With toilet paper. Yeah, that was it.

Guy: Wow.

James: And I still couldn't sleep. So as early as possible the following morning, I raced into our manufacturer and said, "Okay, I think I found the problem," but obviously toilet paper is not a scalable, high-volume solution. So they went back and figured out how they could make this manufacturable. They ended up creating these little tiny die-cut pieces of rubber that they would glue onto the circuit board to keep the antenna away from the display cable.

Guy: Wow. Wow. So that was basically just inserting something in there, and then it worked?

James: Yeah, it wasn't exactly duct tape, but that was the equivalent of duct tape.

Guy: It was pretty close.

James: It was pretty close. Yeah.

Guy: So you guys launched this product at Christmas of 2009, and it was a pretty successful product launch. You had 25,000 orders and it sounds like you're off to the races. But I guess even with this success, when you went out to raise money, this is 2010, were investors more excited or was it still a challenge to get more investors in?

James: It was still a challenge. And at the time, it wasn't, "Okay, I guess you guys are having some success, consumers are buying the product, et cetera." They congratulated us on that, but they were very scared of hardware businesses. I think there had been a lot of really high profile failures in the consumer electronics industry. And so, it was very difficult for us to raise money. I remember, we had a spreadsheet of target VCs. I think there were 40 names that we put on that list. And literally, we went to number 40 before we were able to raise money.

Raising money from VCs: 40 nos.

Guy: And just giving the same pitch, again and again, answering the same questions?

James: Same pitch. We're in San Francisco, driving down 101 to Sand Hill Road, constantly giving the same pitch to 40 VCs. That's probably the one thing I didn't like about that whole time period: I hate giving the same pitch over and over and hearing the same questions and same objections, et cetera. That was not a fun or stimulating time for me.

Raising $8M (2010)

Guy: All right. Eventually, the 40th investor does decide to give you some money. I think you raised about $8 million. And at this point, were you able to then have a proper office and a staff? Were you able to begin to recruit real full-time engineers and developers and people like that?

James: We were. We did that with the round that was right after our first $2 million institutional round. We hired a bunch of customer support personnel. I interviewed and hired our first head of sales. I interviewed and hired someone to finally run all of our manufacturing and operations, which was still a job that I was doing. I was still issuing all the POs and managing the inventory. And I think we were really fortunate because the early management team that we hired in those days pretty much made it up to and past our IPO, which I think rarely happens.

Building a community

Guy: It's so crazy to think about it now. But I think early on, with the Fitbit, the idea was to be part of a bigger community. Like the data from your activity would be available. You would just go to a site and you could see it and you could see everybody else's because the idea was, "We're all part of this together." But I think early on, some users were tracking sex. And when you started to hear about these things was your reaction like, "Oh my god, I never even thought about this being a privacy thing. I always thought that people would just want to share stuff."

James: Yeah. This was still the early days of sharing things like that. And I found out about it because I saw this tweet about someone going, "Hey, if you do this Google search, you'll see," because Google was indexing all our public pages where people were logging things they had made public. "You could find out all the sexual activities that people are logging on Fitbit." And I saw that, I'm like, "Oh my god, this is not good." That ended up being the first real PR crisis for the company. And it was happening over the 4th of July weekend. So I had to call an emergency board meeting. We had to scramble to delete all that stuff, turn everything private.

Guy: Because the default setting, initially, when you got to Fitbit was, it's not private, it's open. Because the idea was, it was going to be a big community of people trying to get fit.

James: Yeah. I mean, we made a lot of things private by default. We made sure that people's weight was private because we thought that would be sensitive, but with people's activities, we didn't think there was any harm, and we just didn't realize that people would start logging that.

Guy: And just to be clear, people who logged sexual activity, this was not a category that you offered up, it was just people were voluntarily deciding to just log that as one of their activities.

James: Well, it was a category, but it wasn't something that we had realized. We use this database from the government that was thousands of different activities that people would do.

Guy: Oh, I see.

James: And so, it was an option. We just didn't think people would log that.

Guy: You were just naive about that.

James: We were naive. We were like, "Okay, this is a government database of activities. It must be fine." That was quite a shock and a wake up call for us.

$76M in revenue (2012)

Guy: Fitbit for the first couple of years was, ah, still a clip. Mainly a clip. And then, I think really 2011. You released the first product Christmas of 2009, you've got 2010. By 2011, the business just exploded, 5X growth from 2011 to 2012. You went from $15 million in revenue to $76 million in revenue. What was going on? Was it just this self-generating phenomenon? Were you surprised by it? Were you investing in marketing? Was it just earned media, just people reporting on it? What was going on?

James: I think, the primary reason is, because we had baked in this social element, this community element into it from the very beginning, it ended up being a very viral product. So one family member would get it, and to really realize the potential, the community aspect and the competitive aspect, you had to have someone else as well. So they'd either buy it for their spouse or their parents and they would start competing and then they'd buy it for their friends and they'd try to get their friends to buy the product.

Guy: So they could each see how many steps you were... Because I remember this, I remember this at NPR. People were wearing Fitbits and they were talking, and I think people were even encouraged to get Fitbits.

James: Exactly. It was very driven by word of mouth. And this viral spread was a huge driver of our growth in those days.

Worrying about Competition (2013)

Guy: I think by 2013, you had some competitors coming in. Nike was making one and Jawbone was making one. I mean, I remember going to the TED Conference in 2013 and getting a Jawbone in my gift bag. Were you worried about the competition at that point, or not really?

James: Yeah. At that time I think people were looking at the success and there was even a name coined for the whole category, which is quantified self. "How do I use sensors, et cetera, to measure everything that I'm doing in my entire life?" And so that attracted a lot of the competition that you mentioned. And I'd have to say the competitive aspect was definitely worrying at the time, especially with Nike and Jawbone.

Guy: Because they're so huge.

James: They are huge. I mean, Nike, obviously, is a multi-billion dollar, multi-national company with a lot of media dollars. I remember when they announced the FuelBand, they had all these celebrity athletes at the announcement and we're like, "Oh god, that's insane."

Guy: And yet, by 2014 you had 67% of the activity tracking marketplace. I mean, Fitbit was just totally dominating the marketplace. I mean, were you and Eric doing victory laps and high-fiving each other and thinking back to all those doubters? I mean, what was going on?

James: I think we were still pretty, I don't know if scared is the right word. I think we were still very, very cautious. Nothing was guaranteed. There was a lot of competition that was emerging. We still had a lot of internal challenges in the business, scaling production, scaling the company, et cetera. Again, a lot of fires for us to be solving on a day-to-day basis. And I remember occasionally we'd always check in and say, "Hey, when do you think we'll know we're going to make it?" And we'd say, "I think we'll know in six months." And we kept saying that every six months. It was pretty much an ongoing thing, pretty much up to the IPO.

The IPO (2015)

Guy: 2015 was a huge turning point for you in many ways. You go public. I think your market cap, I read at a certain point, reached $10 billion. That year, 2015, the Apple Watch is released and they stopped selling Fitbit in their stores. At the time you were quoted saying, "I'm not really worried about this because it's a huge market. It's a $200 billion market. The Apple Watch is just crammed with a bunch of stuff, or smartwatches are crammed with a bunch of stuff. And what we're doing is something simpler." Was that what you were saying publicly because, I don't know, you felt like you should be saying that, or did you really think that was true, that the Apple Watch wouldn't actually have much of an impact?

The Apple Watch (2015)

James: We were definitely concerned with Apple. I mean, this was the preeminent technology company, and especially a hardware company at the time, with an amazing brand. We had faced off against Philips and Nike and Jawbone, which were, in their own right, very big competitors, especially Nike. We did feel very strongly that our product had very clear advantages. It was a simpler product. If you looked at the Apple Watch that was announced at that time, I think everyone will admit, maybe even Apple, that it was a product that didn't quite know what it was supposed to be used for. With the launch of the first Apple Watch, I don't really think that that had an actual impact on the trajectory of the business. It wasn't the product that it would later become. And the industry wasn't where it would eventually evolve either.

Guy: I mean, but eventually, the industry did change. I mean, Apple Watch got really popular. I think, by 2016, Fitbit's stock had dropped by 75% over the course of a year. I mean, you and Eric were running a publicly traded company and the stock was just tumbling. What did you think? I mean, I can't imagine that was pleasant for you.

Running a Public Company

James: No, it was definitely a stressful period. And you could argue, well, maybe we shouldn't even have been valued at 10 billion in the first place. And I think a lot of times it's a question of perception. If we had never hit that 10 billion and we had steadily grown into the 2 billion, I think people's perceptions and just psychology about the whole situation would have been different than going to 10 and falling to two. And it was a very challenging period because as a private company, despite challenges, your valuation doesn't change very often. It only changes when you raise money, which could happen once a year, once every two years. So if you hit a bump in the road, your employees don't really feel it.

James: We had a product recall where, if we had been a public company, our valuation would have plummeted immediately, but at the time we were private. So we just told the employees, "Hey, look, this is the challenge. It's pretty serious, but here are the steps that we're going to take to get through it." And everyone rallied together. But when you're being measured every day in real-time-

Guy: By the stock price.

James: ... By the stock price, you're not really given a lot of breathing room to try to fix things.

On Critical Feedback

Guy: Even though you were introducing new products, revenue was declining every year from the time you went public. And I read an article about something that you did in 2017. And I'm really just curious to get your take on it, because I actually think it's really courageous, but also probably super stressful and difficult, which is, you asked your employees to submit an evaluation of the company and of you. And then you sat in front of them to hear the results of this evaluation and it wasn't good. You even had some employees who wrote letters to the board asking that you be removed as CEO. I can't imagine that was easy for you to hear.

James: I don't know if I've heard that particular feedback directly, but clearly the survey results were not great. I jokingly think I'm probably used to hearing very critical feedback because of my parents. I don't think there was a moment where they were truly happy with anything that I did. I remember even when I took the SATs and I got my score back, it was a pretty good score, but my dad just honed in on the areas where I clearly had not done well. I don't think I have a huge ego. I mean, I do have an ego, I think it's human to have one, but my primary focus was, "How do I get things back on track?"

Guy: You had, there was a quote from somebody in an article. It was an anonymous quote. It said, "At a certain point, were we focused on the right things? We had the ability, and have the ability, to know a lot about our users, which you do, but our users don't want to be told what they did." In other words, they don't want to be told, "Hey, you exercised, you did 10 steps today." They want to be told what to do. Like how to get better. And the quote was, "This was the greatest missed opportunity." And I know you've made a pivot since then, but was that a fair assessment at the time in 2017, that you were too focused on telling people what they've accomplished rather than telling them what they need to do?

James: Yeah. I think there are ultimately two big things that were driving the headwinds in the business. First of all, I think we were really behind in launching a competitive smartwatch at the time. People were-

Guy: Competitive to...

James: Competitive to Apple. It was clear that the industry, consumers were moving to that category and we were seeing that in our sales. So in a very short period of time, our tracker business fell by $800 million in revenue. And at the time, at our peak, we were doing about 2.1 billion in revenue.

Guy: Wow.

James: So we had an $800 million hole, and we finally launched our smartwatch, but it was only barely sufficient to fill that hole. We hadn't transformed the software into giving people guidance and advice. And it also ties to our failure at the time to quickly diversify our revenue stream beyond just hardware to a services business that-

Guy: Like a subscription.

The Subscription Service

James: Exactly. We were so focused on growing our hardware business, because that was what was bringing in the money, that was what the retailers wanted, et cetera. And one of the mistakes I made was not setting aside enough time, enough focus, to build the subscription part of the business that actually answered those pivotal questions for our users.

Guy: As many, many companies find themselves in this situation, successful companies that have a successful legacy product. And this is crazy to talk about, but a legacy product for your company, which is only 10 or 12 years old. You could argue that the Fitbit product is your legacy product, right? And that, as any company with a legacy product realizes, they've got to make a pivot. Like for American Express, it was traveller's checks for 100 years. That's how they made their money. And they had to pivot into other things, like travel services and credit cards and so on. It sounds like in 2019, you really made a pivot into thinking about Fitbit, not as a hardware company that makes a tracker, watch, or device, like a smartwatch, but a company that really is about healthcare and is designed to pivot more into healthcare data and analysis. Is that fair? Is that right?

James: Yeah. I think that's fair. I think we stopped thinking of ourselves as a device company and more as a behavior change company, because that's effectively what people were buying our products and services to do: to change their behavior in a really positive way. And not only individual people, but companies as well. Companies, who in the US especially, bear the direct costs of the care of their employees. We started thinking about ourselves as a behavior change company and figuring out what are the products and services that really deliver that, both to people and to businesses.

The Google Acquisition (2019)

Guy: So we get to the end of last year, where Google announces that they were going to buy Fitbit for $2.1 billion. We should mention that, at the time of this recording, it hasn't closed yet. To me, it makes perfect sense. If I'm you or Eric, I would have done it. I would've said, "$2.1 billion, that's great. That's a great outcome because now with Google, we've got access to their dollars and their research labs and all the people who work there and the analytics, and our ability to really go to the next level." Why did it make sense from your perspective to sell to Google?

James: Yeah, that's a very complicated and emotionally fraught question, but last year our board met and it was pretty clear to everybody that we had a lot of challenges in the business. We weren't profitable. There was a lot of competition out there from the likes of Apple, from Samsung, some emerging Chinese competitors, but there were also a lot of just great things going on in the company. I was so excited about our product roadmap, about things that were in our pipeline, all the advanced research that we were doing around health and sensors. I would look at our product roadmap every day and just come away super excited about that. And then also be confronted with a lot of the business challenges as well.

James: And for me, most importantly it was about a legacy and I wanted the Fitbit brand and what we did to continue onwards for a very, very long time. And we just had to figure out the best way to do it, whether it was as an independent company or within a larger company. That was really what was most important.

Guy: I imagine that there are some details you can't talk about for obvious reasons, but as of this recording, we're talking in mid-April, there is a hold on the Google acquisition. The Department of Justice is doing an investigation because there are some interest groups who have said, "Hey, we don't think that Google should have access to all of this data. Fitbit has 28 million users. This is an incredible trove of health data." Is that causing you stress right now, that there is this Justice Department holdup on the acquisition?

James: No. And it's because sometimes the press does like to sensationalize things, but the process that we're undergoing right now with the Department of Justice, and also with the EU and some other countries around the world, is pretty normal for acquisitions of this size. In fact, it's required. Really, the whole review is about the anti-competitive element, and especially around wearable market share. That's just something where we have to convince regulators that, "This doesn't reduce competition in the marketplace."

Guy: As far as you know, the situation now with the lockdowns and the pandemic does not have any impact on Google's interest or commitment to making this happen.

James: No, I think everyone's thinking towards the long-term. Fingers crossed that we do find our way through this COVID-19 situation and that there is life beyond that. Maybe it comes back slowly, but I think everyone is thinking, "What does this whole category look like in a time span of years?" And I think one of the things that COVID-19 has shown is that, especially if you look at healthcare, this idea of remote health care, remote monitoring, keeping people healthy outside of a hospital setting, is actually really important.

Guy: Super. It's going to totally change... I've had a video call with my doctor just for a quick question. It's actually super convenient.

The Future of Medicine

James: Exactly. And if during these telemedicine visits, they have a snapshot and summary of what you've been up to and what your health has been outside of that visit, and can almost be predictive in that way, I mean, I think that can be really groundbreaking in the way medicine gets practiced. And this whole time period is merely accelerating that transition.

Guy: When you think about all of the things that you have done professionally and your successes, you made a lot of money. I mean, you're extremely wealthy and wealthier than your parents could have ever imagined you would be, or they would be. They took a huge risk to come to the US and had all these little mom-and-pop stores. How much of that do you think is because of your intelligence and skill and how much do you attribute to luck?

Closing thoughts

James: Yeah, that's always a tricky question to answer. I think I'm very fortunate to have grown up with my parents. Just having seen them persevere through life, you get the realization that nothing really comes easy. That it does take a lot of just grinding away at things that at the time seem unpleasant. I think those are good traits, and I'm very fortunate to have parents like that, who sacrificed a lot to put me in great schools over time, even though they started from some humble beginnings. But I also have, in a lot of ways, gotten some lucky breaks where things could have gone the wrong way very, very quickly. Ultimately, I attribute it to a little bit of all of that. I think it's not fair to say that everything is luck, because then I think you start to discount the actual things, actions that you can take on your own to affect the future. And that's really important.

Guy: That's James Park, co-founder of Fitbit. And here's a number for you, 34,642,772, that is how many steps James has tracked since he first put on that balsa wood Fitbit prototype. At least as of this recording. It's about 15,430 miles or 24,832 kilometers. And thanks so much for listening to the show this week, you can subscribe wherever you get your podcasts. You can also write to us at hibt@npr.org. And if you want to send a tweet, it's @HowIBuiltThis or @guyraz. This episode was produced by James Delahoussaye, with music composed by Ramtin Arablouei. Thanks also to Sarah Saracen, Candice Lim, Julia Carney, Neva Grant, Casey Herman, and Jeff Rogers. I'm Guy Raz, and you've been listening to How I Built This. This is NPR.

View source

February 28, 2021 — I read an interesting Twitter thread on focus strategy. That led me to the 3-minute YouTube video Insist on Focus by Keith Rabois. I created the transcript below.

One of the fundamental lessons I learned from Peter Thiel at PayPal was the value of focus. Peter had this somewhat absurd, but classically Peter way of insisting on focus, which is that he would only allow every employee to work on one thing and every executive to speak about one thing at a time, and he distributed this focus throughout the entire organization. So everybody was assigned exactly one thing, and that was the only thing you were allowed to work on, the only thing you were allowed to report back to him about.
My top initiatives shifted around over the years, but I'll give you a few. One was, initially, Visa and MasterCard really hated us. We were operating at the edge of their rules at the time. My number one problem was to stop MasterCard particularly, but also Visa a bit, from killing us. So until I had that risk taken off the table, Peter didn't want to hear about any of my other ideas.
Once we put Visa and MasterCard into a pretty stable place, then eBay also wanted to kill us. They weren't very happy with us processing 70% of the payments on their platform, so that was my next problem.
Then 9/11 happened and the US Treasury Department promulgated regulations which would require us, among other things, to collect Social Security numbers from all of our buyers, which would have suppressed our payment volumes substantially. So then my number one initiative became convincing the Treasury Department to not promulgate these regulations, right post 9/11.
At some point, we also needed to diversify our revenue off of eBay. So that became another initiative for me. That one I did not solve that well, which in some way led to us eventually agreeing to be acquired.
I had another number one problem, which was that this publication called the Red Herring had published a set of unflattering articles about us, and I had to fix that and rebuild the communications team.
Peter would constantly just assign me new things. He didn't like the terms of our financial services relationship with the vendors that we were using, so I took on that team and fixed the economics of those relationships, et cetera, et cetera. But they were not done in parallel. They were basically sequential. The reason why this was such a successful strategy is that most people, perhaps all people, tend to substitute from A-plus problems that are very difficult to solve to B-plus problems, which you know a solution to, or you understand the path to solve.
Imagine waking up every morning with a checklist. A lot of people write checklists of things to accomplish. Most people have an A-plus problem, but they don't know the solution, so they procrastinate on that solution. And then they go down the checklist to the second or third initiative where they know the answer, and they'll go solve those problems and cross them off. The problem is, if your entire organization is always solving the second, third or fourth most important thing, you never solve the first.
So Peter's technique of forcing people to only work on one thing meant everybody had to work on the A-plus problems. And if every part of the organization once in a while can solve a problem that the rest of the world thinks is impossible, you wind up with an iconic company that the world's never seen before.

I absolutely love the math behind this strategy. There are a few other terms to get right, but there's a fantastic idea here.
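Here's one way to sketch the math he's pointing at, as a toy model. The numbers are mine, not Rabois', and the point is only directional: if problem values are power-law distributed, a long shot at the A-plus problem is worth more than a sure thing on checklist items.

  # A toy model of the focus math. All numbers are made up for illustration.
  A_PLUS_VALUE = 1000        # the "impossible", iconic-company problem
  A_PLUS_DAILY_ODDS = 0.01   # no known solution, so each day is a long shot
  B_PLUS_VALUE = 2           # a checklist item you already know how to solve
  DAYS = 365

  # Policy 1: everyone works their one A-plus problem, every day.
  focus = A_PLUS_VALUE * (1 - (1 - A_PLUS_DAILY_ODDS) ** DAYS)

  # Policy 2: substitute to a solvable B-plus problem each day, cross it off.
  substitute = B_PLUS_VALUE * DAYS

  print(f"focus: {focus:.0f}, substitute: {substitute}")  # focus: 974, substitute: 730

And that gap understates it, since a solved A-plus problem compounds while the checklist just refills.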

View source

February 28, 2021 — I thought it unlikely that I'd actually cofound another startup, but here we are. Sometimes you gotta do what you gotta do.

We are starting the Public Domain Publishing Company. The name should be largely self-explanatory.

If I had to bet, I'd say I'll probably be actively working on this for a while. But there's a chance I go on sabbatical quick.

The team is coming together. Check out the homepage for a list of open positions.

View source

February 22, 2021 — Today I'm launching the beta of something new called Scroll.

I've been reading the newspaper every day since I was a kid. I remember I'd have my feet on the ground, my body tilted at an angle and my body weight pressed into the pages on the counter. I remember staring intently at the pages spread out before me. World news, local news, sports, business, comics. I remember the smell of the print. The feel of the pages. The ink that would be smeared on my forearms when I finished reading and stood back up straight. Scroll has none of that. But it does at least have the same big single page layout.

Scroll brings back some of the magic of newspapers.

In addition to the layout, Scroll has two important differences from existing static publishing software.

First, Scroll is built for public domain sites and only public domain sites. Builders of Scroll will spend 100% of their time building for the amazing creators who understand and value the public domain.

Second, Scroll is a Tree Language. Unlike Markdown, Scroll is easily extensible. We can create and combine thousands of new sub languages to help people be more creative and communicate more effectively.

I've had fun building Scroll so far and am excited to start working on it with others.

View source

December 9, 2020 — Note: I wrote this early draft in February 2020, but COVID-19 happened and somehow 11 months went by before I found this draft again. I am publishing it now as it was then, without adding the visuals I had planned but never got to, or making any major edits. This way it will be very easy to have next year's report be the best one yet, which will also include exciting developments in things like non-linear parsing and "forests".

In 2017 I wrote a post about a half-baked idea I named TreeNotation.

Since then, thanks to the help of a lot of people who have provided feedback, criticism and guidance, a lot of progress has been made fleshing out the idea. I thought it might be helpful to provide an annual report on the status of the research until, as I stated in my earlier post, I "have data definitively showing that Tree Notation is useful, or alternatively, to explain why it is sub-optimal and why we need more complex syntax."

My template for this (and maybe future) reports will be as follows:

  • 1. High level status
  • 2. Restate the problem
  • 3. 2019 Pros
  • 4. 2019 Cons
  • 5. Next Steps
  • 6. Status of Predictions
  • 7. Organization Status

High Level Status

I've followed the "Strong Opinions, Weakly Held" philosophy with this idea. I came out with a very strong claim: there is some natural and universal syntax that we could use for all of our symbolic languages that would be very useful—it would let us remove a lot of unnecessary complexity, allow us to focus more on semantics alone, and reap a lot of benefits by exploiting isomorphisms and network effects across domains. I've then spent a lot of time trying to destroy that claim.

After publishing my work I was expecting one of two outcomes. Most likely was that someone far smarter than I would put the nail in Tree Notation's coffin with a compelling case for why such a universal notation is impossible or disadvantageous. My more optimistic—but less probable—outcome was that I would accumulate enough evidence through research and building to make a convincing case that a simplest universal notation is possible and highly advantageous (and it would be cool if Tree Notation evolves into that notation, but I'd be happy for any notation that solves the problem).

Unfortunately neither of those has happened yet. No one has convinced me that this is a dead-end idea and I haven't seen enough evidence that this is a good idea1. At times it has seemed like a killer application of the notation was just around the corner that would demonstrate the advantages of this pattern, but while the technology has improved a lot, I can't say anything has turned out to be so compelling that I am certain of the idea.

So the high level status remains: strong opinion, weakly held. I am sticking to my big claim and still awaiting/working on proof or refutation.

Restating the Problem

What is the idea?

In these reports I'll try and restate the idea in a fresh way, but you can also find the idea explained in different places via visuals, an FAQ, a spec, demos, etc.

My hypothesis is that there exists a Simplest Universal Notation for Symbolic Abstraction (SUNSA). I propose Tree Notation as a potential candidate for that notation. It is hard to assign probabilities to events that haven't happened before, but I would say I am between 1% and 10% confident that a SUNSA exists and that Tree Notation is somewhat close to it2. If Tree Notation is not the SUNSA, it at least gives me an angle of attack on the general problem.

Let's define a notation as a set of physical rules that can be used to represent abstractions. By simplest universal notation I mean the notation that can represent any and every abstraction representable by other notations that also has the smallest set of rules.

You could say there exist many "UNSAs", or Universal Notations for Symbolic Abstractions. For example, thousands of domain specific languages are built on the XML and JSON notations, but my hypothesis is that there is a single SUNSA. XML is not the SUNSA, because an XML document like <a>b</a> can be equivalently represented as a b using a notation with a smaller set of rules.
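To make that concrete, here is a minimal sketch, my own illustration rather than anything official, that mechanically rewrites an XML document into a whitespace-based notation with a smaller rule set:

  # Sketch: rewrite XML into an indentation-based notation.
  # <a>b</a> becomes "a b"; nesting becomes indentation.
  import xml.etree.ElementTree as ET

  def to_tree(element, depth=0):
      line = " " * depth + element.tag
      if element.text and element.text.strip():
          line += " " + element.text.strip()
      return "\n".join([line] + [to_tree(child, depth + 1) for child in element])

  print(to_tree(ET.fromstring("<a>b</a>")))         # prints: a b
  print(to_tree(ET.fromstring("<a><b>c</b></a>")))  # prints two lines: "a" then " b c"

The reverse direction takes more care (attributes, mixed content), which is exactly the extra rule surface the smaller notation avoids.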

Where would a SUNSA fit?

Inventions aren't always built in a linear fashion. For example, when you add 2+3 on your computer, your machine will break down that statement into a binary form and compute something like 0010 + 0011. The higher level base 10 numerals are converted into the lower level base 2 binary numbers. So, before your computer solves 2+3, it must do the equivalent of import binary. But we had Hindu-Arabic numerals centuries before we had boolean numerals. Dependencies can be built out of order.
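You can watch that layering directly in a few lines:

  # The same statement at two levels: the base 10 numerals we inherited,
  # and the base 2 form the machine actually computes.
  a, b = 2, 3
  print(f"{a} + {b} = {a + b}")              # 2 + 3 = 5
  print(f"{a:04b} + {b:04b} = {a + b:04b}")  # 0010 + 0011 = 0101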

Similarly, I think there is another missing dependency that fits somewhere between binary the idea and binary the symbolic word.

Consider Euclid's Elements, maybe the most famous math book of all time, written around 2,500 years ago. The book begins with the title "Στοιχεῖα"3. Already there is a problem: where is import the letter Σ? Euclid has imported undefined abstractions: letters and a word. Now, if we were to digitally encode the Elements today from scratch, we would first include the binary dependency and then a character encoding dependency like UTF-8. We abstract first from binary to symbols. Then maybe once we have things in a text stream, we might abstract again to encode the Elements book into something like XML and markdown. I think there is a missing notation in both of these abstractions: the abstraction leap from binary to characters, and the abstraction leap from characters to words and beyond.

I think to represent the jumps from binary to symbols to systems, there is a best natural notation. A SUNSA that fits in between languages that lets us build mountains of abstraction without introducing extra syntax.

To get a little more concrete, let me show a rough approximation of how using Tree Notation you could imagine a document that starts with just the concept of a bit (here denoted on line 2 as ".") and work your way up to defining digits and characters and words and entities. There is a lot of hand-waving going on here, which is why Tree Notation is still, at best, a half-baked idea.

.
...
0 1 .
...
Σ 10100011
...
Στοιχεῖα
...
book title Elements
...

Why would a SUNSA be advantageous?

Given that I still consider this idea half-baked at best; given that I don't have compelling evidence that this notation is worthwhile; given that no one else has built a killer app using the idea (even though I've collaborated publicly and privately with many dozens of people on project ideas at this point); why does this idea still excite me so much?

The reason is because I think IF we had a SUNSA, there would be tremendous benefits and applications. I'll throw out three potential application domains that I personally find very interesting.

Idea #1: Mapping the Frontiers of Symbolic Science

A SUNSA would greatly reduce the cost of a common knowledge base of science. While it may be possible to do it today without a SUNSA, having one would be at least a one order of magnitude cost reduction. Additionally, if there is not a SUNSA, then it may take just as long to come to agreement on what UNSA to use for a common knowledge base of science as it would to actually build the base!

By encoding all of science into a universal syntax, in addition to tremendous pedagogical benefits, we could take analogies like this:

And make them actual concrete visualizations.

Idea #2: Law (and Taxes)

This one always gets me excited. I believe there is a deep connection between simplicity, justice, and fairness. I believe legal systems with unnecessary complexity are unfair, prima facie. While legal systems will always be human-made, rich in depth, nuanced, and evolving, we could shed the noise. I dream of a world where paychecks, receipts, and taxes are all written in the same language; where medical records can be cut and pasted; and where when I want to start a business I don't have to fill out forms in Delaware (the codesmell in that last one is so obvious!).

I believe a SUNSA would give us a way to measure complexity as neatly as we measure distance, and allow us to simplify laws to their signal, so that they serve all people, and we don't suffer from all that noise and inefficiency.

Idea #3: Showcasing the Common Patterns in Computing From Low Level to High Level

I love projects like godbolt.org, that let you jump up and down all the levels of abstraction in computing. I think there's an opportunity to do some incredible things if there is a SUNSA and the patterns in languages at different layers of computing all looked roughly the same (since they are roughly the same!).

What would the properties of a SUNSA be?

Tree Notation might not be the SUNSA, but it has a few properties that I think a SUNSA would have.

  1. 2 or more physical dimensions: Every symbolic abstraction would have to be contained in the SUNSA, so including an abstraction like the letter "a" would require a medium with more than one physical dimension.
  2. Directional: A SUNSA would not just define how symbols are laid out, but it would also contain concepts of directionality.
  3. Scopes: Essential for growth and collaboration.
  4. Brevity: I think a SUNSA will have fewer components, not more. I often see new replacements for S-Expressions or JSON come out with more concepts, not less. I don't think this is the way to go. I think a SUNSA will be like a NAND gate and not a suite of gates, although the latter are handy and pragmatic.

I also will list one thing I don't think a SUNSA will have:

  1. A single entry point. Currently most documents and programs are parsed start to finish in a linear order. With Tree Notation you can parse things in any order you want—start from anywhere, move in any direction, or even start in multiple places at the same time. I think this will be a property of a SUNSA. Maybe SUNSA programs will look more like this than that.

So those are a few things that I think we'll find in a SUNSA. Will we ever find a SUNSA?

Why might there not be a SUNSA?

I think a really good piece of evidence that we don't need a SUNSA is that we've seen STUPENDOUS SUCCESS WITH THOUSANDS OF SYNTAXES. The pace of progress in computing in the 1900's and 2000's has been tremendous, perhaps because of the Smörgåsbord of notations.

Who's to say that a SUNSA is needed? I guess my retort to that is that although we do indeed have thousands of digital notations and languages, all of them, without exception, compile down to binary, so clearly having some low level universal notation has proved incredibly advantageous so far.

2019 Pros

So that concludes my restatement of the Tree Notation idea in terms of a more generic SUNSA concept. Now let me continue on and mention briefly some developments in 2019.

Here I'll just write some bullet points of work done this past ~ year advancing the idea.

  • Types and Cells
  • Tree Notation as a Subset of Grid Notation
  • New homepage
  • TreeBase
  • CopyPaster
  • Dozens of new Tree Languages
  • More feedback than ever. Tens of thousands of visitors. Hundreds of conversations.

2019 Cons

Here I just list some marks against this idea.

  • It still sucks.
  • No killer app yet.
  • No good General Purpose Tree Language.
  • No good Assembly Tree Language.
  • No good LISP Tree Language.
  • No good LLVM IR tie in yet.
  • One argument put to me: "there's no need for a universal syntax with deep learning—complexity IS the universal syntax."
  • Another argument put to me: "sure it is still simple BUT there are 2 types of inventions: ones that get more complex over time and ones that no one uses"

Next Steps

Next steps are more of the same. Keep attempting to solve problems by simplifying the encoding of them to their essence (which happens to be Tree Notation, according to the theory). Build tools to make that easier and leverage those encodings. This year the focus will likely be LSP, Grid Notation, and the PLDB.

Tree Notation has a secret weapon: simplicity does not go out of style. Slippers today look just like slippers in Egypt 3,000 years ago.

Status of Predictions in Paper

My Tree Notation paper was my first ever attempt at writing a scientific paper and my understanding was that a good theory would make some refutable predictions. Here are the predictions I made in that paper and where they stand today.

Prediction 1: no structure will be found that cannot serialize to TN.

While this prediction has held, a number of people have commented that it doesn't predict much, as the same could really be said about most languages. Anything you can represent in Tree Notation you can represent in many encodings like XML.

What I should have predicted is something along the lines of this: Tree Notation is the smallest set of syntax rules that can represent all abstractions. I think trying to formalize a prediction along those lines would be a worthwhile endeavor (possibly for the reason that in trying to do what I just said, I may learn that what I just said doesn't make sense).

Prediction 2: TLs will be found for every popular 1DL.

This one has not come true yet. While I have made many public Tree Languages myself and many more private ones, and I have prototyped many with other people, the net utility of Tree Languages is not high enough that people are rushing to design these things. Many people have kicked the tires, but things are not good enough and there is a lack of demand.

On the supply side, it has turned out to be a bit harder to design useful Tree Languages than I expected. Not by 100x, but maybe by as much as 10x. I learned a lot of bad design patterns not to put in Tree Languages. I learned that bad tooling will force compromises in language design. For example, before I had syntax highlighting I relied on weird punctuation like "@" vs "#" prefixes for distinguishing types. I also learned a lot of patterns that seem to be useful in Tree Languages (like word suffixes for types). I learned good tooling leads to simpler and better languages.

Prediction 3: Tree Oriented Programming (TOP) will supersede Object Oriented Programming.

This one has not come true yet. While there is a tremendous amount of what I would call "Tree Oriented Programming" going on, programmers are still talking about objects and message passing and are not viewing the world as trees.

Prediction 4: The simplest 2D text encodings for neural networks will be TLs.

This one is a fun one. Definitely has not come true yet. But I've got a new attack vector to try and potentially crack it.

Status of Long Bet

After someone suggested it, I made a Long Bet predicting the rise of Tree Notation or a SUNSA within ten years of my initial Tree Notation post. Clearly I am far off from winning this bet at this point, as there are not any candidate languages even noted in TIOBE, never mind in the Top 10. However, IF I were to win the bet, I'd expect it wouldn't be until around 2025 that we'd see any candidate languages even appear on TIOBE's radar. In other words, absence of evidence is not evidence of absence.

As an aside, I really like the idea of Long Bet, and I'm hoping it may prompt someone to come up with a theoretical argument against a SUNSA that buries my ideas for good. Now, it would be very easy to take the opposing side of my bet with the simple argument that the idea of 7/10 TIOBE languages dropping by 2027 won't happen because such a shift has never happened so quickly. However, I'd probably reject that kind of challenge as non-constructive, unless it was accompanied by something like a detailed data-backed case with models showing potential speed limits on the adoption of any language (which would be a constructive contribution).

Organization Status

In 2019 I explored the idea of putting together a proper research group and a more formal organization around the idea.

I put the brakes on that for three reasons. The first is I just don't have a particularly keen interest in building an organization. I love to be part of a team, but I like to be more hands on with the ideas and the work of the team rather than the meta aspect. I've gotten great help for this project at an informal level, so there's no rush to formalize it. The second reason is I don't have a great aptitude for team building, and I'm not ready yet to dedicate the time to that. I get excited by ideas and am good at quickly exploring new idea spaces, but being the captain who calmly guides the ship toward a known destination just isn't me right now. The third reason is just that the idea remains too risky and ill-defined. If it's a good idea, growth will happen eventually, and there's no need to force it.

There is a loose confederation of folks I work on this idea with, but no formal organization with an office so far.

Conclusion

That's it for the recap of 2019! Tune in next year for a recap of 2020.

Notes

1 Regardless of whether or not Tree Notation turns out to be a good idea, as one part of the effort to prove/disprove it I've built a lot of big datasets on languages and notations, which seem to be useful for other people. Credit for that is due to a number of people who advised me back in 2017 to "learn to research properly".

2 Note that this means I am between 90-99% confident that Tree Notation is not a good idea. However, if it's a bad idea I am 40% confident the attempt to prove it a bad idea will have positive second-order effects. I am 50% confident that it will turn out I should have dropped this idea years ago, and that it's a crackpot or Dunning–Kruger theory, and I'd be lying if I said I didn't recognize that as a highly probable scenario that has kept me up some nights.

3 When it was first coming together, it wasn't a "book" as we think of books today and authorship is very fuzzy, but that doesn't affect things for my purposes here.

View source

March 2, 2020 — A paradigm change is coming to medical records. In this post I do some back-of-the-envelope math to explore the changes ahead, both qualitative and quantitative. I also attempt to answer the question no one is asking: in the future will someone's medical record stretch to the moon?

Medical Records at the Provider

Medical records are generally stored with healthcare providers and currently at least 86%-96% of providers use an EHR system.

Americans visit their healthcare providers an average of 4 times per year.

If you were to plot the cumulative medical data storage use for the average American patient, it would look something like the abstract chart below, going up in small increments during each visit to the doctor:

A decade ago, this chart would not only show the quantity of a patient's medical data stored at their providers, but also the quantity of all of the patient's medical data. Simply put: people did not generally keep their own medical records. But this has changed.

Medical Records at Home

Now people own wearables like FitBits and Apple Watches. People use do-it-yourself services like 23andMe and uBiome. And in the not-too-distant future, the trend of ever-miniaturizing lab devices will enable advanced protocols at home. So now we have an additional line, reflecting the quantity of the patient's medical data from their own devices and services:

When you put the two together you can see the issue:

Patients will log far more medical data on their own than they do at their providers'.

Implication #1: Change in Ownership

It seems highly likely then that the possession of medical records will flip from providers to patients. I now have 120 million heart rate readings from my own devices, while I might have a few dozen from my providers. The gravity of the former will be harder and harder to overcome.

Patients won't literally be in possession of their records. While some nerdy patients—the kind of people who host their own email servers—might host their own open records, most will probably use a service provider. Prior attempts at creating personal health record systems, including some from the biggest companies around, did not catch on. But back then we didn't have the exponential increase in personal medical data, and the data gravity that creates, that we have today.

I'm noticing a number of startups innovating along this wave (and if you know of other exciting ones, please share!). However, it seems that Apple Health and FitBit are in strong positions to emerge as leading providers of PHR as-a-service due to data gravity.

Implication #2: Change in Design

Currently EHR providers like Epic design and sell their products for providers first. If patients start making the decisions about which PHR tool to use, product designers will have to consider the patient experience first.

I think this extends beyond products to standards. While there are some great groups working on open standards for medical records, none, as far as I'm aware, consider patients as a first class user of their grammars and definitions. I personally think that a standards system can be developed that is fully understandable by patients without compromising on the needs of experts.

One simple UX innovation in medical records that I love is BlueButton. Developed by the V.A. in 2010, BlueButton allows patients to download their entire medical records as a single file. While the grammar and parse-ability of BlueButton leave much to be desired, I think the concept of "your entire medical history in a single document" is a very elegant design.

Implication #3: Change in Scale

As more and more different devices contribute to patients' medical documents, what will the documents look like and how big will they get? Will someone's medical records stretch to the moon?

I think the BlueButton concept provides a helpful mental model here: you can visualize any person's medical record as a single document. Let's call this document an MRD for "Medical Record Document".

Let's imagine a 30 year old in 2050. They'd have around 11,200 days worth of data (I included some days for in utero records). Let's say there are 4 "buckets" of medical data in their MRD:

  • Time series sensor data
  • Image and point cloud data
  • Data from microbio protocols like genomic and metabolomic data
  • Text data

This is my back of the envelope math of how many megabytes of data might be in each of those buckets:

I am assuming that sensor development advances a lot in 40 years. I am assuming our patient of the future has:

  • 1,000 different passive 1-D biomedical sensors recording a reading once per second
  • 10 different passive photo and 3-D cameras capturing 100 frames per day each
  • 100 passive microbio systems generating 1GB of data per protocol (don't ask me how these will work, maybe something like this)
  • For good measure I throw in a fourth bucket of 100k characters a day of plain text data

By my estimate this person would log about 100GB of medical data per day, or about a petabyte of data in 30 years. That would fit on roughly 1,000 of today's hard drives.

If you printed this record in a single doc, on 8.5 x 11 sheets of paper, in a human readable form—i.e. print the text, print the time series data as line charts, print the images, and print various types of output for the various protocols—the printed version would be about 138,000,000 pages which laid end-to-end would stretch 24,000 miles. If you printed it double-sided and stacked it like a book it would be 4.2 miles high.
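Here is that arithmetic as a quick sketch you can check. The per-day counts are the assumptions listed above; the bytes-per-reading, bytes-per-frame, and paper-thickness figures are my own rough guesses:

  # Back-of-the-envelope check of the MRD estimates. Per-day counts are the
  # post's assumptions; per-item sizes are rough guesses for illustration.
  SECONDS_PER_DAY = 86_400
  DAYS = 11_200  # ~30 years of data, including some in utero records

  sensors  = 1_000 * SECONDS_PER_DAY * 4   # 1-D sensors, ~4 bytes per reading
  images   = 10 * 100 * 3_000_000          # 10 cameras x 100 frames x ~3MB
  microbio = 100 * 1_000_000_000           # 100 protocols x ~1GB (dominates)
  text     = 100_000                       # ~100k characters of plain text

  gb_per_day = (sensors + images + microbio + text) / 1e9
  print(f"{gb_per_day:.0f} GB per day")                     # ~103 GB
  print(f"{gb_per_day * DAYS / 1e6:.2f} PB over 30 years")  # ~1.16 PB

  pages = 138_000_000  # the printed, human-readable estimate
  print(f"{pages * 11 / 63_360:,.0f} miles end-to-end")     # ~24,000 miles
  print(f"{pages / 2 * 0.004 / 63_360:.1f} miles stacked")  # ~4 miles high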

So for a 120 year old in 2140, their printed MRD would not reach the moon. Though it may make it halfway there.

View source

March 2, 2020 — I expect the future of healthcare will be powered by consumer devices. Devices you wear. Devices you keep in your home. In the kitchen. In the bathroom. In the medicine cabinet.

These devices record medical data. Lots of data. They record data from macro signals like heart rate, hydration, physical activity, oxygen levels, body temperature, brain waves, and voice activity. They also record data from micro signals like antibodies, RNA expression levels, metabolomics, microbiome, etc.

Most of the data is collected passively and regularly. But sometimes your Health app prompts you to take out the digital otoscope or digital stethoscope to examine an unusual condition more closely.

This data is not stored in a network at the hospital you don't have access to. Instead you can access all of that data as easily as you can access your email. You can see that data on your wrist, on your phone, on your tablet.

You can understand that data too. You can click and dive into the definitions of every term. You can see what is meant by macro concepts like "VO2 max" and micro concepts like "RBC Count" or "BRCA1 expression". Everything is explained precisely and thoroughly. Not only in words but in interactive visualizations that are customized to your body. The algorithms and models that turn the raw signals into higher level concepts are constantly improving.

When you get flu like symptoms, you don't alternate between anxiously Googling symptoms and scheduling doctor's appointments. Instead, your Health app alerts you that your signals have changed, it diagnoses your condition, shows you how your patterns compare to tens of thousands of people who have experienced similar changes, and makes recommendations about what to do next. You can even see forecasts of how your condition will change in the days ahead, and you can simulate how different treatment strategies might affect those outcomes.

You can not only reduce illness, but you can improve well-being too. You can see how your physical habits, social habits, eating habits, sleeping habits, correlate with hundreds of health and other signals.

Another benefit to all of this? Healthcare powered by consumer devices seems like it will be a lot cheaper.

View source

February 25, 2020 — One of the questions I often come back to is this: how much of our collective wealth is inherited by our generations versus created by our generations?

I realized that the keys on the keyboard in front of me might make a good dataset to attack that problem. So I built a small little experiment to explore the history of the keys on my keyboard.

The Five Waves of Symbols

Painting with broad strokes, there were approximately five big waves of inventions that have left their mark on the keyboard:

  • 1. The first wave was the invention of the phonetic alphabet letters.
  • 2. The second wave was the Hindu-Arabic Numerals.
  • 3. The third wave was the mathematical punctuation of the Enlightenment period.
  • 4. The fourth wave was the invention of the typewriter.
  • 5. And the fifth and most recent wave was the invention of the personal computer.

I haven't made any traditional charts yet with this dataset, but you can roughly make out these waves in the interactive visualization by moving the slider around.

Concentric Circles

An interesting pattern that I never saw before is how the five waves above are roughly arranged in circles. The oldest symbols (letters) are close to the center, followed by the Hindu-Arabic Numerals, surrounded by the punctuation of the Enlightenment, surrounded by the keys of the typewriter, surrounded by the recent additions in the P.C. era. Again, painting with broad strokes, but I found that to be an interesting pattern.

Standing on the Shoulders of Giants

All of these waves happened before my generation. Almost all of them before any generation alive today. The keyboard dataset provides strong evidence that most of our collective wealth is inherited.

Build Notes

I got this idea last week and couldn't get it out of my head. Yesterday I took a quick crack at it. I didn't have much time to spare, just enough to explore the big ideas.

I started by typing all the characters on my keyboard into a Tree Notation document. Then I dug up some years for a handful of the symbols.

Next I found the great Apple CSS keyboard. I stitched together the two and it seemed to be at least mildly interesting so I opted to continue.

I then fleshed out most of the dataset.

Finally I played around with a number of visualization effects. At first I thought heatmaps would work well, and tried a few variations on that, but wasn't happy with anything. I posted my work-in-progress to a few friends last night and called it a day. Today I switched to the "disappearing keys" visualization. That definitely felt like a better approach than the heatmap.

I made the thing as fun as I could given time constraints and then shipped.

View source

February 21, 2020 — One of the most unpopular phrases I use is the phrase "Intellectual Slavery Laws".

I think perhaps the best term for copyright and patent laws is "Intellectual Monopoly Laws". When called by that name, it is obvious that there should be careful scrutiny of these kinds of laws.

However, the industry insists on using the false term "Intellectual Property Laws."

Instead of wasting my breath trying to pull them away from the property analogy, lately I've leaned into it and completed the analogy for them. So let me explain "Intellectual Slavery Laws".

As far as I can figure, you cannot have Property Rights and "Intellectual Property" rights. Having both is logically inconsistent. My computer is my property. However, by law there are billions of peaceful things I cannot do on my computer. Therefore, my computer is not my property.

Unless of course, the argument is that my computer is my property, but some copyright and patent holders have property rights over me, so their property rights allow them to restrict my freedom. I still get rights over my property. But other people get rights over me. Property Rights and Intellectual Slavery Laws can logically co-exist! Logical inconsistency solved!

We can have a logical debate about whether we should have an Intellectual Slavery System, Intellectual Slavery Laws, Intellectual Slavery Law Schools, Intellectual Slavery Lawyers, etc. But we cannot have a logical debate about Intellectual Property Laws. Because the term itself is not logical.

I know, having now used this term with a hundred different people, that this is not a popular thing to say. But I think someone needs to say it. Do we really think we are going to be an interplanetary species and solve the world's biggest challenges if we keep 99+% of the population in intellectual chains?

Errata

  • "They are stealing my IP." What would your "IP" be if you weren't "stealing" inventions like words, the alphabet, numbers, rules of physics, etc, that were developed and passed down over thousands of years?
  • "But shouldn't creators be paid for their work?" Yes. Pay them upon delivery. No need for monopolies. Does a janitor, after cleaning a room, get to charge everyone who enters a royalty for 100 years?
  • "Not a big deal—rights expire after a certain time." The fact that Copyrights and Patents expire on an arbitrary date is more proof that these should not be called property rights.
  • "This is not an urgent problem." I think Intellectual Slavery Laws have deep, direct connections to major problems of our time including healthcare, education, and inequality problems.
  • "This is anti-capitalist." This is pro-property rights.
  • "What about trademarks?" Centralized naming registries like Trademarks are fine, as long as anyone can start a registry. Posing as someone else isn't an IP violation, it is fraud. Already consequences for that.
  • "If you think the U.S. is bad, go visit China." I acknowledge that we have tremendous intellectual freedoms in the U.S., especially compared to other countries. I don't take freedom of speech and freedom of press for granted. However, I believe we are capping ourselves greatly by not legalizing full intellectual freedom.
  • "This is offensive to people suffering from physical slavery or its lingering effects." The people who would benefit the most from abolishing Intellectual Slavery laws are the same people who have suffered the most from physical slavery systems.
  • "I am an Intellectual Property lawyer and this offends me." The phrase "Intellectual Property" offends me.
  • "What about Trade Secrets?" Trade secrets and private information are fine. No one should be forced to publish anything. But once you publish something, let it thrive.
  • "Can't we just copyleft our way to the promised land?" Perhaps, but why lie about the system in the meanwhile?
  • One difference between Physical Slavery and Intellectual Slavery is in the latter it is slavery from a million masters.
  • This woman is amazing.

View source

Preface: Richard Brhel of placepeep shared a great quote the other day on StartupSchool. He saw the quote on a poster years ago when he was helping a digitization effort in Ohio. I had never seen this exact quote before so wanted to transcribe it for the web.

February 9, 2020 — In 1851 Ezekiel G. Folsom incorporated Folsom's Mercantile College in Ohio. Folsom's taught bookkeeping, banking, and "railroading", amongst other things.

The image above is a screenshot of an 1850's poster promoting the college. The poster includes a motto (which I boxed in green) that I think is great guidance:

Integrity and Perseverance in Business ensure success

Guess who went to Folsom's and presumably saw this poster and was influenced by this motto?

John D. Rockefeller.

View source

January 29, 2020 — In this long post I'm going to do a stupid thing and see what happens. Specifically I'm going to create 6.5 million files in a single folder and try to use Git and Sublime and other tools with that folder. All to explore this new thing I'm working on.

TreeBase is a new system I am working on for long-term, strongly-typed collaborative knowledge bases. The design of TreeBase is dumb. It's just a folder with a bunch of files encoded with Tree Notation. A row in a normal SQL table is roughly equivalent to a file in TreeBase. The filenames serve as IDs. Instead of using an optimized binary storage format it just uses plain text like UTF-8. Field names are stored alongside the values in every file. Instead of starting with a schema you can just start adding files and evolve your schema and types as you go.

For example, in this tiny demo TreeBase of the planets the file mars.planet looks like this:

diameter 6794
surfaceGravity 4
yearsToOrbitSun 1.881
moons 2

TreeBase is composed of 3 key ingredients.

Ingredient 1: A folder

All that TreeBase requires is a file system (although in theory you could build an analog TreeBase on paper). This means that you can use any tools on your system for editing files to edit your database.

Ingredient 2: Git

Instead of having code to implement any sort of versioning or metadata tracking, you just use Git. Edit your files and use Git for history, branching, collaboration, etc. Because Tree Notation is a line and word based syntax it meshes really well with Git workflows.

Ingredient 3: Tree Notation

The Third Ingredient for making a TreeBase is Tree Notation. Both schemas and data use Tree Notation. This is a new very simple syntax for encoding strongly typed data. It's simple, extensible, and plays well with Git.
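
To give a feel for the syntax: every line is a node, words on a line are separated by single spaces, and one extra space of indentation nests a node under its parent. A toy example of mine (not from the planets demo):

planet mars
 diameter 6794
 moons
  phobos
  deimos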

TreeBase Compared to Other Database Systems

Probably hundreds of billions of dollars have gone into designing robust database systems like SQL Server, Oracle, PostgreSQL, MySQL, MongoDB, SQLite and so forth. These things run the world. They are incredibly robust and battle-hardened. Everything that can happen is thought of and planned for, and everything that can go wrong has gone wrong (and been learned from). These databases can handle trillions of rows, can conduct complex real-time transactions, and survive disasters of all sorts. They use sophisticated binary formats and are tuned for specific file systems. Thousands of people have gotten their PhDs working on database technology.

TreeBase doesn't have any of that. TreeBase is stupid. It's just a bunch of files in a folder.

You might be asking yourself "Why use TreeBase at all when great databases exist?". To further put the stupidity of the current TreeBase design into perspective, the Largest Git Repo on the Planet is Windows which has 3.5 million files. I'm going to try and create a repo with 6.5 million files on my laptop.

Even if you think TreeBase is silly, aren't you curious what happens when I try to put 6.5 million files into one folder? I kind of am. If you want an explanation of why TreeBase, I'll get to that near the end of this post.

But first...

Let's Break TreeBase

Here again is a demo TreeBase with only 8 files.

The biggest TreeBase I work with has on the order of 10,000 files. Some files have thousands of lines, some just a handful.

While TreeBase has been great at this small scale, a question I've been asked, and have wondered myself, is what happens when a TreeBase gets too big?

I'm about to find out, and I'll document the whole thing.

Every time something bad happens I'll include a 💣.

Choosing a Topic

TreeBase is meant for knowledge bases. So all TreeBases center around a topic.

To test TreeBase on a big scale I wanted something realistic. I wanted to choose some big structured database that thousands of people have contributed to, that's been around for a while, and see what it would look like as a TreeBase.

IMDB is just such a database and amazingly makes a lot of its data available for download. So movies will be the topic and the IMDB dataset will be my test case.

The Dataset

First I grabbed the data. I downloaded the 7 files from IMDB to my laptop. After unzipping, they were about 7GB.

One file, the 500MB title.basics.tsv, contained basic data for all the movies and shows in the database.

Here's what that file looks like with head -5 title.basics.tsv:

tconst titleType primaryTitle originalTitle isAdult startYear endYear runtimeMinutes genres
tt0000001 short Carmencita Carmencita 0 1894 \N 1 Documentary,Short
tt0000002 short Le clown et ses chiens Le clown et ses chiens 0 1892 \N 5 Animation,Short
tt0000003 short Pauvre Pierrot Pauvre Pierrot 0 1892 \N 4 Animation,Comedy,Romance
tt0000004 short Un bon bock Un bon bock 0 1892 \N \N Animation,Short

This looks like a good candidate for TreeBase. With this TSV I can create a file for each movie. I don't need the other 6 files for this experiment, though if this was a real project I'd like to merge in that data as well (in that case I'd probably create a second TreeBase for the names in the IMDB dataset).

Doing a simple line count wc -l title.basics.tsv I learn that there are around 6.5M titles in title.basics.tsv. With the current implementation of TreeBase this would be 6.5M files in 1 folder. That should handily break things.

The TreeBase design calls for me to create 1 file for every row in that TSV file. To again stress how dumb this design is, keep in mind a 500MB TSV with 6.5M rows can be parsed and analyzed with tools like R or Python in seconds. You could even load the thing near instantly into a SQLite database and utilize any SQL tool to explore the dataset. Instead I am about to spend hours, perhaps days, turning it into a TreeBase.

From 1 File to 6.5 Million Files

What will happen when I split 1 file into 6.5 million files? Well, it's clear I am going to waste some space.

A file doesn't just take up space for its contents: it also has metadata. Every file contains metadata like permissions, modification time, etc. That metadata must take up some space, right? If I were to create 6.5M new files, how much extra space would that take up?

My MacBook uses APFS. It can hold up to 9,000,000,000,000,000,000 files. I can't easily find hard numbers on how much metadata one file takes up but I can at least start with a ballpark estimate.

I'll start by considering the space filenames will take up.

In TreeBase filenames are composed of a permalink and a file extension. The file extension is to make it easier for editors to understand the schema of a file. In the planets TreeBase above, the files all had the planet extension and there is a planet.grammar file that contains information for tools like syntax highlighters and type checkers. For my new IMDB TreeBase there will be a similar title.grammar file and each file will have the ".title" extension. So that is 6 bytes per file, or merely about 39MB extra for the file extensions.

Next, the body of each filename will be a readable ID. TreeBase has meaningful filenames to work well with Git and existing file tools. It keeps things simple. For this TreeBase, I will make the ID from the primaryTitle column in the dataset. Let's see how much space that will take.

I'll try xsv select primaryTitle title.basics.tsv | wc.

💣 I got this error:

CSV error: record 1102213 (line: 1102214, byte: 91470022): found record with 8 fields, but the previous record has 9 fields
1102213 3564906 21815916

XSV didn't like something in that file. Instead of getting bogged down, I'll just work around it.

I'll build a subset from the first 1M rows with head -n 1000000 title.basics.tsv > 1m.title.basics.tsv. Now I will compute against that subset with xsv select primaryTitle 1m.title.basics.tsv | wc. I get 19751733 so an average of 20 characters per title.

I'll combine that with the space for the file extension and round up to say 30 extra bytes of filename information for each of the 6.5 million titles. So about 200MB of extra data required to split this 500MB file into filenames. Even though that's roughly a 40% increase, 200MB is dirt cheap so that doesn't seem so bad.

You may think that I could save a roughly equivalent amount by dropping the primaryTitle field. However, even though my filenames now contain information from the title, my permalink schema will generally distort the title so I need to preserve it in each file and won't get savings there. I use a more restrictive character set in the permalink schema than the file contents just to make things like URLs easier.

Again you might ask why not just an integer for the permalink? You could but that's not the TreeBase way. The human readable permalinks play nice with tools like text editors, URLs, and Git. TreeBase is about leveraging software that already works well with file systems. If you use meaningless IDs for filenames you do away with one of the very useful features of the TreeBase system.
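
To illustrate the kind of distortion I mean, here is a minimal sketch of a permalink transform (the real implementation is jtree.Utils.stringToPermalink; this simplified version is my own):

const toPermalink = title =>
  title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse anything outside a URL-safe set into dashes
    .replace(/^-+|-+$/g, "") // trim leading and trailing dashes

console.log(toPermalink("The Photographical Congress Arrives in Lyon"))
// the-photographical-congress-arrives-in-lyon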

But I won't just waste space in metadata. I'm also going to add duplicate data to the contents of each file. That's because I won't be storing just values like 1999 but I'll also be repeating column names in each file like startYear 1999.

How much space will this take up? The titles file has 9 columns and using head -n 1 1m.title.basics.tsv | wc I see that adds up to 92 bytes. I'll round that up to 100, multiply by 6.5M, and that adds up to about 65,000,000 duplicate words and 650MB. In other words the space requirements roughly doubled (of course, assuming no compression by the file system under the hood).

You might be wondering why not just drop the column names from each file? Again, it's just not the TreeBase way. By including the column names, each file is self-documenting. I can open up any file with a text editor and easily change it.
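
For example, the tt0000003 row from the sample above becomes this self-documenting pauvre-pierrot.title file (my rendering, with the null endYear dropped):

tconst tt0000003
titleType short
primaryTitle Pauvre Pierrot
originalTitle Pauvre Pierrot
isAdult 0
startYear 1892
runtimeMinutes 4
genres Animation,Comedy,Romance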

So to recap: splitting this 1 TSV file into 6.5 million files is going to take up 2-3x more space due to metadata and repetition of column names.

Because this is text data, that's actually not so bad. I don't foresee problems arising from wasted disk space.

Foreseeing Speed Problems

Before I get to the fun part, I'm going to stop for a second and try and predict what the problems are going to be.

Again, in this experiment I'm going to build and attempt to work with a TreeBase roughly 1,000 times larger than any I've worked with before. A 3 order of magnitude jump.

Disk space won't be a problem. But are the software tools I work with on a day-to-day basis designed to handle millions of files in a single folder? How will they hold up?

  • Bash: How will the basics like ls and grep hold up in a folder with 6.5M files?
  • Git: How slow will git status be? What about git add and git commit?
  • Sublime Text: Will I even be able to open this folder in Sublime Text? Find/replace is something I so commonly use, will that work? How about regex find/replace?
  • Finder: Will I be able to visually browse around?
  • TreeBase Scripts: Will my simple TreeBase scripts be usable? Will I be able to type check a TreeBase?
  • GitHub: Will GitHub be able to handle 6.5M files?

Proceeding in Stages

Since I am going to make a 3 order of magnitude jump, I figured it would be best to make those jumps one at a time.

Actually, to be smart, I will create 5 TreeBases and make 4 jumps: 1 small TreeBase for sanity checks, then a 6k base, and then three more, each 10x bigger than the last, and see how things hold up.

First, I'll create 5 folders: mkdir 60; mkdir 6k; mkdir 60k; mkdir 600k; mkdir 6m

Now I'll create 4 smaller subsets for the smaller bases. For the final 6.5M base I'll just use the original file.

head -n 60 title.basics.tsv > 60/titles.tsv
head -n 6000 title.basics.tsv > 6k/titles.tsv
head -n 60000 title.basics.tsv > 60k/titles.tsv
head -n 600000 title.basics.tsv > 600k/titles.tsv

Now I'll write a script to turn those TSV rows into TreeBase files.

#! /usr/local/bin/node --use_strict
const { jtree } = require("jtree")
const { Disk } = require("jtree/products/Disk.node.js")
const folder = "600k"
const path = `${__dirname}/../imdb/${folder}.titles.tsv`
const tree = jtree.TreeNode.fromTsv(Disk.read(path).trim())
const permalinkSet = new Set()
tree.forEach(node => {
  let permalink = jtree.Utils.stringToPermalink(node.get("primaryTitle"))
  let counter = ""
  let dash = ""
  while (permalinkSet.has(permalink + dash + counter)) {
    dash = "-"
    counter = counter ? counter + 1 : 2
  }
  const finalPermalink = permalink + dash + counter
  permalinkSet.add(finalPermalink)
  // Delete Null values:
  node.forEach(field => {
    if (field.getContent() === "\\N") field.destroy()
  })
  if (node.get("originalTitle") === node.get("primaryTitle")) node.getNode("originalTitle").destroy()
  Disk.write(`${__dirname}/../imdb/${folder}/${finalPermalink}.title`, node.childrenToString())
})

The script iterates over each node and creates a file for each row in the TSV.

This script required a few design decisions. For permalink uniqueness, I simply keep a set of titles and number them if a name comes up multiple times. There's also the question of what to do with nulls. IMDB sets the value to \N. Generally the TreeBase way is to not include the field in question. So I filtered out null values. For cases where primaryTitle === originalTitle, I stripped the latter. For the Genres field, it's a CSV array. I'd like to make that follow the TreeBase convention of an SSV. I don't know all the possibilities though without iterating, so I'll just skip this for now.

Here are the results of the script for the small 60 file TreeBase:

Building the Grammar File

The Grammar file adds some intelligence to a TreeBase. You can think of it as the schema for your base. TreeBase scripts can read those Grammar files and then do things like provide type checking or syntax highlighting.

Now that we have a sample title file, I'm going to take a first pass at the grammar file for our TreeBase. I copied the file the-photographical-congress-arrives-in-lyon.title and pasted it into the right side of the Tree Language Designer. Then I clicked Infer Prefix Grammar.

That gave me a decent starting point for the grammar:

inferredLanguageNode
 root
 inScope tconstNode titleTypeNode primaryTitleNode originalTitleNode isAdultNode startYearNode runtimeMinutesNode genresNode
keywordCell
anyCell
bitCell
intCell
tconstNode
 crux tconst
 cells keywordCell anyCell
titleTypeNode
 crux titleType
 cells keywordCell anyCell
primaryTitleNode
 crux primaryTitle
 cells keywordCell anyCell anyCell anyCell anyCell anyCell anyCell
originalTitleNode
 crux originalTitle
 cells keywordCell anyCell anyCell anyCell anyCell anyCell anyCell anyCell anyCell
isAdultNode
 crux isAdult
 cells keywordCell bitCell
startYearNode
 crux startYear
 cells keywordCell intCell
runtimeMinutesNode
 crux runtimeMinutes
 cells keywordCell bitCell
genresNode
 crux genres
 cells keywordCell anyCell

The generated grammar needed a little work. I renamed the root node and added catchAlls and a base "abstractFactNode". The Grammar language and tooling for TreeBase is very new, so all that should improve as time goes on.

My title.grammar file now looks like this:

titleNode
 root
 pattern \.title$
 inScope abstractFactNode
keywordCell
anyCell
bitCell
intCell
abstractFactNode
 abstract
 cells keywordCell anyCell
tconstNode
 crux tconst
 extends abstractFactNode
titleTypeNode
 crux titleType
 extends abstractFactNode
primaryTitleNode
 crux primaryTitle
 extends abstractFactNode
 catchAllCellType anyCell
originalTitleNode
 crux originalTitle
 extends abstractFactNode
 catchAllCellType anyCell
isAdultNode
 crux isAdult
 cells keywordCell bitCell
 extends abstractFactNode
startYearNode
 crux startYear
 cells keywordCell intCell
 extends abstractFactNode
runtimeMinutesNode
 crux runtimeMinutes
 cells keywordCell intCell
 extends abstractFactNode
genresNode
 crux genres
 cells keywordCell anyCell
 extends abstractFactNode

Next I copied that file into the 60 folder with cp /Users/breck/imdb/title.grammar 60/. I have the jtree package installed on my local machine so I registered this new language using the command jtree register /Users/breck/imdb/title.grammar. Finally, I generated a Sublime syntax file for these title files with jtree sublime title #pathToMySublimePluginDir.

Now I have rudimentary syntax highlighting for these new title files:

Notice the syntax highlighting is a little broken. The Sublime syntax generation still needs some work.

Anyway, now we've got the basics done. We have a script for turning our CSV rows into Tree Notation files and we have a basic schema/grammar for our new TreeBase.

Let's get started with the bigger tests now.

A 6k TreeBase

I'm expecting this to be an easy one. I update my script to target the 6k files and run it with /Users/breck/imdb/build.js. A little alarmingly, it takes a couple of seconds to run:

real 0m3.144s
user 0m1.203s
sys 0m1.646s

The main script is going to iterate over 1,000x as many items so if this rate holds up it would take 50 minutes to generate the 6M TreeBase!

I do have some optimization ideas in mind, but for now let's explore the results.

First, let me build a catalog of typical tasks that I do with TreeBase that I will try to repeat with the 6k, 60k, 600k, and 6.5M TreeBases.

I'll just list them in Tree Notation:

task ls
 category bash
 description
task open sublime
 category sublime
 description Start sublime in the TreeBase folder
task sublime responsiveness
 category sublime
 description scroll and click around files in the treebase folder and see how responsive it feels.
task sublime search
 category sublime
 description find all movies with the query "titleType movie"
task sublime regex search
 category sublime
 description find all comedy movies with the regex query "genres .*Comedy.*"
task open finder
 category finder
 description open the folder in finder and browse around
task git init
 category git
 description init git for the treebase
task git first status
 category git
 description see git status
task git first add
 category git
 description first git add for the treebase
task git first commit
 category git
 description first git commit
task sublime editing
 category sublime
 description edit some file
task git status
 category git
 description git status when there is a change
task git add
 category git
 description add the change above
task git commit
 category git
 description commit the change
task github push
 category github
 description push the treebase to github
task treebase start
 category treebase
 description how long will it take to start treebase
task treebase error check
 category treebase
 description how long will it take to scan the base for errors.

💣 Before I get to the results, let me note I had 2 bugs. First I needed to update my title.grammar file by adding a cells fileNameCell to the root node and also adding a fileNameCell line. Second, my strategy above of putting the CSV file for each TreeBase into the same folder as the TreeBase was not ideal as Sublime Text would open that file as well. So I moved each file up with mv titles.tsv ../6k.titles.tsv.

The results for 6k are below.

category description result
bash ls instant
sublime Start sublime in the TreeBase folder instant
sublime scroll and click around files in the treebase folder and see how responsive it feels. nearInstant
sublime find all movies with the query "titleType movie" nearInstant
sublime find all comedy movies with the regex query "genres .*Comedy.*" nearInstant
finder open and browse instant
git init git for the treebase instant
git see git status instant
git first git add for the treebase aFewSeconds
git first git commit instant
sublime edit some file instant
git git status when there is a change instant
git add the change above instant
git commit the change instant
github push the treebase to github ~10 seconds
treebase how long will it take to start treebase instant
treebase how long will it take to scan the base for errors. nearInstant

So 6k worked without a hitch. Not surprising as this is in the ballpark of where I normally operate with TreeBases.

Now for the first of three 10x jumps.

A 60k TreeBase

💣 This markdown file that I'm writing was in the parent folder of the 60k directory and Sublime Text seemed to be slowing a bit, so I closed Sublime and created a new unrelated folder to hold this writeup separate from the TreeBase folders.

The build script for the 60k TreeBase took 30 seconds or so, as expected. I can optimize for that later.

I now repeat the tasks from above to see how things are holding up.

category description result
bash ls aFewSeconds
sublime Start sublime in the TreeBase folder aFewSeconds with Beachball
sublime scroll and click around files in the treebase folder and see how responsive it feels. instant
sublime find all movies with the query "titleType movie" ~20 seconds with beachball
sublime find all comedy movies with the regex query "genres .*Comedy.*" ~20 seconds with beachball
git init git for the treebase instant
finder open and browse 6 seconds
git see git status nearInstant
git first git add for the treebase 1 minute
git first git commit 10 seconds
sublime edit some file instant
git git status when there is a change instant
git add the change above instant
git commit the change instant
github push the treebase to github ~10 seconds
treebase how long will it take to start treebase ~10 seconds
treebase how long will it take to scan the base for errors. ~5 seconds

Uh oh. Already I am noticing some scaling delays with a few of these tasks.

💣 The first git add took about 1 minute. I used to know the internals of Git well but that was a decade ago and my knowledge is rusty.

I will now look some stuff up. Could Git be creating 1 file for each file in my TreeBase? I found this post from someone who created a Git repo with 1.7M files, which turned out to contain useful information. From that post it looks like you can indeed expect Git to create 1 object file for each file in the project.

The first git commit took about 10 seconds. Why? Git printed a message about autopacking. It seems Git will combine a lot of small files into packs (perhaps in bundles of 6,700, though I haven't dug into this) to speed things up. Makes sense.

💣 I forgot to mention, while doing the tasks for the 60k TreeBase, my computer fan kicked on. A brief look at Activity Monitor showed a number of mdworker_shared processes using single digit CPU percentages each, which appears to be some OS level indexing process. That's hinting that a bigger TreeBase might require at least some basic OS/file system config'ing.

Besides the delays with git everything else seemed to remain fast. The 60k TreeBase choked a little more than I'd like, but it seems with a few tweaks things could remain screaming fast.

Let's move on to the first real challenge.

A 600k TreeBase

💣 The first problem hit immediately: my build.js is not efficient. I hit a v8 out-of-memory error. I could solve this by either 1) streaming the TSV one row at a time or 2) cleaning up the unoptimized jtree library to handle bigger data better. I chose to spend a few minutes and go with option 1).
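
In outline, option 1) looks something like this (a minimal sketch of my own using Node's built-in readline and plain tab-splitting; the real buildStream.js used a streaming CSV library and is not shown here):

const fs = require("fs")
const readline = require("readline")

const rl = readline.createInterface({ input: fs.createReadStream("title.basics.tsv") })
let header
rl.on("line", line => {
  const cells = line.split("\t")
  if (!header) {
    header = cells // the first line holds the column names
    return
  }
  const row = {}
  header.forEach((column, index) => (row[column] = cells[index]))
  // ...build the permalink and write the .title file, as in the original script...
})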

💣 It appears the first build script started writing files to the 600k directory before it failed. I had to rm -rf 600k/ and that took a surprisingly long time. Probably a minute or so. Something to keep an eye on.

💣 I updated my build script to use streams. Unfortunately the streaming csv parser I switched to choked on line 32546. Inspecting that vicinity, it was hard to tell what it was breaking on. Before diving in I figured I'd try a different library.

💣 The new library seemed to be working but it was taking a while so I added some instrumentation to the script. From those logs the new script seems to generate about 1.5k files per second. So it should take about 6 minutes for all 600k. For the 6.5M files, that would grow to over an hour, so perhaps there's more optimization work to be done here.

💣 Unfortunately the script exited early with:

Error: ENAMETOOLONG: name too long, open '/Users/breck/imdbPost/../imdb/600k/mord-an-lottomillionr-karl-hinrich-charly-l.sexualdelikt-an-carola-b.bankangestellter-zweimal-vom-selben-bankruber-berfallenmord-an-lottomillionr-karl-hinrich-charly-l.sexualdelikt-an-carola-b.bankangestellter-zweimal-vom-selben-bankruber-berfallen01985nncrimenews.title'

Turns out the Apple File System has a filename size limit of 255 UTF-8 characters so this error is understandable. However, inspecting the filename shows that for some reason the permalink was generated by combining the original title with the primary title. Sounds like a bug.

I cd into the 600k directory to see what's going on.

💣 Unfortunately ls hangs. ls -f -1 -U seems to go faster.

The titles look correct. I'm not sure why the script got hung up on that one entry. For now I'll just wrap the function call in a Try/Catch and press on. I should probably make this script resumable but will skip that for now.

Rerunning the script...it worked! That line seemed to be the only problematic line.

We now have our 600k TreeBase.

category description result
bash ls ~30 seconds
sublime Start sublime in the TreeBase folder failed
sublime scroll and click around files in the treebase folder and see how responsive it feels. X
sublime find all movies with the query "titleType movie" X
sublime find all comedy movies with the regex query "genres .*Comedy.*" X
finder open and browse 3 minutes
git init git for the treebase nearInstant
git see git status 6s
git first git add for the treebase 40 minutes
git first git commit 10 minutes
sublime edit some file X
git git status when there is a change ~23 seconds
git add the change above instant
git commit the change ~20 seconds
github push the treebase to github ~10 seconds
treebase how long will it take to start treebase ~10 seconds
treebase how long will it take to scan the base for errors. ~5 seconds

💣 ls is now nearly unusable. ls -f -1 -U takes about 30 seconds. A straight up ls takes about 45s.

💣 Sublime Text failed to open. After 10 minutes of 100% CPU usage and beachball'ing I force quit the program. I tried twice to be sure with the same result.

💣 mdworker_shared again kept my laptop running hot. I found a way of potentially disabling Mac OS X Spotlight Indexing of the IMDB folder.

💣 Opening the 600k folder in Apple's Finder gave me a loading screen for about 3 minutes.

At least it eventually came up:

Now, how about Git?

💣 The first git add . took 40 minutes! Yikes.

real 39m30.215s
user 1m19.968s
sys 13m49.157s

💣 git status after the initial git add took about a minute.

💣 The first git commit after the git add took about 10 minutes.

GitHub turns out to be a real champ. Even with 600k files the first git push took less than 30 seconds.

real 0m22.406s
user 0m2.657s
sys 0m1.724s

The 600k repo on GitHub comes up near instantly. GitHub just shows the first 1k out of 600k files which I think is a good compromise, and far better than a multiple minute loading screen.

💣 Sadly there doesn't seem to be any pagination for this situation on GitHub, so I'm not sure how to view the rest of the directory contents.

I can pull up a file quickly on GitHub, like the entry for License to Kill.

How about editing files locally? Sublime is no use so I'll use vim. Because ls is so slow, I'll find the file I want to edit on GitHub. Of course because I can't find pagination in GitHub I'll be limited to editing one of the first 1k files. I'll use just that License to Kill entry.

So the command I use is vim 007-licence-to-kill.title. Editing that file is simple enough. Though I wish we had support for Tree Notation in vim to get syntax highlighting and such.

💣 Now I do git add .. Again this takes a while. What I now realize is that my fancy command prompt runs a git status with every command. So let's disable that.

After going in and cleaning up my shell (including switching to zsh) I've got a bit more performance back on the command line.

💣 But just a bit. A git status still takes about 23 seconds! Even with the -uno option it takes about 15 seconds. This is with 1 modified file.

Now adding this 1 file seems tricky. Most of the time I do a git status and see that I want to add everything so I do a git add ..

💣 But I tried git add . in the 600k TreeBase and after 100 seconds I killed the job. Instead I resorted to git add 007-licence-to-kill.title which worked pretty much instantly.

💣 git commit for this 1 change took about 20 seconds. Not too bad but much worse than normal.

git push was just a few seconds.

I was able to see the change on GitHub instantly. Editing that file on GitHub and committing was a breeze. Looking at the change history and blame on GitHub was near instant.

Git blame locally was also just a couple of seconds.

Pause to Reflect

So TreeBase struggles at the 600k level. You cannot just use TreeBase beyond the 100k level without preparing your system for it. Issues arise with GUIs like Finder and Sublime, background file system processes, shells, Git, basic bash utilities, and so forth.

I haven't looked yet into RAM based file systems or how to setup my system to make this use case work well, but for now, out of the box, I cannot recommend TreeBase for databases of more than 100,000 entities.

Is there even a point now to try 6.5M? Arguably no.

However, I've come this far! No turning back now.

A 6.5M TreeBase

To recap what I am doing here: I am taking a single 6.5 million row 500MB TSV file that could easily be parsed into a SQLite or other battle hardened database and instead turning it into a monstrous 6.5 million file TreeBase backed by Git and writing it to my hard disk with no special configuration.

By the way, I forgot to mention my system specs for the record. I'm doing this on a MacBook Air running macOS Catalina on a 2.2GHz dual-core i7 with 8GB of 1600MHz DDR3 RAM with a 500GB Apple SSD using APFS. This is the last MacBook with a great keyboard, so I really hope it doesn't break.

Okay, back to the task at hand.

I need to generate the 6.5M files in a single directory. The 600k TreeBase took 6 minutes to generate so if that scales linearly 6.5M should take an hour. The first git add for 600k took 40 minutes, so that for 6.5M could take 6 hours. The first git commit for 600k took 10 minutes, so potentially 1.5 hours for 6.5M. So this little operation might take about 10 hours.

I'll stitch these operations together into a shell script and run it overnight (I'll make sure to check the batteries in my smoke detectors first).

Here's the script to run the whole routine:

time node buildStream.js
time cd ~/imdb/6m/
time git add .
time git commit -m "initial commit"
time git push

Whenever running a long script, it's smart to test it with a smaller dataset first. I successfully tested this script with the 6k file dataset. Everything worked. Everything should be all set for the final test.

(Later the next day...)

It's Alive!

It worked!!! I now have a TreeBase with over 6 million files in a single directory. Well, a few things worked, most things did not.

category description result
bash ls X
sublime Start sublime in the TreeBase folder X
sublime scroll and click around files in the treebase folder and see how responsive it feels. X
sublime find all movies with the query "titleType movie" X
sublime find all comedy movies with the regex query "genres .*Comedy.*" X
finder open and browse X
git init git for the treebase nearInstant
git first git add for the treebase 12 hours
git first git commit 5 hours
sublime edit some file X
git git status when there is a change X
git add the change above X
git commit the change X
github push the treebase to github X
treebase how long will it take to start treebase X
treebase how long will it take to scan the base for errors. X

💣 There was a slight hiccup in my script where somehow v8 again ran out of memory. But only after creating 6,340,000 files, which is good enough for my purposes.

💣 But boy was this slow! The creation of the 6M+ files took 3 hours and 20 minutes.

💣 The first git add . took a whopping 12 hours!

💣 The first git commit took 5 hours!

💣 A few times when I checked on the machine it was running hot. Not sure if from CPU or Disk or a combination.

💣 I eventually quit git push. It quickly completed "Counting objects: 6350437, done." but then nothing happened except lots of CPU usage for hours.

Although most programs failed, I was at least able to successfully create this monstrosity and navigate the folder.

The experiment has completed. I took a perfectly usable 6.5M row TSV file and transformed it into a beast that brings some of the most well-known programs out there to their knees.

💣 NOTE: I do not recommend trying this at home. My laptop became lava hot at points. Who knows what wear and tear I added to my hard disk.

What have I learned?

So that is the end of the experiment. Can you build a Git-backed TreeBase with 6.5M files in a single folder? Yes. Should you? No. Most of your tools won't work or will be far too slow. There's infrastructure and design work to be done.

I was actually pleasantly surprised by the results of this early test. I was confident it was going to fail but I wasn't sure exactly how it would fail and at what scale. Now I have a better idea of that. TreeBase currently sucks at the 100k level.

I also now know that the hardware for this type of system feels ready and it's just parts of some software systems that need to be adapted to handle folders with lots of files. I think those software improvements across the stack will be made and this dumb thing could indeed scale.

What's Next?

Now, my focus at the moment is not on big TreeBases. My focus is on making the experience of working with little TreeBases great. I want to help get things like Language Server Protocol going for TreeBases and a Content Management System backed by TreeBase.

But I now can envision how, once the tiny TreeBase experience is nailed, you should be able to use this for bigger tasks. The infrastructure is there to make it feasible with just a few adjustments. There are some config tweaks that can be made, more in-memory approaches, and some straightforward algorithmic additions to make to a few pieces of software. I also have had some fun conversations where people have suggested good sharding strategies that may prove useful without changing the simplicity of the system.
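
To make one of those sharding suggestions concrete, here is a hypothetical sketch (not part of TreeBase today): bucket files into subfolders keyed by the first characters of the permalink, which keeps any single directory small while preserving human-readable names:

// A hypothetical two-level layout to keep directories small
const shardPath = permalink => {
  const bucket = permalink.slice(0, 2).padEnd(2, "_")
  return `${bucket}/${permalink}.title`
}

console.log(shardPath("pauvre-pierrot")) // pa/pauvre-pierrot.title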

That being said, it would be fun to do this experiment again but this time try and make it work. Once that's a success, it would be fun to try and scale it another 100x, and try to build a TreeBase for something like the 180M paper Semantic Scholar dataset.

Why Oh Why TreeBase?

Okay, you might be wondering what is the point of this system? Specifically, why use the file system and why use Tree Notation?

1) The File System is Going to Be Here for a Long Long Time

1) About 30m programmers use approximately 100 to 500 general purpose programming languages. All of these actively used general purpose languages have battle tested APIs for interacting with file systems. They don't all have interfaces to every database program. Any programmer, no matter what language they use, without having to learn a new protocol, language, or package, could write code to interact with a TreeBase using knowledge they already have. Almost every programmer uses Git now as well, so they'd be familiar with how TreeBase change control works.

2) Over one billion more casual users are familiar with using their operating system tools for interacting with Files (like Explorer and Finder). Wouldn't it be cool if they could use tools they already know to interact with structured data?

Wouldn't it be cool if we could combine sophisticated type checking, querying, and analytical capabilities of databases with the simplicity of files? Programmers can easily build GUIs on top of TreeBase that have any and all of the functionality of traditional database-backed programs but have the additional advantage of an extremely well-known access vector to their information.
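
As a sketch of what that access vector looks like, here is a query over the IMDB TreeBase above using nothing but Node's standard file APIs (the folder path is hypothetical):

const fs = require("fs")
const path = require("path")

const folder = "imdb/6k" // hypothetical path to a TreeBase folder
const comedies = fs
  .readdirSync(folder)
  .filter(name => name.endsWith(".title"))
  .filter(name =>
    fs
      .readFileSync(path.join(folder, name), "utf8")
      .split("\n")
      .some(line => line.startsWith("genres ") && line.includes("Comedy"))
  )
console.log(`${comedies.length} comedies found`)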

People have been predicting the death of files but these predictions are wrong. Even Apple recently backtracked and added a Files interface to iOS. Files and folders aren't going anywhere. It's a very simple and useful design pattern that works in the analog and digital realm. Files have been around for at least 4,500 years and my guess is they will be around for another 5,000 years, if the earth doesn't blow up. Instead of dying, on the contrary, file systems will keep getting better and better.

2) Tree Notation is All You Need to Create Meaningful Semantic Content

People have recognized the value of semantic, strongly typed content for a long time. Databases have been strongly typed since the beginning of databases. Strongly typed programming languages have dominated the software world since the beginning of software.

People have been attempting to build a system for collaborative semantic content for decades. XML, RDF, OWL2, JSON-LD, Schema.org—these are all great projects. I just think they can be simplified and I think one strong refinement is Tree Notation.

I imagine a world where you can effortlessly pass TreeBases around and combine them in interesting ways. As a kid I used to collect baseball cards. I think it would be cool if you could just as easily pass around "cards" like a "TreeBase of all the World's Medicines" or a "TreeBase of all the world's academic papers" or a "TreeBase of all the world's chemical compounds" and because I know how to work with one TreeBase I could get value out of any of these TreeBases. Unlike books or weakly typed content like Wikipedia, TreeBases are computable. They are like specialized little brains that you can build smart things out of.

So I think this could be pretty cool. As dumb as it is.

I would love to hear your thoughts.

View source

January 23, 2020 — People make biased claims all the time. A decent response used to be "citation needed". But we should demand more. Anytime someone makes a claim that seems biased, call them out with: Dataset needed.

Whether it's an academic paper, news article, blog post, tweet, comment or ad, linking to analyses is not enough. If someone stops at that, demand a link to a clean dataset supporting the author's position. If they can't deliver, they should retract.

Of course, most sources don't currently publish their datasets. You cannot trust claims from any person or organization without an easily accessible dataset. In fact, it's probably safe to assume when someone shares a conclusion without the accompanying dataset that they are distorting reality for their own benefit.

Be a broken record: "Dataset needed. Dataset needed. Dataset needed."

Encourage authors to link to and/or publish their datasets. You can't say "dataset needed" enough. It is valuable, constructive feedback.

Authors: support your arguments with open data

Link to the dataset. If you want to include a conclusion, provide a deep link to the relevant query of the dataset. Do not repeat conclusions that don't have an accompanying dataset. If people can't verify what you say, don't say it.

Software teams: make it easy for users to share deep links to queries over public datasets

Many teams are creating tools that make it easy to deep link to queries over open datasets, such as Observable, Our World in Data, Google Big Query, Wolfram Data Repository, Tableau Public, IDL, Jupyter, Awesome Public Datasets, USAFacts, Google Dataset Search, and many more.

Students: Learn to build and publish datasets

I remember being a high school student and getting graded on the dataset notebooks we made in the lab. Writing clean data should be widely taught in school, and there's an army of potential workers who could help us create more public, deep-linkable datasets.

Notes

Thanks to DL for helping me refine my thinking from this earlier post.

View source

January 20, 2020 — In this post I briefly describe eleven threads in languages and programming. Then I try to connect them together to make some predictions about the future of knowledge encoding.

This might be hard to follow unless you have experience working with types, whether that be types in programming languages, or types in databases, or types in Excel. Actually, this may be hard to follow regardless of your experience. I'm not sure I follow it. Maybe just stay for the links. Skimming is encouraged.

First, from the Land of Character Sets

Humans invented characters roughly 5,000 years ago.

Binary notation was invented roughly 350 years ago.

The first widely adopted system for using binary notation to represent characters was ASCII, which was created only 60 years ago. ASCII encodes little more than the characters used by English.

In 1992 UTF-8 was designed which went on to become the first widespread system that encodes all the characters for all the world's languages.

For about 99.6% of recorded history we did not have a globally used system to encode all human characters into a single system. Now we do.

Meanwhile, in the Land of Standards Organizations

Scientific standards are the original type schemas. Until recently, Standards Organizations dominated the creation of standards.

You might be familiar with terms like meter, gram, amp, and so forth. These are well defined units of measure that were pinned down in the International System of Units, which was first published in 1960.

The International Organization for Standardization (ISO) began around 100 years ago and is the organization behind a number of popular standards from currency codes to date and time formats.

For 98% of recorded history we did not have global standards. Now we do.

Meanwhile, in Math Land

My grasp of the history of mathematics isn't strong enough to speak confidently to trends in the field, but I do want to mention that in the past century there has been a lot of important research into type theories.

In the past 100 years type theories have taken their place as part of the foundation of mathematics.

For 98% of recorded history we did not have strong theories of type systems. Now we do.

Meanwhile, in Programming Language Land

The research into mathematical type and set theories in the 1900's led directly into the creation of useful new programming languages and programming language features.

From the typed lambda calculus in the 1940's to the static type system in languages like C to the ongoing experiments of Haskell or the rapid growth of the TypeScript ecosystem, the research into types has led to hundreds of software inventions.

In the late 1990's and 2000's, a slew of programming languages that underutilized innovations from type theory in the name of easier prototyping, like Python and Ruby and JavaScript, became very popular. For a while this annoyed programmers who understood the benefits of type systems. But now those languages too are benefiting from the research, as the growing number of programmers has created a bigger demand for richer type systems.

95%+ of the most popular programming languages use increasingly smarter type systems.

Meanwhile, in API Land

Before the Internet became widespread, the job of most programmers was to write software that interacted only with other software on the local machine. That other software was generally under their control or well documented.

In the late 1990's and 2000's, a big new market arose for programmers to write software that could interact over the Internet with software on other machines that they had no control of or knowledge about.

At first there was not a good standard language to use that was agreed upon by many people. 1996's XML, a variant of SGML from 1986, was the first attempt to get some traction for this job. But XML and the dialects of XML for APIs like SOAP (1998) and WSDL (2000) were not easy to use. Then Douglas Crockford created a new language called JSON in 2001. JSON made web API programming easier and helped create a huge wave of web API businesses. For me this was great. In the beginning of my programming career I got jobs working on these new JSON APIs.

The main advantage that JSON had over XML was simple, well defined types. It had just a few primitive types—like numbers, booleans and strings—and a couple of complex types—lists and dicts. It was a very useful collection of structures that were important across all programming languages, put together in a simple and concise way. It took very little time to learn the entire thing. In contrast, XML was "extensible" and defined no types, leading to many massive dialects defined by committee.
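
To see how small that type system is, a single snippet of my own can exercise every JSON type: strings, numbers, booleans, null, lists, and dicts:

{
  "primaryTitle": "Carmencita",
  "startYear": 1894,
  "isAdult": false,
  "endYear": null,
  "genres": ["Documentary", "Short"],
  "runtime": { "minutes": 1 }
}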

For 99.8% of recorded history we did not have a global network conducting automated business transactions with a typed language. Now we do.

Meanwhile, in SQL Land

When talking about types and data one must pay homage to SQL databases, which store most of the world's structured data and perform the transactions that our businesses depend on.

SQL programmers spend a lot of time thinking about the structure of their data and defining it well in a SQL data definition language.

Types play a huge role in SQL. The dominant SQL databases such as MySQL, SQL Server, and Oracle all contain common primitives like ints, floats, and strings. Most of the main SQL databases also have more extensive type systems for things like dates and money and even geometric primitives like circles and polygons in PostgreSQL.

Critical information is stored in strongly typed SQL databases: Financial information; information about births, health and deaths; information about geography and addresses; information about inventories and purchase histories; information about experiments and chemical compounds.

98% of the world's most valuable, processed information is now stored in typed databases.

Meanwhile, in the Land of Types as Code

The standards we get from the Standards Organizations are vastly better than not having standards, but in the past they've been released as non-computable, weakly typed documents.

There are lots of projects that are now writing schemas in computable languages. The Schema.org project is working to build a common global database of rich type schemas. JSON LD aims to make the types of JSON more extensible. The DefinitelyTyped project has a rich collection of commonly used interfaces. Protocol buffers and similar are another approach at language agnostic schemas. There are attempts at languages just for types. GraphQL has a useful schema language with rich typing.
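
For a taste of what a standard written as a strongly typed document looks like, here is a minimal sketch of my own in JSON Schema form, describing a movie title record:

{
  "type": "object",
  "properties": {
    "primaryTitle": { "type": "string" },
    "startYear": { "type": "integer" },
    "isAdult": { "type": "boolean" },
    "genres": { "type": "array", "items": { "type": "string" } }
  },
  "required": ["primaryTitle"]
}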

100% of standards/type schemas can now themselves be written in strongly typed documents.

Meanwhile, in Git Land

Git is a distributed version control system created in 2005.

Git can be used to store and track changes to any type of data. You could theoretically put all of the English Wikipedia in Git, then CAPITALIZE all verbs, and save that as a single patch file. Then you could post your patch to the web and say "I propose the new standard is we should CAPITALIZE all verbs. Here's what it would look like." While this is a dumb idea, it demonstrates how Git makes it much cheaper to iterate on standards. Someone can propose both a change to the standard and the global updates all in a single operation. Someone can fork and branch to their heart's content.

For 99.9% of recorded history, there was not a cheap way to experiment and evolve type schemas nor a safe way to roll them out. Now there is.

Meanwhile, in Hub Land

In the past 30 years, central code hubs have emerged. There were early ones like SourceForge but in the past ten years GitHub has become the breakout star. GitHub has around 30 million users, which is also a good estimate of the total number of programmers worldwide, meaning nearly every programmer uses git.

In addition to source code hubs, package hubs have become quite large. Some early pioneers are still going strong like 1993's CRAN but the breakout star is 2010's NPM, which has more packages than the package managers of all other languages combined.

Types are arbitrary. The utility of a type depends not only on its intrinsic utility but also on its popularity. You can create a better type system—maybe a simpler universal day/time schema—but unless it gains popularity it will be of limited value.

Code hubs allow the sharing of code, including type definitions, and can help make type definitions more popular, which also makes them more useful.

99% of programmers now use code hubs and hubs are a great place to increase adoption of types, making them even more useful.

Meanwhile, in Semantic Web Land

The current web is a collection of untyped HTML pages. So if I were to open a web page with lots of information about diseases and had a semantic question requiring some computation, I'd have to read the page myself and use my slow brain to parse the information and then figure out the answer to my semantic question.

The Semantic Web dream is that the elements on web pages would be annotated with type information so the computer could do the parsing for us and compute the answers to our semantic questions.

While the "Semantic Web" did not achieve adoption like the untyped web, that dream remains very relevant and is ceaselessly worked upon. In a sense Wolfram Alpha embodies an early version of the type of UX that was envisioned for the Semantic Web. The typed data in Wolfram Alpha comes from a nicely curated collection.

While lots of strongly typed proprietary databases exist on the web for various domains from movies to startups, and while Wikipedia is arguably undergoing gradual typing, the open web still remains largely untyped and we don't have a universally accessible interface yet to the world's typed information.

99% of the web is untyped while 99% of the world's typed information is silo-ed and proprietary.

Meanwhile, in Deep Learning Land

Deep Learning is creeping in everywhere. In the past decade it has come to be the dominant strategy for NLP. In the past two years, a new general learning strategy has become feasible where models learn some intrinsic structure of language and can use this knowledge to perform many different language tasks.

One of those tasks could be to rewrite untyped data in a typed language.

AI may soon be able to write a strongly typed semantic web from the weakly typed web.

Tying All These Threads Together

I see a global pattern here that I call the "Type the World" trend. Here are some future predictions from these past trends.

  • We will always have creative, ambiguous, untyped human languages where new ideas can evolve freely
  • In the future great new ideas from the untyped realm will be adopted faster by the typed realm
  • Nearly all transactions in business and government will be in typed languages
  • Someone will invent a wildly popular new domain specific language(s) for type definitions
  • All the popular standards will be ones written in these new and improved TypeDSLs
  • Git—or git like systems—will be used to store both the TypeDSLs and the typed data
  • TypeHubs will arise hosting these widely used type schemas
  • Programmers will get their types from TypeHubs regardless of which programming language they use
  • Deep learning agents will be used to rewrite the web's untyped data into typed data
  • Deep learning agents will be used to improve type schemas
  • Human editors will review and sign off on the typing work of the deep learning agents
  • Silo-ed domain specific standards will merge into one or a handful of global monolithic type systems

The result of this will be a future where all business, from finance to shopping to healthcare to law, is conducted in a rich, open type system, and untyped language work is relegated to research, entertainment and leisure.

While I didn't dive into the benefits of what Type the World will bring, and instead merely pointed out some trends that I think indicate it is happening, I do indeed believe it will be a fantastic thing. Maybe I'll give my take on why Type the World is a great thing in a future post.

View source

January 16, 2020 — I often rail against narratives. I think stories always oversimplify things, have hindsight bias, and often mislead. I spend a lot of time trying to invent tools for making data derived thinking as effortless as narrative thinking (so far, mostly in vain). And yet, as much as I rail on stories, I have to admit stories work.

I read an article that put it more succinctly:

Why storytelling? Simple: nothing else works.

I would agree with that. Despite the fact that 90% of stories are lies, they motivate people better than anything else. Stories make people feel something. They get people going.

What is the math here? On a population level, it seems people who follow stories have a survival advantage. On a local level, it seems people who can weave stories have an even greater survival advantage.

Why?

Perhaps it's due to risk taking. Perhaps the people who follow stories take more risks, on average, than people who don't, and even though many of those risks don't pan out, some do pay off and the average is worth it.

Perhaps it's due to productivity. Perhaps people who are storiers spend less time analyzing and more time doing. The act of doing generates experience (data), so often the best way to be data-driven isn't to analyze more, it's to go out and do more to collect more data. As they say in machine learning, data trumps algorithms.

Perhaps it's due to focus. If you just respond to your senses all the time, the world is a shimmering place, and perhaps narratives are necessary to get anything done at all.

Perhaps it's due to memory. A story like 'The Boy who Cried Wolf' is shorter and more memorable than 'Table of Results from a Randomized Experiment on the Effect of False Alarms on Subsequent Human Behavior'.

Perhaps it's healthier. Our brains are not much more advanced than the chimp. Uncertainty can create stress and anxiety. Perhaps the confidence that comes from belief in a story leads to less stress and anxiety leading to better health, which outweighs any downsides from decisions that go against the data.

Perhaps it's a cooperation advantage. If everyone is analyzing their individual decisions all the time, perhaps that comes at the cost of cooperation. Storiers go along with the group story, and so over time their populations get more done together. Maybe the opposite of stories isn't truth, it's anarchy.

Perhaps it's just more fun. Maybe stories are suboptimal for decision making and lead us astray all the time, and yet are still a survival advantage simply because it's a more enjoyable way to live. Even when you screw up royally, it can make a good story. As the saying goes, "don't take life too seriously, you'll never make it out alive."

Despite my problems with narratives and my quest for something better, it seems quite possible to me that at the end of the day it may turn out that there is nothing better, and it's best to make peace with stories, despite their flaws. And regardless of the future, I can't argue with the value of stories today for motivation and enjoyment. Nothing else works.

View source

January 3, 2020 — Speling errors and errors grammar are nearly extinct in published content. Data errors, however, are prolific.

By data error I mean one of the following errors: a statement without a backing dataset and/or definitions, a statement with data but a bad reduction, or a statement with backing data but lacking integrated context. I will provide examples of these errors later.

The hard sciences like physics, chemistry and most branches of engineering have low tolerance for data errors. But outside of those domains data errors are everywhere.

Fields like medicine, law, media, policy, the social sciences, and many more are teeming with data errors, which are far more consequential than spelling or grammar errors. If a drug company misspells the word dockter in some marketing material the effect will be trivial. But if that material contains data errors those often influence terrible medical decisions that lead to many deaths and wasted resources.

If Data Errors Were Spelling Errors

You would be skeptical of National Geographic if their homepage looked like this:

We generally expect zero spelling errors when reading any published material.

Spell checking is now an effortless technology and everyone uses it. Published books, periodicals, websites, tweets, advertisements, product labels: we are accustomed to reading content at least 99% free of spelling and grammar errors. But there's no equivalent to a spell checker for data errors and when you look for them you see them everywhere.

The Pandemic: An Experiment

Data errors are so pervasive that I came up with a hypothesis today and put it to the test. My hypothesis was this: 100% of "reputable" publications will have at least one data error on their front page.

Method

I wrote down 10 reputable sources off the top of my head: the WSJ, The New England Journal of Medicine, Nature, The Economist, The New Yorker, Al Jazeera, Harvard Business Review, Google News: Science, the FDA, and the NIH.

For each source, I went to their website and took a single screenshot of their homepage, above the fold, and skimmed their top stories for data errors.

Results

In the screenshots above, you can see that 10/10 of these publications had data errors front and center.

Breaking Down These Errors

Data errors in English fall into common categories. My working definition provides three: a lack of dataset and/or definitions, a bad reduction, or a lack of integrated context. There could be more, this experiment is just a starting point where I'm naming some of the common patterns I see.

The top article in the WSJ begins with "Tensions Rise in the Middle East". There are at least 2 data errors here. First is the Lack of Dataset error. Simply put: you need a dataset to make a statement like that, and there is no longitudinal dataset on tensions in the Middle East in that article. There is also a Lack of Definitions error. Sometimes you may not yet have a dataset, but you can at least define what a dataset that could back your assertions would look like. In this case we have neither a dataset nor a definition of what a "Tensions" dataset would look like.

In the New England Journal of Medicine, the lead figure claims "excessive alcohol consumption is associated with atrial fibrillation" based on a comparison of 2 groups. One group had 0 drinks over a 6 month period and the other had over 250 drinks (10+ per week). There was a small impact on atrial fibrillation. This is a classic Lack of Integrated Context data error. If you were running a lightbulb factory and found soaking lightbulbs in alcohol made them last longer, that might be an important observation. But humans are not as disposable as lightbulbs, and health studies must always include integrated context to establish whether there is something of significance. Having one group make any similarly drastic lifestyle change will likely have some impact on any measurement. A good rule of thumb: anything you read that needs p-values to explain why it is significant is not significant.

In Nature we see the line "world's growing water shortage". This is a Bad Reduction, another very common data error. While certain areas have a water shortage, other areas have a surplus. Any time you see broad, diverse things grouped into one term, or into "averages" or "medians", it's usually a data error. You always need access to the data, and you'll often find a distribution too complex to support broad statements like those.

In The Economist the lead story talks about an action that "will have profound consequences for the region". Again we have the Lack of Definitions error. We also have a Forecast without a Dataset error. There's nothing wrong with making a forecast--creating a hypothetical dataset of observations about the future--but one needs to actually create and publish that dataset, not just make a vague, unfalsifiable statement.

The New Yorker's lead paragraph claims an event "was the most provocative U.S. act since...". I'll save you the suspense: the article did not include a thorough dataset of such historical acts with a defined measurement of "provocative". Another Lack of Dataset error.

In Al Jazeera we see "Iran is transformed", which packs Bad Reduction, Lack of Dataset, and Lack of Definitions errors into three words.

Harvard Business Review has a lead article about the post-holiday funk. In that article we find the phrase "research...suggests", often a dead giveaway for a Hidden Data error, where the data is behind a paywall and, even then, often inscrutable. Any time someone says "studies/researchers/experts" without giving you the data, it is a data error. We all know the earth revolves around the sun because we can all see the data for ourselves. Don't trust any data you don't have access to.

Google News has a link to an interesting article on the invention of a new type of color changing fiber, but the article goes beyond the matter at hand to claim: "What Exactly Makes One Knot Better Than Another Has Not Been Well-Understood – Until Now". Making that meta claim about the state of knot knowledge without a dataset of knot models is a Lack of Dataset error.

The FDA's lead article is on the flu and begins with the words "Most viral respiratory infections...", then proceeds for many paragraphs with zero datasets--a huge overall Lack of Datasets. There's also a Lack of Monitoring error. Manufacturing facilities are controlled, static environments. In uncontrolled, heterogeneous environments like human health, things are always changing, and making ongoing claims without infrastructure in place to monitor and adjust to changing data is a data error.

The NIH has an article on how increased exercise may be linked to reduced cancer risk. This is actually an informative article with 42 links to many studies with lots of datasets. However, the huge data error here is Lack of Integration. It is very commendable to do the grunt work and gather the data to make a case, but simply linking to static PDFs is not enough—they must be integrated. Not only does integration make the data much more useful, but if you've never tried to integrate the pieces, you have no idea whether they actually fit together to support your claims.

While my experiment didn't touch books or essays, I'm quite confident the hypothesis will hold in those realms as well. If I flipped through some "reputable" books or essayist collections I'm 99.9% confident you'd see the same classes of errors. This site is no exception.

The Problem is Language Tools

I don't think anyone's to blame for the proliferation of data errors. I think it's still relatively recent that we've harnessed the power of data in specialized domains, and no one has yet invented ways to easily and fluently incorporate true data into our human languages.

Human languages have absorbed a number of sublanguages over thousands of years that have made it easier to communicate precisely. The base 10 number system (0,1,2,3,4,5,6,7,8,9) is one example, and it made arithmetic far easier to use.

Taking Inspiration from Programming Language Design

Domains with low tolerance for data errors, like aeronautical engineering or computer chip design, are heavily reliant on programming languages. I think it's worthwhile to explore the world of programming language design for ideas that might inspire improvements to our everyday human languages.

Some quick numbers for people not familiar with the world of programming languages. Around 10,000 computer languages have been released in history (most of them in the past 70 years). About 50-100 of those have more than a million users worldwide and the names of some of them may be familiar to even non-programmers such as Java, Javascript, Python, HTML or Excel.

Not all programming languages are created equal. The designers of a language end up making thousands of decisions about how their particular language works. While English has evolved with little guidance over millennia, programming languages are often designed consciously by small groups and can evolve much faster.

Often the designers change a language to make it easier to do something good or harder to do something bad.

Sometimes what is good and bad is up to the whims of the designer. Imagine I was an overly optimistic person and decided that English was too boring or pessimistic. I may invent a language without periods, where all sentences must end with an exclamation point! I'll call it Relish!

Most of the time though, as data and experience accumulates, a rough consensus emerges about what is good and bad in language design (though this too seesaws).

Type Checked Languages

One of the patterns that has emerged as generally a good thing over the decades to many languages is what's called "type checking". When you are programming you often create buckets that can hold values. For example, if you were programming a function that regulated how much power a jet engine should supply, you might take into account the reading from a wind speed sensor and so create a bucket named "windSpeed".

Some languages are designed to enforce stricter logic checking of your buckets to help catch mistakes. Others will try to make your program work as written. For example, if later in your jet engine program you mistakenly assigned the indoor air temperature to the "windSpeed" bucket, the parsers of some languages would alert you while you are writing the program, while with other languages you'd discover your error in the air. The former style of language generally does this via "type checking".
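To make that concrete, here is a minimal sketch in TypeScript. This is my own illustration, not real avionics code: the sensor names and the power formula are invented.

// Distinct types keep a wind speed reading and an air temperature
// reading from ending up in the same bucket.
type WindSpeed = { kind: "windSpeed"; knots: number };
type Temperature = { kind: "temperature"; celsius: number };

function requiredEnginePower(wind: WindSpeed): number {
  return 1000 + wind.knots * 5; // toy formula, purely illustrative
}

const sensorReading: WindSpeed = { kind: "windSpeed", knots: 42 };
const cabinTemperature: Temperature = { kind: "temperature", celsius: 21 };

requiredEnginePower(sensorReading); // fine
// requiredEnginePower(cabinTemperature);
// ^ the type checker rejects this line while you are still writing the
// program; you find the bug at your desk, not in the air.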

Type Checking of programming languages is somewhat similar to Grammar Checking of English, though it can be a lot more extensive. If you make a change in one part of the program in a typed language, the type checker can recheck the entire program to make sure everything still makes sense. This sort of thing would be very useful in a data checked language. If your underlying dataset changes and conclusions anywhere are suddenly invalid, it would be helpful to have the checker alert you.

Perhaps lessons learned from programming language design, like Type Checking, could be useful for building the missing data checker for English.

A Blue Squiggly to Highlight Data Errors

Perhaps what we need is a new color of squiggly:

✅ Spell Checkers: red squiggly

✅ Grammar Checkers: green squiggly

❌ Data Checkers: blue squiggly

If we had a data checker that highlighted data errors we would eventually see a drastic reduction in data errors.

If a checker for data errors appeared today, our screens would be full of blue. For example, click the button below to highlight just some of the data errors on this page alone.
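To give a flavor of what such a checker might look like, here is a toy sketch in TypeScript. This is pure speculation on my part: a real data checker would have to verify datasets, reductions, and integrated context, not merely pattern-match giveaway phrases.

// Phrases that, per the errors cataloged above, often signal a missing
// dataset or hidden data.
const giveaways: RegExp[] = [
  /\bstudies (show|suggest)\b/i,
  /\b(researchers|experts) say\b/i,
  /\bgrowing\b.*\b(shortage|crisis|threat)\b/i,
  /\btensions rise\b/i,
];

function flagPossibleDataErrors(text: string): string[] {
  return giveaways
    .filter((pattern) => pattern.test(text))
    .map((pattern) => `Blue squiggly: possible data error near ${pattern}`);
}

flagPossibleDataErrors("...the world's growing water shortage...");
// => one flag: the claim needs a backing dataset to earn a clean pass.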

How Do We Reduce Data Errors?

If someone created a working data checker today and applied it to all of our top publications, blue squigglies would be everywhere.

It is very expensive and time consuming to build datasets and make data driven statements without data errors, so am I saying until we can publish content free of data errors we should stop publishing most of our content? YES! If you don't have anything true to say, perhaps it's best not to say anything at all. At the very least, I wish all the publications above had disclaimers about how laden with data errors their stories are.

Of course I don't believe either of those is likely to happen. I think we are stuck with data errors until people invent great new tools that make it a lot easier to publish material without them. I hope we somehow create a data checked language.

I still don't know what that looks like, exactly. I spend half my work time attempting to create such new languages and tools and the other half searching the world to see if someone else has already solved it. I feel like I'm making decent progress on both fronts but I still have no idea whether we are months or decades away from a solution.

While I don't know what the solution will be, I would not be surprised if the following patterns play a big role in moving us to a world where data errors are extinct:

1. Radical increases in collaborative data projects. It is very easy for a person or small group to crank out content laden with data errors. It takes small armies of people making steady contributions over a long time period to build the big datasets that can power content free of data errors.

2. Widespread improvements in data usability. Lots of people and organizations have moved in the past decade to make more of their data open. However, it generally takes hours to become fluent with one dataset, and there are millions of them out there. Imagine if it took you hours to ramp up on a single English word. That's the state of data usability right now. We need widespread improvements here to make integrated contexts easier.

3. Stop subsidizing content laden with data errors. We grant monopolies on information and so there's even more incentive to create stories laden with data errors—because there are more ways to lie than to tell the truth. We should revisit intellectual monopoly laws.

4. Novel innovations in language. Throughout history, novel sublanguages have enhanced our cognitive abilities. Things like geometry, Hindu-Arabic numerals, calculus, binary notation, et cetera. I hope some innovators will create very novel data sublanguages that make it much easier to communicate with data and reduce data errors.

Have you invented a data checked language, or are you working on one? If so, please get in touch.

View source

The Attempt to Capture Truth

August 19, 2019 — Back in the 2000's Nassim Taleb's books set me on a new path in search of truth. One truth I became convinced of is that most stories are false due to oversimplification. I largely stopped writing over the years because I didn't want to contribute more false stories, and instead I've been searching for and building new forms of communication and ways of representing data that hopefully can get us closer to truth.

I've tried my best to make my writings encode "real" and "true" information, but it's impossible to overcome the limitations of language. The longer any work of English writing is, the more inaccuracies it contains. This post itself will probably be more than 50% false.

But most people aren't aware of the problem.

Fake news is a great idea.

Then came DT and "fake news". One nice thing I can say about DT is that "fake news" is a great idea.

If your ideas are any good, you'll have to ram them down people's throats. @ Howard H. Aiken
...in science the credit goes to the man who convinces the world, not to the man to whom the idea first occurs. @ Francis Darwin

DT has done a great job of spreading this idea. Hundreds of millions of people are now at least vaguely aware that there's a serious problem, even if they can't describe precisely what it is. Some people mistakenly believe "their news" is real and their opponents' news is fake. It's all fake news.

What's the underlying problem?

English is a fantastic story telling language that has been very effective at sharing stories, coordinating commerce and motivating armies, but English evolved in a simpler time with simpler technologies and far less understanding about how the world really works.

English oversimplifies the world, which makes it easy to communicate what is to be done. English is a modern day cave painting language. Nothing motivates a person better than a good story, and that motivation was essential to get us out of the cave. It didn't matter so much in which direction people went, as long as they went in some direction together.

But we are now out of the cave, and it is not enough to communicate what is to be done. We have many more options now, and it's important that we have languages that help us better decide what is the best thing to do.

What will a language that supports Real News look like?

Real News is starting to emerge in a few places. The WSJ has long been on the forefront but newer things like Observable are also popping up.

I don't know exactly what a language for truth will look like but I imagine it will have some of these properties:

  • It will be a language that is hard to lie with
  • It will contain more numerics and be more data driven
  • It will be interactive, with assumptions made clear and adjustable by the reader
  • It will be blameable, with the source and history of every line and character auditable
  • It will be linked and auditable, with the ability to "go to definitions"
  • It will discourage obfuscation, and will make it easy to compare 2 representations and choose the simpler one
  • It will be more visual, expanding beyond character alphabets and embracing more charts and visualizations
  • It will be more Random Access, like bullet points

English and Sapir–Whorf

I would say until we move away from English and other story-telling languages to encodings that are better for truth telling, our thinking will also be limited.

A language that doesn’t affect the way you think about programming, is not worth knowing. @ Alan Perlis

New languages designed for truth telling might not just be useful in our everyday lives, they could very much change the way we think.

Finally

Again, to channel Taleb, I'm not saying English is bad. By all means, enjoy the stories. But just remember they are stories. If you are reading English, know that you are not reading Real News.

View source

January 13, 2018 — This is a story about how my FitBit logged a manic episode.

In mid-2017, I had a manic episode that led me to act impulsively, over-confidently and grandiosely. Like a textbook case of mania, I was filled with grand ideas and visions, rushed through life decisions (and a lot of my savings), and was positively euphoric.

This was not a fluke event. I had been stable for about 2 years, but I was diagnosed with bipolar disorder 13 years ago, and have had approximately 7 swings of varying severity since then.

But this episode had an epidemiological silver lining: my FitBit recorded it. I hope this story might help at least one other person suffering from bipolar disorder or encourage people working on using wearable tech for mental health treatments to keep up the promising work.

Sleep Tracking

In November 2014, I started wearing a fitness band at all times. I now have about 160 consecutive weeks' worth of sleep and other data.

The average FitBit user gets 6 hours and 38 minutes of sleep per night.

My average — when stable — was a bit over 8 hours a night. Lots of people — parents especially — don't have the luxury of sleeping so much, and I feel a bit selfish and lazy for sleeping so much. But when I do sleep less, as we shall see, my brain handles it worse than most.

The Beginning

In May of 2016, I left my job to do some freelancing and work on an entrepreneurial software project. I was staying sane and getting over 7.5 hours of sleep per night.

But in April 2017, things changed. My sleep average dropped to a little over 6 hours per night. Compared to prior years, this was a 30% drop in sleep. But I wasn't tired. I felt more awake than I had been in years.

On Thursday, May 3, I slept for 4 hours and 56 minutes — for no good reason. The next night I did it again. I was coding faster than ever and loving life to boot. Later that day in my journal I wrote, “Life is good. No. Life is fucking great!”

Mania is like an invisible drug. I had been off it for years. I had been vigilant. But after a long stable period, I had forgotten and now my guard was down. I didn't recognize it as it was happening.

Looking back, this was probably the time to catch it. To go see my therapist. To get back on meds. To reveal my condition to some more people and ask for help.

Alas, I didn't do that. By chance, I spent the next three weeks at my girlfriend's on the East Coast and did calm down a bit. But I was not thinking, “Yikes! I was getting a bit manic there, I need to be careful.” Instead I was thinking, “Wow, I was in the zone a couple weeks ago — I gotta get back to that!” My mind had tasted mania again and was subconsciously itching to get more.

The Episode

Eager to get back to “the zone,” I impulsively bought a ticket back home and cut my trip to the East Coast short. I got back to coding and my work, and then, predictable as a clock, my countdown to mania began.

The above chart shows weekly averages. But the day-to-day variance was extreme — a few nights I slept less than 3 hours — and yet I felt great!

My behavior became textbook manic. My ideas and claims became more and more ambitious — topics that had seemed complex to me before suddenly seemed simple, I could learn anything, solve any problem, change industries.

With my software project I started to feel the paranoid need to move fast — I got the delusion Google was also working on the same idea and was about to launch before me, stealing my thunder. Embarrassingly, I started publishing these grand claims, emailing past coworkers and employers, and started telling people I would win the Turing Award.

My spending got wild. For example, on a trip during the episode, I spent $300 to upgrade my ticket to first class (at the time I thought it was worthwhile because I wrote a "brilliant" math proof on the plane), took a deluxe Uber for $70, and tipped a busker $100. My usual daily expenses were about $30. My monthly expenses had been about $3k, but now shot up to over $10k.

I can't believe I didn't know better, given more than a decade of experience with this condition. But at the time I was oblivious to what was driving me — in my mind there was a perfectly logical narrative explaining all my actions. Only now, looking at the sleep data, is it clear to me that physical brain conditions were contributing a huge amount to my behavior.

The Confusing, Mixed Aftermath

Following this acute episode, I had a couple months of mixed moods.

July and August were particularly confusing. The problem with having ideas in a manic state is that you believe in them so fervently — ideas come hard and fast in what feels like a spiritual experience — that it becomes very hard to let those ideas and beliefs go. At times I questioned what the hell I was doing, but I had made lots of claims in public and did my best to try and prove them.

In the months that followed, sleep was mixed. I was trying to “get back” to the energy levels and clarity that I had in June — I still hadn't recognized that time as manic — but was finding it hard to do so. I tried sleeping less to kickstart my system.

Far in the back of my head it was starting to occur to me that maybe I had gone manic again — that my ideas weren't so grand after all — but for months I strongly repressed those thoughts.

December — Coming to Terms with the Data

By December I couldn't go mixed anymore. I knew something was wrong and I finally started to reflect on the past.

Out of curiosity, I downloaded all my sleep data. What I saw confirmed my worst fears. When my sleep went out of control, so did my mania. When sleep decreased, grandiosity increased. Those long days weren't a result of groundbreaking work, but rather the result of a manic mind.

Tracking is Invaluable but Not a Cure

Wearables gave me great hope for curing my extreme mood issues. At first this was confirmed by my experience — once I started wearing a fitness band and regularly kept an eye on my sleep, I went on to have the longest mania-free period in my life. But now the data shows me that wearables by themselves are not a cure.

Could Alerts Have Prevented It?

There are a lot of things I could have done differently to prevent this manic episode from happening. After stabilizing, I went off my medication and hadn't visited my doctor for over 18 months. I screwed up.

But I wonder what would have happened back in May if my wearable service had alerted my doctor and a few close friends of my foreboding sleep changes. Perhaps there could have been a minor intervention that prevented the huge swing?

Of course, I know it's not as easy as alerts. I realize that if done in the wrong way, an alert service might make things worse — perhaps people might get paranoid and angry, rip the band off, and continue on their manic way.

But in the future perhaps someone will design an alert and intervention system that is effective and palatable. Perhaps the alert sets into motion something subtle and agreeable — the person agrees to take an extra medication for a time, check in with their therapist, or start filling out a daily mood journal for a month, et cetera.
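To sketch how simple such a rule could start out, here is a toy example in TypeScript. The data shape, names, and thresholds are all invented for illustration; they are not any real wearable's API, and a real system would need far more care.

interface Night { date: string; minutesAsleep: number }

// Average the most recent seven nights.
function weeklyAverage(nights: Night[]): number {
  const week = nights.slice(-7);
  return week.reduce((sum, n) => sum + n.minutesAsleep, 0) / week.length;
}

// Alert when the weekly average falls 25% or more below a personal
// baseline, e.g. from a stable 8 hours down toward 6, the kind of
// drop described above.
function shouldAlert(nights: Night[], baselineMinutes = 8 * 60): boolean {
  return weeklyAverage(nights) < baselineMinutes * 0.75;
}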

Grateful for the Data

I am hugely disappointed in myself for letting this happen but at least grateful to have the huge amount of sleep data this time, something I never had before.

Having lots of quantitative biological data like this makes it easier to accept the diagnosis that this is a real, physical condition. Sleep stats are also a simple, objective, and near-real time indicator for state of mind. Other data like journaling, mood tracking, emails, and finances reveal the symptoms, but that data is sparse, subjective, and laggy.

I'm hopeful that wearable makers might crack the code for measuring other key indicators, like anxiety and social activity data, which could also be very helpful for people with mental health issues.

Of course, the holy grail would probably be actual images of the brain over time. Perhaps when industrious scientists and engineers improve MRI and fMRI technology enough, getting a brain scan done a few times a year could really help people with bipolar understand their brains more and take better control of their condition. A new study published in Nature in May of 2017 provides tantalizing evidence that brain MRI scans will help us understand more how bipolar brains are different. I know I have to accept that my brain has some biological aberrations that make me more likely to behave in ways I'd rather not — but it's hard to accept that when you can't see what those aberrations are, and when the treatments are a lot of guesswork.

Wearables for Bipolar Disorder

Monitoring sleep alone won't be the secret to a stable, productive, happy life. But it might reduce future manias. As far as I can remember, I never had a manic period without an accompanying need to sleep less. That seems to be the common experience, from what I've read. I would suggest to younger folks in high school and college who have been recently diagnosed with bipolar to get a sleep tracker and stay on it. Hopefully you'll be able to prevent some manic episodes. And if, like many others with bipolar disorder, you continue to have swings over the decades ahead, at least you'll gather data that could help you and other people figure this thing out.

Of course, technology might also be making bipolar disorder worse in those prone to it. The technologically advanced U.S. has the highest rate of bipolar disorder in the world. Perhaps increased screen time, less social time, or more media exacerbates the problem. But that's why I like passive wearables, which collect data without intruding on your life. Even if some innovations of the modern world pose new challenges to bipolar sufferers, some innovations also offer new hope.

Conclusion

I wish my longest stable streak was still going strong. I wish I hadn't gone manic and then crashed into the inevitable depression.

But I'm grateful that this time I was wearing a figurative “black box.” Hopefully others can learn from my experience.

Next time I start acting on a grand idea, I hope my band and I will do the healthy thing: get some sleep and forget it.

Note: I originally published this anonymously on Medium. I was too scared to reveal my name. I am less scared now. I feel we are close to an accurate model of this condition. I have a more sophisticated understanding now but will leave the post as is to reflect my understanding at the time. Thank you to CP and DR, who provided me feedback at the time on this post. - 6/13/2023

View source

June 23, 2017 — I just pushed a project I've been working on called Ohayo.

You can also view it on GitHub: https://github.com/treenotation/ohayo

I wanted to try and make a fast, visual app for doing data science. I can't quite recommend it yet, but I think it might get there. If you are interested you can try it now.

View source

June 21, 2017 — Eureka! I wanted to announce something small, but slightly novel, and potentially useful.

What did I discover? That there might be useful general purpose programming languages that don't use any visible syntax characters at all.

I call the whitespace-based notation Tree Notation and languages built on top of it Tree Languages.

Using a few simple atomic ingredients--words, spaces, newlines, and indentation--you can construct grammars for new programming languages that can do anything existing programming languages can do. A simple example:

if true
 print Hello world

This language has no parentheses, quotation marks, colons, and so forth. Types, primitives, control flow--all of that stuff can be determined by words and contexts instead of introducing additional syntax rules. If you are a Lisper, think of this "novel" idea as just "lisp without parentheses."
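To show how little machinery such a syntax needs, here is a toy parser of my own. It is an illustration only, not the actual Tree Notation implementation: each line becomes a node, words are separated by single spaces, and one leading space means one more level of depth.

type TreeNode = { words: string[]; children: TreeNode[] };

function parseTree(source: string): TreeNode[] {
  const root: TreeNode = { words: [], children: [] };
  const stack: TreeNode[] = [root]; // stack[d] holds the latest node at depth d
  for (const line of source.split("\n")) {
    const depth = line.length - line.trimStart().length;
    const node: TreeNode = { words: line.trim().split(" "), children: [] };
    stack[depth].children.push(node); // attach to the parent one level up
    stack[depth + 1] = node;
  }
  return root.children;
}

parseTree("if true\n print Hello world");
// => [{ words: ["if", "true"],
//       children: [{ words: ["print", "Hello", "world"], children: [] }] }]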

There are hundreds of very active programming languages, and they all have different syntax as well as different semantics.

I think there will always be a need for new semantic ideas. The world's knowledge domains are enormously complex (read: billions/trillions of concepts, if not more), machines are complex (billions of pieces), and both will always continue to get more complex.

But I wonder if we always need a new syntax for each new general purpose programming language. I wonder if we could unlock potentially very different editing environments and experiences with a simple geometric syntax, and if by making the syntax simpler folks could build better semantic tooling.

Maybe there's nothing useful here. Perhaps it is best to have syntax characters and a unique syntax for each general purpose programming language. Tree Notation might be a bad idea or only useful for very small domains. But I think it's a long-shot idea worth exploring.

Thousands of language designers focus on the semantics and choose the syntax to best fit those semantics (or a syntax that doesn't deviate too much from a mainstream language). I've taken the opposite approach--on purpose--with the hopes of finding something overlooked but important. I've stuck to a simple syntax and tried to implement all semantic ideas without adding syntax.

Initially I just looked at Tree Notation as an alternative to declarative format languages like JSON and XML, but then, in a minor "Eureka!" moment, I realized it might work well as a syntax for general purpose Turing complete languages across all paradigms like functional, object-oriented, logic, dataflow, et cetera.

Someday I hope to have data definitively showing that Tree Notation is useful, or alternatively, to explain why it is suboptimal and why we need more complex syntax.

I always wanted to try my hand at writing an academic paper. So I put the announcement in a 2-page paper on GitHub and arxiv. The paper is titled Tree Notation: an antifragile program notation. I've since been informed that I should stick to writing blog posts and code and not academic papers, which is probably good advice :).

Two updates on 12/30/2017. After I wrote this I was informed that one other person from the Scheme world created a very similar notation years ago. Very little was written in it, which I guess is evidence that the notation itself isn't that useful, or perhaps that there is still something missing before it catches on. The second note is I updated the wording of this post as the original was a bit rushed.

View source

A Suggestion for a Simple Notation

September 24, 2013 — What if instead of talking about Big Data, we talked about 12 Data, 13 Data, 14 Data, 15 Data, et cetera? The # refers to the number of zeroes we are dealing with.

You can then easily differentiate problems. Some companies are dealing with 12 Data, some companies are dealing with 15 Data. No company is yet dealing with 19 Data. Big Data starts at 12 Data, and maybe over time you could say Big Data starts at 13 Data, et cetera.
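In code form the classification is nearly a one-liner. A quick sketch, assuming the number of zeroes means the floor of log base 10 of the byte count:

// 40 terabytes = 4e13 bytes => "13 Data"; 3 petabytes = 3e15 bytes => "15 Data".
function dataClass(bytes: number): string {
  return `${Math.floor(Math.log10(bytes))} Data`;
}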

What do you think?

This occurred to me recently as I just started following Big Data on Quora and was surprised to see the term used so loosely, when data is something so easily measurable. For example, a 2011 Big Data report from McKinsey defined big data as ranging "from a few dozen terabytes to multiple petabytes (thousands of terabytes)." Wikipedia defines Big Data as "a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications."

I think these terms make Big Data seem mysterious and confusing, when in fact it could be completely straightforward.

View source

September 23, 2013 — Making websites is slow and frustrating.

I met a young entrepreneur who wanted to create a website for his bed and breakfast. He had spent dozens of hours with different tools and was no closer to having what he wanted.

I met a teacher who wanted his students to turn in web pages for homework instead of paper pages. No existing tool allows his students to easily create pages without restricting their creativity.

I met an artist who wanted a website with a slideshow for her portfolio.

A restaurant owner who wanted a website that could take online orders.

An author who wanted a website with a blog.

A saleswoman who wanted to build a members-only site for great deals she gathered.

A candidate who wanted a website that could coordinate his volunteers.

A nonprofit founder who wanted a website that told the story of impoverished children in his country and accepted donations.

These are just a handful of real people with real ideas who are frustrated by the current tools.

The problem

The fact is, people want to do millions of different things with their websites, but the only two options are to use a tool that limits your creative potential or to program your site from scratch. Neither option is ideal.

The solution

Which is why we're building a third option. We are building an open source, general purpose IDE for building websites.

Here's a short video demonstrating how it works:

NudgePad is in early beta, but is powering a number of live websites like these:

Although we have a lot more to do to get to a stable version 2.0, we thought the time was right to start opening up NudgePad to more people and recruiting more help for the project. We also want to get feedback on the core ideas in NudgePad.

To get involved, give NudgePad a try or check out the source code on GitHub.

We truly believe this new way to build websites--an IDE in your browser--is faster and is how it will be done in the future. By this time next year, using NudgePad, it could be 100x faster and easier to build websites than it is today.

View source

April 2, 2013 — For me, the primary motivation for creating software is to save myself and other people time.

I want to spend less time doing monotonous tasks. Less time doing bureaucratic things. Less time dealing with unnecessary complexity. Less time doing chores.

I want to spend more time engaged with life.

Saving people time is perhaps the only universal good. Everyone wants to have more options with their time. Everyone benefits when a person has more time. They can enjoy that extra time and/or invest some of it to make the world better for everyone else.

Nature loves to promote inequality, but a fascinating feature of time is that it is so equally distributed. Nature took the same amount of time to evolve all of us alive today. All of our evolutionary paths are equally long. We also have equal amounts of time to enjoy life, despite the fact that other things may be very unequally distributed.

The very first program I made was meant to save me and my family time. Back in 1996, to start our computer, connect to the Internet and launch Netscape took about 20 minutes, and you had to do each step sequentially. My first BAT script automated that to allow you to turn the computer on and go play outside for 20 minutes while it connected to the web. Many years later, my ultimate motivation to save people time has remained constant.

View source

Two people in the same forest,
Have the same amount of water and food,
Are near each other, but may be out of sight,
The paths behind each are equally long.
The paths ahead, may vary.
One's path is easy and clear.
The other's is overgrown and treacherous.
Their paths through the forest, in the past, in the present, and ahead are equal.
Their journeys can be very different.

View source

The crux of the matter, is that people don't understand the true nature of money. It is meant to circulate, not be wrapped up in a stocking @ Guglielmo Marconi

March 30, 2013 — I love Marconi's simple and clear view of money. Money came in and he put it to good use. Quickly. He poured money into the development of new wireless technology, which had an unequaled impact on the world.

This quote, by the way, is from "My Father, Marconi", a biography of the famous inventor and entrepreneur written by his daughter, Degna. Marconi's story is absolutely fascinating. If you like technology and entrepreneurship, I highly recommend the book.

P.S. This quote also applies well to most man made things. Cars, houses, bikes, et cetera, are more valuable circulating than idling. It seemed briefly we were on a trajectory toward overabundance, but the sharing economy is bringing circulation back.

View source

March 30, 2013 — Why does it take 10,000 hours to become a master of something, and not 1,000 hours or 100,000 hours?

The answer is simple. Once you've spent 10,000 hours practicing something, no one can crush you like a bug.

Let me explain. First, the most important thing to keep in mind is that nature loves inequality. For example, humans and bugs are not even close to equal in size. Humans are 1,000x bigger than bugs. It is very easy for a human to squash a bug.

Now, when you are starting to learn something and have spent say, 100 hours practicing that thing, you, my friend, are the bug. There are many people out there who have been practicing that thing for 10,000 hours, and can easily crush you like a bug, if they are mean spirited like that.

Once you've got 1,000 hours of practice under your belt, it becomes very hard for someone to crush you.

You reach 10,000 hours of practice, and you are now at a level where no one can possibly crush you like a bug. It is near impossible for a human to practice something for 100,000 hours. That would be 40 hours of practice per week for fifty years! Life is too chaotic, and our bodies are too fragile, to hit that level of practice. Thus, when you hit 10,000 hours, you're safe. You no longer have to wonder if there's someone out there who knows 10x more than you. You are now a master.
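(Checking the arithmetic: 40 hours a week of practice is about 2,000 hours a year, and 2,000 hours a year for fifty years is 100,000 hours.)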

Do you hear them talking of genius, Degna? There is no such thing. Genius, if you like to call it that, is the gift of work continuously applied. That's all it is, as I have proved for myself. @ Guglielmo Marconi

View source

March 16, 2013 — A kid says Mommy or Daddy or Jack or Jill hundreds of times before grasping the concept of a name.

Likewise, a programmer types name = Breck or age=15 hundreds of times before grasping the concept of a variable.

What do you call it when someone finally sees the concept?

John Calcote, a programmer with decades of experience, calls it a minor epiphany.

Minor epiphanies. Anyone who's programmed for a while can appreciate that term.

When you start programming you do pure trial and error. What will happen when I type this or click that? You rely on memorization of action and reaction. Nothing makes sense. Every single term--variable, object, register, boolean, int, string, array, and so on--is completely and utterly foreign.

But you start to encounter these moments. These minor epiphanies, where suddenly you see the connection between a class of things. Suddenly something makes sense. Suddenly one term is not so foreign anymore. You have a new tool at your disposal. You have removed another obstacle that used to trip you up.

In programming the well of minor epiphanies never runs dry. Even after you've learned thousands of things the epiphanies keep flowing at the same rate. Maybe the epiphanies are no longer about what the concept is, or how you can use it, but now are more about where did this concept come from, when was it created, who created it, and most fascinating of all, why did they create it?

Minor epiphanies give you a rush, save you time, help you make better products, and help you earn more.

As someone who loves to learn, my favorite thing about them is the rush you get from having something suddenly click. They make this programming thing really, really fun. Day in and day out.

  • Stumbled upon the term in John Calcote's book "Autotools".

View source

March 8, 2013 — If your software project is going to have a long life, it may benefit from Boosters. A Booster is something you design with two constraints: 1) it must help in the current environment 2) it must be easy to jettison in the next environment.

View source

February 24, 2013 — It is a popular misconception that most startups need to fail. We expect 0% of planes to crash. Yet we switch subjects from planes to startups and then suddenly a 100% success rate is out of the question.

This is silly. Maybe as the decision makers switch from gambling financiers to engineers, we will see the success rate of starting a company shoot closer to 100%.

View source

February 16, 2013 — Some purchasing decisions are drastically better than others. You might spend $20 on a ticket to a conference where you meet your next employer and earn 1,000x "return" on your purchase. Or you might spend $20 on a fancy meal and have a nice night out.

Purchasing decisions have little direct downside. You most often get your money's worth.

The problem is the opportunity cost of purchases. That opportunity cost can cost you a fortune.

Since some purchases can change your life, delivering 100x or greater return on your investment, spending your money on things that only give you a 10% return can be a massive mistake, because you'll miss out on those great deals.

It's best to say "no" to a lot of deals. Say "yes" to the types of deals that you know deliver massive return.

  • For me, this is: books, meeting people, coffee (which I use as "fuel" for creating code which generates a strong return).
  • I'm not advocating being a penny pincher. I'm advocating being aware of the orders of magnitude difference in return from purchases.
  • I wonder if someday we'll have credit cards that help you be aware of the order of magnitude variance in expected value of purchases. One great benefit would be to charge a higher interest rate on purchases that have low expected value, and a very low interest rate on purchases with high expected value (like books).

View source

February 12, 2013 — You shouldn't plan for the future. You should plan for one of many futures.

The world goes down many paths. We only get to observe one, but they all happen.

In the movie "Back to the Future II", the main character Marty, after traveling decades into the future, buys a sports alamanac so he can go back in time and make easy money betting on games. Marty's mistake was thought he had the guide to the future. He thought there was only one version of the future. In fact, there are many versions of the future. He only had the guide to one version.

Marty was like the kid who stole the answer key to an SAT but still failed. There are many versions of the test.

There are infinite futures. Prepare for them all!

View source

December 29, 2012 — I love that phrase.

I want to learn how to program. Prove it.

I value honesty. Prove it.

I want to start my own company. Prove it.

It works with "we" too.

We're the best team in the league. Prove it.

We love open source. Prove it.

We're going to improve the transportation industry. Prove it.

Words don't prove anything about you. How you spend your time proves everything.

The only way to accurately describe yourself or your group is to look at how you've spent your time in the past. Anytime someone says something about what they will do or be like in the future, your response should be simple: prove it.

View source

December 23, 2012 — If you are poor, your money could be safer under the mattress than in the bank:

The Great Bank Robbery dwarfs all normal burglaries by almost 10x. In the Great Bank Robbery, the banks are slowly, silently, automatically taking from the poor.

One simple law could change this:

What if it were illegal for banks to automatically deduct money from someone's account?

If a bank wants to charge someone a fee, that's fine, just require they send that someone a bill first.

What would happen to the statistic above, if instead of silently and automatically taking money from people's accounts, banks had to work for it?

Sources

Moebs via wayback machine

FBI from here

View source

December 22, 2012 — Entrepreneurship is taking responsibility for a problem you did not create.

It was not Google's fault that the web was a massive set of unorganized pages that were hard to search, but they claimed responsibility for the problem and solved it with their engine.

It was not Dropbox's fault that data loss was common and sharing files was a pain, but they claimed responsibility for the problem and solved it with their software.

It is not Tesla's fault that hundreds of millions of cars are burning gasoline and polluting our atmosphere, but they have claimed responsibility for the problem and are attempting to solve it with their electric cars.

In a free market, like in America or online, you can attempt to take responsibility for any problem you want. That's pretty neat. You can decide to take responsibility for making sure your neighborhood has easy access to great Mexican food. Or you can decide to take responsibility for making sure the whole Internet has easy access to reliable version control. If you do a good job, you will be rewarded based on how big the problem is and how well you solve it.

How big an entrepreneur's company gets is strongly correlated with how much responsibility the entrepreneur wants. The entrepreneur gets to constantly make choices about whether they want their company to take on more and more responsibility. Companies only get huge because their founders say "yes" to more and more responsibility. Oftentimes they can say "yes" to less responsibility, and sell their company or fold it.

Walmart started out as a discount store in the Midwest, but Sam Walton (and his successors) constantly said "yes" to more and more responsibility, and Walmart has since grown to take on responsibility for discounting across the world.

Google started out with just search, but look at all the other things they've decided to take responsibility for: email, mobile operating systems, web browsers, social networking, document creation, calendars, and so on. Their founders have said "yes" to more and more responsibility.

Smart entrepreneurship is all about choosing problems you can and want to own. You need to say "no" to most problems. If you say "yes" to everything, you'll stretch yourself too thin. You need to increase your responsibility in a realistic way. You need to focus hard on the problems you can solve with your current resources, and leave the other problems for another company or another time.

View source

December 19, 2012 — For the past year I've been raving about Node.js, so I cracked a huge smile when I saw this question on Quora:

In five years, which language is likely to be most prominent, Node.js, Python, or Ruby, and why? - Quora

For months I had been repeating the same answer to friends: "Node.js hands down. If you want to build great web apps, you don't have a choice, you have to master Javascript. Why then master two languages when you don't need to?"

Javascript+Node.js is to Python and Ruby what the iPhone is to MP3 players--it has made them redundant. You don't need them anymore.

So I started writing this out and expanding upon it. As I was doing this, a little voice in my head was telling me something wasn't right. And then I realized: despite reading Taleb's books every year, I was making the exact mistake he warns about. I was predicting the future without remembering that the future is dominated not by the predictable, but by the unpredictable, the Black Swans.

And sure enough, as soon as I started to imagine some Black Swans, I grew less confident in my prediction. I realized all it would take would be for one or two browser vendors to start supporting Python or Ruby or language X in the browser to potentially disrupt Node.js' major advantage. I don't think that's likely, but it's the type of low probability event that could have a huge impact.

When I started to think about it, I realized it was quite easy to imagine Black Swans. Imagine visiting hackernews in 2013 and seeing any one of these headlines:

  • Microsoft Open Sources Windows.
  • Researchers Crack SSL. Render it useless.
  • Google Perfects Voice Recognition.
  • Facebook Covers the World in Free Wifi.
  • Amazon Launches Distributed Data Centers in your home.

It took only a few minutes to imagine a few of these things. Clearly there are hundreds of thousands of low probability events that could come from established companies or startups that could shift the whole industry.

The future is impossible to predict accurately.

Notes

All that being said, Node.js kicks ass today (the Javascript thing, the community, the speed, the packages, the fact I don't need a separate web server anymore...it is awesome), and I would not be surprised if Javascript becomes 10-100x bigger in the years ahead, while I can't say the same about other languages. And if Javascript doesn't become that big, worst case is it's still a very powerful language and you'll benefit a lot from focusing on it.

View source

There's a man in the world who is never turned down, wherever he chances to stray; he gets the glad hand in the populous town, or out where the farmers make hay; he's greeted with pleasure on deserts of sand, and deep in the aisles of the woods; wherever he goes there's the welcoming hand--he's The Man Who Delivers the Goods. The failures of life sit around and complain; the gods haven't treated them white; they've lost their umbrellas whenever there's rain, and they haven't their lanterns at night; men tire of the failures who fill with their sighs the air of their own neighborhoods; there's one who is greeted with love-lighted eyes--he's The Man Who Delivers the Goods. One fellow is lazy, and watches the clock, and waits for the whistle to blow; and one has a hammer, with which he will knock, and one tells a story of woe; and one, if requested to travel a mile, will measure the perches and roods; but one does his stunt with a whistle or smile--he's The Man Who Delivers the Goods. One man is afraid that he'll labor too hard--the world isn't yearning for such; and one man is always alert, on his guard, lest he put in a minute too much; and one has a grouch or a temper that's bad, and one is a creature of moods; so it's hey for the joyous and rollicking lad--for the One Who Delivers the Goods! Walt Mason, his book (1916)

December 19, 2012 — For a long time I've believed that underpromising and overdelivering is a trait of successful businesses and people. So the past year I've been trying to overdeliver.

But lately I realized that you cannot try to overdeliver. All an individual can do is deliver, deliver, deliver. Delivering is a habit that you get into. Delivering is something you can do.

Overdelivering is only something a team can do. The only way to overdeliver is for a team of people to constantly deliver things to each other, and then the group constantly delivers something to other people that no one person could ever have delivered alone.

But in your role on a team, the key isn't to worry about overdelivering, just get in the habit of delivering.

Be the One who delivers the goods!

View source

December 18, 2012 — One of Nassim Taleb's big recommendations for how to live in an uncertain world is to follow a barbell strategy: be extremely conservative about most decisions, but make some decisions that open you up to uncapped upside.

In other words, put 90% of your time into safe, conservative things but take some risks with the other 10%.

I personally try to follow this advice, particularly with our startup. I think it is good advice. I think it would be swell if our company became a big, profitable, innovation machine someday. But that's not what keeps me up at night.

I'm more concerned about creating the best worst case scenario. I spend most of my time trying to improve the worst case outcomes. Specifically, here's how I think you do this:

Tackle a big problem. Worst case scenario is you don't completely solve it, but you learn a lot in that domain and get acquired/acquihired by a bigger company in the space. That's a great outcome.

Build stuff you want. Worst case scenario is no one uses your product but you. If you aren't a fan of what you build, then you have nothing. If you love your product, that's a great outcome.

Focus on your customers. Make sure your customers are happy and getting what they want. Worst case scenario, you made a couple of people happy. That's a great outcome.

Practice your skills. Worst case scenario is the company doesn't work out, but you are now much better at what you do. That's a great outcome.

Deliver. Worst case scenario is you deliver something that isn't quite perfect but is good and helps people. That's a great outcome.

Avoid debt. If you take on debt or raise money, worst case scenario is you run out of time and you lose control of your destiny. If you keep money coming in, worst case scenario is things take a little longer or if you move on you are not in a hole. That's a great outcome.

Enjoy life. Make sure you take time to enjoy life. Worst case scenario is you spend a few years with no great outcome at work but you have many great memories from life. That's a great outcome.

Then, if you want to make yourself open to positive black swans, you can put 10% of your efforts into things that open you up to them: recruiting world class talent, pitching and raising money, tackling bigger markets. But make sure you focus on the conservative things. Risk, in moderation, is a good thing. Risk, in significant amounts, is for the foolish.

View source

December 18, 2012 — My whole life I've been trying to understand how the world works. How do planes fly? How do computers compute? How does the economy coordinate?

Over time I realized that these questions are all different ways of asking the same thing: how do complex systems work?

The past few years I've had the opportunity to spend thousands of hours practicing programming and studying computers. I now understand, in depth, one complex system. I feel I can finally answer the general question about complex systems with a very simple answer.

Compounded Probability Makes Complex Systems Work

There is no certainty in life or in systems, but there is probability, and probability compounds.

We can combine the high probability that wheels roll, with the high probability that wood supports loads, to build a wooden chariot that has a high probability of carrying things from point A to point B, which has a high probability of giving us more time to innovate, and so on and so forth...

Everything is built off of probability. You are reading this because of countless compounded probabilities like:

  • There is a high probability that your Internet connection works
  • There is a high probability that my web server is responding to connections
  • There is a high probability that my web server's disk works
  • There is a high probability that my web software works and will handle this post
  • There is a high probability that my Internet connection works
  • There is a high probability that my text editor won't fail as I write this
  • There is a high probability that my laptop works
  • There is a high probability that the CPU in my laptop consistently adds numbers in the registers correctly

Complex systems consist of many, many simple components with understood probabilities stitched together.
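Here is the arithmetic in miniature, with made-up numbers: the probability that the whole chain works is the product of the probabilities of its parts.

// Five components, each very reliable on its own.
const parts = [0.999, 0.999, 0.9999, 0.99, 0.999];

// The chance that everything works at once is the compounded product.
const systemWorks = parts.reduce((product, r) => product * r, 1);
// ~0.987, a bit less reliable than any single part, which is why each
// component must be extremely reliable for the compounded system to hold up.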

How does a plane fly? The most concise and accurate answer isn't about aerodynamics or lift, it's about probabilities. A plane is simply a huge system of compounded probabilities.

How does a bridge stay up? The answer is not about physics, it's about compounded probabilities.

How do computers work? Compounded probability.

How do cars work? Compounded probability.

The economy? Compounded probability.

Medicine? Compounded probability.

It's probability all the way down.

View source

December 16, 2012 — When I was a kid I loved reading the Family Circus. My favorite strips were the "dotted lines" ones, which showed Billy's movements over time:

These strips gave a clear narrative of Billy's day. In the strip above, Billy, a fun loving kid, was given a task by his mother to put some letters in the mailbox before the mailman arrives. Billy took the letters, ran into the kitchen, then dashed into the living room, jumped on the couch, sprinted to the dining room, crawled under the dining room table, skipped into the TV room, jumped into the crib, twirled into the foyer, stumbled outside, swung around the light post, then ran to the mailbox.

We know the end result: Billy failed to get to the mailbox in time.

With this picture in mind, let's do a thought experiment.

Let's imagine that right now, once again, Billy and his mom are standing in the laundry room and she's about to give him the mail. What are the odds that Billy gets to the mailbox in time?

Pick a range, and then click here to see the answer.

View source

December 16, 2012 — Concise but not cryptic. e=mc² is precise and not too cryptic. Shell commands, such as chmod -R 755 some_dir are concise but very cryptic.

Understandable but not misleading. "Computing all boils down to ones and zeros" is understandable and not misleading. "Milk: it does a body good" is understandable but misleading.

Minimal but not incomplete. A knife, spoon and fork is minimal. Just a knife is incomplete.

Broad but selective. A knife, spoon, and fork is broad and selective. A knife, spoon, fork and turkey baster is just as broad but not selective.

Flat but not too flat. 1,000 soldiers is flat but too flat. At least a few officers would be better.

Anthropocentric not presentcentric. Shoes are relevant to people at any time. An iPhone 1 case is only useful for a few years.

Cohesive but flexible. You want the set to match. But you want each item to be independently improvable.

Simple is balanced. It is nuanced, not black and white.

View source

December 14, 2012 — Note is a structured, human readable, concise language for encoding data.

XML to JSON to Note

In 1998, a large group of developers were working on technologies to make the web simpler, more collaborative, and more powerful. Their vision and hard work led to XML and SOAP.

XML was intended to be a markup language that was "both human-readable and machine-readable". As Dave Winer described it, "XML is structure, simplicity, discoverability and new power thru compatibility."

SOAP, which was built on top of XML, was intended to be a "Simple Object Access Protocol". Dave said "the technology is potentially far-reaching and precedent-setting."

These technologies allowed developers across the world to build websites that could work together with other websites in interesting ways. Nowadays, most web companies have APIs, but that wasn't always the case.

Although XML and SOAP were a big leap forward, in practice they are difficult to use. It's arguable whether they are truly "human-readable" or "simple".

Luckily, in 2001 Douglas Crockford specified a simpler, more concise language called JSON. Today JSON has become the de facto language for web services.

Could JSON be improved?

Early last year, one idea that struck me was that subtle improvements to underlying technologies can have exponential impact. Fix a bug in Subversion and save someone hours of effort, but replace Subversion and save someone weeks.

The switch from XML to JSON had made my life so much easier, I wondered if you could extract an even simpler alternative to JSON. JSON, while simple, still takes a while to learn, particularly if you are new to coding. Although more concise than XML, JSON at present has six types and eight syntax characters, all of which can easily derail developers of all skill levels. Because whitespace is insignificant in JSON, it quickly becomes messy. These are all relatively small details, but I think getting the details right in a new encoding could make a big difference in developers' lives.

Introducing Note

After almost two years of tinkering, and with a lot of inspiration from JSON, XML, HAML, Python, YAML, and other languages, we have a new simple encoding that I hope might make it easier for people to create and use web services.

We dubbed the encoding Note, and have put an early version with Javascript support up on Github. We've also put out a quick demonstration site that allows you to interact with some popular APIs using Note.

Note is a text based encoding that uses whitespace to give your data structure. Note is simple: there are only two syntax characters (newline and space). It is concise--not a single keystroke is wasted (we use a single space for indentation--why use two when one is sufficient?). Note is neat: the meaningful whitespace forces adherence to a clean style. These features make Note very easy to read and to write.

Despite all this minimalism, Note is very powerful. Each note is a hash consisting of name/value pairs. Note is also recursive, so each note can be a tree containing other notes.

Note has only two types: strings and notes. Every entity in Note is either a string or another note. But Note is infinitely extendable. You can create domain specific languages on top of Note that support additional types as long as you respect the whitespace syntax of Note.
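
To make that concrete, here's a hypothetical note (my own toy example following the rules above; the implementation on Github is authoritative). Indentation is a single space, and the author field's value is itself a nested note:

    title Hello world
    body This is a post encoded in Note.
    author
     name John Smith
     email john@example.com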

This is a very brief overview of the thinking behind Note and some of its features. I look forward to the months ahead as we start to implement Note on sites across the web and demonstrate some of the neat features and capabilities of the encoding.

Please feel free to email me with any questions or feedback you may have, as well as if you'd be interested in contributing.

View source

November 26, 2012 — For todo lists, I created a system I call planets and pebbles.

I label each task as a planet or a pebble. Planets are super important things. It could be helping a customer complete their project, meeting a new person, finishing an important new feature, closing a new sale, or helping a friend in need. I may have 20 pebbles that I fail to do, but completing one planet makes up for all that and more.

I let the pebbles build up, and I chip away at them in the off hours. But the bulk of my day I try to focus on the planets--the small number of things that can have exponential impact. I don't sweat the small stuff.

I highly recommend this system. We live in a power law world, and it's important to practice the skill of predicting what things will prove hugely important, and what things will turn out to be pebbles.

View source

November 25, 2012 — I published 55 essays here the first year. The second and third years combined, that number nosedived to 5.

What caused me to stop publishing?

It hasn't been a lack of ideas. All my essays start with a note to self. I have just as many notes to self nowadays as I did back then.

It hasn't been a lack of time. I have been working more but blogging doesn't take much time.

It's partly to do with standards. I've been trying to make higher quality things. I used to just write for an hour or two and hit publish. Now I'm more picky.

I've also become somewhat disappointed with the essay form. I am very interested in understanding systems, and I feel words alone don't explain systems well. So I've been practicing my visual and design skills. But I'm basically a beginner and output is slow.

The bottom line is I want to publish more. It forces me to think hard about my opinions, and it opens me up to advice from other people. I think this blog has helped me be less wrong about a lot of things. So here's to another fifty posts.

View source

November 20, 2012 — "Is simplicity ever bad?" If you had asked me this a year ago, I probably would have called you a fucking moron for asking such a dumb question. "Never!", I would have shouted. Now, I think it's a fair question. Simplicity has its limits. Simplicity is not enough, and if you pursue simplicity at all costs, that can be a bad thing. There's something more than simplicity that you need to be aware of. I'll get to that in a second, but first, I want to backtrack a bit and state clearly that I do strongly, strongly believe in and strive for simplicity. Let me talk about why for a second.

Why I love simplicity

Simple products are pleasant to use. When I use a product, and it is easy to use, and it's quick to use, I love that. I fucking hate things that are not as simple as possible and waste people's time or mental energy as a result. For example, to file my taxes with the IRS, I cannot go to the IRS' website. It's much more complex than that. I hate that. It is painful. Complex things are painful to use. Simple things are pleasant to use. They make life better. This is, of course, well known to all good designers and engineers.

Simple things are also more democratic. When I can understand something, I feel smart. I feel empowered. When I cannot understand something, I feel stupid. I feel inferior. Complex things are hard to understand. The response shouldn't be to spend a long time learning the complex thing; it should be to figure out how to make the complex thing simpler. When you do that, you create a lot of value. If I can understand something, I can do something. When we make things simpler, we empower people. Oftentimes I wonder if becoming a doctor would only take 2 years if medicine abandoned Latin terms for a simpler vocabulary.

Reaching the Limits of Simplicity

This whole year, and well before that, I've been working with people trying to make the web simpler. The web is really complex. You need to know about HTML, CSS, Javascript, DNS, HTTP, DOM, Command Line, Linux, Web Servers, Databases, and so on. It's a fucking mess. It's fragmented to all hell as well. Everyone is using different languages, tools, and platforms. It can be a pain.

Anyway, we've been trying to make a simple product. And we've been trying to balance simplicity with features. And that's been difficult. Way more difficult than I would have predicted.

The thing is, simpler is not always better. A fork is simpler than a fork, knife, and spoon, but which would you rather have? The set is better. Great things are built by combining distinct, simple things together. If you took away the spoon, you'd make the set simpler, but not better. Which reminds me of that Einstein quote:

Make things as simple as possible, but not simpler.

I had always been focused on the first part of that quote. Make things as simple as possible. Lately I've thought more about the second part. Sometimes by trying to make things too simple you make something a lot worse. Often, less is more, but less can definitely be less.

People rave about the simplicity of the iPhone. And it is simple, in a sense. But it is also very complex. It has a large screen, 2 cameras, a wifi antenna, a GPS, an accelerometer, a gyroscope, a cell antenna, a GPU, CPUs, memory, a power unit, 2 volume buttons, a power button, a home button, a SIM card slot, a mode switch, and a whole lot more. Then the software inside is another massive layer of complexity. You could try to make the iPhone simpler by, for example, removing the volume buttons or the cameras, but that, while increasing the simplicity, would decrease the "setplicity". It would remove a very helpful part of the set, which would make the whole product worse.

Think about what the world would be like if we only used half of the periodic table of elements--it would be less beautiful, less enjoyable, and more painful.

Simplicity is a great thing to strive for. But sometimes cutting things out to make something simpler can make it worse. Simplicity is not the only thing to maximize. Make sure to balance simplicity with setplicity. Don't worry if you haven't reduced things to a singularity. Happiness in life is found by balancing amongst a set of things, not by cutting everything out.

View source

October 20, 2012 — I love to name things.

I spend a lot of time naming ideas in my work. At work I write my code using a program called TextMate. TextMate is a great little program with a pleasant purple theme. I spend a lot of time using TextMate. For the past year I've been using TextMate to write a program that now consists of a few hundred files. There are thousands of words in this program. There are hundreds of objects and concepts and functions that each have a name. The names are super simple like "Pen" for an object that draws on the screen, and "delete" for a method that deletes something. Some of the things in our program are more important than others and those really important ones I've renamed dozens of times searching for the right fit.

There's a feature in TextMate that lets me find and replace a word across all 400+ files in the project. If I am unhappy with my word choice for a variable or concept, I'll think about it for weeks if not months. I'll use Thesaurus.com, I'll read about similar concepts, I'll run a subconscious search for the simplest, best word. When I find it, I'll hit Command+Shift+F in TextMate and excitedly and carefully execute a find and replace across the whole project. Those are some of my favorite programming days--when I find a better name for an important part of the program.

Naming a thing is like creating life from inorganic material in a lab. You observe some pattern, combine a bunch of letters to form a name, and then see what happens. Sometimes your name doesn't fit and sits lifeless. But sometimes the name is just right. You use it in conversation or in code and people instantly get it. It catches on. It leaves the lab. Your name takes a life of its own and spreads.

Words are very contagious. The better the word, the more contagious it can be. Like viruses, small differences in the quality of a word can have exponential differences on its spread. So I like to spend time searching for the right words.

Great names are short. Short names are less effort to communicate. The quality of a name drops exponentially with each syllable you add. Coke is better than Coca-Cola. Human is better than Homo sapiens.

Great names are visual. A good test of whether a name is accurate is whether you can draw a picture of the name that makes sense. Net is better than cyberspace. If you drew a picture of the physical components of the Internet, it would look a lot like a fishing net. Net is a great name.

Great names are used for great ideas. You should match the quality of a name to the quality of the idea compared to the other ideas in the space. This is particularly applicable in the digital world. If you are working on an important idea that will be used by a lot of people in a broad area, use a short, high quality name. If you are working on a smaller idea in that same area, don't hog a better name than your idea deserves. Linux is filled with great programs with bad names and bad programs with great names. I've been very happy so far with my experience with NPM, where it seems programmers who are using the best names are making their programs live up to them.

I think the exercise of naming things can be very helpful in improving things. Designing things from first principles is a proven way to arrive at novel, sometimes better ideas. Attempting to rename something is a great way to rethink the thing from the ground up.

For example, lately I've been trying to come up with a way to explain the fundamentals of computing. A strategy I recently employed was to change the names we use for the 2 boolean states from True and False to Absent or Present. It seems like it gets closer to the truth of how computers work. I mean, it doesn't make sense to ask a bit whether it is True or False. The only question an electronic bit can answer is whether a charge is present or absent. When we compare variable A to variable B, the CPU sets a flag in the comparison bit and we are really asking that bit whether a charge is present.

What I like about the idea of using the names Present and Absent is that it makes the fundamentals of computing align with the fundamentals of the world. The most fundamental questions in the world are about being--about existence. Do we exist? Why do we exist? Will we exist tomorrow? Likewise, the most fundamental question in computing is not whether or not there are ones and zeroes; it's whether or not a charge exists. Does a charge exist? Why does that charge exist? Will that charge exist in the next iteration? Computing is not about manipulating ones and zeroes. It's about using the concept of being, of existence, to solve problems. Computing is about using the concept of the presence or absence of charge to do many wonderful things.
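
Here's a minimal sketch of the renaming (my own illustration in Python; real hardware does this with a flags register, not code like this):

    # Toy illustration: treating the two boolean states as the presence
    # or absence of a charge rather than True or False.
    PRESENT = True   # a charge is present
    ABSENT = False   # no charge is present

    def compare(a, b):
        # The "comparison bit": a charge is present when a equals b.
        return PRESENT if a == b else ABSENT

    print(compare(3, 3))  # True, read as "a charge is present"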

View source

March 30, 2011 — Railay is a tiny little beach town in Southern Thailand famous for its rock climbing. I've been in Railay for two weeks. When the weather is good, I'm outside rock climbing. When the weather is bad, I'm inside programming. So naturally I've found myself comparing the two. Specifically I've been thinking about what I can take away from my rock climbing experience and apply to my programming education.

Here's what I've come up with.

1. You should always be pushing yourself. Each day spent climbing I've made it to a slightly higher level than the previous day. The lazy part of me has then wanted to just spend one day enjoying this new level without pushing myself further. Luckily I've had a great climbing partner who's refused that and has forced me to reach for the next level each day. In both rock climbing and programming you should always be reaching for that new level. It's not easy, you have to risk a fall to reach a new height, but it's necessary if you want to become good. In programming, just like in climbing, you should be tagging along with the climbers at levels above you. That's how you get great. Of course, don't forget to enjoy the moment too.

2. Really push yourself. In rock climbing you sometimes have these points where you're scared--no, where you're fucking petrified--that you're going to fall and get hurt or die and you're hanging on to the rock for dear life, pouring sweat, and you've got to overcome it. In programming you should seek out moments like these. It will never be that extreme of course, but you should find those spots where you are afraid of falling and push yourself to conquer them. It might be a project whose scope is way beyond anything you've attempted before, or a task that requires advanced math, or a language that scares the crap out of you. My climbing instructor here was this Thai guy named Nu. He's the second best speed climber in Thailand and has been climbing for fifteen years. The other day I was walking by a climbing area and saw Nu banging his chest and yelling at the top of his lungs. I asked a bystander what was going on and he told me that Nu was struggling with the crux of a route and was psyching himself up to overcome it. That's why he's a master climber. Because he's been climbing for over fifteen years and he's still seeking out those challenges that scare him.

3. There's always a next level. In rock climbing you have clearly defined levels of difficulty that you progress through such as 5 to 8+ or top rope to lead+. In programming the levels are less defined and span a much wider range but surely exist. You progress from writing "hello world" to writing compilers and from using notepad to using vim or textmate or powerful IDEs. You might start out writing a playlist generator and ten years later you may be writing a program that can generate actual symphonies, but there still will be levels to climb.

4. Climbing or programming without teachers is very inefficient. There are plenty of books on rock climbing. But there's no substitute for great teachers. You can copy what you see in books and oftentimes you'll get many parts right, but a teacher is great for pointing out what you're doing wrong. Oftentimes you just can't tell what the key concepts and techniques to focus on are. You might not focus on something that's really important such as using mostly legs in climbing or not repeating yourself in programming. A good teacher can instantly see your mistakes and provide helpful feedback. Always seek out great teachers and mentors whether they be friends, coworkers, or professional educators.

5. You learn by doing; practice is key. Although you need teachers and books to tell you what to do, the only way to learn is to do it yourself, over and over. It takes a ton of time to master rock climbing or programming and although receiving instruction plays an important part, the vast majority of the time it takes to learn will be spent practicing.

6. Breadth, not only depth, is important. Sometimes to get to the next level in rock climbing you need to get outside of rock climbing. You may need to take up yoga to gain flexibility or weightlifting to gain strength. Likewise in programming sometimes you need to go sideways to go up. If you want to master Rails, you'll probably want to spend time outside of it and work on your command line and version control skills. Programming has a huge number of silos. To go very deep in any one you have to gain competence in many.

7. People push the boundaries. Both rock climbing and programming were discovered by people, and people are continually pushing the boundaries of both. In rock climbing, advanced climbers are discovering new areas, bolting new routes, inventing new equipment, perfecting new techniques, and passing down new knowledge. Programming is the most cumulative of all human endeavors. It builds on the work of tens of millions of people, and new "risk takers" are constantly pushing the frontiers (today in areas like distributed computing, data mining, machine learning, parallel processing and mobile, amongst others).

8. Embrace collaboration. The rock climbing culture is very collaborative much like the open source culture. Rock climbing is an inherently open source activity. Everything a climber does and uses is visible in the open. This leads to faster knowledge transfer and a safer activity. Likewise, IMO open source software leads to a better outcome for all.

9. Take pride in your work. In rock climbing when you're the first to ascend a route your name gets forever attached to that route. In programming you should be proud of your work and add your name to it. Sometimes I get embarrassed when I look at some old code of mine and realize how bad it is. But then I shrug it off because although it may be bad by my current standards, it represents my best honest effort at the time and so there's nothing to be ashamed of. I'm sure the world's greatest rock climbers have struggled with some easy routes in their day.

10. Natural gifts play a part. Some people who practiced for 5,000 hours will be worse than some people who practiced for only 2,000 hours due to genetics and other factors. It would be great if how good you were at something was determined totally by how many hours you've invested. But it's not. However, at the extremes, the number of hours of practice makes a huge difference. The absolute best climbers spend an enormous amount of time practicing. In the middle of the pack, a lot of the difference is just due to luck. I've worked with a wide range of programmers in my (short so far) career. I've worked with really smart ones and some average ones. Some work hard and others aren't so dedicated. The best by far, though, possess both the intelligence and the dedication. And I'd probably rather work with a dedicated programmer of average smarts than a brilliant but lazy one.

Notes

  • Rock climbing and programming are great complements because rock climbing seems to erase any bad effects on your hands that typing all day can cause.
  • I'm a terrible rock climber and only a decent programmer. Take all my advice with a grain of salt.

View source

March 5, 2011 — A good friend passed along some business advice to me a few months ago. "Look for a line," he said. Basically, if you see a line out the door at McDonald's, start Burger King. Lines are everywhere and are dead giveaways for good business ideas and good businesses.

Let's use Groupon as a case study for the importance of lines. Groupon scoured Yelp for the best businesses in its cities--the businesses that had virtual lines of people writing positive reviews--and created huge lines for these businesses with their discounts. Other entrepreneurs saw the number of people lining up to purchase things from Groupon and created a huge line of clones. Investors saw other investors lining up to buy Groupon stock and hopped in line as well. Business is all about lines.

In every country we travel to I look around for lines. It's a dead giveaway for finding good places to eat, fun things to do, amazing sites to see. If you want to start a business, look for lines and either create a clone or create an innovation that can steal customers from that line. If you see tons of people lining up to take taxis, start a taxi company. Better yet, start a bus.

Creating Lines

Succeeding in business is all about creating lines. Apple creates lines of reporters looking to write about their next big product. Customers line up outside their doors to buy their next big product. Investors line up to pump money into AAPL. Designers and engineers line up to work there.

If you are the CEO of a company, your job is simply to create lines. You want customers lining up for your product, investors lining up to invest, recruits lining up to apply for jobs. It's very easy to measure how you're doing. If you look around and don't see any lines, you gotta pick it up.

View source

March 4, 2011 — I haven't written in a long while because I'm currently on a long trip around the world. At the moment, we're in Indonesia. One thing that really surprised me was that despite our best efforts to do as little planning as possible, we were in fact almost overprepared. I've realized you can do an around-the-world trip with literally zero planning and be perfectly fine. You can literally hop on a plane with nothing more than a passport, license, credit card, and the clothes on your back and worry about the rest later. I think a lot of people don't make a journey like this because they're intimidated not by the trip itself, but by the planning for the trip. I'm here to say you don't need to plan at all to travel the world (alas, it would be a lot harder if you were not born in a first world country, unfortunately). Here's my guide for anyone who might want to attempt it. Every step is highlighted in bold. Adjust accordingly for your specific needs and desires.

The plan (see below for bullet points)

Set a savings goal. You'll need money to travel around the world, and the more money you have, the easier, longer, and more fun your journey will be.

Save, save, save. Make sure you save enough so that when your trip ends you won't come home broke. $12,000 would be a large enough amount to travel for a long time and still come back with money to get you resettled easily.

Once you've saved half of your goal, buy your first one-way plane ticket to a cheap, tourist-friendly country. Bali, Indonesia or Bangkok, Thailand would be terrific first stops, amongst others. Next, get a PayPal account with a PayPal debit card. This card gives you 1.5% cash back on all purchases, only charges a $1 ATM fee, and charges no foreign transaction fees at all. The 1.5% cash back more than offsets the 1% interchange fee Mastercard charges. If you don't have them already, get a driver's license and a passport with at least 1 year left before expiration. Get a free Google Voice number so people can still SMS and leave you voicemails without you paying a monthly cell phone bill. If you need glasses, contacts, prescription medication, or other custom things, stock up on those.

Settle your affairs at home--housing, job, etc. Now, your planning is DONE! You have everything you need to embark on a trip around the world.

Get on the plane with your passport, license, PayPal debit card, and $100 US cash. You don't need anything else--not even a backpack! You'll pick up all that later.

Once you've arrived in Bali (or another similar locale), go to a large, cheap shopping district (Kuta Square in Bali, for example). If you arrived late, find a cheap place to crash first and hit the market first thing in the morning. Look for backpackers at the airport or ask someone who works there for cheap accommodation recommendations.

Once you're at the market, you've got a lot to buy. Visit an ATM to take money out of your PayPal account in the local currency. If you want, space out your purchases over a few days. You'll want to buy a Lonely Planet or Rough Guide for your current country, a solid backpack (get a good one), bug spray with DEET, suntan lotion, a toothbrush, toothpaste, deodorant, nail clippers, tweezers, a Swiss Army knife, Pepto-Bismol, Tylenol, Band-Aids, Neosporin, a bathing suit, some clothes for the current weather, shoes/flip flops, a cheap cell phone and SIM card, a netbook, a power adapter, and a camera and memory card. You now have pretty much everything you need for your trip and you probably spent less than half of what you would have had to spend in the States. You may want some other things like a sleeping bag, tent, portable stove, goggles, etc., depending on what you want to do on your trip.

Now, talk to locals and other travelers for travel recommendations. That, plus your Lonely Planet and maybe some Google searching, and you'll have all the tools you need to plan where to go, what to do, and what to eat.

Hit up an internet cafe to email yourself and print a copy of your driver's license, passport, and credit card. It will be dirt cheap. Get some passport photos made for countries that require a photo for visas. Then sign up for Skype and Facebook (if you're the one person in the world who hasn't done this yet) to make cheap phone calls and keep in touch with family and friends.

Plan your trip one country at a time. Every few days, check flight prices for the next few legs of your trip. You can sometimes get amazingly cheap deals if you check prices frequently and are flexible about when and where you fly. Use sites like Kayak, Adioso, Hotels.com, Airbnb, and Hostelworld to find cheap flights and places to stay, especially in expensive countries. In cheap countries, Lonely Planet and simply asking around often work great for finding great-value hotels. Also, in expensive cities, find the local Groupon clones and check them often for great excursion and meal deals. Finally, you might want to get travel insurance from a site like World Nomads.

That's it. Enjoy your trip!

Bullet point format

  • Set a savings goal.
  • Save, save, save.
  • Buy your first one-way plane ticket to a cheap, tourist-friendly country.
  • Get a PayPal account with a PayPal debit card.
  • Get a driver's license and a passport with at least 1 year left before expiration.
  • Get a free Google Voice number.
  • Settle your affairs at home--housing, job, etc.
  • Get on the plane with your passport, license, PayPal debit card, and $100 US cash.
  • Go to a large, cheap shopping district.
  • Visit an ATM.
  • Buy a Lonely Planet for your current country, a solid backpack (get a good one), bug spray with DEET, suntan lotion, a toothbrush, toothpaste, deodorant, nail clippers, tweezers, a Swiss Army knife, Pepto-Bismol, Tylenol, Band-Aids, Neosporin, a bathing suit, some clothes for the current weather, shoes/flip flops, a cheap cell phone and SIM card, a netbook, a power adapter, and a camera and memory card.
  • Talk to locals and other travelers for travel recommendations.
  • Hit up an internet cafe to email yourself and print a copy of your driver's license, passport, and credit card.
  • Get some passport photos made for countries that require a photo for visas.
  • Sign up for Skype and Facebook.
  • Plan your trip one country at a time.
  • Check flight prices for the next few legs of your trip.
  • Find the local Groupon clones and check them often for great excursion and meal deals.

View source

September 18, 2010 — I was an Economics major in college but in hindsight I don't like the way it was taught. I came away with an academic, unrealistic view of the economy. If I had to teach economics I would try to explain it in a more realistic, practical manner.

I think there are two big concepts that if you understand, you'll have a better grasp of the economy than most people.

The first idea is that the economy has a pulse and it's been beating for thousands of years. The second is that the economy is like a brain, and if you visualize it in that way you can make better decisions depending on your goals.

In Medias Res

Thousands of years ago people were trading goods and services, knitting clothes, and growing crops. The economy slowly came to life probably around 15 or 20 thousand years ago and it's never stopped. Although countless kingdoms, countries, industries, companies, families, workers, and owners have come and gone, this giant invisible thing called the economy has kept on trucking.

And not much has changed.

Certainly in 2,000 B.C. there was a lot more bartering and a lot less Visa, but most of the concepts that describe today's economy are the same as back then. You had industries and specialization, rich and poor, goods and services, marketplaces and trade routes, taxes and government spending, debts and investments.

Today, the economy is more connected. It covers more of the globe. But it's still the same economy that came to life thousands of years ago. It's just grown up a bit.

What are the implications of this? I think the main thing to take away from this idea is that we live in a pretty cool time where the economy has matured for thousands of years. It has a lot to offer if we understand what it is and how to use it. Which brings me to my next point.

The economy is like a brain.

The second big idea I try to keep in mind about the economy is that it's like a neural network. It's really hard to form a model of what the economy really looks like, but I think a great analogy is the human brain.

At a microscopic level, the brain is composed of around 100 billion neurons. The economy is currently composed of around 7 billion humans.

The average neuron is directly connected to 1,000 other neurons via synapses. Some neurons have more connections, some have less. The average human is directly connected to 200 other humans in their daily economic dealings. Some more, some less.

Neurons and synapses are not distributed evenly in the brain. Some are in relatively central connections, some are on the periphery. Likewise, some humans operate in critical parts of the economy (London or Japan, for example), while many live in the periphery (Driggs, Idaho or Afghanistan, for example).

If we run with this analogy that the economy is like the human brain, what can we take home from that?

For people that want a high paying job

If you want a high paying job then you should think carefully about where you plug yourself into the network/economy. You want to plug yourself in where there's a lot of action. You want to plug yourself into a "nerve center". These nerve centers can be certain geographies, certain industries, certain companies, etc. For instance, plugging yourself into an investment banking job on Wall Street will bring you more money than teaching surfing in Maui. Now, if you're born in the periphery, like a third world nation, you might be SOL. It's tremendously easier to plug yourself into a nerve center if you're born in the right place at the right time.

If you don't care much for money there are plenty of peripheries

Now if you don't want a high paying job there are more choices available to you. Most of the economy is not a nerve center. It's also a lot easier to move from a high paying spot in the economic brain to a place in a lower paying spot.

Starting a business, you've got to inject yourself into it.

When you start a business, you're basically a neuron with no synapses living outside the brain. You've got to inject yourself into the brain and build as many synapses as possible. When you start a business, the brain ("the economy") doesn't give a shit about you. You've got to plug yourself in and make yourself needed. You've got to get other neurons (people/companies/governments) to depend on you. You can do this through a combination of hard work, great products/services, great sales, etc.

Now one thing I find interesting is that a lot of people say entrepreneurs are rebels. This is sort of true, however, for a business to be successful the business has to conform a lot for the economy to build connections to it. If you want to be a nerve center, you've got to make it easy for other parts of the economy to connect to you. You can't be so different that you are incompatible with the rest of the economy. If you want to be a complete rebel, you can do that on the periphery, but you won't become a big company/nerve center.

If you're an existing business, it's hard to get dislodged.

Once you are "injected" into the economy, it's hard to get dislodged. If a lot of neurons have a lot of synapses connected to you, those will only die slowly. For a long time business will flow through you. This explains why a company like AOL can still make a fortune.

In conclusion

In conclusion, the economy is a tremendous creature that can provide you with a lot if you plug yourself in. It's been growing for thousands of years and has a lot to offer. You can also choose to stay largely unplugged from it, and that's okay too.

View source

August 25, 2010 — Warren Buffett claims to follow an investment strategy of staying within his "circle of competence". That's why he doesn't invest in high tech--it's outside his circle.

I think this is good advice. The tricky part is to figure out where to draw the circle.

Here are my initial thoughts:

  1. Start with a small circle. Be conservative about where you draw the circle.
  2. Do what you're good at as opposed to what you want to do. Our economy rewards specialization. You want to work on interesting problems, but it pays better to work on things you've done before. Use that money to explore the things you want to do.
  3. Be a big fish in a small circle.
  4. Spend time outside your circle, but expand it slowly. Definitely work hard to improve your skill set but don't overreach. It's better to have a solid core and build momentum from that than to be marginal in a lot of areas.

View source

August 25, 2010 — I have a feeling critical thinking gets the least amount of the brain's resources. The trick is to critically think about things, come to conclusions, and turn those conclusions into habits. The subconscious, habitual mind is much more powerful than the tiny little conscious, critically thinking mind.

If you're constantly using the critical thinking part of your mind, you're not using the bulk of your mind. You're probably accomplishing a lot less than you could be.

Come to conclusions and build good habits. Let your auto pilot take over. Then occasionally come back and revisit your conclusions.

View source

August 25, 2010 — I've been working on a fun side project of categorizing things into Mediocristan or Extremistan (inspired by NNT's book The Black Swan).

I'm trying to figure out where intelligence belongs. Bill Gates is a million times richer than many people; was Einstein a million times smarter than a lot of people? It seems highly unlikely. But how much smarter was he? Was he 1,000x smarter than the average joe? 100x smarter?

I'm not sure. The brain is a complex thing and I haven't figured out how to think about intelligence yet.

Would love to hear what other people think. Shoot me an email!

View source

August 25, 2010 — Maybe I'm getting old, but I'm starting to think the best way to "change the world" isn't to bust your ass building companies, inventing new machines, running for office, promoting ideas, etc., but to simply raise good kids. Even if you are a genius and can invent amazing things, by raising a few good kids their combined output can easily top yours. Nerdy version: you are a single-core CPU and can't match the output of a multicore machine.

I'm not saying I want to have kids anytime soon. I'm just realizing after spending time with my family over on Cape Cod, that even my dad, who is a harder worker than anyone I've ever met and has made a profound impact with his work, can't compete with the output of 4 people (and their potential offspring), even if they each work only 1/3 as hard, which is probably around what we each do. It's simple math.

So the trick to making a difference is to sometimes slow down, spend time raising good kids, and delegate some of the world saving to them.

View source

August 25, 2010 — Genetics, aka nature, plays the dominant role in predicting most aspects of your life, in my estimation.

Across every dimension in life your genes are both a glass ceiling--preventing you from reaching certain heights--and a cement foundation--making it unlikely you'll hit certain lows. How tall/short you will be, how smart/dumb you will be, how mean/nice you will be, how popular/lonely you will be, how athletic/clumsy, how fat/skinny, how talkative/quiet, how long/short you'll live, and so forth.

By the time you are born, your genes, place of birth, year of birth, parents--they're all set in stone, and the constraints on your life are largely in place. That's an interesting thought.

Nurture plays a huge role in making you, of course. Being born with great genes is irrelevant if you are malnourished, don't get early education, etc. But nurture cannot overcome nature. Our DNA is not at all malleable and no one knows if it ever will be. Nonetheless, it makes no sense to complain about nature. It is up to you to make the most of your starting hand. On the other hand, let us not be quick to judge others. I make that mistake a lot.

I think the bio/genome field will be the most interesting industry come 2025 or so.

View source

August 25, 2010 — Doctors used to recommend leeches to cure a whole variety of illnesses. That seems laughable today. But I think our recommendations today will be laughable to people in the future.

Recommendations work terribly for individuals but decently on average.

We are a long, long way from making good individual recommendations. They won't arrive until your individual genome is taken into account, and even then it will take a while. We may never get to that point at all.

So many cures and medicines work for a certain percentage of people, but for some people they can have detrimental or even fatal effects. People rave about certain foods, exercises, and so forth, without considering the huge role that differences in genetics can play.

People are quite similar, but they are also quite different and react to different things in different ways. I think we are a long way away from seeing breakthroughs in recommendations.

Recommendations are great business, but I think we're 2 or 3 orders of magnitude away from where they could be, and it could take decades (if we get there at all) to reach those levels.

View source

August 25, 2010 — Ruby is an awesome language. I've come to the conclusion that I enjoy it more than Python for the simple reason that whitespace doesn't matter.

Python is a great language too, and I have more experience with it, and the whitespace thing is a silly gripe. But I've reached a peak with PHP and am looking to master something new. Ruby it is.

View source

August 25, 2010 — I've been very surprised to discover how unpredictable the future is. As you try to predict farther out, your error margins grow exponentially bigger until you're "predicting" nothing specific at all.

Apparently this is because many things in our world are "chaotic". Small errors in your predictions get compounded over time. 10 day weather forecasts are notoriously inaccurate despite the fact that teams of the highest IQ'd people on earth have been working on them for years. I don't understand the math behind chaos but I believe in the basic ideas.

A Simple Example

I can correctly predict whether or not I'll work out tomorrow with about 85% accuracy. All I need to do is look at whether I worked out today and whether I worked out yesterday. If I worked out those 2 days, odds are about 90% I will work out tomorrow. If I worked out yesterday but didn't work out today, odds are about 40% I will work out tomorrow. If I worked out neither of those two days, odds are about 20% I'll work out tomorrow.

However, I can't predict with much accuracy whether or not I'll work out 30 days from now. That's because the biggest two factors depend on whether I work out 29 days from now and 28 days from now. And whether I work out 29 days from now depends on the previous 2 days the most. If I'm wrong in my predictions about tomorrow, that error will compound and throw me off. It's hard to make an accurate prediction about something so simple. Imagine how hard it is to make a prediction about a non-binary quantity.
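
Here's a minimal simulation of that model (my own sketch; the post gives three of the four conditional probabilities, so the fourth is an invented assumption):

    import random

    # P(workout tomorrow), keyed by (worked out today, worked out yesterday),
    # using the numbers above. The (True, False) case is my assumption.
    P = {
        (True, True): 0.9,
        (False, True): 0.4,
        (False, False): 0.2,
        (True, False): 0.7,  # assumed; not given above
    }

    def simulate(days, yesterday=True, today=True):
        history = [yesterday, today]
        for _ in range(days):
            history.append(random.random() < P[(history[-1], history[-2])])
        return history

    # Monte Carlo estimate of P(workout 30 days from now). It lands near the
    # chain's long-run average no matter how the first two days went: the
    # compounding uncertainty washes out today's information.
    runs = [simulate(30)[-1] for _ in range(100000)]
    print(sum(runs) / len(runs))

Change the starting two days and the 30-day estimate barely moves, which is exactly the point.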

Things You Can't Predict

Weather, the stock market, individual stock prices, the next popular website, startup success, box office hits, etc. Basically dynamic, complex systems are completely resistant to predictions.

On Model versus Off Model

When making predictions you generally build a model--consciously or unconsciously. For instance, in predicting my future workouts I can make a spreadsheet (or just a "mental spreadsheet") where I come up with some inputs that are used to predict the future workout. My inputs might be whether I worked out today and whether it will rain. These are the "on model" factors. But all models leave things out that may or may not affect the outcome. For example, it could be sunny tomorrow and I could have worked out today, so my model would predict a workout tomorrow. But then I might get injured on my way to the gym--an "off model" risk that I hadn't taken into account.

Avoid Making Predictions & Run From People Pushing Predictions

The world is complex and impossible to predict accurately. But people don't get this. They think the world is easier to explain and predict than it really is. And so they demand predictions. And so people provide them, even though these explanations and predictions are bogus. Feel free to make or listen to long term predictions for entertainment, but don't believe any long term predictions you hear. We're a long way (possibly an infinitely long way) from making accurate predictions about the long run.

Inside Information

What if you have inside information? Should you then be able to make better predictions than others? Let's imagine for a moment that you were alive in 1945 and you were trying to predict when WWII would end. If you were like 99.99999+% of the population, you would have absolutely no idea that a new type of bomb was just invented and about to be put to use. But if you were one of the few who knew about the bomb, you might have been a lot more confident that the war was close to an end. Inside information gives you a big advantage in predicting the future. If you have information and can legally "bet on that", go for it. However, even the most connected people only have the inside scoop on a handful of topics, and even if you know something other people don't it's very hard to predict the scale (or direction) of an event's effect.

Be Conservative

My general advice is to be ultra conservative about the future and ultra bullish on the present. Plan and prepare for the worst of days--but without a pessimistic attitude. Enjoy today and make safe investments for tomorrow.

View source

August 23, 2010 — Your most recent experiences affect you the most. Reading this essay will affect you the most today, but a week from now the effect will have largely worn off.

Experiences have a half-life. The effect decays over time. You might watch Almost Famous, run out to buy a drumset, start a band, and then a month later those drums could be gathering dust in your basement. You might read Shakespeare and start talking more lyrically for a week.

Newer experiences drown out old ones. You might be a seasoned Rubyist and then read an essay espousing Python and suddenly you become a Pythonista.

All genres of experiences exhibit the recency effect. Reading books, watching movies, listening to music, talking with friends, sitting in a lecture--all of these events can momentarily inspire us, influence our opinions and understanding of the world, and alter our behaviors.

Taking Advantage of the Recency Effect

If you believe in the recency effect you can see the potential benefit of superstitious behavior. For instance, I watched "The Greatest Game Ever Played", a movie about golf, and honest to god my game improved by 5 strokes the next day. A year later when I was a bit rusty, I watched it again and the effect was similar (though not as profound). When I want to write solid code, I'll read some quality code first for the recency effect.

If you want to do great work, set up an inspiring experience before you begin. It's like taking a vitamin for the mind.

Some More Examples

  • Settlers of Catan can make you an astute businessman after a few games. You'll find yourself negotiating everything and saving/making money left and right.
  • Influence by Cialdini will give you a momentary force field against the tricks of pushy salespeople and also temporarily boost your own ability to get people to do what you want.
  • Watching Jersey Shore will temporarily make you feel much better about your life while at the same time altering your vocabulary with phrases like "You do you" and "GTL".

View source

August 23, 2010 — Note: Sometimes I'll write a post about something I don't understand at all. I am not a neuroscientist and have only the faintest understanding of the brain so this is one of those times. Reading this post could make you dumber. But occasionally writing from ignorance leads to good things--like the time I wrote about Linear Algebra and got a number of helpful emails better explaining the subject to me.

My question is: how are the brain's resources allocated for its different tasks?

In a restaurant the majority of the workers are involved with serving, then a smaller number of employees are involved with cooking, and still a smaller number of people are involved with managing.

The brain has a number of functions: vision, auditory, speech, mathematics, locomotion, and so forth. Which function uses the most resources? Which function uses the least?

I have no idea, but my guess is below.

  1. Vision. My guess is vision uses more than 50% of the brain.
  2. Memory. Perhaps 50% or more of the brain is involved with storing memories of sights, sounds, smells, etc.
  3. Locomotion. Movement probably touches 20-60% of the brain.
  4. Auditory/Speech. I guess that 20-40% of the brain is involved with this.
  5. Taste/touch/smell. My guess is 10-20%.
  6. Emotion. My guess is 10-15% is involved with setting/controlling emotion.
  7. Long term planning/mathematics. I think the ability to do complex thinking is given the least resources.

I'm probably quite far off, but I thought it was an interesting question to think about. Now I'll go see if I can dig up some truer numbers.

View source

August 11, 2010 — I've had some free time the past two weeks to work on a few random ideas I've had.

They all largely involve probability/statistics and have no practical or monetary purpose. If I was a painter and not a programmer you might call them "art projects".

One project deals with categorizing data into "Extremistan" and "Mediocristan". Taleb's books, The Black Swan and Fooled by Randomness, list a number of different examples of each, and I thought it would be interesting to extend that categorization further.

The second project I'll expand on a bit more here.

TheOvarianLottery.com

Warren Buffett coined the term "ovarian lottery"--his basic idea is that the most important factor in determining how you end up in life is your birth. You either are born "lucky"--in a rich country, with no major diseases, to an affluent member of society, etc.--or you aren't. Other factors like hard work, education, and smart decision making matter, but play a relatively tiny role in determining what your life will be like.

I thought this was a very interesting idea and so I started a program that lets you be "born again" and see how things turn out. When you click "Play", theOvarianLottery will show you:

  • What year you were born in (or you can choose this yourself)
  • What continent/country you were born in
  • What your gender is
  • How old you will be when you die
  • Your Religion
  • Silly things like whether you will ever be a Facebook user (with 500 million users, potentially 1 in every 200 people who has ever lived has used Facebook!)
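
Under the hood, the core of a program like this is just weighted random sampling. Here's a minimal sketch (the first two shares echo the stats below; everything else is simplified, and it's not the site's actual data or code):

    import random

    # Sample a birthplace weighted by its rough share of current births.
    birth_shares = {
        "China": 0.20,           # roughly 1 in 5 births today
        "United States": 0.04,   # roughly 4% of births
        "Rest of world": 0.76,   # simplified placeholder
    }
    countries = list(birth_shares)
    weights = list(birth_shares.values())
    print("You were born in:", random.choices(countries, weights=weights)[0])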

Two Surprises

I've encountered two major surprises with theOvarianLottery.

First, I thought theOvarianLottery would take me an hour or two. I was wrong. It turns out the coding isn't hard at all--the tricky part is finding the statistics. Not a whole lot of countries provide detailed statistics on their current populations. Once you start looking up stats for human population before 1950, the search gets an order of magnitude harder. (I've listed a few good sources and resources at the bottom of this post if anyone's interested)

Second, I've found so many fascinating diversions while working on this. I've encountered cool stats like:

  • Estimates on the total number of births that have happened so far range from 40 to 150 billion. This doesn't include species prior to Homo sapiens.
  • Around 1 in 5 births today happens in China. Odds of being born in the U.S. are around 4%.
  • Potentially around 40% of all babies that have ever been born have died before the age of 1. Nowadays the infant mortality rate is around 2% (and less in many countries).

But cooler than interesting descriptive statistics are the philosophical questions that this idea of the Ovarian Lottery raises. If I was a philosopher I might ponder these questions at depth and write more about each one, but I don't think that's a great use of time and so I'll just list them. Philosophy is most often a fruitless exercise.

How does the real Ovarian Lottery work?

My site is just a computer program. It's interesting to think about how the real ovarian lottery works. Is there a place where everyone is hanging out, and then you spin a wheel and your "soul" is magically transported to a newborn somewhere in the world?

What if the multiverse theory is correct?

If the multiverse theory is correct, then my odds are almost certainly off. In other words, theOvarianLottery assumes there's only 1 universe and extrapolates the odds from that. If there are dozens or infinite universes, who knows what the real odds are.

What role has chaos played in the development of humanity?

If you go back to around 10,000 B.C., somewhere around 2-10 million people roamed the planet. Go back earlier and the number is even smaller. It's interesting to think of how small differences in events back then would have created radically different outcomes today. I've dabbled a bit into chaos theory and find it quite humbling.

What does the fact that we are alive today tell us about the future?

Depending on the estimate, between 4-20% of all humans that have ever lived are alive today. In other words, the odds of you being alive right now (according to my model) are higher than they've ever been. The odds of you being alive in 10,000 BC are over 1,000 times less. If humans indeed go on to live for another ten thousand years and the population grows another 1,000 times, the odds of you being born today would be vastly smaller. In other words, if my model represented reality then we could conclude that the odds are high that the human population does not continue growing like it has.

What's the future shape of the population curve?

The growth of human population has followed an exponential curve. How long will it last? Will earth become overpopulated? Will we invent technology to leave earth? Will human population decline? Human population growth is hard to predict over any long term time period.

Complete Uselessness of This Model

I don't believe you can take the concept of the Ovarian Lottery any more seriously than you can take religion. It provides food for thought, but it doesn't provide any real answers to much. The stats though could certainly be used in debates.

Oh well. Ars gratia artis.

Notes

I hope to finish up theOvarianLottery and slap a frontend on it sometime in the future.

Helpful Links for Population Statistics (beyond Wikipedia):

View source

August 6, 2010 — Three unexpected things have happened to me during my two years of entrepreneurial pursuits in California.

First, I have not gotten rich.

Second, I have met many people who have gotten rich. I've even had the pleasure to witness some of my friends get rich.

Third, I've yet to meet someone much happier than me.

I've met a large number of people who are 6, 7, even 8 orders of magnitude richer than me, and yet not a single one of them was even close to an order of magnitude happier than me.

The explanation, I finally realized, is simple.

Happiness is a physical quantity, like Height or Weight

Happiness, as NNT would say, resides in Mediocristan. Happiness is a physical condition and just as it is impossible to find someone 60 feet tall, it is impossible to find someone ten times happier than everyone else. I could sit next to you and drink 3 cups of coffee, and sure, I might be 20% happier than you for about 20 minutes, but 1,000% happier? Not even close.

Our happiness is a result of physical processes going on in our brains. While we don't yet understand the details of what's happening, from observation you can see that people only differ in happiness about as much as they differ in weight.

Millionaire Entrepreneurs Do Not Leap Out of Bed Every Morning

This idea of happiness being distributed rather equally might not be surprising to people with common sense. There are a million adages that say the same thing. Thinking about it mathematically took me by surprise, however.

I was rereading The Black Swan at the same time I was reading Zappos founder Tony Hsieh's "Delivering Happiness". In his autobiography, Tony talks about how he wasn't much happier after selling his first company for a 9 figure sum. I thought about this for a bit and realized I wasn't surprised. I've read the same thing and even witnessed it happen over and over again amongst startup founders who strike it rich. The change in happiness doesn't reflect the change in the bank account. Not at all! The bank account undergoes a multi-order of magnitude shift, while the happiness level fluctuates a few percentage points at best. It dawned on me that happiness is in Mediocristan. Of course!

Don't, Don't Stress Over Getting Rich

I'm not warning you that you might not become an order of magnitude happier if you become rich; I'm telling you IT'S PHYSICALLY IMPOSSIBLE!!! There's no chance of it happening. You can be nearly as happy today as you will be the week after you make $1 billion. (In rare cases, you might even be less happy after you strike it rich.) Money is great, and having a ton of it would be pretty fun. By all means, try to make a lot of it. You will most likely be at least a few percentage points happier. Just remember to keep it in a realistic perspective. Aim to be 5 or 10% happier, not 500% happier.

It's funny: although our society doles out vastly different rewards, at the end of the day, in what matters most, Mother Nature has created a pretty equal playing field.

View source

August 6, 2010 — In February I celebrated my 26th Orbit. I am 26 orbits old. How many orbits are you?

I think we should use the word "orbit" instead of year. It's less abstract. The earth's 584 million mile journey around the sun is an amazing phenomenon, and calling it merely "another year" doesn't do it justice.

Calling years orbits also makes life sound more like a carnival ride--you get a certain number of orbits and then you get off.

Enjoy the ride!

Notes

  • 1. Neat fact: The sun completes a revolution around the center of the Milky Way galaxy once every 250 million years. At that rate the Sun has completed only about 18 orbits in its entire lifetime, and less than one since the origin of humans.
  • 2. My roommate Andrew suggests the reason why we don't refer to a year as an orbit is perhaps because when we started calling things years, we didn't yet know that the earth revolved around the sun. Bad habits die hard.
  • 3. I think the orbit around the sun has more impact on our world than we even realize. In other words, seasonality's effects are underestimated. Perhaps when you give something a name it seems less threatening. We call the change from August to December "fall", and thus we underestimate the volatility and massive change that occurs all around us.
  • 4. We should also call days "revs". Okay I'm done.

View source

August 6, 2010 — Figuring out what you want in life is very hard. No one tells you exactly what you want. You have to figure it out on your own.

When you're young, it doesn't really matter what you want because your parents choose what you do. This is a good thing, otherwise kids would grow up uneducated and malnourished from ice cream breakfasts. But when you grow up, you get to call the shots.

You Need Data to Figure Out What You Want

The big problem with calling the shots is that what your conscious, narrative mind thinks you want and what your subconscious mind really wants often differ quite a lot. For instance, growing up I said I wanted to be in politics, but in reality I always found myself tinkering with computers. Eventually you have the "aha" moment, and drop things you thought you wanted and focus on the things that you really want, the things you keep coming back to.

If you pay attention to what you keep drifting back to, you'll figure out what you want. You just have to pay attention.

Collect data on what makes you happy as you go. Run experiments with your life.

You don't have to log what you do each day and run statistics on your life. But you do have to get out there and create the data. Try different things. Try different jobs, try different activities, try living in different places. Then you'll have experiences--data--which you can use to figure out exactly what the hell it is you really want.

You Want More Than You Think

People like to simplify things as much as possible. It would be nice if you only wanted a few things, such as a good family, a good job, and food on the table. I think though that in reality we each want somewhere around 10 to 20 different things. On my list of things I want, I've got 15 or 16 different things. Family, money, and food are on there. But also some more specific things, like living in the San Francisco Bay area, and studying computer science and statistics.

Being Happy is About Balancing All of These Things

You don't get unlimited hours in the day so you've got to budget your time amongst all of these things that you want. If I were to spend all of my time programming, I'd have no time for friends and family, which are two things really important to me. So I've got to split my energies between these things. You'll always find yourself neglecting at least one area. Life is a juggling act. The important thing is to juggle with the right balls. It's fine to drop a ball for a bit, just pick it back up and keep going.

Limit the Bad Stuff

As you grow up you'll learn that there are things you want that aren't so good for you. Don't pretend you don't want that, just try to minimize it. For instance, part of me wants to eat ice cream almost everyday. But part of me wants to have healthy teeth, and part of me wants to not be obese. You've got to strike a balance.

Your Wants Change

First, you've got to figure out all the different things you want. Then, you've got to juggle these things as best as possible. Finally, when you think you've got it figured out, you'll realize that your wants have changed slightly. You might want one thing a bit less (say, partying), while wanting something else more (a career, a family, learning to sail, who knows). That's totally normal. Just add or drop the new discovery to your list and keep going.

Make a Mindmap

Almost 2 years ago I made a dead simple mindmap of what I wanted. I think a mindmap is better than a list in this case because A) it looks cooler and B) there's not really a particular ranking to what I want. My list has changed by just one or two things in 2 years' time.

I like to be mysterious and have something to talk about at parties, so I've gone ahead and erased most of the items, but you can get the idea:

If you don't know what it is you want, try making a mindmap.

View source

August 3, 2010 — Last night over dinner we had an interesting conversation about why we care about celebrities. Here's my thinking on the matter.

Celebrities are not that special

If you look at some stats about the attributes of celebrities, you'll realize something interesting: they're not that special. By any physical measure--height, weight, facial symmetry, body shape, voice quality, personality, intelligence--celebrities are not much different from the people around you. Conan O'Brien might be a bit funnier than your funniest friend, but he wouldn't make you laugh 10x more; it'd be more like 5% more. Angelina Jolie might be 10% more attractive than your most attractive friend, but for some groups she could even be less attractive.

If these people aren't so special, why do they interest us so much? One explanation is that we see these people over and over again on television and as a result we are conditioned to care about them.

I concede this may be part of it, but I actually don't think celebrities are forced upon us. Instead, I think we need celebrities. We need them to function in a global society.

It's all because of the Do You Know Game.

The Do You Know Game

The Do You Know Game is a popular party game. People often play it every time they meet a stranger. It goes something like this:

Where are you from?
Brockton, Massachusetts
Oh, do you know Greg Buckley?
Yes, I know Greg Buckley.
Cool! That's so funny! Small world!!!

That's the basic premise. You ask me where I am from. You think of everyone you know from that place and ask me one by one if I know that person. Then we switch roles and play again.

People play this game at work, at parties, at networking events, at college--especially at college. This game has a benefit.

The Do You Know Game Lets Strangers Build Trust

People play this game for many reasons, but certainly one incentive to play is that if two strangers can identify a mutual friend, they can instantly trust each other a bit more. If we have a mutual friend, I'm more likely to do you a favor, and less likely to screw you over, because word gets around. Back in the day when people carried swords, this was even more important.

A mutual friend also gives two strangers a shared interest. It's something that they can continually talk about.

And having a mutual friend can reveal a lot about a person:

Do you know Breck Yunits?
Yes, I think he's an idiot.
No YOU'RE the idiot.

As you can see, having mutual friends serves many purposes.

The Do You Know Game has gotten harder as the world has globalized

Throughout the 20th century, the proportion of people that have traveled far from their hometowns for school or career has steadily increased. The further you travel from your home, the less likely you are to have a successful round of "do you know" with a stranger. You might share common interests or values with the new people you meet, but you'll know none of the same people and thus it will be harder to build and grow relationships. This is a big problem for a globalized society that depends on strong ties between people from different places to keep the economy running smoothly.

Celebrities to the Rescue

Celebrities have naturally arisen to fill a need for strangers in a globalized world to have mutual friends. We all interact with strangers more frequently nowadays, and if we didn't have celebrities, there would be a gaping hole in our arsenal of shortcuts to establishing trust with new people. There are a thousand ways to build rapport with a stranger, but the technique of talking about a shared acquaintance is one of the easiest and most effective. We travel farther than we ever have, but thanks to celebrities, we still have dozens of "mutual friends" wherever we go.

Of course, just because two people know who Tom Hanks is doesn't mean they should trust each other more. Tom Hanks doesn't know them and so none of the "word gets around" stuff I mentioned earlier applies. I'm not arguing that celebrities are an equal substitute for a mutual friend by any means. A mutual friend is a much more powerful bond than knowing about the same celebrity.

But celebrities are better than nothing.

View source

July 2, 2010 — A year ago I wrote a post titled "The Truth about Web Design" where I briefly argued that "design doesn't matter a whole lot."

My argument was: "you go to a website for the utility of it. Design is far secondary. There are plenty of prettier things to look at in the real world."

I do think the real world is a pretty place, but about design, I was completely wrong.

I now think design is incredibly important, and on par with engineering. I used to think a poorly designed product was a matter of a company setting the right priorities, now I think it reflects ignorance, laziness or mediocrity. If a company engineers a great product but fails to put forward a great design, it says:

  • 1. The company doesn't feel that design is important.
  • 2. The company was too lazy to put much effort into design.
  • 3. The company's engineering team is incapable of working effectively with the design team.

Why Design is Important, In Principle

For nearly a decade I've dreamed of my ideal computer as no computer at all. I wanted a computer smaller than the smallest smartphone, that would always be ready to take commands but would also be out of sight. In other words, I've always thought of computers purely as problem solving tools--as a means to an end.

I want the computer to solve the problem and get out of my way. Computers are ugly. The world is beautiful. I like to look at other people, the sky, the ocean and not a menu or a screen. I didn't care about the style in which the computer solved my problem, because no matter how "great" it looked it couldn't compare to the natural beauty of the world.

I was wrong.

A computer, program, or product should always embody a good design, because the means to the end is nearly as important as the end itself. True, when riding in a car I care about the end--getting to my destination. But why shouldn't we care about the style in which we ride? Why shouldn't we care about the means? After all, isn't living all about appreciating the means? We all know what the end of life is; the important thing is to live the means with style. I've realized that I want style--and I'm a little late to the party; most people want style.

Why Design is Important, In Practice

If that argument didn't make sense, there are a number of practical reasons why a great design is important.

A great design can unlock more value for the user. Dropbox overcomes herculean engineering challenges to work, but if it weren't for its simple, easy to use design it wouldn't be nearly as useful.

A great design can be the competitive edge in a competitive market. Mint.com had a great design, and it bested a few other startups in that emerging market.

A great design can be the differentiator in a crowded market. Bing's design is better than Google's. The design of Bing differentiates the two search engines in my mind, and makes Bing more memorable to me. The results of Microsoft's search engine have always been decent, but it was the design of Bing that finally gave them a memorable place in consumers' minds.

A great design is easy to get people behind. People like to support sites and products that are designed well. People love to show off their Apple products. Airbnb's beautiful design had a large role in making it easy for people to support the fledgling site.

What to do if you aren't good at design

Personally, I'm a terrible designer. Like many hackers, I can program but I can't paint. What should we do?

First, learn to appreciate the importance of design.

Second, learn to work well with designers. Don't treat design as secondary to engineering. Instead, think of how you can be a better engineer to execute the vision of your design team.

Great engineering can't compensate for poor design just as great design can't compensate for poor engineering. To create great products, you need both. Don't be lazy when it comes to design. It could be the make or break difference between your product's success or failure.

Notes

  • 1. Are Google, craigslist, and eBay exceptions to the rule that you need a great design to succeed? Yes. If you're the first mover in a market, you can get by with an ugly design. At least in the case of Google, they continually refine it.

View source

June 28, 2010 — Competition and specialization are generally positive economic forces. What's interesting is that they are contradictory.

Competition. Company 1 and Company 2 both try to solve problem A. The competition will lead to a better outcome for the consumer.

Specialization. Company 1 focuses on problem A; Company 2 focuses on problem B. The specialization will lead to a better outcome for all because of phenomena like economies of scale and comparative advantage.

So which is better? Is it better to have everyone compete to solve a small number of problems or to have everyone specialize on a unique problem?

Well, you want both. If you have no competition, it's either because you've been able to create a nice monopolistic arrangement for yourself or it's because you're working on a problem no one cares about.

If you have tons of competition, you're probably working on a problem that people care about but that is hard to make a profit in.

Update 8/6/2010: Overspecialization can be bad as well when things don't go according to plan. As NNT points out, Mother Nature does not like overspecialization: it limits evolution and weakens the animals. If Intel fell into a sinkhole, we'd be screwed if it weren't for having a backup in AMD.

View source

June 17, 2010 — Doing a startup is surprisingly simple. You have to start by creating a product that people must have; then you scale it from there.

What percent of your customers or "users" would be disappointed if your product disappeared tomorrow? If it's less than 40%, you haven't built a must have yet.

As simple as this sounds, I've found it to be quite hard. It's not easy to build a must have.

Some Reasons Why Startups Fail to Build a Must Have

  1. Lack of ability. If you want to build a plane that people can't wait to fly on, you probably need to be an aerospace engineer. If you want to draw a comic that people can't wait to read, you probably need to be a talented artist and comedian to boot. You might have a great idea for a search engine, but if you don't have a PhD level understanding of math and computer science, your search engine is quite unlikely to become a must have when people have Google. You need a talented team in the product area to build a must have.
  2. Release too late. A lot of people take too long to release and get feedback. The odds of your first iteration being a must have are quite slim. People aren't going to get it like you get it. You'll need to iterate. If you burn up all your money and energy before releasing, you might not leave yourself with enough time to tweak your product until it's a must have. Give yourself ample time; release early.
  3. Lack of vision. It seems like successful entrepreneurs have a clear vision about what people will want ahead of time. There are endless directions in which you can take your product. Sometimes a product will get started in the right direction, but then will be tweaked into a dead end. I think you need a simple, clear, medium to long term vision for the product.
  4. Preoccupation with Unimportant Things. A lot of founders get bogged down with minor details like business plans or equity discussions or fundraising processes. If you don't put your focus almost entirely on creating a must have product, none of this stuff will matter. Your company needs a reason to exist, and without a must have product, there isn't one. (Unless of course you are trying to create a lifestyle business, in which case your first priority is a good lifestyle; then by all means do things whichever way you want.)
  5. Too broad a focus. Every successful business starts with a small niche. You need to create a must have product for a few people before you can create one for a lot of people. If your business is a two sided marketplace, pick a very small market to start in, and grow it from there.
  6. Get tired of the space. This is a mistake I've made a lot. I've come up with a simple idea that I think is cool, I launch it, then when the going gets tough, I realize I'm not too interested in the space. No matter what the idea or space, there are going to be low moments when you don't have a growing, must have product, and if your passion isn't in that industry, you might not want to keep going. Pick a space that you think is cool; build a product that you want.
  7. Stubbornness. Sometimes people are too stubborn to realize that their product isn't something people want. If people wouldn't care if it disappeared tomorrow, you need to improve it! Don't be stubborn. Listen to the numbers. Listen to feedback.

What are some other reasons people fail to build a must have product?

View source

June 16, 2010 — Every Sunday night in college my fraternity would gather in the commons room for a "brother meeting". (Yes, I was in a fraternity, and yes, I do regret that icing hadn't been invented yet). These meetings weren't really "productive", but we at least made a few decisions each week. The debates leading up to these decisions were quite fascinating. The questions would be ridiculous, like whether our next party should be "Pirate" themed or "Prisoner" themed (our fraternity was called Pike, so naturally(?) we were limited to themes that started with the letter P so we could call the party "Pike's of the Caribbean" or something). No matter what the issue, we would always have members make really passionate arguments for both sides.

The awesome thing was that these were very smart, persuasive guys. I'd change my mind a dozen times during these meetings. Without fail, whichever side spoke last would have convinced me that not only should we have a Pirate themed party, but that it was quite possibly one of the most important decisions we would ever make.

The thing I realized in these meetings is that flip flopping is quite easy to do. It can be really hard, if not impossible, to make the "right" decision. There are always at least two sides to every situation, and choosing a side has a lot more to do with the skills of the debaters, the mood you happen to be in, and the position of the moon (what I'm trying to say is there are a lot of variables at work).

I think humans are capable of believing almost anything. I think our convictions are largely arbitrary.

Try an experiment.

1) Take an issue, a political issue--the war in Afghanistan, Global Warming, marijuana legalization--or a minor everyday issue--what to have for dinner tonight, whether it's better to drink coffee or not, whether Facebook is a good thing or bad thing.

2) Take a stand on that issue. Think of all the reasons why your stand is right. Be prepared to support your stance in a debate.

3) Completely change your position. Take the other side. Think of every reason why this new side is correct. Be prepared to support this side without feeling like you are lying.

4) Keep flipping if you want.

I think it's fascinating to see how no matter what the issue, you can create a convincing case for any side. And it's hard to hear an argument for the opposing side and not want to change your position. Our brains can be easily overloaded. The most recently presented information pushes out the old arguments.

But at some point, survival necessitates we take a side. The ability to become stubborn and closed-minded is definitely a beneficial trait: survival causes us to become stubborn on issues, and getting anything done requires some closed-mindedness.

Three men set out to find a buried treasure. The first guy believes the treasure is to the north so heads in that direction. The second guy heads south. The third guy keeps changing his mind and zigzags between north and south. I don't know who finds the treasure first, but I do know it's certainly not the third guy.

Oftentimes the expected value of being stubborn is higher than the expected value of being thoughtful.

Is flip flopping a good thing? Is being open minded harder than being stubborn? Does it depend on the person? Does success require being certain?

I have no idea.

View source

June 15, 2010 — I think it's interesting to ponder the value of information over its lifetime.

Different types of data become outdated at different rates. A street map is probably mostly relevant 10 years later, while a 10 year old weather forecast is much less valuable.

Phone numbers probably last about 5 years nowadays. Email addresses could end up lasting decades. News is often largely irrelevant after a day. For a coupon site I worked on, the average life of a coupon seemed to be about 2 weeks.

If your data has a long half-life, then you have time to build it up. Wikipedia articles are still valuable years later.

What information holds value the longest? What are the "twinkies" of the data world?

Books, it seems. We don't regularly read old weather forecasts, census rolls, or newspapers, but we definitely still read great books, from Aristotle to Shakespeare to Mill.

Facts and numbers have a high churn rate, but stories and knowledge last a lot longer.

View source

June 14, 2010 — Have you heard of the Emperor Penguins? It's a species of penguin that journeys 30-75 miles across the frigid Antarctic to breed. Each year these penguins endure 8 months of brutally cold winters far from food. If you aren't familiar with them, check out either of the documentaries March of the Penguins or Planet Earth.

I think the culture of the emperor penguins is fascinating and clearly reveals some general traits of all cultures:

Culture is a set of habits that living things repeat because that's what they experienced in the past, and the past was favorable to them. Cultures have a mutually dependent relationship with their adherents.

The Emperor Penguins are born into this Culture. The Culture survives because the offspring keep repeating the process. The Emperor Penguins survive because the process seems to keep them safe from predators and close to mates. The culture and the species depend on each other.

Cultures are born out of randomness.

At any moment, people or animals are doing things that may blossom into a new culture. Some of these penguins could branch off to Hawaii and start a new set of habits, which 500 years from now might be the dominant culture of the Emperor Penguins.

But predicting what will develop into a culture and what won't is impossible--there's too many variables, too much randomness involved. Would anyone have predicted that these crazy penguins who went to breed in the -40 degree weather for 8 months would survive this long? Probably not. Would anyone have predicted that people would still pray to this Jesus guy 2,000 years later? Probably not.

Cultures seem crazy to outsiders and are almost impossible to explain.

One widespread human culture is to always give an explanation for an event even when the true reason is just too complex or random to understand. The cultural habits are always easier to repeat and pass down than they are to explain.

I don't have any profound insights on culture. I just think it's fascinating and something not to read too much into--it helps us survive, but there's no greater meaning to it.

Notes

  1. Interesting fact: there are apparently 38 colonies of Emperor Penguins in Antarctica.

View source

Or... We Think We Have Free Will Because We Only Observe One Path.

March 24, 2010 — "Dad, I finished my homework. Why?"

The father thinks for a moment. He realizes the answer involves explaining the state of the world prior to the child doing the homework. It involves explaining the complex probabilities that, combined, would give the odds the child was going to do the homework. And it likely involves explaining quantum mechanics.

The father shrugs and says "Because you have free will, and chose to do it."

Free Will was Born

Thus was born the notion of free will, a concept to explain why we have gone down certain paths when alternatives seemed perfectly plausible. We attribute the past to free will, and we attribute the unpredictability of the future to free will as well (i.e. "we haven't decided yet").

One little problem

The problem is, this is wrong. You never choose just one path to go down. In fact, you go down all the paths. The catch is you only get to observe one.

In one world the child did their homework. In another world, they didn't.

The child who did their homework will never encounter the child who didn't, but they both exist, albeit in different universes or dimensions. Both of them are left wondering why they "chose" the way they did. The reality is that they chose nothing. They're both just along for the ride.

Even the smug boy who says free will doesn't exist, is just one branch of the smug boy.

Notes

  • This all assumes, of course, that there are many worlds and not just one.
  • Perhaps it is the case that many worlds and free will coexist, in that although we have no absolute control of the future, we can somehow affect the distribution of different paths?

View source

March 22, 2010 — Google has a list of 10 principles that guide its actions. Number 2 on this list is:

It's best to do one thing really, really well.

This advice is so often repeated that I thought it would be worthwhile to think hard about why this might be the case.

Why is it best to do one thing really, really well?

For two reasons: economies of scale and network effects.

Economies of scale. The more you do something, the better you get at it. You can automate and innovate. You'll be able to solve the problem better than it's been solved in the past and please more people with your solutions. You'll discover tricks you'd never imagine that help you create and deliver a better "thing".

Network effects. If you work on a hard problem for a long time, you'll put a great deal of distance between yourself and the average competitor, and in our economy it doesn't take too big a lead to dominate a market. If your product and marketing are 90% as good as the competitor's, you won't even capture the proportional 47% of the market (90/190 ≈ 47%); you'll capture much less. The press likes to write about the #1 company in an industry. The gold medalist doesn't get 1/3 of the glory; they get 95% of the glory. The network effects in our economy are very strong. If you only do something really well, the company that does it really, really well will eat your lunch.

A simpler analogy: You can make Italian food and Chinese food in the same restaurant, but the Italian restaurant down the street will probably have better Italian food and the Chinese restaurant will probably have better Chinese food, and you'll be out of business soon.

Why the "really, really"?

My English teacher would have told me that at least one of the "really"s was unnecessary. But if you think about the statement in terms of math, having the two "really"s makes sense.

Let's define doing one thing well as being in the top 10% of companies that do that thing. Doing one thing really well means being in the top 1% of companies that do that thing. Doing one thing really, really well means being in the top 0.1% of companies that do that thing.

Thus, what Google is striving for is to be the #1 company that does search. They don't want to just be in the top 10% or even top 1% of search companies; they want to do it so well that they are at the very top. If you think about it like that, the two "really"s make perfect sense.

What's the most common mistake companies make when following this advice?

My guess is they don't choose the correct "thing" for their given team. They pick the wrong thing to focus on. For instance, if Ben and I started a jellyfish business, and decided to do jellyfish tanks really, really well, we would be making a huge mistake because we just don't have the right team for that business. It makes more sense when Al, a marine biology major and highly skilled builder, decides to do jellyfish tanks really, really well.

It makes perfect sense for the Google founders to start Google since they were getting their PhDs in search.

You need good team/market fit. The biggest mistake people make when following the "do one thing really, really well" advice is choosing the wrong product or market for their team.

What's the second most common mistake companies make when following this advice?

Picking a "thing" that's too easy. You should go after a problem that's hard with a big market. Instead of writing custom software for ten of your neighbors that helps them do their taxes, generalize the problem and write internet software that can help anyone do their taxes. It's good to start small of course, but be in a market with a lot of room to grow.

Can you change the one thing you do?

Yes. It's good to be flexible until you stumble upon the one thing your team can do really, really well that can address a large market. Don't be stubborn. If at first you thought it was going to be social gaming, and then you learn that you can actually do photo sharing really, really well and people really want that, do photo sharing.

How do you explain the fact that successful companies actually do a lot of different things?

Microsoft Windows brings in something like $15 billion per year. Google Adwords brings in something like $15 billion per year. When you make that kind of money, you can drop $100 million selling ice cream and it won't hurt you too much. But to get there, you've first got to do one hard thing really, really well, whether it be operating systems or search.

View source

March 17, 2010 — If you automate a process that takes X minutes and that you repeat Y times, what would your payoff be?

Payoff = XY minutes saved, right?

Surprisingly, I've found that's almost never the case. Instead, the benefits are almost always greater than XY. In some cases, much greater. The benefits of automating a process are greater than the sum of the process' parts.

Actual Payoff = XY minutes saved + E

What is E? It's the extra something you get from not having to waste time and energy on XY.

An Example

Last year I did a fair amount of consulting work I found via craigslist. I used to check the Computer Gigs page for a few different cities, multiple times per day. I would check about 5 cities, spending about 2 minutes on each page, about 3 times per day. Thus, I'd spend 30 minutes a day just checking and evaluating potential leads.

I then wrote a script that aggregated all of these listings onto one page (including the contents, so I didn't have to click to a new page to read a listing). It also highlighted a gig if it met certain criteria that I had found to be promising. The script even automated a lot of the email response I would write to each potential client.
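
A minimal sketch of that kind of aggregator, in Python (the feed URLs and keywords below are placeholders for illustration, not the real ones):

    import urllib.request
    import xml.etree.ElementTree as ET

    # Placeholder RSS feeds for each city's Computer Gigs page.
    FEEDS = [
        "https://sfbay.craigslist.org/cpg/index.rss",
        "https://boston.craigslist.org/cpg/index.rss",
    ]
    # Keywords that (hypothetically) mark a promising gig.
    PROMISING = ("php", "mysql", "wordpress")

    def child_text(item, name):
        # Find a child element by local name, ignoring XML namespaces.
        for child in item:
            if child.tag.endswith(name):
                return child.text or ""
        return ""

    for url in FEEDS:
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        for item in (el for el in tree.iter() if el.tag.endswith("item")):
            title = child_text(item, "title")
            body = (title + " " + child_text(item, "description")).lower()
            flag = "***" if any(k in body for k in PROMISING) else "   "
            print(flag, title)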

It cut my "searching time" down to about 10 minutes per day. But then something happened: I suddenly had more time and energy to focus on the next aspect of the problem: getting hired. It wasn't long before I was landing more than half the gigs I applied to, even as I raised my rates.

I think this is where the unexpected benefits come from. The E is the extra energy you'll have to focus on other problems once you don't have to spend so much time doing rote work.

Automate. Automate. Automate

Try to automate as much as possible. The great thing about automation is that once you automate one task you'll have more time to automate the next task. Automation is a great investment with compounding effects. Try to get a process down to as few steps or keystrokes as possible (your ideal goal is zero keystrokes). Every step you eliminate will pay off more than you think.

View source

March 16, 2010 — I wrote a simple PHP program called phpcodestat that computes some simple statistics for any given directory.

I think brevity in source code is almost always a good thing. I think as a rule your code base should grow logarithmically with your user base. It should not grow linearly and certainly not exponentially.

If your code base is growing faster than your user base, you're in trouble. You might be attacking the wrong problem. You might be letting feature creep get the best of you.

I thought it would be neat to compute some stats for popular open source PHP applications.

My results are below. I don't have any particular profound insights at the moment, but I thought I'd share my work as I'm doing it in the hopes that maybe someone else would find it useful.

Name                         Directories  Files  PHPFiles  PHPLOC  PHPClasses  PHPFunctions
../cake-1.2.6                        296    677       428  165183         746          3675
../wordpress-2.9.2                    82    753       279  143907         149          3827
../phpMyAdmin-3.3.1-english           63    810       398  175867          44          3635
../CodeIgniter_1.7.2                  44    321       136   43157          74          1211
../Zend-1.10                         360   2145      1692  336419          42         11123
../symfony-1.4.3                     770   2905      2091  298700         362         12198
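
The original is PHP, but the counting is simple enough to sketch in a few lines of Python (the class/function matching here is approximate, not phpcodestat's exact logic):

    import os
    import re
    import sys

    def code_stats(root):
        stats = dict(dirs=0, files=0, php_files=0, php_loc=0,
                     php_classes=0, php_functions=0)
        for dirpath, dirnames, filenames in os.walk(root):
            stats["dirs"] += len(dirnames)
            stats["files"] += len(filenames)
            for name in filenames:
                if not name.endswith(".php"):
                    continue
                stats["php_files"] += 1
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    source = f.read()
                stats["php_loc"] += source.count("\n")
                # Approximate: counts declarations, not uses.
                stats["php_classes"] += len(re.findall(
                    r"^\s*(?:abstract\s+)?class\s+\w+", source, re.M))
                stats["php_functions"] += len(re.findall(
                    r"\bfunction\s+&?\w+", source))
        return stats

    if __name__ == "__main__":
        print(code_stats(sys.argv[1]))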

View source

March 8, 2010 — If a post on HackerNews gets more points, it gets more visits.

But how much more? That's what Murkin wanted to know.

I've submitted over 10 articles from this site to HackerNews, and I pulled the data for my top posts (in terms of visits referred by HackerNews) from Google Analytics.

Here's how it looks if you plot visits by karma score:

The Pearson Correlation is high: 0.894. Here's the raw data:

Karma  Visits  Page
   53    3389  /twelve_tips_to_master_programming_faster
   54    2075  /code/use_rsync_to_deploy_your_website
   54    1688  /unfeatures
   34    1588  /flee_the_bubble
   25    1462  /make_something_40_of_your_customers_must_have
   14    1056  /when_forced_to_wait_wait
    4     214  /diversification_in_startups
    1     146  /seo_made_easy_lumps
    1      36  /dont_flip_the_bozo_bit
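
If you want to check that correlation yourself, it's a few lines of Python (data copied from the table above):

    karma  = [53, 54, 54, 34, 25, 14, 4, 1, 1]
    visits = [3389, 2075, 1688, 1588, 1462, 1056, 214, 146, 36]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    print(round(pearson(karma, visits), 3))  # 0.894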

View source

February 19, 2010 — All the time I overhear people saying things like "I will start exercising every day" or "We will ship this software by the end of the month" or "I will read that book" or "I will win this race." I'm guilty of talking like this too.

The problem is that often, you say you will do something and you don't end up doing it. Saying "I will do" might even be a synonym for "I won't do".

Why does this happen? I don't think it's because people are lazy. I think it's because we overestimate our ability to predict the future. We like to make specific predictions as opposed to predicting ranges.

I'll explain why we are bad at making predictions in a minute. But first, if you find yourself making predictions about what you will do that turn out to be wrong, you should fix that. You can tone down your predictions, giving ranges instead. For instance, instead of saying "I think I will win the race", say "I think I will finish the race in the top 10". Or, even easier: stop talking about things you will do entirely, and only talk about things you have done. So, in the race example, you might say something like "I ran 3 miles today to train for the race." (If you do win the race, don't talk about it a lot. No one likes a braggart.)

Why we are bad at making predictions.

Pretend you are walking down a path:

Someone asks you whether you've been walking on grass or dirt. You can look down and see what it is:

Now, they ask you what you will be walking on. You can look ahead and see what it is:

Easy, right? But this is not a realistic model of time. Let's add some fog:

Again, someone asks you whether you've been walking on grass or dirt. Even with the fog, you can look down and see what it is:

Now, they ask you what you will be walking on. You look ahead, but now with the fog you can't see what it is:

What do you do? Do you say:

  • 1. Dirt
  • 2. Grass
  • 3. I don't know. It could be either dirt or grass, or maybe something else entirely.
  • 4. I don't know. I've been walking on grass. Not sure what I'll be walking on in the future.

In my opinion you should say something like 3 or 4.

This second example models real life better. The future is always foggy.

Why is the future foggy?

I don't know. Maybe a physicist could answer that question, but I don't know the answer. And I don't think I ever will.

Notes

  1. In other words, don't overpromise and underdeliver.

View source

February 17, 2010 — If a book is worth reading, it's worth buying too.

If you're reading a book primarily to gain value from it (as opposed to reading it for pleasure) you should always buy it unless it's a bad book.

The amount of value you can get from a book varies wildly. Most books are worthless. Some can change your life. For simplicity, let's say the value you can derive from any one book varies from 1 cent to $100,000 (there are many, many more worthless books than there are of the really valuable kind).

The cost, however, does not vary as much. Books rarely cost more than $100, and generally average about $15.

You shouldn't read a book that you think will offer you less than $100 in value. Time could be better spent reading more important books.

So let's assume you never read a book that gives you less than $100 in value. Thus, the cost of a physical copy of the book is at most 15% (using the $15 average price) of the value gained.

Would owning that book help you extract 15% more from it? It nearly always will. When you own a book, you can take it anywhere. You can mark it up. You can flip quickly through the pages. You can bookmark it. You can easily share it with a friend and then discuss it. If these things don't help you get 15% more out of that book, I'd be very surprised.

Where it gets even more certain, is when you read a really valuable book--say a book offering $1,000 of value. Now you'd only need to get 1.5% more out of that book.

The investment in that case is a no-brainer.
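
For what it's worth, here's the arithmetic above in a few lines of Python:

    # Owning pays off if it helps you extract at least (price / value)
    # more from the book.
    price = 15  # dollars, the rough average above
    for value in (100, 1000):
        print(f"${value} book: owning must add {price / value:.1%} more value")
    # $100 book: 15.0% -- $1000 book: 1.5%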

Notes

  • 1. Even if a book is more expensive, say $50, the numbers don't change too much.
  • 2. Pardon my scribbling. Got a new toy.

View source

February 2, 2010 — My room was always messy, usually because clothes were strewn everywhere. On the floor, on the couch, anywhere there was a surface, there was a pile of clothes. Dirty, clean, or mostly-clean, scattered about.

I tried a dresser. I tried making a system where I had spaces for each type of clothing: shirts, pants, etc. Nothing worked.

Then a friend saw my room and quipped, "Duh. You have too many clothes. Let's get rid of most of them."

So we did. About 75% of my clothes were packed up in garbage bags and sent off to the Salvation Army that day.

Ever since, my room has been at least 5x cleaner on average.

Almost always, there is one simple change you can make that will have drastic effects. This change is called the least you can do.

I had a website that was struggling to earn money even with a lot of visitors. I added AdSense and almost nothing happened. Then I moved the AdSense to a different part of the page and it suddenly made 5x more money. A week later I changed the colors of the ad and it suddenly made 2x as much money. Now the site makes 10x as much money and I barely did anything.

These are trivial examples, but the technique works on real problems as well.

The key is to figure out what the "least you can do" is.

You can discover it by working harder or smarter:

  • The hard way. You can try a ton of things, go through a to-do list dozens of items long, and hope you hit upon it.
  • The smart way. You can invest time learning instead of doing. Reading books, learning new math or programming techniques, talking to other people, thinking critically, etc. You'll then have a much better hunch at what the "least you can do" is.

In reality you need to do things both ways. But try to put extra effort into doing things the smart way, and see where it takes you.

Notes

  • Thanks to Conor for providing feedback.
  • I never shop for clothes. Once a year, maybe twice. The reason I had so many was because I never got rid of any clothes.
  • This AdSense site doesn't make a ton of money, but it now makes enough to pay all my server bills, which is nice.
  • Finding the least you can do is kind of like diff. You are trying to find the smallest change you can make to turn the status quo into an improved version.
  • Another relevant computer science topic is the longest common subsequence problem.

View source

January 29, 2010 — Good communication is overcommunication. Very few people overcommunicate. Undercommunication is much more common. Undercommunication is also the cause of countless problems in business.

Instead of striving for some subjective "good communication", simply strive to overcommunicate. It's very unlikely you'll hit a point where people say "he communicates too much". It's much more likely you'll come up a bit short, in which case you'll be left with good communication.

Here are 4 tips that will bring you closer to overcommunicating:

  1. Say the hard things. Often the hardest things to talk about are the most important things to talk about. If something is stressing you out, just say it. Getting it out there, even if not in the most eloquent way, is much better than not talking about it at all. A good strategy when approaching a hard subject is to bounce your approach off a neutral 3rd party to see if your angle is smart. Many times it's the other person who has something they're stressed about but isn't talking about. It's your job to be perceptive and ask them questions to get it out on the table.
  2. Repeat yourself. People have bad memories and even worse attention spans. Repeat yourself. If something is very important, repeat yourself multiple times. If someone hasn't gotten the message, it's more likely your fault for not repeating yourself enough than it is their fault for not getting it.
  3. Use tools. Email, Facebook, Google Wave, Basecamp, Skype, Gchat, Dropbox, Github, Sifter...these are just a sample of the modern tools you can use to communicate. Embrace them. Try different ones. Try pen and paper and whiteboards. Ideally you'll find two or three tools that cover all the bases, but don't be afraid to use multiple tools even if you have to repeat yourself across them a bit.
  4. Set a regular schedule. Set aside a recurring time for communication. It could be once a week or once a day. Even if there's nothing new to talk about, it will help to just go over the important topics again as you can rarely repeat yourself too much.

That's it. Good luck!

View source

January 22, 2010 — Network effects are to entrepreneurs what compounding effects are to investors: a key to getting rich.

Sometimes a product becomes more valuable simply as more people use it. This means the product has a "network effect".

You're probably familiar with two famous examples of network effects:

  • Windows. People started using Microsoft Windows. Therefore, developers started building more software for Windows. This made Windows more valuable, and more people started to use it.
  • Facebook. People joined Facebook and invited their friends. Their friends joined which made the site more valuable to everyone. People invited more friends.

All businesses have network effects to some degree. Every time you buy a slice of pizza, you are giving that business some feedback and some revenue which they can use to improve their business.

Giant businesses took advantage of giant network effects. When you bought that pizza, you caused a very tiny network effect. But when you joined Facebook, you immediately made it a more valuable product for many other users (who could now share info with you), and you may even have invited a dozen more users. When a developer joins Facebook, they might make an application that improves the service for thousands or even millions of users, and brings in a similar number of new users.

The biggest businesses enabled user-to-user network effects. Only the pizza store can improve its own offering. But Facebook, Craigslist, Twitter, and Windows have enabled their customers and developers to all improve the product with extremely little involvement from the company.

Notes

  • 1. This is probably easier said than done.

View source

Is there any subject which cannot be explained simply?
No.
What about quantum mechanics, organic chemistry, or rocket science? Surely these cannot be explained simply.
Any and every subject that can be explained logically, can also be explained simply.
So you are saying that even I can become an expert at quantum mechanics?
No. I am saying that every logical thing there is to learn in quantum mechanics can be explained simply. This holds for all subjects. However, that does not mean that every person can master every subject. Only people that master the basic building blocks of human knowledge can master any subject.
What are the basic building blocks of human knowledge?
First, the ones you learn early on: reading, writing, and arithmetic. Then, a few you are not forced to learn: probability, statistics, evolution and psychology.
Why do I have to learn probability, statistics, evolution and psychology?
Because these subjects explain 99% of what you see in the world. You need to learn probability and statistics to understand subjects like chemistry, physics, and engineering. You need to understand evolution and psychology to understand subjects like history, economics, government and religion. You need to know probability and statistics to understand these latter subjects as well. Thus, probability and statistics is as core to learning as reading, writing, and arithmetic.
I took a prob/stat course in high school. Is that good enough?
Probably not. After you took your reading and writing classes in elementary school, did you stop reading and writing or did you start practicing these skills everyday? You continued to use and practice them, right? Did you continue to practice your prob/stat skills? You should.
You're wrong. I've mastered probability and statistics, evolution, and psychology, and there are still subjects I can't find simple explanations for.
I'm not wrong. You just need to look in the right places. You probably won't find simple explanations in school. Schools are in the business of making learning seem complex and expensive. Better places to search for simple explanations: 🌎 Online. Sites like Khan Academy, Ted, Wikipedia and Google. 📚 In books. Browse around Barnes & Noble or Borders. 💁‍♀️ From a friend. Find someone that knows the subject well and ask them to teach you.
Do I need to master the building blocks to be successful in life?
No. But you need to know them if you want to be able to learn any subject.

View source

January 15, 2010 — In computer programming, one of the most oft-repeated mottos is DRY: "Don't Repeat Yourself."

The downside of DRY's popularity is that programmers might start applying the principle to conversations with other humans.

This fails because computers and people are polar opposites.

With computers, you get zero benefit if you repeat yourself. With people, you get zero benefit if you don't repeat yourself!

Four Ways Computers and People are Different

  • A computer's memory is perfect. A computer forgets nothing. Tell it something once, and it will remember it forever. A human remembers almost nothing. I forget what I had for breakfast 2 days ago. I don't remember which people I talked to last week, never mind what was said. If memory were cheese, a computer's would be cheddar and a human's would be swiss. You've got to repeat yourself when communicating with people because people forget.
  • A computer is always paying attention. Computers are perfect listeners. They are always listening to your input and storing it in memory. You, the operator, are the only thing they care about. Computers don't have needs. They don't daydream or have cellphones (yet). People on the other hand, rarely if ever pay full attention. They zone in and out. It's hard to even tell if they're zoned in, as we've all learned it's better to nod our heads. People have their own needs and concerns and opinions. You've got to repeat yourself when communicating with people because people don't pay attention.
  • A computer understands your logic. When you write a program, a computer never misunderstands. It will execute the program exactly as you typed it. People, however, do not communicate so flawlessly. Until I was 22 I used to think "hors d'oeuvres" meant dress nice. I did not understand the pronunciation. One time a friend emailed me about an event and said "Our place. Hors d'oeuvres. 7pm" and I responded "Awesome. Will there be food?" You've got to repeat yourself when communicating with people because people don't understand.
  • A computer doesn't need to know what's most important. Computers don't make decisions on their own and so don't need to know what's most important. A computer will remember everything equally. Then it will sit awaiting your commands. It won't make decisions without you. A person, however, will make decisions without you and so needs to know the order of importance of things. For example, if you're not a fan of peanuts, you might tell the waiter once that you'd prefer the salad without nuts. But if you're deathly allergic to peanuts, you should probably repeat yourself a few times so the waiter knows there better not be any nuts on your salad. You've got to repeat yourself when communicating with people because people need to know what's most important.

A Numeric Explanation

If you tell something to your computer once:

  • The odds the computer remembers: 100%.
  • The odds the computer was paying attention: 100%.
  • The odds the computer understood you: 100%.
  • The odds the computer gets the importance right: 100%.

If you tell something to a person once:

  • The odds the person remembers: 30%?
  • The odds the person was paying attention: 40%?
  • The odds the person understood you: 50%?
  • The odds the person gets the importance right: 30%?

In other words, the odds of communicating perfectly are very low: 1.8%! You are highly likely to run into at least one of those four problems.

Now, if you repeat yourself 1 time, and we assume independence, here's how the probabilities change:

  • The odds the person remembers: 51%
  • The odds the person was paying attention: 64%
  • The odds the person understood you: 75%
  • The odds the person gets the importance right: 51%

By repeating yourself just once you've increased the chances of perfect communication from 1.8% to 12.5%! Repeat yourself one more time and it climbs to nearly 30%. Well, in this simplistic model anyway. But I hope you get the idea.
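
Here's the simplistic model in a few lines of Python, if you want to play with the numbers:

    # Four independent hurdles; telling someone n times fails a hurdle
    # only if all n attempts fail. Per-telling odds are the guesses above.
    hurdles = {"remembers": 0.30, "attention": 0.40,
               "understood": 0.50, "importance": 0.30}

    def perfect_communication(n):
        odds = 1.0
        for p in hurdles.values():
            odds *= 1 - (1 - p) ** n
        return odds

    for n in (1, 2, 3):
        print(n, f"{perfect_communication(n):.1%}")
    # 1 -> 1.8%, 2 -> 12.5%, 3 -> 29.6%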

Repeat yourself until you overcommunicate

To communicate well you should try to overcommunicate. Overcommunicating is hard to do. It's much easier and more common to undercommunicate. If you're not repeating yourself a lot, you're not overcommunicating.

An example of how I repeat myself

On the various projects I'm involved with we use Gmail, Google Docs, Google Wave, Basecamp, Github, Sifter, gChat and Skype. Which one do I prefer?

None of them. I prefer pen, paper, whiteboards and face-to-face meetings. I write down my own todo list and schedule with pen and paper. Then I login to these sites and repeat what I've written down for the sake of repeating myself to other people. This isn't inefficiency; it's good communication.

Some people prefer Google Docs, some prefer Basecamp. I'll post things to both, to ensure everyone knows what I'm working on.

With every new project I repeat a lot of messages and questions to the team. "How many people love this product?", "How can we make this simpler?", "Which of the 7 deadly sins does this appeal to?". I think these are important questions and so I'll repeat them over and over and add them to the todo lists for every project, multiple times.

Notes

  • 1. I've yet to be part of founding a big Internet company, so you don't have to agree with me that repeating yourself is critical to success.

View source

January 14, 2010 — When a problem you are working on forces you to wait, do you wait or switch tasks?

For example, if you are uploading a bunch of new web pages and it's taking a minute, do you almost instinctively open a new website or instant message?

I used to, and it made me less productive. I would try to squeeze more tasks into these short little idle periods, and as a result I would get less done.

Multitasking during idle times seems smart

Doing other things during idle times seems like it would increase productivity. After all, while you're waiting for something to load you're not getting anything done. So doing something else in the interim couldn't hurt, right? Wrong.

Switching tasks during idle times is bad, very bad

While you're solving one problem, you likely are "holding that problem in your head". It takes a while to load that problem in your head. You can only hold one important problem in your head at a time. If you switch tasks, even for a brief moment, you're going to need to spend X minutes "reloading" that problem for what is often only a 30 second vacation to Gmail, Facebook, Gchat, Hackernews, Digg, etc. It's clearly a bad deal.

Don't multitask

If you're doing something worth doing, give it all of your attention until it's done. Don't work on anything else, even if you're given idle time.

Why you can't multitask well

Human intelligence is overrated. Even the smartest people I know still occasionally misplace their keys or burn toast. We are good at following simple tasks when we focus, most of the time. But we are not built for multitasking.

Can you rub your head clockwise? Can you rub your belly counterclockwise? Can you say your ABC's backwards?

Dead simple, right? But can you do all three at once? If you can, by all means ignore my advice and go multitask.

Wait out those idle times

If what you are doing is easy or mundane, multitasking is permissible because loading a simple problem like "laundry" into your head does not take much time. But if what you are doing is important and worth doing, you are obligated to give it your full attention and to wait out those "idle times".

If you switch tasks during your idle times, you're implying that the time to reload the problem is less than the time gained doing something else. In other words, you are implying what you are doing is not worth doing. If that's the case, why work on it at all?
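
To make that tradeoff concrete, here's a back-of-envelope sketch in Python. The numbers are made up; the point is that the reload cost dwarfs a short idle period.

    # Net minutes gained (positive) or lost (negative) by switching tasks during an idle gap.
    def net_minutes(idle_minutes, reload_minutes):
        return idle_minutes - reload_minutes

    print(net_minutes(idle_minutes=1, reload_minutes=10))    # -9: a 1-minute upload costs you 9 minutes
    print(net_minutes(idle_minutes=240, reload_minutes=10))  # 230: a 4-hour wait is worth switching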

Notes

  • 1. Influenced by Paul Graham's Holding a Program in One's Head
  • 2. Of course, if you're given a very long idle time, then feel free to switch tasks. Don't spend 4 hours staring at your screen waiting for a coworker to get back to you.

View source

January 12, 2010 — Whether you're an entrepreneur, a venture capitalist, a casual investor or just a shopper looking for a deal, you should know how to buy low and sell high. Buying low and selling high is not easy, because it requires two things humans are notoriously bad at: long-term planning and emotional control. But if done over a long period of time, buying low and selling high is a surefire way to get rich.

Warren Buffett is perhaps the king of buying low and selling high. These tips are largely regurgitated from his speeches and biographies which I've been reading over the past two years.

Let the market serve you, not instruct you.

Everything has both a price and a value. Price is what you pay for something, value is what you get. The two rarely match. Both can fluctuate wildly depending on a lot of things. For instance, the price of gas can double or triple in a year based on events in the Middle East, but the value of a gallon of gas to you largely remains constant.

Don't let the market ever tell you the value of something--don't let it instruct you. Your job is to start figuring out the intrinsic value of things. Then you can take advantage when the price is far out of whack with the true value of something--you can make the market serve you.

Google's price today is $187 Billion. But what's its value? The average investor assumes the two are highly correlated. Assume the correlation is closer to 0. Make a guess about the true value of something. You may be way off the mark in your ability to estimate value, but honing that skill is imperative.

Be frugal.

You've got to be in a position to take advantage of the market, and if you spend your cash on unnecessary things, you won't be. Buy food in bulk at Costco. Cut your cell phone bill or cancel it altogether. Trim the fat wherever you can. You'd be surprised how little you can live off of and be happy. Read P.T. Barnum's "The Art of Money Getting" for some good perspective on how being frugal has been a key to success for a long time.

Always be able to say "no".

The crazy market will constantly offer you "buy high, sell low" deals. You've got to be able to turn these down. If you don't have good cash flow or a cash cushion, it's very hard. That's why being frugal is so important.

Be Happy

If you're happy with what you have now it's easy to make good deals over the long run. Buying low and selling high requires long term emotional control. If you're unhappy or stressed, it's very hard to make clear headed decisions. Do what you have to do to get happy.

Make the Easy Deals

Out of the tens of thousands of potential deals you can make every month, which ones should you act on? The easy ones. Don't do deals in areas that you don't understand. Do deals where you know the area well. I wouldn't do a deal in commodities, but I'd certainly be willing to invest in early stage tech startups.

Margin of Safety

The easy deals have a wide margin of safety. An easy deal has a lot of upside. An easy deal with a wide margin of safety has little to no downside. Say a company has assets you determine are worth $1 Million and for some reason the company is selling for $950,000. Even if the company didn't grow, it has a margin of safety because its assets alone are worth more than the price you paid.
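
One common way to put a number on this--a sketch in Python using the figures from the example above--is to express the cushion as a fraction of your own value estimate:

    # Margin of safety as a fraction of estimated intrinsic value.
    # "value" is your own estimate, not the market's price.
    def margin_of_safety(value, price):
        return (value - price) / value

    print(margin_of_safety(value=1_000_000, price=950_000))  # 0.05, i.e. a 5% cushion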

Read a lot

How do you find these easy deals? You've got to read a lot. You've got to keep your eyes open. Absorb and think mathematically about a lot of information you encounter in everyday life.

Buy a business

Businesses can be the ultimate thing to buy low and sell high because they have nearly unlimited upside. Real estate, gold, commodities, and the like can perhaps be good investments. But when's the last time you heard of someone's house going up 10,000%? Starting a business can be your best investment ever, as you are guaranteed to buy extremely low, and have the potential to sell extremely high.

View source

January 5, 2010 — Possibly the biggest mistake a web startup can make is to develop in a bubble. This is based on my own experience launching 13 different websites over the past 4 years. The raw numbers:

Type        Count  Successes  Time to launch  Cumulative gross revenues  % of total traffic  Cumulative profits  Emotional toll
Bubble      3      0          Months          <$5,000                    <1%                 -$10,000s           High
Non-bubble  10     5-8        1-14 days       $100,000s                  >99%                Good                None-low

What is "the bubble"?

The bubble is the early, early product development stage. When new people aren't constantly using and falling in love with your product, you're in the bubble. You want to get out of here as fast as possible.

If you haven't launched, you're probably in the bubble. If you're in "stealth mode", you're probably in the bubble. If you're not "launching early and often", you're probably in the bubble. If you're not regularly talking to users/customers, you're probably in the bubble. If there's not a steady uptick in the number of users in love with your product, you're probably in the bubble.

Why you secretly want to stay in the bubble

A part of you always wants to stay in the bubble because leaving is scary. Launching a product and having it flop hurts. You hesitate for the same reason you hesitate before jumping into a pool in New England: sure, sometimes they're heated, but most of the time they're frickin freezing. If the reception to your product is cold, if no one falls in love with it, it's going to hurt.

The danger of the bubble

You can stand at the edge of the pool for as long as you want, but you're just wasting time. Life is too short to waste time.

In addition to wasting time, money and energy in the bubble (which can seem like a huge waste if your product flops), two things happen the longer you stay in the bubble:

  • The marginal return of each additional unit of effort decreases.
  • Expectations increase.

This is a very bad combination that can lead to paralysis. The more you pour into your bubble product, the less impact your additional efforts will have, yet at the same time the more you will expect your product to succeed.

Don't wait any longer: jump in the water, flee the bubble!

How to Flee the Bubble

Here are four easy strategies for leaving the bubble: launch, launch & drop, pick one & launch, or drop.

Launch. Post your product to your blog today. Email your mailing list. Submit it to Reddit or Hackernews or TechCrunch. Just get it out there and see what happens. Maybe it will be a success.

Launch & Drop. Maybe you'll launch it and the feedback will be bad. Look for promising use cases and tweak your product to better fit those. If the feedback is still bad, drop the product and be thankful for the experience you've gained. Move on to the next one.

Pick One & Launch. If your product has been in the bubble too long, chances are it's bloated. Pick one simple feature and launch that. You might be able to code it from scratch in a day or two since you've spent so much time already working on the problem.

Drop. Ideas are for dating, not marrying. Don't ever feel bad for dropping an idea when new data suggests it's not best to keep pursuing it. It's a sign of intelligence.

That's all I've got. But don't take it from me, read the writings of web entrepreneurs who have achieved more success. (And please share what you find or your own experiences on HackerNews).

View source

December 28, 2009 — At our startup, we've practiced a diversification strategy.

We've basically run an idea lab, where we've built around 7 different products. Now we're getting ready to double down on one of these ideas.

The question is, which one?

Here's a 10 question form that you can fill out for each of your products.

Product Checklist

  • 1. How many users/customers does the product have?
  • 2. What percentage of these users/customers would be disappointed if this product disappeared tomorrow?
  • 3. Explain the product in one sentence:
  • 4. Is the product good/honest? Yes
  • 5. What is the predicted customer acquisition cost? $
  • 6. What is the predicted average lifetime value per customer? $
  • 7. Which of the 7 deadly sins does the product appeal to? Lust Greed Sloth Gluttony Pride Envy Wrath
  • 8. What's the go to market strategy in one sentence?
  • 9. What resources do you need to do this?
  • 10. What's the total addressable market size? people $

View source

2021 Update: I think the model and advice presented here are weak and that this post is not worth reading. I keep it up for the log, and not for the advice and analysis provided.

December 24, 2009 — Over the past 6 months, our startup has taken two approaches to diversification. We initially tried no diversification and then we tried heavy diversification.

In brief, my advice is:

Diversify heavily early. Then focus.

In the early stages of your startup, put no more than 33% of your resources into any one idea. When you've hit upon an idea that you're excited about and that has product/market fit, then switch and put 80% or more of your resources into that idea.

How Startups Diversify

An investor diversifies when they put money into different investments. For example, an investor might put some money into stocks, some into bonds, and some into commodities. If one of these investments nosedives, they won't lose all their money. They also have better odds of picking some investments that generate good returns. The downside is that although diversification reduces the odds of a terrible outcome, it also reduces the odds of a great outcome.

A startup diversifies when it puts resources into different products. For example, a web startup might develop a search engine and an email service at the same time and hope that one does very well.

The 4 Benefits of Diversification for Startups

There are 4 main benefits to diversifying:

  1. Better odds. Creating multiple products increases the odds of finding a great idea in a great market. The Internet provides very fast feedback about whether you've found one. After building your team, the next big thing to decide is what product to focus on. You should not choose one until you've built a product you're excited about and found product/market fit. You've found product/market fit when about 40% of your customers think your product is a must have. (A quick sketch of this arithmetic follows this list.)
  2. Builds individual skills. Entrepreneurs need broad skillsets. Trying multiple products forces you to learn new skills. You may build a consumer video site and improve your technical scaling skills while at the same time trying a B2B site and improving your sales skills.
  3. Builds team skills. Doing multiple products gives you plenty of opportunities to interact with your team in varied situations. You'll learn faster what your teammates' strengths and weaknesses are. You'll also be forced to improve your team communication, coordination, delegation and product management skills.
  4. It's fun. Let's be honest, the early stages of working on a new problem or idea are oftentimes the most stimulating and exciting. Instead of focusing on one product day in and day out that might or might not work, trying multiple ideas keeps your brain going and your enthusiasm high.
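
Here's the sketch promised in benefit 1, in Python. The 10% per-idea hit rate is an assumption pulled out of thin air, just to show the shape of the math:

    # Odds that at least one of k independent product ideas is a hit.
    def at_least_one_hit(p, k):
        return 1 - (1 - p) ** k

    print(round(at_least_one_hit(p=0.1, k=1), 2))  # 0.1: one idea, one-in-ten odds
    print(round(at_least_one_hit(p=0.1, k=7), 2))  # 0.52: seven ideas, roughly a coin flip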

When to Focus

If diversifying has so many benefits, should you ever stop? Yes, you should.

Focus when you are ready to make money.

Coming up with new ideas and building new, simple products is the easy part of startups. Unfortunately, developing new solutions is not what creates a lot of value for other people. Bringing your solution to other people is when most value is created--and exchanged.

Imagine you're a telecom company and you build a fiber optic network on the streets of every city in America--but fail to connect people's homes to the new system. Although connecting each home can be hard and tedious, without this step no value is created and no money will come your way.

When you hear the phrase "execution is everything", this is what it refers to. If you want to make money, and you've got a great team and found product/market fit, you've then got to focus and execute. Drop your other products and hunker down. Fix all the bugs in your main product. Really get to know your customers. Identify your markets and the order in which you'll go after them. Hire great people that have skills you are going to need.

Benefits of Focusing

Let's recap the benefits of focusing.

  1. Money. Creating new products in the early days is fun, but making money is fun too. Once you start focusing on growing one product, the money incentive will keep you motivated and spirits high.
  2. Rewarding. Creating value for other people is perhaps the most rewarding feeling in life. Finding people with a problem, and getting your solution which solves their problem into their hands, is even better than the money you earn. You'll also create valuable jobs for your employees.
  3. Resources. If you execute well, you'll end up with resources that you can use to put diversification back into the picture. For instance, after bringing better search to almost the whole world, Google can now diversify and create better email systems, web browsers, maps, etc.

Benefits of the "Diversify Early, Then Focus" approach: A Roulette Analogy

When you first begin your startup it's very similar to playing roulette. You plunk down some resources on an idea and then the wheel spins and you win more money or lose the money that you bet.

In roulette, you can bet it all on one number (focusing) or bet a smaller amount on multiple numbers (diversifying). If you bet it all on one number and win, you get paid a lot more money. But you're also more likely to lose it all.

The "game of startups" though, has two very important differences:

  1. You get more information after the game starts "spinning".
  2. You can always move your bets around.

You get way more information about the odds of an idea "hitting the jackpot" after you've plunked some time and money into it. You may find customers don't really have as big a problem as you thought. Or that the market that has this problem is much smaller than you thought. You may find one idea you thought was silly actually solves a big problem for people and is wildly popular.

You can then adjust your bets. If your new info leads you to believe that this idea has a much higher chance of hitting the jackpot, grab your resources from the other ideas and plunk them all down on this one. Or vice versa.

Don't Take My Word for It

Sadly, I bet there are paperboys whose businesses have done better than all of mine to date, so take my advice with a grain of salt.

But if you want to learn more, I suggest reading the early histories of companies such as eBay, Twitter, and Facebook and see what their founders were up to before they founded those sites and in the following early period.

And check back here; I'll hopefully be sharing how this approach worked for us.

Notes

  1. Fun tidbit: I wrote this on paper then typed it up and posted it all while flying on Virgin Air from SFO back to Boston. Thanks for the free wifi Google!
  2. Thanks to Ben for helping me form my ideas on this issue.

View source

December 23, 2009 — It is better to set small, meaningful goals than to set wild, audacious goals.

Here's one way to set goals:

Make them good. Make them small.

Make them Good

Good goals create value. Some examples:

  • Make a customer smile.
  • Teach someone math.
  • Learn how to cook.
  • Organize weather information.

Make them Small

Start small. It is better to set one or two goals per time period than to set two dozen goals. Instead of a goal like "get 1,000,000 people to your website", start with a smaller goal like "get 10 people to your website."

If you exceed a goal and still think it's a good thing, raise the goal an order of magnitude. If you get those 10 visitors, aim for 100.

Why Small Goals Are Better

Setting smaller goals is better because:

  • It feels good when you exceed a goal. Occasionally you'll wildly exceed a goal and that will feel great.
  • It's better to do a few small good things, than to fail trying one audacious thing.
  • It's easier to accomplish an audacious thing by going one step (one order of magnitude) at a time.
  • It's less stressful and makes you happier. Low expectations are good because in most cases you will exceed them and feel happy. High expectations, by definition, are bad because in most cases you will not meet them and feel bad.
  • Goals are arbitrary anyway. All goals are simply arbitrary constraints that help you focus--often with a team--to get stuff done. So since they're arbitrary, and as long as they're good goals, might as well make them simpler and easier.

Setting Ranges

Another way to set goals is to use ranges. Set a low bar and a high bar. For example, your weekly goals might be:

Low bar  High bar  What
2        7         new customers
2        4         product improvements
1        3         blog posts

If you exceed your low bar, you can be happy. If you exceed your high bar, you can be very happy.

View source

December 20, 2009 — Programming, ultimately, is about solving problems. Often I make the mistake of judging a programmer's work by the elegance of the code. Although the solution is important, what's even more important is the problem being solved.

Problems are not all created equal, so while programming you should occasionally ask yourself, "is this problem worth solving?"

Here's one rubric you can use to test whether a problem is worth solving:

  1. Simplicity. Can you envision a simple solution to the problem? Can you create at least a partial, meaningful solution or prototype in a short period of time? Building a flying car would solve a lot of my transportation problems, but I don't see a simple path to getting there. Don't be too far ahead of your time. Focus on more immediate problems.
  2. Value. Would solving this problem create value? Sometimes it's hard to predict in advance whether or not your solution would create value for people. The easiest way to tell if you've succeeded is if anyone would be disappointed if your solution were to disappear. If you can get a first prototype into people's hands early, you'll find out quickly whether or not you are building a solution to a problem that creates value.
  3. Reach. Do a lot of people have this problem? Some problems, like searching for information, are shared by nearly everyone. Others, like online version control, are shared by a much smaller niche but still a significant number of people. If a problem is shared by only a handful of people, it's probably not worth programming a solution.

Great Programmers Solve Important Problems

The best programmers aren't simply the ones that write the best solutions: they're the ones that solve the best problems. The best programmers write kernels that allow billions of people to run other software, write highly reliable code that puts astronauts into space, write crawlers and indexers that organize the world's information. They make the right choices not only about how to solve a problem, but what problem to solve.

Life is Short

Life is too short to solve unimportant problems. If you want to solve important problems, it's now or never. The greatest programmers only get to solve a relatively small number of truly important problems. The sooner you get started working on those, the better.

Ignore Speed Limits

If you don't have the skills yet to solve important problems, reach out to those who do. To solve important problems, you need to develop a strong skill set. But you can do this much faster than you think. If you commit to solving important problems and then reach out to more committed programmers than you, I'm sure you'll find many of them willing to help speed you along your learning curve.

View source

December 16, 2009 — If you combine Paul Graham's "make something people want" advice with Sean Ellis' product-market fit advice (you have product-market fit when you survey your users and at least 40% of them would be disappointed if your product disappeared tomorrow), you end up with a possibly even simpler, more specific piece of advice:

Make something 40% of your users must have

Your steps are then:

  • 1. Make something people want.
  • 2. Put it out there.
  • 3. Survey your users. If less than 40% would be disappointed if your product disappeared, go back to step 1.

Only when you hit that 40% number (or something in that range) should you be comfortable that you've really made something people want.
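
If you like seeing rules as code, here's a trivial Python formalization of that survey test. The function name and sample numbers are mine, not Sean Ellis':

    # Sean Ellis' test: the share of surveyed users who would be
    # disappointed if the product disappeared tomorrow.
    def has_product_market_fit(disappointed, respondents, threshold=0.40):
        return disappointed / respondents >= threshold

    print(has_product_market_fit(disappointed=12, respondents=100))  # False: keep iterating
    print(has_product_market_fit(disappointed=44, respondents=100))  # True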

Does this advice work? I think it would, for 3 reasons.

#1 The Sources

PG and Sean Ellis know what they're talking about.

#2 Companies that make my "Must Haves" are successful

I made a list of my "must have" products and they are all largely successful. I suggest you try this too. It's a good exercise.

My List of Must Haves:

  • Google Search
  • Facebook
  • Gmail
  • Dropbox
  • craigslist
  • Windows
  • Excel
  • Twitter Search
  • Firefox
  • Chrome
  • Wikipedia
  • Amazon
  • (More Technical Products)
  • Git + Github
  • LAMP Stack
  • Ruby
  • Notepad++
  • Vim
  • jQuery
  • Firebug
  • Web Developers Extension
  • StackOverflow
  • TechCrunch
  • HackerNews
  • Navicat

#3 The Only "Must Have" Product I Built was the Biggest Success

I've worked on a number of products over the past 3 years.

One of them I can tell you had a "I'd be disappointed if this disappeared" rate of over 40%. We sold that site.

All the others did not have that same "must-have" rate. We launched Jobpic this summer at Demo Day. People definitely wanted it. But we didn't get good product/market fit. If we had surveyed our users, I bet less than 10% of them would report being disappointed if Jobpic disappeared. Our options are to change the product to achieve better product/market fit, or go forward with an entirely new product that will be a must have.

Concluding thoughts

I don't know if this advice will work. But I'm going to try it.

Startup advice can be both exhilarating and demoralizing.

On the plus side, good advice can drastically help you. At the same time, if it's really good advice that means two things:

  • 1. This is how you should be doing things.
  • 2. You were not doing things this way.

That can be frustrating. I've spent a few years now in the space and to realize you've been doing certain things wrong for a few years is...well...painful.

But you laugh it off and keep chugging along.

Notes

  • Thanks Nivi for the great Venture Hacks interview!
  • Users/Customers refer to people who use your site regularly or buy from you. This is not "visitors". Generally a much lower percentage than 40% of visitors become users or customers. The 40% refers to the people who have made it through your funnel and have become users or customers.
  • I used customers and users interchangeably. For non-tech businesses, you can just use "customer" each time.
  • Thanks to Ben, Alex Andon, and Andrew Kitchell for feedback.
  • Another piece of startup advice that didn't "click" until recently: Roelof Botha's 7 deadly sins advice.

View source

December 15, 2009 — The best Search Engine Optimization (SEO) system I've come across comes from Dennis Goedegebuure, SEO manager at eBay. Dennis' system is called LUMPS. It makes SEO dead simple.

Just remember LUMPS:

  • Links
  • URLs
  • Metadata
  • Page Content
  • Sitemaps

These are the things you need to focus on in order to improve your SEO. You should also, of course, first know what terms you want to rank highly for.

LUMPS is listed in order of importance to search engines. So links are most important, sitemaps are least important.

Let's break each one down a bit more.

Links

External links--links from domains other than your own--are most important. For external links, focus on 3 things, again listed in order of importance:

  1. Quality. A link from CNN.com is worth order(s) of magnitude more than a link from my blog. A link from a related source, like from ESPN.com to a sports blog, would likely be better than from an unrelated source.
  2. Quantity. Even though quality is most important, a lot of inbound links help.
  3. Anchor Text. You want links with relevant anchor text. "Jellyfish tanks" is better than "click here".

Your internal link structure is also important. Make sure your site repeatedly links to the pages you are optimizing for.

External links are the most important thing you need for SEO. Internal links you can easily control, but it takes time to accumulate a lot of quality external links. Focus on creating quality content (or even better, build a User Generated Content site). People will link to interesting content.

URL Structure

The terms you are optimizing for should be in your urls. It's even better if they are in your domain. For instance, if I'm optimizing for "breck yunits", I've done a good job by having the domain name breckyunits.com. If I'm optimizing for the term "seo made easy", ideally I'd have that domain. But I don't, so having breckyunits.com/seomadeeasy is the next best thing.

Luckily, URL Structure is not just important, it's also relatively easy to do well and you can generally set up friendly URLs in an hour or so. I could explain how to do it with .htaccess and so forth, but there are plenty of articles out there with more details on that.
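
For what it's worth, here's a minimal Python sketch of one piece of such a system: turning a post title into a keyword-friendly URL slug. The helper is hypothetical, not a prescription; hyphens keep the individual keywords visible to search engines.

    # Turn a title into a URL-friendly slug.
    import re

    def slugify(title):
        slug = title.lower()
        slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse spaces/punctuation into hyphens
        return slug.strip("-")

    print(slugify("SEO Made Easy"))  # "seo-made-easy"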

Metadata Content

Your TITLE tags and META DESCRIPTION tags are important for 2 reasons. First, search engines will use the content in them to rank your pages. Second, when a user sees a search results page, the title and description tags are what the user sees. You need good copy that will increase the Click Through Rate. Think of your title and description tags as the Link Text and Description in an AdWords ad. Just as you'd optimize the AdWords ad, you need to optimize this "seo ad". Make the copy compelling and clear.

Like URL structure, you can generally set up a system that generates good title and description tags relatively easily.

Page Content

Content is king. If you've got the other 3 things taken care of and you have great content, you're golden. Not only will great content please your visitors, but it will likely be keyword rich which helps with SEO. Most importantly, it is much easier to get links to valuable, interesting content than to bad content. Figure out a way to get great content and the whole SEO process will work a lot better.

Sitemaps

Sitemaps are not the most crucial thing you can do, but they help and are an easy thing to check off your list. Use Google Webmaster tools and follow all recommendations and submit links to your sitemaps.

Summary

There you have it, SEO made easy! Just remember LUMPS.

View source

December 13, 2009 — Do you "flip the bozo bit" on people?

If you don't know what that means, you probably do it unknowingly!

What it means

When you "flip the bozo bit" on someone you ignore everything they say or do. You flip the bozo bit on a person when they are wrong or make a mistake over and over again. Usually you flip the bozo bit unconsciously.

An example

You are writing a program with Bob. Bob constantly writes buggy code. You get frustrated by Bob's bugs and slowly start ignoring all the code he submits and start writing everything yourself. You've flipped the bozo bit!

This is bad for everyone. Now you are doing more work, and Bob is becoming resentful because you are ignoring his ideas and work.

Alternatives to Flipping the Bozo Bit

Instead of flipping the bozo bit, perhaps you could work with another person. If that's not possible, take a more constructive approach:

  1. Teach. Talk to Bob and figure out why he is making repeated mistakes. We all have large gaps in our education. If you've never been exposed to a concept, there's no reason why you should understand it. Try and find what it is Bob hasn't been exposed to yet, and help him learn it.
  2. Change Roles. Maybe Bob should be working in another area. Find an area where you're the bozo and Bob's the expert. Let him work in that area, while you work in your area. He can even explain a thing or two to you.

Why We Flip the Bozo Bit

It seems like a simple evolutionary trick to save time. If someone is right only 10% of the time, would it be faster to ignore every statement they made, or faster to analyze each statement carefully in case it's the 1 out of 10 statements that might be true? Seems like it would be faster to just ignore everything by flipping the bozo bit.

But this is a bad solution. The two alternatives presented above are better.

Notes

  1. Thanks to Tom Price for telling me about this.

View source

December 11, 2009 — Jason Fried from 37signals gave a great talk at startup school last month. At one point he said "software has no edges." He took a normal, everyday bottle of water and pointed out 3 features:

  • 1. The bottle held the water.
  • 2. The lightweight plastic made it easy to carry, and you could tell how full it was by picking it up.
  • 3. The clear bottle let you see how much was left and what was in it.

If you added a funnel to help pour the water, that might be useful in 5% of cases, but it would look a little funny. Then imagine you attach a paper towel to each funnel for when you spill. Your simple water bottle is now a monstrosity.

The clear edges of physical products make it much harder for feature creep to happen. But in software feature creep happens, and happens a lot.

A proposal to fight feature creep

How do you fight feature creep in software? Here's an idea: do not put each new feature request or idea on a to-do list. Instead, put them on an (un)features list.

An (un)features list is a list of features you've consciously decided not to implement. It's a well maintained list of things that might seem cool, but would detract from the core product. You thought about implementing each one, but after careful consideration decided it should be an (un)feature and not a feature. Your (un)features list will also include features you built but that only 1% of your customers used. You can "deadpool" these features to the (un)features list. Your (un)features list should get as much thought as your features list, if not more. It should almost certainly be bigger.

When you have an idea or receive a feature request, there's a physical, OCD-like urge to do something with it. Now, instead of building it or putting it on a todo list, you can simply write it down on your (un)features list, and be done with it. Then maybe your water bottles will look more like water bottles.

This blog is powered by software with an (un)features list.

Notes

  • 1. Feel free to move an (un)feature to your features list if you change your mind about it.

Edit: 01/05/2010. Features are a great way to make money.

View source

December 10, 2009 — Employees and students receive deadlines, due dates, goals, guidelines, instructions and milestones from their bosses and teachers. I call these "arbitrary constraints".

Does it really matter if you learn about the American Revolution by Friday? No. Is there a good reason why you must increase your sales this month by 10%, versus say 5% or 15%? No. Does it really matter if you get a 4.0 GPA? No.

But these constraints are valuable, despite the fact that they are arbitrary. They help you get things done.

Constraints Help You Focus

Constraints, whether meaningful or not, simplify things and help you focus. We are simple creatures. Even the smartest amongst us need simple directions: green means go, red means stop, yellow means step on it. Even if April 15th is an arbitrary day to have your tax return filed, it is a simple constraint that gets people acting.

Successful People Constantly Set Constraints

Successful people are good at getting things done. They focus well. Oftentimes they focus on relatively meaningless constraints. But they meet those constraints, however arbitrary. By meeting a lot of constraints, in the long run they hit enough of those non-arbitrary constraints to achieve success. Google is known for its "OKRs"--objectives and key results--basically a set of arbitrary constraints that each employee sets and tries to hit.

Entrepreneurs Must Set Their Own Constraints

If you start a company, there are no teachers or bosses to set these constraints for you. This is a blessing and a curse. It's a blessing because you get to choose constraints that are more meaningful to you and your interests. It's a curse because if you don't set these constraints, you can get fuddled. Being unfocused, at times, can be very beneficial. Having unfocused time is a great way to learn new things and come up with new ideas. However, to get things done you need to be focused. And the first step to get focused is to set some arbitrary constraints.

A Specific Example

Here are some specific constraints I set in the past week:

  1. Write 1 blog post per day.
  2. Create blogging software in under 100 lines of code.
  3. Have version 0.2 of blogging software done by 5pm yesterday.

All of these are mostly arbitrary. And I have not met all of them. But setting them has helped me focus.

When You Don't Meet Your Constraints

If you don't meet your constraints, it's no big deal. They're largely arbitrary anyway. Even by just trying to meet your constraints, you learn a lot more. You are forced to think critically about what you are doing.

When you don't meet some constraints, set new ones. Because you now have more experience, the new ones might be less arbitrary.

But the important thing is just having constraints in the first place.

View source

December 9, 2009 — A lot of people have the idea that maybe one day they'll become rich and famous and then write a book about it. That's probably because it seems like the first thing people do after becoming rich and famous is write a book about it.

But you don't have to wait until you're rich and famous to write a book about your experiences and ideas.

A few months ago I was talking to another MBA student, a very talented man, about 30 years old from a great school with a great resume. I asked him what he wanted to do for his career, and he replied that he wanted to go into a particular field, but thought he should work for McKinsey for a few years first to add to his resume. To me that's like saving sex for your old age. It makes no sense. -- Warren Buffett

Likewise, saving blogging for your old age makes no sense. There are two selfless reasons why you should start blogging now:

  • 1. You may enlighten someone.
  • 2. Sharing your experiences adds another data point to our collective knowledge and makes us all better off.

It used to take a lot of work to publish something. Now it is simpler than brushing your teeth. So publish, write, blog!

If you need some selfish reasons, here are 5:

  • 1. Writing is good exercise for the brain and gives you "writer's high".
  • 2. Blogging makes you a better writer.
  • 3. When your blog gets traffic, it stokes your ego.
  • 4. You may spark interesting conversations with interesting people.
  • 5. In rare circumstances, you may make money.

Blogging. Don't save it for your old age.

View source

December 8, 2009 — Finding experienced mentors and peers might be the most important thing you can do if you want to become a great programmer. They will tell you what books to read, explain the pros and cons of different languages, demystify anything that seems to you like "magic", help you when you get in a jam, work alongside you to produce great things people want, and challenge you to reach new heights.

Great coders travel in packs, just like great authors.

If you want to reach the skills of a Linus, Blake, Joe, Paul, David, etc., you have to build yourself a group of peers and mentors that will instruct, inspire, and challenge.

Here are 6 specific tips to do that.

  1. Get a programming job. This is probably the best thing you can do. You'll get paid to "practice". You'll work on things that will challenge you and help you grow. And you'll have peers who will provide instruction and motivation constantly. There are tens of thousands of open programming jobs right now. Even if you feel you are not qualified for one, apply anyway, and stress that you are smart and passionate and that experience will come with time. If you don't get a programming job today, you can reapply in 6 months or 1 year when you have better skills. Here are six job sites to check out: Craigslist (Computer Gigs, Internet Engineers, Software, Systems, Web Design), StackOverflow, CrunchBoard, HackerNews, Reddit, Startuply.
  2. Take a programming class. My best tutors are my peers. People who I took a class or two with in college. We knew each other when computers were a big mystery to us, so we don't feel embarrassed when we ask questions that may sound dumb. If you're currently in college, enroll in a programming class. Otherwise, look at local colleges' continuing education programs, community colleges, or professional classes. If you're in San Francisco, maybe look at AcademyX. Give unclasses.com a try. If you think classes cost too much, don't use that as an excuse until you've tried to negotiate a deal. Often someone will give you a class for free or at a greatly reduced price if you simply explain your situation. Other times maybe you can offer a service in return.
  3. Attend a Meetup. I go to PHP and MySQL meetups frequently. Meetup.com has thousands of programming meetups throughout the country. Go to one. Every month. You'll learn from the speaker, you'll meet other programmers, and you'll meet recruiters who will try to hire you if you still haven't gotten that job.
  4. Join Github. Github is the first user friendly collaborative development site for programmers. Once you get comfortable with it, you could be working alongside other programmers on open source projects in no time. I'll write a better tutorial on how to get started soon, but for now, just join and explore around. It may take you a month or two to "get it", so don't feel overwhelmed if you don't understand what's going on at first. You will eventually. And you'll start to find some great programmers to talk to.
  5. Email Someone Directly. Email has been around for 35 years and it's still the favorite mode of communication for programmers. If you like someone's work, send them an email and ask for 1 or 2 tips. I've found when I email great programmers, their responses are usually short and to the point. That's not because they don't want to help, it's just that they're busy and use time effectively. Keep your emails brief and specific and they can be of great aid.
  6. Enlist a Friend. If you exercise with someone else, you burn 50% more calories on average. Likewise, if you learn programming with a friend, you'll learn 50% faster. That's a significant time savings. It's also more fun. You must have a friend who shares your interest in programming. Why not suggest that you get serious about learning it together?

Hopefully you'll find some of these tips useful. Feel free to email me if you need a first mentor (breck7 at google's email service). I'm not very good yet, but I may be able to help.

Notes

  1. That exercise percentage is a guess, but sounds right to me.

View source

December 7, 2009 — Do you think in Orders of Magnitude? You should.

If you think in orders of magnitude you can quickly visualize how big a number is and how much effort it would take to reach it.

Orders of magnitude is a way of grouping numbers. The numbers 5, 8 and 11 are all in the same order of magnitude. The numbers 95, 98 and 109 are in the same order of magnitude as well, but their order of magnitude is one order of magnitude greater than 5, 8, 11.

Basically, if you multiply a number by 10, you raise it one order of magnitude. If you've ever seen the scary looking notation 5x10^2, just take the number five and raise it 2 orders of magnitude (to 500).

Think of orders of magnitude as rough approximations. If you want the number 50 to be in the same order of magnitude as the number 10, you can say that "it's roughly in the same order of magnitude" or that "it's about half an order of magnitude bigger". Don't worry about being exact.

Orders of magnitude is a great system because generally there's a huge difference between 2 numbers in different orders of magnitude. Thus to cross from one order of magnitude to the next, a different type of effort is required than to simply increment a number. For example, if you run 2 miles each day and then decide to run one more, 3 total, it should be easy. But if you decided to run one more order of magnitude, 20 miles, it would take a totally new kind of effort. You'd have to train longer, eat differently, and so forth. To go from 2 to 3 requires a simple approach, just increase what you're doing a bit. To go from 2 to 20, to increase by an order of magnitude, requires a totally different kind of effort.

A Business Example

Let's do a business example.

Pretend you started a business delivering pizza. Today you have five customers, make 5 pizzas a week, and earn $50 revenue per week.

You can keep doing what you're doing and slowly raise that to 6 customers, then 7 and so on. Or you can ask yourself, "How can I increase my business an order of magnitude?"

Going from 5 to 50 will take a different type of effort than just going from 5 to 6. You may start advertising or you might create a "Refer a Customer, get a free pizza" promotion. You might have to hire a cook. Maybe lower your price by $2.

Imagine you do all those things and now have 50 customers. How do you get to 500?

Now you might need a few employees, television advertisements, etc.

Growing a business is the process of focusing like a laser on the steps needed to reach the next order of magnitude.

Here are some more examples of orders of magnitude if it's still not clear:

Bill Gates has approximately $50,000,000,000. Warren Buffett has $40,000,000,000. For Warren to match Bill, he merely has to make a few more great investments and hope Microsoft's stock price doesn't go up. He does not have to increase his wealth an order of magnitude. I on the other hand, have $5 (it was a good month). For me to become as rich as BillG, I have to increase my wealth 10 orders of magnitude. That means that I'd have 10 different types of hard challenges to overcome to match BillG's wealth.

  • Going from $5 to $50 may mean just working a bit and could be accomplished in a day.
  • Going from $50 to $500 would mean working a few days.
  • Going from $500 to $5,000 might mean getting a job that pays more.
  • Going from $5,000 to $50,000 would mean getting a job that pays more, saving more, and doing that for a longer period.
  • Going from $50,000 to $500,000 might mean doing all that, plus making some good investments.
  • ... and so forth.

More examples

  • If your room is 200 square feet, the world is 13 orders of magnitude greater than your room.
  • Google indexes 10,000,000,000 pages. This site is 10 pages. There are 9 orders of magnitude more pages in the Google Index.
  • Facebook has 350 million users. Dropbox has 3 million. Facebook has 2 orders of magnitude more users.
  • The population of California is about 35 million. The population of the US is one order of magnitude bigger, about 300 million. The population of China is about 4 times that of the U.S. at 1,300,000,000, which is less than an order of magnitude difference.
  • Shaq is about half an order of magnitude taller than a newborn, but beyond that, height is much more narrowly distributed. Everyone is within the same order of magnitude tall.
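
If you want to compute the gap between any two quantities in the examples above, the order-of-magnitude difference is just the difference of their base-10 logarithms. A quick sketch in Python:

    # Orders of magnitude between two quantities.
    import math

    def orders_of_magnitude(smaller, bigger):
        return math.log10(bigger) - math.log10(smaller)

    print(round(orders_of_magnitude(5, 50_000_000_000), 1))  # 10.0: my $5 vs. BillG's $50B
    print(round(orders_of_magnitude(3e6, 350e6), 1))         # 2.1: Dropbox vs. Facebook users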

View source

December 6, 2009 — Imagine you are eating dinner with 9 friends and you all agree to play Credit Card Roulette. Credit Card Roulette is a game where everyone puts their credit card in a pile and the server randomly chooses one and charges the whole meal to it.

Imagine you are playing this game with your own friends. Pause for a second and picture it happening.

...

What did you see?

I bet you saw one person's card get picked and that person was sad and everyone else laughed.

Wrong!

That is not what really happened! Despite the fact that you observed only one person's card getting picked, in reality everyone's card got chosen.

In reality, when you played the game, the world split into 10 paths, and every person's card got picked in one of those paths. You only observed one path, but trust me, there were 9 others.

This is a simple example of the many worlds law. You probably were not taught the many worlds law in school, which is a shame. It's one of the most important laws in the world.

Notes

  • 1. I love this game because I don't have a credit card
  • 2. You can benefit greatly from understanding the many worlds law. Take solace in the fact that somewhere out there you won the lottery and are drinking a pina colada on your private island right now.
  • 3. The many worlds law could very well be wrong. There could be just 1 world. There could be 42. I don't think about that much. There may be no god, but betting there is one has benefits.
  • 4. I called it the many worlds "law" because I don't want to use the word theory or hypothesis. Theory and hypothesis are too linked in people's minds with uncertainty, and some ideas, like evolution and many worlds, have way too much supporting evidence to leave any room for uncertainty, in as much as we can be uncertain about something.
  • 5. One of the most important laws in most worlds, anyway.

View source

December 4, 2009 — Do you want to become a great coder? Do you have a passion for computers but not a thorough understanding of them? If so, this post is for you.

Saying #1: 10,000 Hours

There is a saying that it takes 10,000 hours of doing something to master it.

So, to master programming, it might take you 10,000 hours of actively coding or thinking about coding. That translates to a consistent effort spread out over a number of years.

Saying #2: No Speed Limit

There is another saying, which I just read and which inspired me to write this post: "there is no speed limit".

In that post, Derek Sivers claims that a talented and generous guy named Kimo Williams taught him 2 years worth of music theory in five lessons. I have been learning to program for 2 years, and despite the fact that I've made great progress, my process has been slow and inefficient.

I did not have a Kimo Williams. But now that I know a bit, I'll try and emulate him and help you learn faster by sharing my top 12 lessons.

I'll provide the tips first, then if you're curious, a little bit more history about my own process.

The 12 Tips

  1. Get started. Do not feel bad that you are not an expert programmer yet. In 10,000 hours, you will be. All you need to do is start. Dedicate some time each day or week to checking things off this list. You can take as long as you want or move as fast as you want. If you've decided to become a great programmer, you've already accomplished the hardest part: planting the seed. Now you just have to add time and your skills will blossom. If you need any help with any of these steps, feel free to email me and I'll do my best to help.
  2. Don't worry. Do not be intimidated by how much you don't understand. Computers are still largely magic even to me. We all know that computers are fundamentally about 1s and 0s, but what the hell does that really mean? It took me a long time to figure it out--it has something to do with voltages and transistors. There are endless topics in computer science and endless terms that you won't understand. But if you stick with it, eventually almost everything will be demystified. So don't waste time or get stressed worrying about what you don't know. It will come, trust me. Remember, every great programmer at one time had NO IDEA what assembly was, or a compiler, or a pointer, or a class, or a closure, or a transistor. Many of them still don't! That's part of the fun of this subject--you'll always be learning.
  3. Silicon Valley. Simply by moving to Silicon Valley, you have at least: 10x as many programmers to talk to, 10x as many programming job opportunities, 10x as many programming meetups, and so on. You don't have to do this, but it will make you move much faster. The first year of my programming career was in Boston. The second year was in San Francisco. I have learned at a much faster pace my second year.
  4. Read books. In December of 2007 I spent a few hundred dollars on programming books. I bought like 20 of them because I had no idea where to begin. I felt guilty spending so much money on books back then. Looking back, it was worth it hundreds of times over. You will read and learn more from a good $30 paperback book than from dozens of free blogs. I could probably explain why, but it's not even worth it. The data is so very clear from my experience that trying to explain why is like trying to explain why pizza tastes better than broccoli: I'm sure there are reasons, but just try pizza and you'll agree with me.
  5. Get mentors. I used to create websites for small businesses. Sometimes my clients would want something I didn't know how to do, simple things back then like forms. I used to search Google for the answers, and if I couldn't find them, I'd panic! Don't do that. When you get in over your head, ping mentors. They don't mind, trust me. Something that you'll spend 5 hours panicking to learn will take them 2 minutes to explain to you. If you don't know any good coders, feel free to use me as your first mentor.
  6. Object Oriented. This is the "language" the world codes in. Just as businessmen communicate primarily in English, coders communicate primarily in Object Oriented terms. Terms like classes and instances and inheritance. They were completely, completely, completely foreign and scary to me. They'd make me sick to my stomach. Then I read a good book (Object Oriented PHP, Peter Lavin), slowly practiced the techniques, and now I totally get it. Now I can communicate and work with other programmers. (See the short sketch after this list for what those terms look like in practice.)
  7. Publish code. If you keep a private journal and write the sentence "The car green is," you may keep writing that hundreds of times without realizing it's bad grammar, until you happen to come upon the correct way of doing things. If you write that in an email, someone will instantly correct you and you probably won't make the mistake again. You can speed up your learning 1-2 orders of magnitude by sharing your work with others. It's embarrassing to make mistakes, but the only way to become great is to trudge through the foul-smelling swamp of embarrassment.
  8. Use github. The term version control used to scare the hell out of me. Heck, it still can be pretty cryptic. But version control is crucial to becoming a great programmer. Every other developer uses it, and you can't become a great programmer by coding alone, so you'll have to start using it. Luckily, you're learning during an ideal time. Github has made learning and using version control much easier. Also, Dropbox is a great tool that your mom could use and yet that has some of the powerful sharing and version control features of something like git.
  9. Treat yourself. Build things you think are cool. Build stuff you want to use. It's more fun to work on something you are interested in. Programming is like cooking: you don't know if what you make is good until you taste it. If something you cook tastes like dog food, how will you know unless you taste it? Build things you are going to consume yourself and you'll be more interested in making them taste good.
  10. Write English. Code is surprisingly more like English than like math. Great code is easy to read. In great code, functions, files, classes and variables are named well. Comments, when needed, are concise and helpful. In great code the language and vocabulary are not elitist: it is easy for the layman to understand.
  11. Be prolific. You don't paint the Mona Lisa by spending 5 years working on 1 piece. You create the Mona Lisa by painting 1000 different works; one of them eventually happens to be the Mona Lisa. Write web apps, iPhone apps, Javascript apps, desktop apps, command line tools: as many things as you want. Start a small new project every week or even every day. You eventually have to strike a balance between quantity and quality, but when you are young the goal should be quantity. Quality will come in time.
  12. Learn Linux. The command line is not user friendly. It will take time and lots of repetition to learn it. But again, it's what the world uses; you'll need at least a basic grasp of the command line to become a great programmer. When you get good at the command line, it's actually pretty damn cool. You'll appreciate how much of what we depend on today was written over the course of a few decades. And you'll be amazed at how much you can do from the command line. If you use Windows, get CYGWIN! I just found it a few months ago, and it is much easier and faster than running virtualized Linux instances.
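
Here's the sketch promised in tip 6: a minimal Python example, not from the book, just to show what the vocabulary refers to.

    # A class, an instance, and inheritance, in miniature.
    class Pizza:
        def __init__(self, toppings):
            self.toppings = toppings  # data attached to each instance

        def describe(self):
            return "pizza with " + ", ".join(self.toppings)

    class DeepDish(Pizza):  # DeepDish inherits everything Pizza has
        def describe(self):
            return "deep dish " + super().describe()

    dinner = DeepDish(["cheese", "basil"])  # an instance of the DeepDish class
    print(dinner.describe())  # "deep dish pizza with cheese, basil"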

That's it, go get started!

Actually, I'll give you one bonus tip:

  1. Contact me. My email address is breck7 at Google's mail service. Feel free to ping me for personal help along your journey, and I'll do my best to lend a hand.

My Story, briefly

Two years ago, in December 2007, I decided to become a great programmer. Before then, I had probably spent under 1,000 hours "coding". From 1996 to 2007, age 12 to age 23, I spent around 1,000 hours "coding" simple things like websites, MSDOS bat scripts, simple php functions, and "hello world" type programs for an Introduction to Computer Science class. Despite the fact that I have always had an enormous fascination with computers, and spent a ton of time using them, I was completely clueless about how they worked and how to really program.

(If you're wondering why didn't I start coding seriously until I was 23 and out of college there's a simple and probably common reason: the whole time I was in school my goal was to be cool, and programming does not make you cool. Had I known I would never be cool anyway, I probably would have started coding sooner.)

Finally in December 2007 I decided to make programming my career and #1 hobby. Since then I estimate I've spent 20-50 hours per week either coding or practicing. By practicing I mean reading books about computers and code, thinking about coding, talking to others, and all other related activities that are not actually writing code.

That means I've spent between 2,000-5,000 hours developing my skills. Hopefully, by reading these tips, you can move much faster than I have over the past 2 years.

Notes

  1. The saying that it takes 10,000 hours to master something may or may not be true but is indisputably popular (which is often an attribute of true ideas).
  2. I added the quotes around "coding" when describing my past experience because it was simple stuff, and it felt funny calling it coding, just as it would sound funny calling a 5 year old's work "writing".
  3. I still have a long way to go to become a "great programmer", 2-4 more years I'd say.

View source

December 3, 2009 — What would happen if instead of writing about subjects you understood, you wrote about subjects you didn't understand? Let's find out!

Today's topic is linear algebra. I know almost nothing about vectors, matrices, and linear algebra.

I did not take a Linear Algebra course in college. Multivariable calculus may have done a chapter on vectors, but I only remember the very basics: it's a magnitude with a direction, or something like that.

I went to a Borders once specifically to find a good book to teach myself linear algebra with. I even bought one that I thought was the most entertaining of the bunch. Trust me, it's far from entertaining. Haven't made it much further than page 10.

I bet vectors, matrices, and linear algebra are important. In fact, I'm positive they are. But I don't know why. I don't know how to apply linear algebra in everyday life, or if that's something you even do with linear algebra.

I use lots of math throughout the day such as:

  • Addition/subtraction when paying for things
  • Multiplication when cooking for 6 roommates
  • Probability when deciding whether to buy cell phone insurance (see the sketch after this list)
  • Calculus when thinking about the distance needed to brake fast while biking
  • Exponents and logs when analyzing traffic graphs and programming
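
To make the insurance example concrete, here's the kind of back-of-the-envelope expected-value check I mean, sketched in Python. Every number here is invented for illustration; plug in your own:

  # Rough expected-value check on cell phone insurance (all numbers made up).
  premium = 7.00 * 24      # $7/month over a 2-year contract
  deductible = 50.00       # what you pay when you file a claim
  phone_cost = 200.00      # out-of-pocket replacement cost without insurance
  p_loss = 0.15            # my guess at the odds of breaking or losing the phone

  expected_with = premium + p_loss * deductible  # 175.50
  expected_without = p_loss * phone_cost         # 30.00
  print(expected_with, expected_without)

With guesses like these, insurance is the worse bet by a wide margin, which is why running the numbers beats going with your gut.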

But I have no idea when I should be using vectors, matrices, and other linear algebra concepts throughout the day.

There are lots of books that teach how to do linear algebra. But are there any that explain why?

Would everyone benefit from linear algebra just as everyone would benefit from knowing probability theory? Would I benefit?

I don't know the answer to these questions. Fooled by Randomness revealed to me why probability is so incredibly important and inspired me to master it. Is there a similar book like that for linear algebra?

I guess when you write about what you don't know, you write mostly questions.

View source

December 2, 2009 — What books have changed your life? Seriously, pause for a few minutes and think about the question. I'll share my list in a moment, but first come up with yours.

Do you have your list yet? Writing it down may help. Try to write down 10 books that you think have most impacted your life.

Take all the time you need before moving on.

Are you done yet? Don't cheat. Write it down then continue reading.

Okay, at this point I'm assuming you've followed instructions and written down your list of 10 books.

Now you have one more step. To the right of each book title, write "fiction" or "nonfiction". You can use the abbreviations "F" and "NF" if you wish.

You should now have a list that looks something like mine:

  • How to Read a Book - NF
  • Never Eat Alone - NF
  • Fooled by Randomness - NF
  • How to Win Friends and Influence People - NF
  • Snowball - NF
  • Influence - NF
  • Object Oriented PHP - NF
  • Life of Pi - F
  • Lord of the Flies - F
  • The Iliad - F

Now, count the NF's. How many do you have? I have 7. So 7 out of the 10 books that I think have most impacted my life are non-fiction. Therefore, if I have to guess whether the next book I read that greatly impacts my life will be fiction or nonfiction, my guess is it will be nonfiction.

What's your list? Do you think the next book that will greatly impact your life will be fiction or non-fiction?

Share your results here.

Notes

  1. I read about equal amounts of fiction and nonfiction. So on average, I get a greater return from nonfiction reading.
  2. Reading fiction is a more enjoyable form of entertainment.
  3. This essay is in response to a comment I read a while back on Hacker News that got me thinking about the subject.

View source

Experience is what you get when you don't get what you want.

December 2, 2009 — How many times have you struggled towards a goal only to come up short? How many times have bad things happened to you that you wish hadn't happened? If you're like me, the answer to both of those is: a lot.

But luckily you always get something when you don't get what you want. You get experience. Experience is data. When accumulated and analyzed, it can be incredibly valuable.

To be successful in life you need to have good things happen to you. Some people call this "good luck". Luck is a confusing term, created by people who don't think clearly. Forget about the term "luck". There is no "good luck" or "bad luck". Instead, "good things happen" and "bad things happen". Your life is a constant bombardment of things happening, good and bad. Occasionally, despite steadily making bad decisions, some people have good things happen to them. But in most cases, to have good things happen to you, you've got to make a steady stream of good decisions.

You've got to see patterns in the world and recognize cause and effect. You've got to think through your actions and foresee how each action you take will affect the chances of "good things happening" versus "bad things happening" down the line.

When you're fresh out of the gate, it's hard to make those predictions. You just don't have any data so you can't analyze cause and effect appropriately. But once you're out there attempting things, even if you screw up or don't get what you want, you get experience. You get data to use to make better decisions in the future.

View source

December 2, 2009 — Decided to blog again. I missed it. Writing publicly, even when you only get 3 readers, two of whom are bots and the third is a relative, is to the mind what exercise is to the body. It's fun and feels good, especially when you haven't done it in a while.

Also decided to go old school. No Wordpress or Tumblr, Blogger or Posterous. Instead, I'm writing this with pen and paper. Later I'll type it into HTML using Notepad++, vim, or equivalent. (EDIT: after writing this I coded my own simple blogging software called brecksblog.) It will just be text and links. Commenting works better on Hacker News, Digg, or Reddit anyway.
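
(For the curious: "text and links" needs almost no machinery. Here's a hypothetical sketch of a minimal static generator in that spirit; this is not brecksblog's actual code, and the posts/ and site/ directory names are made up:)

  # Hypothetical minimal static blog generator -- not brecksblog's actual code.
  # Reads plain-text posts from posts/ and writes bare HTML pages to site/.
  import glob, html, os

  PAGE = "<html><body><pre>{body}</pre></body></html>"

  os.makedirs("site", exist_ok=True)
  for path in glob.glob("posts/*.txt"):
      with open(path) as f:
          text = f.read()
      name = os.path.basename(path).replace(".txt", ".html")
      with open(os.path.join("site", name), "w") as f:
          f.write(PAGE.format(body=html.escape(text)))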

Hopefully these steps will result in better content. Pen and paper make writing easier and more enjoyable, so hopefully I'll produce more. And the process of typing should serve as a filter. If something sucks, I won't take the time to type it.

I'm writing to get better at communicating, thinking, and just for fun. If anyone finds value in these posts, that's an added bonus.

Written 11/30/2009

View source
