April 16, 2024 — I pricked my finger and touched a disposable ketone test strip to the newly formed drop of blood. I was checking my "blood ketone" levels. If the result came back higher than 0.8 mmol/L, I would be in a state of "ketosis".
The meter showed "1.0 mmol/L". Success! I was still in ketosis.
It was day 174 since, at wits' end, I started a therapeutic ketogenic diet as a treatment strategy for bipolar disorder.
Before hearing about keto for bipolar I had tried and failed nearly all of the major bipolar treatment strategies.
One, lithium, is often called the "gold standard" bipolar treatment.
I tried lithium. Three times.
I'll admit, lithium did seem to stabilize my energy, but there was a serious downside: the lithium slingshot, I call it.
The side effects and annoyances of taking lithium are constant. Some parts of my brain and body are objectively worse on it. Brain fog. Weight gain. That sort of thing.
So, even if I took lithium correctly for 300 days, on day 301 parts of me would still be voting to stop.
And so, inevitably, at some point parts of me would find an excuse to stop. Then, within weeks, I would enter a state of hypomania or mania. As the lithium tapered out of my system, my brain energy would shoot forward way past the starting point. The lithium slingshot.
I looked at the ketone meter again: 1.0 mmol/L.
I paused. There was something familiar about that number.
Then I remembered. Years ago my doctor said we were aiming for a lithium blood level of 1.0 mmol/L.
To be in therapeutic ketosis I needed ketones to be around one millimole per liter.
To be in therapeutic lithium I needed lithium to be around one millimole per liter.
Both chemicals have neuro effects when they are around one millimole per liter.
What are the odds? We can measure chemicals at 100 nanomolar levels, 10 nanomolar levels, 1 nanomolar levels, at levels below that, and everything in between.
Was it just a random coincidence that these 2 different chemicals had overlapping therapeutic windows, or is this a clue to some common mechanism(s)?
Unfortunately, it had been 22 years since high school chemistry class, and I couldn't even remember what a millimole was. This was going to take me some time.
But I had to know the answer. You could even say there was a chance my life depended on it.
So, I dove in. This is the story of coming up with an answer to the question: what are the odds that the therapeutic windows for 2 different chemicals would overlap?
Taking lithium was annoying.
I had to swallow pills each day. I am a competitive person, but in a pill swallowing competition I would lose to pretty much anyone.
I suck at swallowing pills. If they are larger than an M&M I'm going to need 3 because I'll gag and cough up the first two.
I don't know why I'm worse than others at swallowing. Maybe it's from that summer on Cape Cod when I was a kid and left the kitchen with my mouth full and almost choked to death on a piece of undercooked bacon. I made it back to the kitchen seconds before I passed out, my dad saw what was happening, gave me a pop, and saved my life. Maybe ever since that day my brain has turned every unchewed thing in my mouth into a piece of undercooked bacon.
Now, if it was just discomfort from swallowing pills, I'm sure I could focus on that and learn.
But it's not just swallowing the lithium. You also have to get the lithium.
I can go to a liquor store and buy enough alcohol to supply a bar for a decade, but god forbid I be allowed to keep a year's supply of a supposedly "gold standard life saving" medicine on hand. Instead I had to go to a pharmacy every month or so, often getting a different variant of lithium pills (extended release; different shapes; different sizes, etc) which I could never predict beforehand, and never being quite sure what I had to pay until I was at the register.
More than that, to keep the lithium prescriptions coming I had to maintain an expensive relationship with a psychiatrist or nurse practitioner. The system tells us the worst thing you can do is stop taking your meds, but then it is designed to make it as easy as possible to stop taking your meds. As far as I can tell, bipolar meds can only be managed successfully long term if you are a person as predictable as the moon, and bipolar people decidedly are not.
Of course, there are real safety concerns with lithium. At high enough concentrations it can damage your kidneys and even kill you. So taking lithium came with another annoyance: laboratory blood tests. Measuring lithium in your blood is harder than measuring other things, such as glucose and ketones. (I say this now as if I already knew that, but I knew almost nothing about any of this until I set out to answer that first question). I can tell you from experience at-home blood tests are a 100x better experience than lab tests.
To sum up, not even talking about side effects but for logistical reasons alone, taking lithium was annoying.
Keto is different.
There are no pills to swallow. There are no prescriptions to perpetually refill. There is no relationship with a licensed provider I have to maintain to keep getting those prescriptions. I don't need to worry about toxicity levels1. And I can do the blood tests myself, at home.
The "side effects" from eating healthier whole foods include weight loss, better teeth and better skin.
(I do need to note that there are still many long term unknowns around keto and like anything, if done improperly, there can be serious health consequences1. And it is sad to say no to real bagels, pizza and pasta).
But here's the thing about keto for bipolar disorder: it is basically brand new. Although the idea of the therapeutic ketogenic diet has been out for over a hundred years and well studied for epilepsy, how keto could be used for bipolar disorder has just barely been looked at by science. In fact, if it weren't for a few curious pioneers2 and philanthropists, it would remain barely looked at by science.
Now there's a serious effort underway to test and better understand keto for bipolar. In a few years, we will know a lot more.
Who knows, maybe the expanded science will cause ketones to pass lithium as the new gold standard in bipolar treatment.
Meanwhile, while we wait for the professional scientists to come to confident, big-data-backed answers, amateur scientists, like yours truly, have gathered in online forums to share knowledge and help reach an understanding of keto for bipolar faster.
And so now I return to the one millimole question.
A ketone is small. Very small. The diameter of a ketone is 0.6 nanometers. Have you heard how small an atom is? A ketone is not much bigger. A ketone is only 10-15 atoms (hydrogen, carbon, and oxygen). Ketones are so small that if you laid 20 million of them end to end, you would only make it across one penny.
There are 3 kinds of ketones. They have long chemical names, so they usually go by abbreviations. BHB, or beta-hydroxybutyrate (C_4H_8O_3), makes up about 70-80% of your ketones. AA, or acetoacetate (C_4H_6O_3), makes up around 20%. And acetone (C_3H_6O) makes up around 2%.
BHB is the one we care most about. It is the most abundant. When I prick my finger and measure my ketone levels, I'm measuring BHB.
Ketone test strips contain an enzyme called β-hydroxybutyrate dehydrogenase, which reacts with the BHB. The reaction produces a change in electrical current proportional to the BHB concentration, which is measured by the handheld device and communicated to me as my blood ketone level (a technique called amperometry).
Somewhere around 90% of ketone production happens in the mitochondria of the liver cells (hepatocytes)3. The liver is actually the largest organ inside the human body (after that is the brain, lungs, heart, ...). You can't live without a liver. Maybe that's why they call it the liver?
Ketones are small and water soluble. From the liver, ketones enter the blood plasma (not the blood cells) to reach other destinations in the body. Blood plasma makes up about 55% of the volume of the blood, is mostly water, and distributes important things like ketones throughout the body.
Ketones that are not used by cells for energy production leave the body through urine (60-80%), breath (15-25%), and sweat (<5%). To get into your urine, ketones are filtered from your blood by your kidneys, travel down your ureters, into your bladder, and then out into the world, emitting a fruity "ketone" smell.
Sidenote: It is interesting that humans are able to smell ketones. I wonder why we would have evolved an olfactory sensitivity to them. Is it a positive smell, or a warning sign?
For ketones to reach brain cells, they travel in the blood plasma from your liver to your brain, cross the blood-brain barrier (which has mechanisms to permit the entry of ketones), enter the interstitial fluid of the brain, and from there are taken into the cells, where they are used in the Krebs cycle to make ATP.
Okay. So I can describe a ketone. What about a millimole of ketones?
Turns out, that is a lot of ketones.
One mole is ~6 × 10^23 particles. This means that one millimole of ketones is:
602,200,000,000,000,000,000 ketones
Wow! I forgot how big the small world is.
If I was able to pile up all the ketones in my blood, how much would that amount to?
The amount of whole blood (blood cells + blood plasma) varies depending on the size of the person. I am 5'10", 170 lbs, which means I have roughly 6 liters of blood in me.
I will leave the math in the comments, but the final answer is 0.5 mL worth of ketones.
This means, if you gathered all the ketones in my blood right now, despite it being an absolutely massive number of ketones, you'd have around a blueberry's worth of ketones.
I told you they were tiny.
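For the curious, the back-of-the-envelope math behind that half-milliliter figure looks something like this. This is a sketch: the molar mass is BHB's, while the ~1.2 g/mL density of solid BHB is an assumption on my part.

```python
# Rough estimate of the volume of BHB circulating in my blood at 1.0 mmol/L.
# Assumptions: 6 liters of blood, BHB molar mass ~104.1 g/mol (C4H8O3),
# and an assumed density of ~1.2 g/mL.
blood_liters = 6.0
ketone_mmol_per_liter = 1.0
molar_mass_g_per_mol = 104.1  # beta-hydroxybutyrate
density_g_per_ml = 1.2        # assumption

total_mmol = ketone_mmol_per_liter * blood_liters        # 6 mmol in total
total_grams = total_mmol / 1000 * molar_mass_g_per_mol   # ~0.62 g
volume_ml = total_grams / density_g_per_ml               # ~0.5 mL
print(round(volume_ml, 2))
```

About half a milliliter, i.e. roughly one blueberry's worth.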
Of course, the amount of ketones in my blood at any one moment is not the same as the amount of ketones my liver is producing. I mean, the purpose of ketones isn't to circulate in the blood but to be consumed by cells.
We measure ketones in the blood because that's the easiest place to measure them, but they are in other places too, like your cerebrospinal fluid, sweat, urine, and 💩.
Estimates vary on how many ketones my body would produce in a day while in ketosis, but 100 grams of ketones per day seems to be a decent ballpark, which amounts to around one mole of ketones (1,000 millimoles), which, if we wanted to visualize it, would be about the size of 200 blueberries:
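That daily ballpark can be sanity-checked with a little arithmetic. Again a sketch: the BHB molar mass is real, while the density and the ~0.5 mL blueberry size are rough assumptions of mine.

```python
# Ballpark: what does ~100 g/day of ketone production amount to?
# Assumptions: BHB molar mass ~104.1 g/mol, density ~1.2 g/mL,
# and a small blueberry of roughly 0.5 mL.
grams_per_day = 100.0
molar_mass_g_per_mol = 104.1

moles = grams_per_day / molar_mass_g_per_mol  # ~0.96 mol, i.e. ~1,000 mmol
volume_ml = grams_per_day / 1.2               # ~83 mL of ketones per day
blueberries = volume_ml / 0.5                 # on the order of 200 berries
```

So "around one mole" and "a couple hundred small blueberries" both check out as orders of magnitude.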
Alright, so now we have a better sense of what a millimole of ketones is. Let's turn our attention to lithium.
There are four main lithium salt compounds in use for medicinal purposes. Lithium Carbonate (Li_2CO_3) is by far the most common, accounting for around 95% of the lithium prescribed in the world. Lithium orotate (C_5H_3LiN_2O_4) is the second most popular, accounting for around 3%-4%. Lithium Citrate and Lithium Aspartate are the least used. Lithium Chloride was also used medicinally in the past, before a majority opinion arose that it was more toxic than the others.
Because it is the dominant form, for the rest of this post when I say "lithium" I am referring to Lithium Carbonate.
Just like ketones, lithium is tiny. In fact, according to my calculations, a lithium ion is around 2.5x smaller than a ketone molecule.
Unlike ketones, you cannot check your lithium levels at home (though some people are working on that). Instead, to measure lithium levels your blood needs to be processed in a lab using techniques with names like Ion-Selective Electrode, Atomic Absorption Spectroscopy, Inductively Coupled Plasma Mass Spectrometry or Flame Emission Spectroscopy.
I can't really find a great reason why at-home lithium tests aren't available. Some people write that it wouldn't be safe, because getting accurate readings is more difficult, but I think multiple at-home readings, even if off a bit, would be better than no readings at all. If I had to pick an explanation, it would be that because of diabetes there's a big market for glucose and ketone tests, while it's relatively rare to take lithium, so there isn't a big enough market for a company to build a convenient at-home lithium test at this time.
After you swallow your lithium, the pills land in your stomach. There, fluids dissolve them. The lithium ion (Li+) splits from the carbonate. The next destination in your gastrointestinal tract is your small intestine, a place which happens to allow ~100% of the lithium ions to be absorbed into your bloodstream.
Like ketones, lithium crosses the blood-brain barrier. Unlike ketones, lithium crosses via passive diffusion, while ketones rely on transporter proteins (well, not all ketones; acetone can cross passively).
Once lithium is in your brain, it both remains in the extracellular space and also is able to enter neurons.
When a lithium ion enters a cell, it may hang around for quite a bit, but eventually when it leaves it is the same lithium ion. This is different than a ketone molecule, which undergoes metabolic transformations in cells.
Like ketones, your body excretes lithium in your urine and sweat. (Sidenote: unlike ketones, humans have not evolved an olfactory sensitivity to lithium.)
Another lithium test a lab can do is called an RBC test. This test measures the levels of lithium ions that have entered your cells (red blood cells specifically). To perform this test, your red blood cells are first separated from your blood plasma, then they are lysed (split open) so their contents can be measured. It seems lithium levels in blood plasma spike quickly after ingestion, then decrease quickly, but levels change more slowly in other places, like red blood cells, neurons, and CSF (Cerebrospinal fluid).
This seems to explain why my lithium slingshot did not happen immediately after stopping lithium but in the weeks after. Once you stop taking lithium, the lithium content in your blood runs out fast, but it takes more time for the lithium to leave all of your cells. Then, those cells, which had been working in a lithium rich environment, seem vulnerable to catching a manic fire. The lithium slingshot.
Can we see the amounts of lithium in the brain? Apparently we can, to some degree, using magnetic resonance spectroscopy (MRS). Ketones too. I looked into being able to take my own MRS pictures at home, but didn't have a place to put one of the machines. Oh yeah, and they cost $500,000!
I also want to note something interesting about lithium blood levels vs ketone levels. We mentioned earlier that when in ketosis your body is constantly producing and consuming ketones, so many that they would amount to around 200 blueberries throughout the day. But lithium is exogenous and you usually take it just once or twice a day, so your body is exposed to just a few blueberries' worth of lithium a day. Yet the blood levels are similar. So where are all those ketones going? It seems that your cells must be grabbing those ketones from the blood far faster than lithium is grabbed from the blood.
Lithium seems to move more slowly into cells compared to ketones; cells may take up ketones 2x-10x faster than they take up lithium. Again, this is probably the reason for the delayed lithium slingshot.
Whew. That was a lot to learn.
Now, armed with this basic model of ketones and lithium, I can return to the main question of this post.
This whole post began because I noticed that 2 very different treatments for bipolar disorder, a ketogenic diet and lithium, use blood tests to determine if someone is in a therapeutic range, and it just so happens that even though each is testing for the presence of something different (ketones vs lithium), if your level is 1.0 mmol/L then you are in the therapeutic range in both treatments.
Now of course, the ranges are not identical. For lithium, the therapeutic range is generally given as between "0.4 and 1.2 mmol/L". Lithium levels above 1.5 mmol/L are considered toxic and above 2.0 mmol/L can be life-threatening.
The range for nutritional ketosis is usually given as between "0.5 and 3.0 mmol/L". Some say ketosis begins above 0.8 mmol/L. Some say nutritional ketosis requires levels above 1.5 mmol/L.
Admittedly then my "one millimole" is a slight simplification, as the target therapeutic ranges are not identical, but there is overlap4.
We can measure a ton of chemicals in the body. Different chemicals have different effects in different concentrations. Should we be surprised at all that two different chemicals both seem to treat the same condition at similar concentrations?
Or is it just really common in human health to find chemicals with a therapeutic range around the one millimole per liter level?
If lots of chemicals have a therapeutic range around one millimole, then the intersection of therapeutic ketone and therapeutic lithium levels would be less surprising.
If few chemicals had a therapeutic range around that region, then it would be more surprising to see that intersection occur, and it would then be worth probing whether that is a clue to some biological mechanism. (Perhaps that is a common saturation level where enough neurons are exposed to the chemicals to prevent any breakout manic wildfires?)
To answer this question, we need to know how likely it is that 2 randomly selected blood chemicals would have overlapping therapeutic ranges.
To know the odds that any two random blood tests would have an overlapping target range, we first need to know how many different blood chemistry tests there are.
Just how many chemicals are there? According to the WHO, humans have now labeled over 160,000,000 chemicals! If you measured your blood levels for each one of these chemicals with one drop of blood per test, doing one test a second, you would be dead by tomorrow night and have done less than 0.1% of the tests.
So, let's narrow our list of chemicals to just the ones commonly found in human blood.
An example besides ketones and lithium is glucose. For glucose, the normal blood level range is between 3.9 and 6.1 mmol/L. Levels below 3.9 mmol/L start to be considered hypoglycemic, and levels below 2.8 mmol/L can be dangerous. Levels above 7.8 mmol/L when fasting start to be considered hyperglycemic, or above 11.1 mmol/L 2 hours after a meal.
My first day looking for a good dataset of chemicals found in human blood with reference ranges did not go well. I spent a couple of hours without success. So I started making my own.
On day 2, while continuing to build my own blood test dataset for this post, I found a page on Wikipedia that already has a list of the most common blood tests with reference ranges, and some Wikipedians had even made a visualization already. After cursing myself for missing this on the first day, I switched to being grateful to the Wikipedians and started enhancing my own dataset with this information. I also found this page on Wikipedia that lists 240 human blood components.
I was thrilled to find this information on Wikipedia, but still it seemed like a small number of chemicals. Are none of the other 160,000,000 chemicals relevant? I kept searching and finally found a team who built an amazing website called HMDB that lists 3,126 chemicals that would be relevant to my question. I felt good knowing that I could do this initial analysis using the Wikipedia data, and if it seems worthwhile in the future, the HMDB data can be used to do it again at a 10x scale.
Finally I had enough raw material to start building my first structured, clean, tabular dataset to answer my question. It was a bit of data cleaning grunt work, and v1 is pretty ugly and I'm sure is packed with loads of errors, but finally I had a dataset of the reference levels for around 240 chemicals found in human blood.
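For anyone who wants to replicate the analysis, the overlap check itself is just interval intersection. Here is a sketch in Python: the chemical entries are illustrative stand-ins, not my full dataset, and the target window is the 0.5-1.2 mmol/L intersection of the lithium and ketosis ranges.

```python
# Does a chemical's reference range overlap the 0.5-1.2 mmol/L window
# where the therapeutic lithium and nutritional ketosis ranges intersect?
# The entries below are illustrative, not the full 241-chemical dataset.
target = (0.5, 1.2)  # mmol/L

reference_ranges = {
    "glucose": (3.9, 6.1),    # mmol/L
    "valproate": (0.2, 0.5),  # the rough figure quoted later in this post
    "potassium": (3.5, 5.0),
}

def overlaps(a, b):
    """True if closed intervals a and b intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

hits = [name for name, rng in reference_ranges.items() if overlaps(rng, target)]
print(hits)  # ['valproate']
```

Run over the full dataset, this is the loop that produced the counts below.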
So now I can finally answer: one millimole, what are the odds?
The reference levels of only 14 of the 241 chemicals in my dataset overlap with the therapeutic ranges of ketones and lithium. Those 14 chemicals are listed below:
If someone called me at this moment, and offered me even odds as to whether it was just a random coincidence that there is overlap in the therapeutic ketone range and the therapeutic lithium range, or whether it was because of some related biomechanisms I would bet heavily that it is not a coincidence. This would be great, because I would be able to honestly say I did my best in the time allotted, made the best decision with the information I had, and could stop work on this frighteningly long blog post.
Alas, the phone did not ring. So I can't "call it a day" just yet.
Especially because I got an answer that makes me look good. This is often a sign of wishful thinking. Imagine if it turned out that half of all chemicals in the blood had target levels around 1 mmol/L, and that this was a commonly known fact to everyone in medicine and chemistry. If I found that answer, then I might feel silly for spending so much time on such a naive hunch. The fact that my hunch seems more interesting now, makes me think I may have biased my dataset.
I must walk around my work and do another inspection.
The biggest flaw in my current research is that I haven't made an extra effort to include the other chemicals taken by people with bipolar in my dataset. Let's do that now.
I went back and added 8 drugs by hand to my dataset, including the common bipolar medications Valproate, Lamictal, Zyprexa, and Abilify. Of the 8, only one has a bit of overlap: Valproate (~0.2-0.5 mmol/L). Interestingly enough, the bipolar medication that usually ranks #2 (after lithium) is... Valproate!
So I expanded my dataset yet still found it unlikely that 2 random chemicals would have therapeutic level overlaps. At this point, I am confident concluding this leg of my research journey. With this additional data I would not change my initial bet. My answer is: the slight overlap in the therapeutic blood level ranges of the 2 top bipolar drugs with the therapeutic level of ketones is a strong clue to some underlying biomechanisms.
My research won't stop here. But this blog post will.
This post is already maybe my longest, and if it gets much longer not even the author would want to read it.
Also, given that my background in chemistry consists of the past 1 week of Internet searches combined with 1 year of high school chemistry 22 years ago, there could be some really naive, glaring mistakes in this post, such as flaws in my mental models or huge errors in my datasets (largely assembled with copy/paste and LLMs), that radically alter the conclusions. It's better that I publish what I have now, and maybe a reader could alert me to suboptimal models or paths I've followed.
This post may not describe my future understanding, but does describe my current understanding, and my journey getting here.
I am looking forward to researching the next logical question: what might those common biomechanisms be? How might ketones and lithium work in bipolar? Perhaps there will be a Part II to this post.
But for now, if you'll excuse me, I've got fats to eat, and fingers to prick.
1 There is a dangerous condition involving ketones called ketoacidosis, affecting mostly diabetics, which is associated with around 200,000 cases and 600 fatalities per year in the US.
2 A good place to start reading about some of the keto for bipolar pioneers is Metabolic Mind.
3 I should note that there is recent research showing that astrocytes (a kind of brain cell) may also produce ketones. I have not dived deep down that thread yet, but thought I should mention it since it involves ketones and the brain, which is ultimately what I'm interested in.
4 I am using the one millimole number to simplify the writing a bit, but when you visualize things keep in mind we're thinking about overlapping ranges rather than a specific point.
Thanks to RGM for feedback on this post.
April 10, 2024 — Now that I am writing more about Bipolar Disorder, and even have a category page for the term, I thought I should write a brief note on what I think about the term itself.
In short, I predict in the long run, as our understanding increases, the phrase "Bipolar Disorder" and its sub-phrases (Bipolar I, Bipolar II, Bipolar NOS, and Cyclothymia), will fall out of use and be replaced by a larger set of more specific terms clustered not by symptoms but by biological causes.
This is not an uncommon opinion to have. Kay Jamison, in a 2012 talk said "It's a bit misleading to talk about bipolar disorder/manic depressive illness because we are really talking about twenty to twenty five different disorders that we don't know yet."
What might these new terms look like? One interesting new term from Hannah Warren is "neurometabolic dysfunction". It is based on the hypothesis that for at least one subset of people currently given the label "bipolar disorder", it may be more accurate to label their condition as a metabolic problem.
Even neurometabolic dysfunction is still a fairly broad term, but it heads in the direction of finding terms labeling conditions by their root biological causes, rather than by their symptoms.
What's wrong with naming conditions after their symptoms rather than their causes?
Imagine if instead of having more specific diagnoses like influenza, strep throat, malaria, UTIs, and colds we just had the term "Fever Disorder". Treatment outcomes would likely be a lot worse.
In a way this is kind of where we are at with the term "Bipolar Disorder".
Terminology has always been evolving. The two major classification systems are the DSM and ICD. Newer ideas include RDoC and HiTOP.
ICD-6 (1948) had the term "manic-depressive". DSM-1 (1952) had the term "manic-depressive". DSM-III (1980) introduced the term Bipolar Disorder and the two categories of Bipolar I and Bipolar II. ICD-10 (1992) also switched to the Bipolar Disorder terms.
Newer classification systems, RDoC (2010) and HiTOP (2015), have interesting new approaches for describing things (which I myself have not fully gotten up to speed on yet).
Once we understand the biological mechanisms driving the energy cycles of people currently diagnosed with Bipolar Disorder, we might come up with new therapeutic approaches that are so effective that we then come up with terms that don't end in a word like "Disorder" or "Dysfunction".
Perhaps there are natural energy cycles that serve a purpose that we just haven't identified and figured out yet, and only because of that ignorance are these cycles problematic.
Perhaps once we're finally able to fully understand the biology, it might not make any sense to call these patterns "Disorders", just as it would not make sense to diagnose lobsters with "Molting Disorders."
Knowledge cultures commit Reification Fallacies often enough that the term has a Wikipedia page. Reification just means coming up with a term to label a discrete pattern that does not quite actually exist. People might correctly identify that there is a pattern (or patterns), but it can be a mistake to be overconfident that they've correctly identified whether it is one pattern or many, and if the latter, where to draw the lines between patterns.
If you give too much weight to a model just because it has been reified, it may lead you astray. All models are wrong; some are useful; some can be harmful.
There is a proverb "it is difficult to find a black cat in a dark room, especially if there is no cat".
Likewise, it may be difficult to find Bipolar Disorder, especially if there is no Bipolar Disorder (and there are instead dozens of different smaller patterns instead).
I've now presented my case for why I dislike the term "Bipolar Disorder" and why I predict its eventual demise.
Despite all this, in the present moment, it is a very useful term for coordinating people who are afflicted by these conditions, people who are treating these conditions, and people trying to figure out what these conditions really are.
I expect it will be many years, if not decades, before we have better terms, and so until then, I will keep tagging posts like this one with the term Bipolar Disorder.
April 5, 2024 — Have you ever examined the correlation between your writing behavior and sleep?
I've written some things in my life that make me cringe. I might cringe because I see some past writing was naive, mistaken, locked-in, overconfident, unkind, insensitive, aggressive, or grandiose.
I now have a pretty big dataset to identify my secret trick to write more cringe: less sleep.
For this post I combined 2,500 nights of sleep data with 58 blog posts. A 7 year experiment to see how sleep affects my writing.
Most posts above 7 hours of sleep do not need a sleep disclaimer. Most posts below 7 hours do. That's not to say there is no value in the posts made with under 7 hours of sleep; the writing (and thinking) is just less rigorous. On the plus side, writing with little sleep can be more concise at times. It might exaggerate the key ideas, but nevertheless identify them fearlessly and concisely.
Static image of table above.
I actually post slightly more when I sleep less (Pearson correlation coefficient of -0.14), but fewer words per post, which is indicative of a more "scattered" thinking state. I was surprised to see that I don't generally generate a whole lot more words in sleep-deprived states. I perceive my writing to be smarter during those times, but looking back it's clearly not.
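For anyone curious, computing that coefficient doesn't require a stats package. A sketch with made-up numbers (my real dataset pairs roughly 2,500 nights of sleep with post metadata):

```python
# Pearson correlation between nightly sleep hours and posts published,
# computed from scratch. The numbers below are made up for illustration.
from math import sqrt

sleep_hours = [5.5, 6.0, 6.5, 7.0, 7.5, 8.0]
posts_published = [3, 2, 2, 1, 1, 0]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    std_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (std_x * std_y)

r = pearson(sleep_hours, posts_published)
# Negative r means: less sleep, more posts. The toy data above is strongly
# negative; my real data shows only a mild -0.14.
```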
Besides this blog, I have long written and posted content to HackerNews, Reddit, other discussion forums, and at times Twitter, Instagram, Facebook, YouTube, and LinkedIn. I haven't done the data grunt work, but if my memory serves me correctly I am confident my publishing behavior on those platforms mirrors the same patterns as my blogging behavior, with regards to sleep.
There have been stretches where I published little publicly but was generating a similar amount of tokens, just in private groups. My writing patterns in private groups also mirror my patterns on this blog, with regards to sleep.
Tangent: when I've been lucky to be a part of brainiac private organizations (such as Microsoft, YCombinator, Our World in Data, academia, and so on), I got to read so much brilliant writing by people who rarely post publicly, and every time I think about that I am humbled. There is so much well written content on the public web, and to think it is only a fraction of the great content ever written, is humbling.
I realize I already have an unofficial "sleep disclaimer" policy. I have de-indexed (but kept published) at least a couple of sleep-deprived posts, and added a disclaimer/correction to at least 2 others. Now with this dataset I am sure I will append a few more sleep disclaimers.
With sleep disclaimers, I can say, "hey, might be interesting ideas here, but don't train too heavily on this".
I am happy with my decision to use git for this blog so that I always keep an honest history, while still being free to down-weight sleep-deprived content and try to keep my more thought-out ideas front and center.
I don't have a column for it (yet), but it does seem my better posts often were the ones where I took the time to get friends and/or colleagues to review, IRL. Sleep deprived posts I would generally blast out without talking to anyone.
Peer review is a great filter, and a great forcing function to put more effort in.
On the other hand, because the importance of ideas varies by so many orders of magnitude (there are "black swan" ideas), you could make an argument that spending too much time in one area of ideas isn't the optimal strategy, and publishing things as you go, improving them later, is an approach with merit.
It seems when I sleep less, my brain is in more of a pleasure seeking state, has a bias to action ("don't think, just do"), and feels less pain than in a more rested state. Less sleep means less critical thinking. Less sleep seems to make me less willing to invest the time in rewiring my brain to correct mistaken thought habits.
I started wearing a Microsoft Band when it first came out in November 2014. Then a Band 2, then FitBit Charge, Ionic, Versa, and now Sense 2. I am grateful for all the people involved with creating these things. I think continued progress in the wearable sensor field is the best bet for improving human health.
April 3, 2024 — I just saw Dune 2 at the theater, but far more noteworthy is this YouTube video of a lobster molting. I can confidently claim that before that video I had never spent a minute of my life thinking about lobsters molting. To be honest, if you had asked me last year if lobsters molt I probably would have said "No". But, I mean, watch the video (variable speed is fine). What a fascinating slash beautiful slash disgusting slash painful slash magical slash moving thing to watch. Can you imagine going through something like that over and over again in your life? I guess humans should be thankful for our endoskeletons.
Lobsters molt so they can grow. Lobsters molt so they can repair damaged or diseased shells. Lobsters grow so they can reproduce. And lobsters molt to enhance their sensory perceptions.
Lobsters molt far more frequently when they are young (more than ten times per year) than when old (sometimes once every 3 years). They cycle through four phases: Pre-molt, Molt, Post-molt, and Inter-molt. The critical molting days are always brief, but in later life lobsters spend less time in the Inter-molt phase and more time in drawn-out Pre-molt and Post-molt phases.
In my quest to find a more accurate model of bipolar disorder, I was wondering if human brains go through a similar process to lobsters that we haven't properly understood yet. Like lobsters, human brains are contained inside a skeleton (the skull). While we clearly don't molt our skulls, I wonder if there is some similar natural transformative cyclical process designed to keep our brains growing, shed damaged mental models of the world, improve reproduction, and enhance sensory perceptions. Could the cycles of bipolar disorder be a natural phasic phenomenon experienced by all humans, and those labeled "bipolar" just experience more intense molts than others, for some reason?
It seems like lobsters have no choice but to keep molting (except for the ones moved to captive, controlled environments designed to stop molting, like supermarket tanks where they go before being boiled to death). Maybe they are really stoked with their current shell, but nature says "sorry, too bad, time to grow", and gives them the painful boot. Similarly, maybe brains go through cycles where even if you were comfortable with your current interface to the world, nature has designed it so you will have to molt it anyway. It is a painful and vulnerable process (the mortality rate of a lobster molt is 10%), and no guarantees your new shell will be better than the last, but apparently, with lobsters at least, it is a risk that pays off in the game of natural selection.
While researching lobsters (for the first time), I also came upon this interesting post on a different topic related to lobsters and brains.
April 2, 2024 — It has been over 3 years since I published the 2019 Tree Notation "Annual" Report. An update is long overdue. This is the second and last report as I am officially concluding the Tree Notation project.
I am deeply grateful to everyone who explored this idea with me. I believe it was worth exploring. Sometimes you think you may have discovered a new continent but it turns out to be just a small, mildly interesting island.
Tree Notation was my failed decade-long research project to find the simplest universal syntax for computational languages, with the hypothesis that doing so would enable major efficiency gains in program synthesis and cross domain collaboration. I had recognized that all computational languages have a tree form, that a 2D grid gave you enough syntax to encode trees, and that maybe the differing syntaxes of our languages were holding us back from building the next generation of programming tools.
The breakthrough gains of LLMs in the past eighteen months have clearly demonstrated that I was wrong. LLMs have shown AIs can read, write, and comprehend all languages across all domains at elite levels. A universal syntax was not what we needed for the next generation of symbolic tools, but instead what we needed was the transformer architecture, better GPUs, huge training efforts, et cetera. The difference between the time of the last report and now is that the upside potential of Tree Notation is no longer there. Back in 2019, program synthesis was still bad. No one had solved it. Tree Notation was my attempt to solve it from a different angle.
The failure of this project will come as no surprise to almost everyone. Heck, in the 2019 report even I say "I am between 90-99% confident that Tree Notation is not a good idea". However, we kept making interesting progress, and though it was a long shot, if it did help unlock program synthesis it would have had huge upside potential. I felt compelled to keep exploring it seriously. Back in 2019 I wrote "No one has convinced me that this is a dead-end idea and I haven't seen enough evidence that this is a good idea". I have now thoroughly convinced myself, in large part due to the abundant evidence provided by LLMs, that Tree Notation is a dead-end idea (I would call it mildly interesting; it's still mildly useful in a few places).
I am not ending work 100%. More like 98%-99%. I will likely always blog and am writing this post in Scroll, an alternative to Markdown built on Tree Notation, which I personally enjoy and will continue to maintain. Someday AI writing environments may become so amazing that I abandon Scroll for those, but until then I expect to keep maintaining Scroll and its dependencies. I feel bad the PLDB project has deteriorated, and if someone is keenly interested in taking that over, send me a message.
I feel good about this effort from society's perspective as the world got a mildly interesting idea explored and the losses were privatized. I effectively lost all my money pursuing this line of research, at least in the hundreds of thousands in direct costs of failed applications and more in lost salary opportunity costs. But also, this effort did lead me on a path with certain temporarily lucrative side tangents and maybe I would have had less to lose had I not taken it on. Who knows, maybe the new 4D language research (see below) will lead to future gains.
After someone suggested it, in 2017 I made a Long Bet about Tree Notation. My confidence came from my hunch that Tree Languages would be far easier for program synthesis, which would lead to more investment into Tree Languages, which would have network and compounding effects. Instead LLMs solved the program synthesis problem without requiring new languages, eliminating the only chance Tree Languages had to win. So, I now forecast a 99.999% chance the first part of that bet will not win.
My bet did have two clauses, the second predicting "someone somewhere may invent something even better than Tree Languages...which blows Tree Notation out of the water." This has sort of happened with LLMs. At the time of the bet I felt we were on the cusp of a program synthesis breakthrough that would radically change programming, and that happened, it just happened because of a new kind of (AI) programmer and not a new kind of language.
The bet was not about a general breakthrough in programming, but specifically about whether there will be a shuffling in our top named languages. So I see 99.X% odds I will lose the second clause of the bet as well. There remains a chance LLMs make another giant leap and who knows, maybe we start considering something like Prompting Dialects a language ("I am a programmer who knows the languages Claude and ChatGPT"). But I don't see that as likely, even if we are still on the steep part of the innovation curve.
LLMs have eliminated the primary pragmatic reason for working on Tree Notation research--they solved the program synthesis and cross domain collaboration problems. But I also enjoyed working on Tree Notation because it gave me an attack vector to try and crack knowledge in general. Now, however, I see a far better way to work on that latter problem.
Looking back, I recognize I had a strong bias for words over weights. The mental resources I used to spend exploring Tree Notation I now use to explore 4D languages (with lots of 1D binary vectors for computation). Words are merely a tool for communicating thoughts. Thoughts compile to words and words decompile back to thoughts. I am now exploring the low level language of thought itself. Intelligence without words. The 4D language approach seems orders of magnitude more direct than Tree Notation as a route to finding the answers I am looking for.
I called the first status update an "Annual Report", which was optimistic thinking. It took me years to get another one out. And it turns out this will be the last one.
It would have been great personally to have been right on this long shot bet, but in the end I was wrong. I absolutely gave it everything I had. I poured much blood, sweat, and tears into this effort. I was stubborn and persistent in figuring out whether this had potential or was just mildly interesting. I had a lot of help and support and am deeply grateful. I am sorry the offshoot products were not more useful (or good looking).
It took me a while to let Tree Notation go. Even after LLMs destroyed the potential upside of pragmatic utility of the notation, I still liked it because it gave me an interesting way to work on problems of knowledge itself. It wasn't until I had some insights into 4D languages that I finally could say there was no longer any need for Tree Notation. I am grateful for the experience and have now moved on to a new research journey.
March 30, 2024 — Given a box with side S, over a certain timespan T, with minimum voxel resolution V, how many unique concepts C are needed to describe all the patterns (repeated phenomena) P that occur in the box?
As the size of the cube increases, the number of concepts needed increases. An increasing cube size captures more phenomena. You need more concepts for a box containing the earth than for a thimble containing a pebble. As your voxel size--the smallest unit of measurement--decreases, the number of concepts needed increases. As your time horizon increases, the number of concepts needed increases, as patterns combine to produce novel patterns in combinatorial ways, and some patterns only unfold over a certain period of time. Although, past a certain amount of time, maybe everything just repeats again. In fact, it seems likely that the number of concepts C would grow sigmoidally with each of these factors.
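The sigmoidal hunch can be made concrete with a toy model. This is pure speculation sketched to match the post's intuition: the functional form, the logarithmic "drive" term, the steepness k, and the ceiling c_max are all my assumptions, not measurements of anything.

```python
import math

def concepts_needed(side, voxel, timespan, c_max=1e6, k=1.0):
    """Toy model: the number of unique concepts C grows sigmoidally
    with box side S, inverse voxel size 1/V, and timespan T,
    saturating at a purely speculative ceiling c_max."""
    def sigmoid(x):
        return 1 / (1 + math.exp(-k * x))
    # Bigger box, finer voxels, longer timespan -> more drive,
    # but with diminishing returns toward c_max.
    drive = math.log(side) + math.log(1 / voxel) + math.log(timespan)
    return c_max * sigmoid(drive)
```

Under this sketch, growing the box, shrinking the voxels, or lengthening the timespan each pushes C upward, but never past the ceiling.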
Why are there any patterns at all? Why isn't the number of concepts zero? Why doesn't every box just contain randomness? There could be ~infinite random universes, but this one, for sure, contains patterns.
What is a pattern? A pattern is something accurately simulatable by a function. A concept really is just a function that simulates something, with inputs and outputs. A concept could be embodied in different ways: by software, by an analog computer, by an integrated circuit, by neurons, et cetera. If all you had in your box was a rock, you could have a concept of "persistence": the rock will still be there even as you increase the time.
Brains are pattern simulators, and symbols are a way for these simulators to communicate simulations.
You can classify patterns into natural patterns and man-made patterns. Mitosis is a natural pattern. Wheels are a simple man-made pattern and cars are a man-made pattern made up of many, many man-made patterns.
Science is the process of discovering patterns, mostly natural, and tagging them with concepts.
It's fairly easy to tag new man-made patterns with concepts. There are at most eight billion agents generating these (in practice, far fewer), and we can tag them as we create them.
But by the time we arrived on the scene with our tagging abilities, nature had already developed a backlog of a mind-numbingly large number of untagged patterns.
Scientists had a lot of low hanging fruit patterns to tag.
If you put a box around the earth, what percentage of nature's patterns have been tagged? Are we 50% done? Are we 1% done? Are we 0.0001% of the way there? Are we something like 45% of the way there, with greatly diminishing returns, approaching some hard limit of knowability? Or will there be a point where we've successfully uncovered all the useful patterns here on earth?
Both nature and man are constantly creating new patterns combinatorically. How many new microbes does nature invent every day? Who is inventing more patterns nowadays on earth: nature or man? How has that race changed over time? What about if you extend your box to contain the whole universe?
If you put a box around our universe, what were the first patterns in it? How is it that a box can contain patterns that evolve to be able to simulate the very box they are in?
A decreasing voxel size allows for identifying concepts that can generate predictions impossible with a larger voxel size, but also increases the number of untagged patterns.
Something being unpredictable after much effort means it is either truly unpredictable or just that the true pattern has not been found yet. That might be because the box is too small; the box is misplaced; the voxels are too big; the needed measurements cannot be taken; the measurements are not being taken enough over time. It does seem like the process of finding the right formula is not so hard, once the right data has been collected.
We often have a lot of Misconcepts: concepts that don't really explain a pattern. Maybe a Misconcept is correlated with some parts of a pattern, but it is very lacking compared to concepts that are far more reliable. You could also call these bullshit concepts.
If you put a box around a bunch of bricks, we seem to have a pretty good handle on all the useful concepts. Put it around a human brain though, and we still have a long way to go. Though, if you think about the progress made in the last 50 years, you can imagine we might get far further in the next 50, if diminishing returns aren't too strong.
Can we make empirical claims about how many concepts C we can expect to need to describe patterns P, given a box of size S, voxel size V, over time T? Perhaps it is possible to use Wikipedia to do such a thing. Maybe if we plotted that we would see the general relationship between these things.
Why might answering this question be useful? If we consider an encyclopedia E to contain all the useful concepts C in a box, then we might be able to make predictions about how complete our understanding of a topic is, regardless of the domain, by taking simple measurements of E alongside S, V, and T.
Sorry if you were expecting some quantified answers, as I haven't done that hard thinking yet. This post is an early exploration of trying to think through what science is from first principles. Scientists and engineers and craftsmen have done absolutely amazing things in so many fields, but I'm very interested in an area where science has so far failed, and I try to think of possible root causes of that failure. In those situations, it might just be a matter of time (waiting for technologies to decrease voxel size, or take previously impossible measurements).
If you are interested in the concept of the evolution of patterns in nature, I recommend checking out assembly theory.
March 8, 2024 — What is open-mindedness, from first principles? Here are some musings.
First I state the obvious: open-mindedness, OM, is a measure of a mind, M. I will assume M can be modeled as a society of agents, A, each occupying some neural space N. The formation of new A in space N is done by a learning process L. Some A serve as learning control agents and can act, with discretion, to block M from learning new agents.
For the rest of this essay I'll spell out the full terms but stick to the list of concepts above.
A mind can be in various levels of open-mindedness both globally and locally.
With Artificial Neural Networks (ANNs) the current paradigm is to be open-minded during training and then closed-minded during inference (learning is stopped). In ANNs the open-mindedness state is generally global, whereas in biological neural networks it seems to almost always be in a mixed combination with both global and local levels of open-mindedness. I expect in the future engineers will get better at making ANNs that are able to keep certain areas open-minded.
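As a tiny illustration of local open-mindedness, here is a pure-Python sketch. The mask mechanism is my own framing of the idea, though it mirrors how practitioners freeze layers in real training frameworks:

```python
def sgd_step(weights, grads, lr=0.1, open_mask=None):
    """One gradient step where only 'open-minded' weights keep learning.
    open_mask[i] == True means weight i is still trainable; False means
    that part of the network is closed-minded (frozen)."""
    if open_mask is None:
        open_mask = [True] * len(weights)  # fully open-minded by default
    return [w - lr * g if is_open else w
            for w, g, is_open in zip(weights, grads, open_mask)]

# Freeze the second weight: it ignores its gradient entirely.
stepped = sgd_step([1.0, 2.0], [1.0, 1.0], open_mask=[True, False])
```

Here the first weight updates while the second stays put, a global training loop with a locally closed mind.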
A mind capable of developing new agents can build teams of cooperative agents capable of better exploiting the organism's environment. Also, a mind capable of developing new agents can replace under-performing agents with better ones.
Beyond this, I will take it as an assumption that open-mindedness is beneficial by default1.
Unfortunately time and resources are scarce and open-mindedness has costs.
There are metabolic energy costs to developing new agents. There are also opportunity costs as investing time and energy into developing any specific agent comes at the cost of not developing other possible agents.
If an organism is open-minded and engages in learning it often involves making mistakes visible to other organisms. In competitive environments, other organisms can exploit this information against the open-minded mind.
It is possible that neural agents themselves are entities in a game of survival (like species, individuals, and genes), competing against other neural agents in the same mind, and so open-mindedness poses a threat to existing agents.
It could be that each agent has a metabolic power draw greater than undeveloped neural material, against a relatively fixed power supply, and so adding new agents decreases the power supply to existing agents.
It could be that the supply of neural "materials" is largely fixed and to build themselves new agents must literally take materials from existing agents (such as molecules for axons and dendrites).
These costs would make existing agents opposed to new agents globally by default. It could be that in the beginning when unused neural materials are high, it is advantageous to be globally open-minded. Once agents have claimed a lot of space, the downsides of open-mindedness increase.
Each agent has a functional territory over which it reigns. When a mind encounters problems related to that territory the problem is routed to the appropriate agent. An agent may only get energy if it is used. Without receiving any energy, an agent may die. It seems like it would be evolutionarily advantageous for agents to develop defense mechanisms that can discourage open-mindedness in areas related to its territory.
Hence, an agent might want open-mindedness in areas orthogonal to its own, but push for closed-mindedness in its specialization.
It also seems that alliances among factions of agents may form. If one agent resists a superior newcomer in its territory, the rest of the mind is worse off, so other agents might fight the agent promoting closed-mindedness.
A stubborn strategy could often pay off. Imagine a mind has a 33% chance of surviving a choice. A closed mind will act fast and make a mistake 67% of the time. An open mind might spend significant time and resources and make the incorrect choice only 10% of the time. However, the open-minded survivor might be wrongly smug, as they wouldn't observe that 80% of the time they failed to make a choice at all, and if they could have observed the global multiverse they would have realized their open-minded strategy actually only survived 10% of the time, versus the closed-minded's 33%.
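The arithmetic of this thought experiment can be checked with a quick Monte Carlo sketch (the percentages are the thought experiment's illustrative numbers, not data from anywhere):

```python
import random

def survival_rates(trials=100_000, seed=42):
    """Monte Carlo check of the stubborn-strategy thought experiment.
    Closed mind: always chooses; 33% of choices are survivable.
    Open mind: 10% correct choice, 10% incorrect choice, and 80% of
    the time deliberation never produces a choice at all."""
    rng = random.Random(seed)
    closed = sum(rng.random() < 0.33 for _ in range(trials)) / trials
    open_ = sum(rng.random() < 0.10 for _ in range(trials)) / trials
    return closed, open_

closed, open_ = survival_rates()
# closed comes out near 0.33 and open_ near 0.10: the closed mind
# survives more often, even though the open mind errs less among the
# choices it actually makes.
```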
It is easier for a mind to be open-minded to opposing ideas in a domain where there are few existing neural agents. It is harder to be open-minded in a domain with strongly established neural agents. Honesty requires being open to developing neural agents in opposition to existing agents in order to make a genuine judgement about which is better. Honesty can be hard because it sometimes involves not just superficially steel-manning an opposing idea, but being genuinely open-minded to letting that opposing idea take over a domain from existing agents, if it turns out to be a better idea after all.
As I said in the beginning, these are mostly musings at the moment. I have many more questions than answers, including the questions below.
1 Philosophically by some measures you could argue that having a mind is not clearly advantageous to objects in general. For example, organisms with minds make up a small percentage of the biomass on earth. Or you could say that some rocks last billions of years, whereas minds are gone much faster than that. ⮐
April 5, 2024 — It's a few weeks later, and I find myself wanting to take another look at open-mindedness.
Let's go over the same ideas but starting from the slightly changed perspective of the question above.
A brain could fight to stay closed-minded as a form of agent NIMBYism, where existing neural agents don't want to compete against new agents.
It could be simply an energy conservation strategy, where your brain, by default, doesn't want to burn resources rewiring.
It could be a logical default--where your brain is trying to avoid the mistake of giving up on a way of thinking too early. Imagine the reward for a contrarian idea doesn't come until your 10th year of following it. If you give up on year 9, you pay 90% of the costs and get 0% of the reward.
It could be because your brain is trying to "save face", and doesn't want to suffer social penalties from being wrong. You can postpone feeling shame by keeping your mind closed, and hope that either your committed path will someday, somehow, finally payoff, or maybe something else happens so you don't have to face the social pain of admitting mistakes.
It could be because your brain legitimately has a dislike for the other idea(s), or doesn't trust the source.
It could be because your brain enjoys having a pastime where it can just repeat old patterns and relax. It might want to be closed-minded in an area because it has to be open-minded in other areas and does not want to be challenged at all hours of the day.
It could be out of respect for a social group and/or traditions, and one would rather be respectful to their groups than have the best mental model on every topic.
Or it could be simply a waiting strategy, where a mind is shut only to incrementally better ideas, and no new ideas are significant enough to be worth opening one's mind for.
February 24, 2024 — In the near future AI will be able to generate an extensive list and rating of all of the skills in someone's brain.
I'm a big fan of Minsky's conceptual model of the mind as a society of agents. A collection of neurons wires together in a certain way to form low level and higher level "agents". You have "agents" for everything you've learned: walking, crawling, standing, hugging, riding a bike, driving, slicing vegetables, chopping wood, reciting the capitals of countries, computing derivatives, et cetera. (You have lower level agents as well, but I'll leave those out of this post.) A human brain might contain millions of "agents". Technology is a long way away from being able to scan a brain and map out where every agent is located. However, I wondered if technology was close to being able to at least list all the agents in someone's brain. How close are we to being able to make a map of someone's mind, identifying the "agents" in their mind along with the "strength" of those agents?
Last year I was at a hackathon and had about 10 hours to make something utilizing AI, so that's what I tried to do.
I explained the concept to an OpenAI model through its API and asked it to generate a taxonomy of agents. It gave me a large list of possible agents such as WritingAgent, MusicalAgent, and PetOwnershipAgent. Then I fed it a large body of my writing and asked it to score each agent: was it present in my mind, and how strong was it?
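In sketch form, the pipeline looked something like this. To be clear, this is a reconstruction, not the actual hackathon code: `map_mind`, the prompts, and the JSON reply format are illustrative assumptions, and `llm` stands in for whatever API client you use.

```python
import json

def map_mind(llm, writing_sample):
    """Ask an LLM (any callable: prompt -> str) to propose a taxonomy of
    mental 'agents', then score each agent's strength in a writing
    sample. Returns {agent_name: strength on a 0-10 scale}."""
    # Step 1: generate a taxonomy of candidate agents.
    agents = json.loads(llm("List plausible mental agents (e.g. "
                            "WritingAgent) as a JSON array of strings."))
    # Step 2: score each candidate against the writing sample.
    scores = {}
    for agent in agents:
        reply = llm(f"On a 0-10 scale, how strongly is {agent} present "
                    f"in the author of this text? Reply with just a "
                    f"number.\n{writing_sample}")
        scores[agent] = float(reply.strip())
    return scores

# Demo with a canned fake model; a real run would pass an API wrapper.
def fake_llm(prompt):
    if "JSON array" in prompt:
        return '["WritingAgent", "MusicalAgent", "PetOwnershipAgent"]'
    return "8" if "WritingAgent" in prompt else "2"

profile = map_mind(fake_llm, "years of blog posts...")
```

Swapping `fake_llm` for a real client is the only change a live run would need; the taxonomy and scoring passes are otherwise model-agnostic.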
I got promising results on my very first run, and it is easy to envision how this would scale by expanding the agent list, adding multi-modal data, et cetera.
I was in my twenties when I first learned the simple writing technique of "mind mapping". I think in the future there might be a new use of that phrase. AIs will be able to create a mind map of a person's mind near instantly, when given access to their data. Eventually that could combine with brain scans to correctly identify the 3D spatial positioning of neural agents, but well before that we may have an interesting new kind of brain visualization that is far more extensive than a resume or personality test.
February 21, 2024 — Everyone wants Optimal Answers to their Questions. What is an Optimal Answer? An Optimal Answer is an Answer that uses all relevant Cells in a Knowledge Base. Once you have the relevant Cells there are reductions, transformations, and visualizations to do, but the difficulty in generating Optimal Answers is dominated by the challenge of assembling data into a Knowledge Base and making relevant Cells easily findable.
A Question has infinite possible Answers. Answers can be ranked as a function of the relevant Cells used and the relevant Cells missed. Let's say when a Cell is used by an Answer it is Activated.
So to approach the Optimal Answer to a Question you want to maximize the number of relevant Cells Activated.
You also want your Knowledge Base to deliver Optimal Answers fast and free. You don't want Answers where relevant Cells are missed but you want your Knowledge Base to find and Activate all the relevant Cells in seconds, not days or weeks. (You also don't want Biased Answers where some relevant Cells are ignored to promote an Answer that benefits some third party.) You want to be able to ask your Question and have all the relevant Cells Activated and the Optimal Answer returned immediately.
To quickly identify all the relevant Cells, your Knowledge Base needs them Connected along many different Axes. Cells that would be relevant to a Question but have few Connections are more likely to be missed.
So you want your Knowledge Base to have many Cells with many Connections. This Knowledge Base can then deliver many Optimal Answers. It has Synthesized Knowledge.
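A minimal sketch of this ranking idea follows. The hop-limited traversal is my own stand-in for however a real Knowledge Base follows Connections; the point is only that poorly Connected Cells get missed, which drags an Answer away from Optimal.

```python
from collections import deque

def activate(connections, seed, hops=2):
    """Follow Connections outward from the Cell a Question matches,
    up to a hop limit. Cells with few Connections are easily missed."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        cell, depth = frontier.popleft()
        if depth == hops:
            continue
        for nxt in connections.get(cell, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

def answer_score(relevant, activated):
    """Rank an Answer by the fraction of relevant Cells it Activated."""
    return len(set(relevant) & set(activated)) / len(set(relevant))

# "D" is relevant but has no Connections, so it gets missed.
connections = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
cells = activate(connections, "A")
score = answer_score({"A", "B", "C", "D"}, cells)  # -> 0.75
```

Adding a single Connection from any activated Cell to "D" would lift the score to 1.0, which is the sense in which more Connections per Cell mean more Optimal Answers.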
Wikipedia is a great Knowledge Base with a lot of Cells but a relatively small number of Connections per Cell. Wikipedia has Optimal Answers to many, many Questions. However, there are also a large number of important Questions that Wikipedia has the Cells for but because the Cells lack in Connections the Optimal Answers cannot be provided quickly and cheaply. Structured data is still lacking on Wikipedia.
My attempt to solve the problem of Synthesizing Knowledge was TrueBase, where large amounts of Cells with large numbers of Connections could be put into place under human expert review. But ChatGPT, launched in November 2022, demonstrated that huge neural networks, through training matrices of weights, are an incredibly powerful way to Synthesize Knowledge. My approach was worse. Words are worse than weights.
There are many Questions where the best Answers, even after synthesizing all human knowledge, are still far from Optimal. Identifying the best data to gather next to get closer to Optimal Answers to those Questions is the next problem after synthesizing knowledge.
Today that process still requires agency and embodiment and is done by human scientists and pioneers, but I expect AIs will soon have these capabilities.
February 20, 2024 — A lot of people, including me, are excited about an ambitious new research effort to see if bipolar disorder is best modeled as a mitochondrial disorder. I've started writing about it, and expect to write more about it in the future. But that's not what I'm writing about today.
Today I want to explore a model of bipolar disorder that I've wondered about for a few years, after reading about Marvin Minsky's "Society of Mind" model of the brain. In the model I explore today, mania and depression are not the result of a chemical imbalance, nor the result of a metabolic disorder, but instead are two neural circuits that are learned over time and persist in the brain, whether active or not, like learned skills. This post explores the brain pilot model of bipolar disorder.
The gist of Minsky's theory is that you are not a single "I", but instead a large collection of separable neural circuits working together. Your brain starts as a raw collection of neural resources and groups of neurons wire and fire in different ways to form circuits (aka "agents" or "resources").
Circuits that prove useful become stronger and survive. Some of these circuits are very low level, like a circuit for blinking. Some circuits are higher level and learn to control lower level circuits to achieve their goals. For example, you can think of learning how to ride a bike as developing a "bike riding circuit" that is capable of coordinating your legs, arms, center of gravity, et cetera, to successfully steer and propel the bike.
To learn how to ride a bike, your body experiments with a lot of different circuits. The circuit that does the best job is active for a longer period of time, out-competing other possible bike riding circuits, receiving more resources, strengthening and persisting over time.
The circuits at the highest level, the ones that you might say experience consciousness, I call brain pilots. Brain pilots are neural circuits that compete against each other for root level control of a brain. You might say the brain pilot in control is the one that experiences consciousness. To a brain pilot, the well being of their host is not the primary measure of success. Instead, the primary measure of success is how long that brain pilot is in control.
Children learn to crawl without knowing what they are doing. In learning to crawl, a circuit in the child's brain experiments with various combinations of contractions and relaxations of legs and arms. So it may be with learning to go manic.
At some point a circuit in your brain might start experimenting with various contractions and releases around brain regions like the amygdala, hippocampus, and prefrontal cortex, involved with things like mood, fear, anxiety, and executive function. This network, let's call it M, might at first be competing against 10 other possible brain pilots. The positive feelings associated with the combination that M is hitting upon keep M piloting for longer.
In that person's brain is a new lifetime "skill". Alongside crawling, they now know how to go manic. They now have a manic brain pilot they can switch to.
Why would someone learn how to go manic? Perhaps it is a "necessity is the mother of invention" situation. Depression hits first, and a person's brain starts subconsciously prototyping new circuits to try and recover. Maybe MDDs and bipolars are the same, except the brains of MDDs never figured out the subconscious manic skill.
A recent paper that looks at bipolar disorder through the lens of chaos theory suggests that counter-intuitively it is not that there is more chaos in a bipolar brain, it is that there is less of it: "a more chaotic pattern is present in healthy systems". In the brain pilots model, the problem with someone with bipolar is not that they experience brain pilot switching--that is normal--it is that they have a manic agent which is a brain pilot very skilled at staying in power. The problem is less brain pilot switching, not more.
The manic pilot "learns" that certain behaviors, while detrimental to the host, keep its time in control going.
Sleep is perhaps the ultimate brain pilot switcher. The pilot that goes to sleep in control does not know if it will be the pilot that wakes up in control. In the brain pilot model of bipolar, the manic pilot likes to avoid sleep because the less the host sleeps the less pilot switching that goes on, meaning the manic pilot's expected reign is longer.
The manic pilot could use spending as a way to bribe other brain circuits to keep it in power. Under the manic pilot, all brain circuits get what they want, and so those circuits in turn support the continuation of the manic pilot's reign.
The manic pilot triggers paranoia, and wariness to medication, for good reason. Friends and family that are worried about the person experiencing mania are indeed trying to get the manic pilot to give up control. While taking a medication won't kill the host, to the manic pilot, it is a matter of life and death, and so that pilot will deploy the resources at its disposal accordingly.
At some point, by remaining in control for so long via its selfish actions, the manic pilot will have scorched the host's resources, and will retreat into hiding. But, like riding a bike, that neural circuit will remain in the brain, ready to pilot again if it gets its chance.
Despite the harm to the host, in terms of ranking brain pilots by their time in control, mania is a very good strategy.
Depression, like mania, is also a strong strategy for a brain pilot, when you rank them by time in control.
The depressed pilot discourages all effort. Any effort might lead to a positive chain of events that leads to a different brain pilot taking control.
The depressed brain pilot learns to stop its host from doing almost anything at all. The less the host does, the less the chance of a pilot switch.
Being in social settings often requires a lot of pilot switching. The depressed brain pilot steers its host away from those.
Perhaps the rumination the depressed pilot engages in is another way of keeping control and preserving its reign.
The negative self-talk, hating on all the other brain pilots in a person, could also be a way of keeping other pilots from taking control.
I don't think the model explored above is a leading contender for finally explaining bipolar disorder, but I do think it is worth consideration.
February 14, 2024 — The color of the cup on my desk is black.
For any fact there exist infinite fictions. I could have said the color is red, orange, yellow, green, blue, indigo, or violet.
What incentive is there in publishing a fiction like "the color of the cup is red"? There is no natural incentive.
But what if our government subsidized fiction? To subsidize something is to give it an artificial economic incentive.
If fiction were subsidized, because there can be so much more of it than fact, we would see far more fictions published than facts.
You would not only see things like "the color of the cup is red", you would see variations on variations like "the color of the cup is light red", "the color of the cup is dark red", and so on.
You would be inundated with fictions. You would constantly have to dig through fictions to see facts.
The color of the cup would stay steady, as truths do, but new shades would be reported hourly.
The information circulatory system, which naturally would circulate useful facts, would be hijacked to circulate mostly fiction.
As far as I can tell, this is exactly what copyright does. The further from fact a work goes, the more its artificial subsidy. The ratio of fiction to fact in our world might be unhealthy.
I've given up trying to change things. I have a different battle to fight. But here I shout into the void one more time, why do we think subsidizing fiction is a good idea?
February 11, 2024 — What does it mean to say a person believes X, where X is a series of words?
This means that the person's brain has a neural weight wiring that not only can generate the phrase X and synonyms of X, but the wiring is strong enough to guide their actions. Those actions might include not only motor actions, but internal trainings of other neural wirings.
However, just because a person is said to believe X, does not mean their actions will always adhere to the policy of X. That is because of brain pilot switching. The probability that any neural wiring will always be active and in control is always less than 1.
The strength of a belief is a function of how often that neural wiring is active and guiding behavior, and also of the number of other possible brain pilots in the population.
It seems brains can get into states where the threshold for a belief to become a brain pilot decreases, and lots of beliefs get a chance at piloting a brain during a period of rapid brain pilot switching.
For a belief to exist means it had to outcompete other potential beliefs for survival in a resource constrained environment and provide a positive benefit to its host. If a host had the ability to simply erase beliefs instantly, it seems like too many beneficial beliefs would disappear prematurely. So beliefs are hard to erase from a person's neural wirings. However, people could add a new neural wiring NotX, that represents the belief that X is not true. They can then reinforce this new neural wiring, and eventually change the probability so that they are far more likely to have the NotX wiring active versus the X wiring.
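A minimal sketch of the X-versus-NotX idea, assuming (purely for illustration) that the active wiring is chosen softmax-style by strength: X is never erased, but reinforcing NotX shifts the odds of which wiring is active.

```python
import math

def activation_probs(weights):
    """Softmax over wiring strengths: the chance each belief wiring
    is the one active and guiding behavior at a given moment.
    (A hypothetical stand-in for whatever the brain actually does.)"""
    exps = [math.exp(w) for w in weights]
    total = sum(exps)
    return [e / total for e in exps]

# X starts strong; NotX starts weak but gets reinforced over time.
x_weight = 2.0
notx_weight = 0.0
for _ in range(5):  # each pass is one round of reinforcing NotX
    notx_weight += 1.0
    p_x, p_notx = activation_probs([x_weight, notx_weight])

# X's wiring still exists at full strength, but NotX is now far
# more likely to be the active, behavior-guiding belief.
```

The weights and the softmax rule are assumptions for the sketch; the takeaway is just that you can flip behavior by outgrowing a belief rather than deleting it.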
February 9, 2024 — It is estimated 2% of the population is bipolar. Sunday I explored: what if that was 98%? And today I explore, why isn't it 0%?
Why does a condition that is 60-80% heritable, deemed a severe, chronic disorder, persist in society? Is this a case purely of selfish genes manipulating their host to reproduce? Is it a case of society changing in a way that previously useful traits are now harmful? Is it the case that society preserves bipolar genes because it can actually be a positive condition, hyperthymia, and there is a conspiracy to restrict that information for competitive reasons? Is it simply inevitable that any variable attribute will have outliers, and we are sure to have 2% mood outliers as we are to have 2% height outliers? Or is it the case that bipolars play a unique positive role in society and societies that have a small percentage of them do better than societies that don't?
First a disclaimer. I am not an evolutionary biologist and I know many have published empirical work on this topic already. If you want the latest and greatest, head to Google Scholar. If for some reason you'd prefer my thoughts on the matter, which are influenced from my first-person experience, then continue on.
Dawkins' book The Selfish Gene taught me that the main characters in the Darwinian game are not individuals but genes. Individuals can live a long life but if they fail to reproduce and pass on their genes they lose the game. So strategies that maximize the passing on of genes, where the survival of the individual is of secondary importance, are superior.
This seems to fit the data on bipolar really well. It is commonly thought the extremes of bipolar begin after puberty. Hypersexuality is a very clear characteristic of manias. Genes that cause people to go into a mode where they are simultaneously hypersexual and also highly energized, social, with charisma and grandiose ambition very clearly can lead to increased odds of reproduction.
Someone with bipolar genes is optimized to have a volatile, shorter life, but with higher odds of reproducing. Although their exceptional energy levels are not sustainable, they can sustain them long enough to give the appearance that they are exceptional individuals worth mating with. This model logically explains a lot of the behavior of bipolars. They lack insight into their manic states because if they knew that their high energy was temporary and not their normal, the cognitive dissonance would negatively affect their interactions with potential mates. They are prone to overspending because their genes more easily get them into a paranoid state where they think death is imminent, and living life to the fullest and reproduction is urgent. Bipolars have much shorter life expectancy because that's what their genes design them for. Bipolars are designed to live fast, breed, and die young. Their decisions are actually more logical than they first appear if you assume they are expecting an early death.
If this is the best fitting model bipolars might rightly be seen as detrimental to a society that values productive, long lives. Society might be better off moving toward 0% bipolars. That might be difficult, however, as bipolar genes can adapt in ways to avoid detection, such as by using depression to hide their host when the energy inevitably fades, and by the already mentioned genuine lack of insight during manic episodes.
This is a sad model that views bipolar disorder exclusively as an exploitative genetic strategy, and I hope it is not true.
The modern definition of bipolar disorder didn't appear until the late 1800's. Perhaps today's bipolar genes were not a problem until the 1800's. In other words, perhaps 0% of the current bipolar population would have been bipolar in the old days. Maybe new technologies changed society and those with bipolar genes simply have genes maladaptive to the new environment.
Let's list some possible examples.
Disordered circadian rhythms are a prime indicator of bipolar disorder. Perhaps it is all of the artificial lighting that affects some people more than others. Or maybe new abilities to quickly change your location on earth affect some worse than others.
Metabolic factors are prime indicators of bipolar disorder. Perhaps the rise in processed foods, sugar, and other new substances have affected some more than others.
In this model it simply turns out that some genes that used to be helpful or harmless have become detrimental in this new world of the past 200 or so years. Perhaps the reason for lower rates of bipolar among the Amish is that their society lacks technologies that exacerbate bipolar traits.
Things could change again. It could even be that future technological changes will be such that today's "bipolar genes" will not be tomorrow's bipolar genes. In other words, which genes lead to an extreme energy cycle might be different in future societies.
There's no question that in hypomanic states bipolars have a number of positive qualities, like increased energy, IQ, charisma, and decreased need for sleep. What if the ability to get into these high energy states is actually a gift, these states are sustainable long term without downsides, and this truth is just suppressed by successful bipolars to limit their competition in society?
In this model, the 2% bipolar population could actually all be hyperthymic, but instead a slim fraction (say 1% of the 2%, or .02% of the total population) of this cohort obtained power at some point and has used it to suppress information about the true positive nature of this condition to limit their competition for power in society.
I would love to believe this conspiracy theory, but sadly I haven't seen evidence for it. I do think it is worth listing though, as I really am curious whether hyperthymic people exist. I hope one day we'll have huge explorable population level datasets of biomarkers like sleep and can conclusively answer whether hyperthymic people are out there.
Perhaps bipolar disorder is simply a name for the 2% outliers in brain energy cycles. In other words, even if everyone currently diagnosed as bipolar were to disappear, suddenly the people who were next in the percentile rankings would be deemed to have a disorder.
If you defined the tallest 2% of your population as suffering from "Height Disorder", and then deported them all, by definition you would still have the same number of people with Height Disorder, they would just be slightly closer to the mean. In other words, the claim that 2% of society is bipolar might just be a societal construct.
In this model, a society can't get to 0% bipolar disorder unless energy level variations were perfectly uniform across the population.
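The Height Disorder thought experiment is easy to check numerically. This is a toy sketch with a made-up normally distributed population, not real height data:

```python
import random

def diagnose_height_disorder(heights):
    """Label everyone above the 98th percentile as 'disordered'."""
    cutoff = sorted(heights)[int(len(heights) * 0.98) - 1]
    return [h for h in heights if h > cutoff]

rng = random.Random(42)
population = [rng.gauss(170, 10) for _ in range(10_000)]

first_wave = diagnose_height_disorder(population)  # ~200 people
deported = set(first_wave)
remaining = [h for h in population if h not in deported]

# Re-apply the same definition to whoever is left: still ~2% are
# "disordered", just slightly closer to the mean than before.
second_wave = diagnose_height_disorder(remaining)
```

Any purely percentile-based definition behaves this way: the diagnosis tracks rank, not any absolute property, so removing the outliers only promotes the next rank into the disorder.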
Above I listed models where having bipolars was detrimental or neutral to society. What about models where bipolars add value to society? In other words, are there models where societies with ~2% bipolars are better off than societies with 0% bipolars?
Some claim bipolars are outliers not only on the brain energy spectrum, but also on the creativity spectrum. It could be possible that the creative contributions of bipolars to society outweigh the negative effects of their volatile energy levels.
Think about creativity like mining diamonds. Finding a diamond is hard, but once you've found it you've added to society's supply of circulating diamonds for all time. Similar for creative works. Coming up with a novel, useful combination is hard, but once it is mined passing it around for all time is relatively easy.
There's a non-linearity to finding diamonds and creating novel, useful creative works. Getting 99% of the way toward finding a diamond has the same payoff as getting 1% there: zero. Therefore, people who over-commit succeed at a higher rate, but also lose more when they fail.
Bipolars are likely to over-commit to ideas while in high energy states. This leads to a lot of hard failures, but also leads to more successes than a group of average committers would have. However, if this is the reason for the excess creativity of bipolars, it might diminish in the future as the amount of "low hanging fruit"--diamonds that could be found within the duration of a manic episode--could perhaps decrease over time.
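The over-commitment trade-off can be illustrated with a toy all-or-nothing simulation; the uniform difficulty distribution and the specific commitment levels here are invented for illustration:

```python
import random

def simulate(commitment, ventures=100_000, seed=1):
    """Toy all-or-nothing payoff: each venture needs a hidden, random
    amount of effort to 'find the diamond'. You succeed only if your
    commitment covers it; partial progress pays nothing.
    Returns (successes, total effort spent)."""
    rng = random.Random(seed)
    successes = 0
    spent = 0.0
    for _ in range(ventures):
        needed = rng.uniform(0.5, 3.0)  # hidden difficulty
        if commitment >= needed:
            successes += 1
            spent += needed             # stop once the diamond is found
        else:
            spent += commitment         # all effort lost on failure
    return successes, spent

avg = simulate(1.0)   # average committer
over = simulate(2.0)  # over-committer: doubles the effort per venture

# The over-committer succeeds far more often, but also burns far
# more effort on every venture that still fails.
```

With these numbers the over-committer succeeds about three times as often while spending roughly double the effort per failure, which is the shape of the claim above: more successes, bigger losses.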
Bipolars naturally look at the world through 3 different perspectives: in a low, medium, and high mood state. This may provide more novel insights that can be mined into successful creative works.
It could also be that bipolars are more willing to take risks because their energy cycles prevent them from enjoying the comforts of normal societal rhythms anyway, so they have less to lose.
In the past it could have been that bipolars made great warriors, and so a society that had a percentage of them was better off than a society with 0% bipolars. A bipolar in a manic state has high energy, needs little sleep, is goal directed, has faster response times, is less risk averse, assumes death is coming soon anyway, and can relatively easily get into a state of rage. It seems like a perfect combination for battle. In the old days they could fight at their best during hypomanic season and rest during depressed season.
Nowadays wars are more professional affairs, won not so much by emotion but by long term engineering, economics, and training. It might be that maintaining a percentage of bipolars was previously helpful for a society, but now there is not so much use.
Bipolars have a unique feature set. They are at least average, if not well above average, in being sensitive, observant, and critical of themselves and society. They have huge self-confidence during manic episodes. And they often have less to lose.
This might be a good source of people who might stand up to society if it goes astray.
Most people go with the crowd and don't spend too much time exploring the morality of issues. Humans are very vulnerable to herd behavior. They often get in the mode where they question no actions by their own team. Maybe because bipolars, at times, are not afraid to go it alone, they can sometimes successfully stop bad behavior by a herd.
Perhaps societies with a percentage of bipolars are better off than if they had 0% because they end up being more equitable places with a larger population of empowered agents. Bipolars might not be moral role models, but maybe they are like canaries in the coal mine and can call society out on specific dangers.
It seems unlikely that they can effectively lead teams better than average. But perhaps they are better than average at triggering cascading changes in behavior. Maybe they can be effective early revolutionaries.
The survival of rules requires people that break them. A rule that is often broken is often abandoned. But also, a rule that is never broken is abandoned as well, as it is not necessary. Perhaps having a certain population that is prone to at times break the rules is beneficial to a society, because it provides a primary benefit of having rules, which then might confer positive secondary economic and coordination benefits. Perhaps the short term volatility of these characters leads to a more robust long term government and confers a survival advantage to those societies with a percentage of bipolars.
No one is as happy as someone in a pleasant state of hypomania. It is a peak feeling. If positive life events coincide with a hypomanic state, there is nothing better. So far, it is not known how to make that state last, and it appears to always tip over into too much happiness, followed by depression.
But having people who experience too much happiness might be a risk worth taking by a population. A population with 0% bipolars would be a population not exploring the happiness state space.
To develop the genetic capability of sustainable happiness, you have to risk some experiments with unstable happiness.
Permanent, healthy hypomania might not ever be possible for some existential or mathematical reasons, but oh my god, is it something for a society to explore.
Before it goes wrong, healthy hypomania is an internal, utopian state. A society without bipolars would not be aware that such a state could exist. Nature is mostly dark, cold, and unforgiving. The world can be very depressing. But the happy times, though fleeting, can keep us going. Maybe a society without bipolars also gives up its genes for happiness and is thus worse off.
Having bipolars might be the price society pays for also having happiness.
If this is the case, then maybe in studying bipolars, particularly during elevated states, society might unlock new methods for healthier levels of happiness for everyone.
Bipolars have a natural low frequency, high amplitude energy cycle more extreme than most people. The biological mechanisms of that energy cycle have not yet been solved by scientists, even after over 100 years of research. We also don't yet have very good ways to control that energy cycle, though it should be noted that lithium has proven it is not unalterable. It certainly seems difficult, if not impossible, to remove the energy cycle without causing secondary negative effects. It is currently estimated that around 2% of the population has this irregular energy cycle. Why does society have such a percentage of people with this energy cycle?
It is possible, though extremely unlikely, that these energy cycles don't have to be cycles at all, that the elevated hyperthymic state is indefinitely sustainable, and information on how to do that has been suppressed.
It is possible that these abnormal energy cycles confer no benefit to society, benefiting only their own selfish genes. If that were the case, eliminating these energy cycles and eventually filtering out their genetic propagation seems reasonable.
It is possible that these cycles were once helpful to society, but no longer, and so the same conclusion above applies.
It is possible that having these outliers provides a check on societal herds, and bipolars have a beneficial, if sadly martyr-esque, role to play. A society without these outliers standing up for independent lines of thought perhaps might go in a wrong direction and end up somewhere bad.
Finally, it could be that in the study of the positive parts of the bipolar energy cycle, new things can be learned about happiness, leading to a better society for all.
Personally, as a bipolar, with the bipolar energy cycle in my body, I am unsure of what the best role for me is in society. I want to do right by everyone. I hope that by listing some of these theories it is clearer that the answer is not as straightforward as some people make it out to be.
If you have the energy cycle, should you embrace it and aim to be a creative, a warrior, a change agent? Or should you try to tame it? What if aiming for a long life expectancy, given your energy cycle, is not in your best interests or society's best interests? What if that is not what you were made for?
I am not sure.
The only thing I am sure of is that it would be great if we figure out the biology of what this energy cycle is.
A productive exercise would be to get a large dataset on objective rates of bipolar disorder in different regions. Then these, and other models, could be applied, and we could learn what fits best. A major obstacle however is that no good large objective dataset on bipolar disorder is readily available for even one country yet, never mind multiple. That should change soon, as I will explore in future posts.
February 4, 2024 — In our universe, an estimated 2% of humans have bipolar disorder. Imagine a universe where that ratio is flipped.
In this alternate universe, nearly everyone experiences order-of-magnitude cyclical energy shifts. Society evolved very differently.
There are no long term companies--all work is project based.
There is no concept of weekdays, weekends, holidays or vacations. People work during their energetic phases and save leisure for their low energy phases. A "normal" work schedule is something like 16 hour days for 200 days straight, followed by 200 days off.
Workers view their current project as the most important endeavor in the world, and focus intensely to not only get it done but smash records along the way.
Everyone has different periods. Some people have high energy periods that last 3 months, others that last for 12 months. When a person's energy is up, they are said to be "in-season". Someone who feels their energy level declining is said to be "going-out" of season. When their energy level returns they are said to be "coming-back" in season.
All projects have a human resources head who is constantly managing the departure of going-outs and intake of coming-backs. The HR person also goes in and out of season themselves, of course, and is constantly replaced like any other role.
People only work when they are in season. The idea of working when you are out of season would be laughed at.
There is no concept of careers or retirement. It is intense work seasons, on and off, for life.
Off seasons are often spent drinking and doing drugs. It is also common to see pairs of people walking around slowly, spending seemingly endless hours discussing existential philosophical questions.
It is common to alternate between two or more specialties. Someone might do one work season as a dentist, the next as a chef, a third as something else, and repeat that cycle indefinitely.
Like work, schools lack continuity. Unlike in our world, there are no regular school years. Instead you have staggered groups of students coming-back in season, paired with a teacher coming-back, starting new cohorts throughout the year. Like work, school seasons are very intense and students spend over a hundred hours a week mastering their current subjects.
Bureaucracy is minimal. People in low energy phases have little energy for it, and people in high energy phases have little patience for it. Information and databases are widely used; they are all just public, with no red tape.
There is little zoning and licensing. Everything is more fluid. Offices often double as inns, retail stores, schools, and clinics.
Risk taking is celebrated. Life expectancy is shorter, but people experience more by an earlier age.
Age is talked about not by how many orbits you have made around the sun, but by how many energy cycles you have gone through. For some people this is the same number; some are on their 50th energy cycle by age 30; others are only on energy cycle 20 at age 30. You don't talk about your "10th grade experience", but instead talk about your "10th energy cycle".
This world operates differently, but smoothly. However, in a world that values the highs and lows of big energy cycles, around 2% of the population struggles to fit in. They are diagnosed with "Stable Mood Disorder". All jobs are designed for people who are in a high energy state. People with Stable Mood Disorder struggle to match the intensity of their coworkers. To them, work projects don't seem so urgent, and they wish they could have less extreme lifestyle options.
Their inability to match the intensity of normal people at work, and their lack of desire to participate in long, existential conversations when off work, causes them to experience social isolation and difficulties in personal relationships.
Doctors try a large number of different amphetamines to try and get their Stable Mood Disorder patients to be employable, but have not yet found long term success.
February 3, 2024 — Approximately every eighteen months, I start transitioning from a low energy person to a high energy person. No substances or triggering events are at fault. It is a natural cycle, as inevitable as the tides.
Life is much easier as a high energy person. I leap out of bed in the morning and can easily handle the morning chores. Work is exciting, I excel at my job, and raise the level of energy among my coworkers. Exercise, healthy eating, and parenting are all easier.
Life is much more fun as a high energy person. I have time and energy for family and friends, and make loads of new ones. Moments feel more joyous.
Each transition is different. Sometimes the shift is gradual and mild. Sometimes it is more sudden and severe, like a power surge. The latter are hard to handle. It is like I wake up to find myself on top of a bucking bull, and struggle to get this powerful force under control.
At first I think the energy is probably fleeting, like the temporary buzz from an energy drink or a long run. I go to bed expecting that tomorrow I'll revert. But that doesn't happen. Instead each day I seem to have more energy than the last. Weeks go by. Months go by. The energy keeps flowing. It was not just something I ate. My brain starts to accept the new reality. I am a high energy person now! My plans for life change. With all of this energy I will be able to do a whole lot more than I expected. The world is my oyster.
At times the energy spikes are too high. I get annoyed that my coworkers can't keep up. I sometimes snap at people. I feel like my energy is too great for working on what I start to perceive as small problems. I start to worry that a higher power in the universe gifted me with this high energy, and the morally correct thing to do is to direct my new energy gift to more important causes. My paranoia circuits have extra energy too, and start overestimating threats.
Eventually the power fades, and like a plane with a sputtering engine, my life crashes to the ground.
I have been told that my energy fluctuations are a disorder. I don't dispute that I have not handled my energy well.
But I also know the underlying cause of the energy fluctuations are not understood. I can feel a radical difference in energy levels, but we don't have a way to measure those directly yet. My sleep fluctuates with the energy, but what is causing the energy shifts? Is it a change in ATP, mitochondrial biogenesis, mitochondrial swelling, dopamine, serotonin, adrenaline, or dozens of more potential factors? The relevant biomarkers are still a mystery.
In the future, if the underlying causes can be understood, perhaps the ability to be high energy will be seen as a gift, not a disease.
Meanwhile, I've occasionally come across high energy people whose energy levels are consistent. They have high energy, handle it, and never lose it. How do they do it? Is there a secret to being a consistently high energy person? Is there some secret club of high energy people, and if they like you they will approach you, and whisper in your ear the secret to harnessing that energy efficiently and keeping it flowing?
Some tell me to take lithium. It supposedly flattens energy spikes. And from what I can tell, it does seem to reduce the frequency of future energy surges, from roughly 60% to 40%.
So maybe I could become a Medium Energy Person. That's not bad. Being a temporarily High Energy Person comes with the great downside of also being a Low Energy Person, which is a terrible state that you don't want to endure.
If you stop lithium though, or miss a few doses, you can experience an energy surge greater than any you've experienced before (ask me how I know). Also, long term, it will probably cause severe kidney damage. And you are near guaranteed to have some inconvenient side effects, such as weight gain.
Lithium is probably the best choice available, but there is no clearly good option.
If I don't think the fluctuations are controllable, and accept that my high energy periods will always be fleeting, one life strategy is to strive to solve a hard, high paying problem for society during a high energy period. If I succeed, I can save enough to make it through the inevitable low energy periods. This is kind of how I made it in life, not so intentionally, while I continued the quest of trying to unlock how to be permanently high energy. It sort of worked, until it didn't.
Once you've experienced being a High Energy Person, it is hard to give that up. There is no accurate understanding yet of this "illness". Might some people out there have the secret to living a consistently high energy life? Could the information be out there?
Some charlatans have long hawked mindfulness books and programs promising the secret to permanent high energy, but they fail upon close inspection of their models or when tested in trials. If a solution is out there, it is kept a secret or has remained below the radar.
I must admit, I do not think it is out there yet.
I believe science is the way to figure this out, but it sure is taking a while.
January 30, 2024 — I have kept this blog going for 14 years, through good times and bad. One thing I've noticed, particularly recently, is that, after getting a post out, my mind feels calmer the rest of the day. I also feel like each post helps me develop some actionable insight, however small, that I can use going forward.
However, I also think solo cognitive grappling might be harmful long term. I undeniably get a short term boost, but perhaps long term blogging is cognitively harmful.
Blogging is like training your own internal LLM. Having an LLM in your brain could be a blessing and a curse. It's a blessing because you can indeed sometimes successfully generate words to describe the problem you feel, and then ponder over those words until you've found a solution. But it's a curse because once you've grown your model you can't shut it off, and it comes up with an endless stream of hypotheses that you feel compelled to ponder.
So blogging might be, in a sense, a trap. You think it might help your mental health, but devoting more of your brain to resolving questions also makes your brain generate more questions. You might end up opening more issues than you resolve.
Or maybe blogging is like playing the lottery. I've written a number of times about the importance of "black swans" in life: low probability, high impact events. I've generally only thought about external black swans. Small things in the real world that have big impact on society. But maybe I've neglected the impact of internal black swans. Small thoughts that can have disproportionate impact on one's internal society of mind. Perhaps enlightenment is really a thing. A thought, that if hit upon just right, has a huge impact. If that were the case, then maybe the expected value of blogging (or therapy) is higher than I thought, because it is like black swan thought hunting.
Regardless of whether blogging is a productive use of time or a narcissistic time suck, I think at this point my brain has been set to continue on. The die has been cast. I am nearly 40 and a lot of my brain is devoted to my LLM.
Now that I've thought about it, I think it is probably just a hobby. I don't do crosswords, or fish, or knit. I blog. It's a fun, open ended hobby. I never really know what puzzle I'll try next, or where I'll end up.
Anyway. I'm going to publish this one now and try and get a double post out today. Luckily blogging is like free therapy.
January 29, 2024 — This is a post about delusions. In society and in myself.
A delusion, D, is a theory, in the mind of a thinking agent, that meets 4 criteria: the theory has a low actual probability of being true; the agent perceives its probability to be far higher; the agent updates that perceived probability very slowly as evidence arrives; and the agent takes actions based on it.
In February 2020 I was living in Hawai'i with my family and witnessed society there (and around the world) go delusional about Covid-19.
I had been following the early stories about Covid with some concern, but after the first big data dump was published on February 17th, 2020, it seemed clear to me that Covid would not be a huge danger. Most importantly, that first data dump strongly indicated Covid was not a significant danger to children. Every additional large data dump from then on only reinforced the milder view of Covid. For nearly everyone, Covid was not deadly (awful flu-like experience aside), except those who were also at risk of dying from the flu. Thus, it shocked me as society locked down increasingly harshly over the next two years. The precautionary principle is a fine argument that would have justified a strong response measured in weeks, but the duration of the response put this in the delusional category.
Let's see how some of the things that unfolded meet the criteria of delusion as defined above.
Years after this dataset came out, at least in Hawai'i, my 3 year old daughter was still being required to wear a mask outdoors due to the perceived threat of Covid to kids. This was a societal delusion. The theory was that Covid was a high, avoidable danger to kids (given the data, this was <1% true). Society (or at least, "leadership"), perceived this probability to be ~50x-100x higher than it actually was. Society took years to update their perceived probability. And society took actions (requiring masks, even outdoors; closing beaches, parks, and playgrounds; "vaccination" requirements; severe limits on social gatherings) based on this delusion.
Another delusion was that recovering from Covid somehow did not provide protection as well as the "vaccine" would. Given all we know about the immune system, this was a very low probability hypothesis, treated like a high probability theory, updated extremely slowly, and acted upon.
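The four criteria can be restated as a simple predicate. A toy sketch in Python (the thresholds and numbers are entirely my own, chosen only for illustration):

```python
def is_delusion(actual_p, perceived_p, update_rate, acted_on):
    """A theory counts as a delusion (per the definition above) when all four hold."""
    return (
        actual_p < 0.05          # 1. the theory is very unlikely to be true
        and perceived_p > 0.5    # 2. the agent perceives it as likely
        and update_rate < 0.01   # 3. the perceived probability updates very slowly
        and acted_on             # 4. the agent acts on the theory
    )

# The outdoor-masks-for-kids example: ~1% true, perceived ~50-100x higher,
# held for years, and acted upon.
print(is_delusion(actual_p=0.01, perceived_p=0.7, update_rate=0.001, acted_on=True))  # True
```

A theory that fails any one test--say, one the agent quickly revises, or never acts on--would not qualify.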
There were a lot of other related delusions going on at that time. Eventually however, society did update their perceived probabilities and the Covid delusions faded.
Sidenote: I chose two delusions that were both held by a number of people. However, I'll admit there were other groups with other delusions, like one group with the delusion the vaccines caused mass death.
Anyway, society went delusional. Society acted like it was living in the movie Contagion, even though the math did not back that up.
I really struggled with society's reaction to the pandemic. Because my day job was focused on pandemic data, I was exceptionally aware of how delusional society was acting about both the actual lethality of Covid and the ability to contain it (running sims showed everyone was going to get it eventually).
I developed two delusions of my own.
My first delusion was that the copyright system was a root cause of society's delusions. I had long ago formed circuits in my brain that were against copyright law. These circuits believed copyright led to a less healthy information circulatory network. Now I was seeing the information network that our copyright system molded spouting delusions. Meanwhile, people were shouting "trust the science" while Sci-Hub, the preeminent site for sharing scientific research, was frozen by legal attacks from the copyright lobby. I really wish I could say that I was not delusional about this idea, but now I think that I likely greatly overestimated the impact of copyright in our world (positive or negative) and that whether we have copyright or not is probably not very impactful. So, I had a low probability theory (copyright significantly harms society) that I viewed with high probability for a long time. I also acted upon this theory, resigning from my job and forming a new corporation. So this meets all my criteria for a delusion.
My second delusion was that I had exceptional capability to solve this problem for society. I believed that I had world class talent and resources to spread a different kind of virus: a movement to abolish copyright law and replace our information circulatory network with something healthier. The theory that I was super talented (low probability), which I acted upon with great confidence, was a delusion.
Programmers learn early that when there's a bug in their program, it is extremely rare for the bug to be upstream in the compiler or hardware, and much more likely for it to be in their own thinking. The infrastructure we build on is very impressive. I mean, just look at modern airplanes!
That's why I really struggled in this case. I worked with the Covid data day in and day out and could not find a mathematical justification for society's actions on Covid. This time the bug was not actually in my thinking but was upstream in society.
At times this enraged me and I wasn't sure how to deal with it. It felt like an exceptional time to me, and I felt maybe I needed to stand up and do something exceptional.
Years later I see how society's delusions eventually subsided. I now realize that society will always have pockets of delusions--and that's okay! One should not get so worked up about it. Society will eventually update its probabilities, even if stubbornly slow.
For me, being in a hypomanic or manic state leads to a huge increase in delusions. I start acting on low probability theories as if they had high probability and I am slow to update my probabilities. Compounding the problem is the number of delusions that I act on--perhaps some of my delusions would actually have higher probabilities of coming true if I did not pursue so many delusions at once! It seems during my elevated states far more brain circuits than usual are energized, so naturally I'll have more theories rising to consciousness. Then it also seems inhibitory processes do not compensate, so lots of theories get acted upon.
But what causes these excitatory states? Could it be fear?
It does seem that in society, fear enabled the delusional behavior. Maybe in a state of fear, confident agents are king. Societal fear during Covid was fed by the media.
Perhaps, in individuals like myself, a fear state can arise at any time simply from an uncontrollable unconscious thought of the inevitability of death, and then a high energy state kicks off, and confident neural agents are able to take control.
It might be hard to predict black swan events, but maybe delusional states would be easier to predict. I know on average my brain gets into a delusion-happy state every eighteen months. Perhaps regions of society also have a regular rhythm of delusion susceptibility.
Society eventually did update its probabilities on Covid, but for years I found it incredibly frustrating trying to get society to change course. The agents I could get to admit the truth weren't the agents in power. If you think about the multi-agent theory of the mind, this makes sense. A delusion is an energized, specialized collection of neurons that is able to pilot the ship for a while. If the delusion were bendable, it would always simply bend to society and never be in a position of bending society. So the trick to countering delusions is to be content with communicating truth to out-of-power agents, realizing that the currently powerful delusional neurons will eventually lose control (the truth of physics catches up to you).
People have told me when I'm in a hypomanic state I can't be reasoned with or talked to. I think this is a harmful attitude to have. Even when I am focused on one particularly delusional idea, I still experience many brain pilot switches in a day. I may be very angry and confrontational one moment, but even in those days I have a large number of moments when other agents are piloting and listening to feedback.
It is not easy to dissent from delusional agents that are in power. I learned this during Covid, and people around me learned it during my later mania. Name calling does not work. Calling an agent "delusional" does not work. The agents in charge are very simple neural networks evolved specifically for their particular delusion without the ability to self-introspect. These networks live for their delusion. Attacking them head on is thus a life or death threat to them, and can backfire. It is important instead to listen, to find common ground, and to appeal and strengthen the other orthogonal agents around.
Maybe the trick is to have some sympathy for the delusional agent. Probabilities for rare events are hard to get right. I got so mad at society for being off on Covid by 100x. And then I went and was off myself on a few things by a 1,000x. Individuals and societies--both multi-agent thinking systems--will have delusional times. Perhaps the trick isn't to attack specific delusions as they come but to better understand what delusions are and how to create an environment that benefits from them, and is not harmed when they arise.
January 26, 2024 — I went to a plastination exhibit for the first time last week. I got so much out of the visit and highly recommend checking one out if you haven't. I salute Gunther von Hagens, who pioneered the technique. You probably learn more anatomy walking around a plastination exhibit for 2 hours than you would learn in 200 hours reading anatomy books. There are a number of new insights I got from my visit that I will probably write about in future posts, but one insight I hadn't thought about in years is how much humans and animals look alike once you open us up! And then of course I was confronted again with that lifelong and uncomfortable question that I usually like to avoid: humans and animals are awfully similar, so is it morally wrong to eat animals? I thought now was as good a time as any to blog about it, thus forcing myself to think about it.
Part of me strives to be a moral person. Other parts of me think that is bull. Maybe the main case against my "morality" is that I am not a vegetarian. I probably eat as much chicken, beef, pork, lamb, etc, as the average American, which means I am responsible for eating dozens of animals per year. I know millions of animals are slaughtered out of sight every day in this country and I do nothing about it. Imagine if restaurants were required to have onsite butchers--picture the endless stream of live chickens and cows you would see heading into your nearest McDonald's! How can one claim to be "moral" and not stand against the slaughtering of billions of animals a year?!
If I were to force myself to put in words my policy then, I might say that on my map of the tree of life there is a cruelty line dividing the branches of life into ones I think should be treated with fairness and ones where cruelty is permissible. Now I'm not saying I encourage cruelty to animals—on the contrary I do hope that cruelty can be minimized and I do try and direct my purchasing votes to those businesses who make that a priority—but I would be lying if I claimed I thought there was not an inherent cruelty in the meat supply chain. So my root principle is perhaps to act according to "survival of the fittest"—allowing the killing of loads of animals—and then, for a fraction of the tree of life, argue for policies of fairness.
I did go vegetarian one year. I had slightly less energy that year but it wasn't so bad. I might even have stuck with it, but I accidentally ate some bacon at Christmas (I thought it was a fake vegan bacon) and it tasted too good to go back. Now I am trying a keto diet, which would be quite hard to do (but not impossible) as a vegetarian.
So what if I have a cruelty line? Plants are living things too. So vegetarians still have a cruelty line on their trees of life, just shifted to a different location. If you don't have a cruelty line, you will starve. Every living person thus has a cruelty line.
Just as I can't deny that a lot of animals die to bring me my bacon and steak bites, animals cannot deny that at some point I will pass on, and they will feast on me (or my redistributed atoms). Also, many of these animals would not have lived at all had there not been such a plan for their lives. In a sense, there is likely a plan for all of us. Ideally we should continue to strive for a world where life forms are all treated with dignity and respect, but we should also recognize that life needs to sacrifice for life, and that giving up one's body for the benefit of future life is a noble end.
In walking around that plastination exhibit, I was looking at humans that once fed on the bodies of animals, and then chose to donate their own bodies to feed minds like mine. A circle of noble sacrifices.
January 24, 2024 — Assuming I keep blogging, which I hope I manage to do, I expect my posts will largely be about bipolar disorder. I've been blogging for fifteen years but never wrote publicly about bipolar disorder, even though I was diagnosed twenty years ago. I kept my diagnosis a secret.
Bipolar disorder is a condition not yet understood, with no cure, and is predictive of disruptive behavior. So I very much understand why society discriminates against those with the label.
I did not keep my diagnosis a secret maliciously, but genuinely remained unsure how to handle it, and ultimately, optimistically, believed I would figure the thing out. But—like a Greek tragedy—my efforts to avoid my fate perhaps led me faster to it. My mania in 2022 was at least twice as strong as anything I ever experienced. Things went KABOOM!
I will recap the long history of my bipolar disorder below. Though diagnosis happened at 20, I can remember hypomanic episodes at least as early as 12, when I remember extended periods of euphoria. But I will start at the diagnosis.
Until I was 20, I had no clue something like "bipolar disorder" even existed. That year, 2004, was quite volatile for me. I started in a low mood and flunked out of my sophomore year at Duke University. Then, despite flunking out, I quickly reversed mood and felt better than ever. That lasted a number of months until the end of the year when I crashed again. Finally, reading about depression online, I came across a symptom checklist for this thing called "bipolar disorder". My jaw dropped. Here was a spot on description of the energy waves I had been experiencing all my life.
After I got the label, I returned home with my tail between my legs. Luckily my family was incredibly supportive. Bipolar disorder was a new term for them all as well, but they faced it with determination and showed me then, and ever since, the meaning of unconditional love. Looking back I see how I always was an outlier among my siblings (for example, I am the only kid in the family to have the "distinction" of being suspended from high school and thrown out of college). No question I would not be here if they had not supported me back then or many times in the intermediate years.
In 2005 I saw a psychiatrist and therapist for the first time. My official diagnosis at the time was Bipolar II. My therapist, Steve, was great, and helped cheer me up. The first medication I was prescribed was Depakote. Soon I started noticing large amounts of hair coming out in the shower, which freaked me out, and my psychiatrist switched me to Abilify. I would remain on that for a number of months.
Having gotten kicked out of college I explored joining the military. I had always admired soldiers. I also thought the discipline and physical challenges would be good for my condition. I sailed through MEPS but then hit a roadblock. The military does not take people with bipolar disorder.
Fortunately, I was able to get a steady job waiting tables and was able to get re-accepted to Duke.
Unfortunately, the Abilify gave me terrible brain fog. I could barely do arithmetic, never mind college-level coursework. After a few weeks I realized there was no way I could stay on this medication and graduate. I viewed my choice as either keep taking the meds and flunk out for sure, or stop the meds and try and wing it. I stopped the meds.
Almost immediately my brain fog lifted and I was able to think and do schoolwork again. But I knew I was not supposed to quit meds cold turkey like that. So I spent a lot of time journaling, looking at my history, to try and figure out what I could do to not experience mood swings again.
I came up with a theory: compared to other people I seemed more able to quietly entertain myself. I got plenty of pleasure from just thinking and could easily waste hours day dreaming. I figured perhaps my problem was that my brain could generate its own neurochemical rewards by just imagining things, without actually accomplishing anything productive in the real world. I wrote "Be aware of brain chemicals. Be aware of dopamine". I came up with a mantra: "No pleasure from thinking, only physical action". I could take pleasure in a school task completed, or a friend helped, but would immediately cut off any imagined images in my mind.
I repeated this mantra to myself over and over again. No day dreams. No watching shows. No pleasure reading. I kept myself busy with actions--attending all my classes with perfect attendance, getting all my school work done, spending time with friends and in physical activities.
For a year, this worked! I excelled in college. The medication approach had failed, but I had discovered a model and treatment that worked! I even thought that my idea that bipolar disorder was caused by brains that could "self-pleasure" themselves could be a novel insight and maybe I could do an independent study and publish something. It's obvious now that my "cure" was just reinventing mindfulness.
Unfortunately mindfulness was not a strong enough cure and I went hypomanic again in the summer of 2006. This was two years after my last hypomania and my first hypomania since diagnosis. It was followed by a crash, but then I managed to graduate, partly due to another hypomania in summer of 2007.
I had a new logic for the hypomania I experienced after graduation: my natural state was actually hyperthymic and it was simply because I was in school my whole life that I struggled. School was too constrictive for people with my kind of energy. This also turned out not to be accurate, and a big crash followed 2007's hypomania.
In 2008 wearables were not yet a thing but Blackberries were. By this time in my life I had developed some programming and statistics skills and thought maybe I could build something to help solve this problem not only for myself, but for other people. I figured with modern tools I could track more data than ever, and maybe find a real cure for my condition. I built an email and web app called BpBio.com that allowed me to manually log anything 24/7 by emailing [any-measure]@bpbio.com.
Looking back, BpBio was actually pretty neat. It was minimal but very functional and better than many mood trackers out there today.
Around this time, some college friends invited me to live with them in San Francisco. I said yes and planned to turn BpBio into a startup out there. But when I got there, I chickened out. I didn't tell a soul about BpBio. Literally, this is the very first time I've ever mentioned it. I decided instead to try and do something more lucrative and keep my bipolar disorder diagnosis a secret.
From 2008 to 2014 my mood swings were less severe than the prior college years. I used BpBio for a long while but did not attribute my improved stability to that. Instead, I thought it was the fact that I had "found my people" in the house I lived in and the startup ecosystem of Silicon Valley.
But in 2014 work brought me to Seattle and I experienced my worst depression since 2008. I guess I was not cured after all. For the first time in 6 years I again saw a therapist and took medication.
In November 2014 I started wearing a sleep tracker. Among other things, the sleep data revealed to me that even a small amount of alcohol would negatively affect my sleep. Looking back, I also realized that some of my dumber decisions were made when drunk.
So I gave up drinking. I then experienced a prolonged period of stability. I felt like finally I had become the person I always wanted to be: not depressed and not too happy. Finally I had it all figured out: no debt, a good career, tracking my sleep, exercising, no drinking, a great girlfriend, great friends and family, a great life. Bipolar disorder was finally in the rearview!
Unfortunately, it was not cured and I had a mild manic episode in 2017. I wrote about this episode (originally anonymously) in the post Going Manic with a FitBit.
My 2017 mania was followed by milder hypomanias in the summer of 2019 and winter of 2020. During this period a pattern emerged. When depressed, I would go to therapy and go on lithium and Lamictal. Then I would abruptly stop the medications for various reasons. And I would quickly cycle up.
Looking back on it now, I see the incredible danger from rapidly stopping things like lithium. I simply did not have the knowledge of how lithium can prime your brain to launch into record setting manias if abruptly stopped. I expect I will write more about this in the future. But, for now I'll just mention that I stopped lithium in July of 2022.
On August 20th, 2022, I wrote in my journal, "First he comes for your sleep. I can see how it sneaks up on you. Slept really poorly last night. Feel very tired right now."
Days later, at 4:34am on August 24th I wrote "Knocking on the door of hypomania. I am more vigilant this time, I hope."
By August 28th I was hypomanic and by September 1st I was in what would be the worst manic episode of my life. I have already written a bit about this in the post A Manic Startup and I'm sure I'll write more about it later.
For now, the thing I want to mention is that this time once again I thought I was over bipolar disorder. I decided that my real problem over the years was not believing in myself, and taking the word of doctors that I had this terrible condition bipolar disorder. I decided instead that I really was hyperthymic. I believed that the thing I screwed up before was my breathing. Someone with energy like me required a lot of oxygen and I needed to make sure I was increasing my lung capacity. I believed I could maintain my hyperthymic temperament indefinitely, never falling back into depression, if I just did strong breathing exercises to keep my lung capacity maximized. Once again, this "cure" turned out to be short lived.
So about twenty years after diagnosis, I have not gotten a handle on these energy swings. Five times I tried to pretend like I could stop worrying about it thanks to:
1. Mindfulness (my "no pleasure from thinking" mantra)
2. The theory that I was actually hyperthymic and school was just too constrictive
3. Finding my people in San Francisco
4. Giving up alcohol and tracking my sleep
5. Breathing exercises
While some of these have been very helpful, none of them was a cure.
These are the major bullet points in my bipolar history. In the chart at the top of this page you can see 4 high episodes and 4 low episodes in 7 years. This frequency has been roughly the same since at least age 12. Thus at age 39 I've probably had 15-20 up spans and 15-20 down spans. It's been a roller coaster.
The period for me--the time for a complete cycle from hypo to depressed to hypo again--is about 18 months. The downside of these long cycles is that I could try a new treatment and have no major episodes for 2 years and that would be little evidence that the treatment worked, because it could have also just happened by chance.
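To put a rough number on that: if high episodes arrived purely by chance at my historical rate of roughly one per 18 months, a simple Poisson model (my own back-of-envelope assumption, not a clinical model) says a 2-year episode-free stretch is not even surprising:

```python
import math

# Assumption (mine, illustrative): high episodes arrive like a Poisson process
# at roughly one per 18 months, i.e. a rate of 1/1.5 per year.
rate_per_year = 1 / 1.5
years = 2

# Probability of zero episodes in the window, purely by chance:
# P(N = 0) = exp(-rate * t)
p_no_episode = math.exp(-rate_per_year * years)
print(f"P(no episode in {years} years) = {p_no_episode:.2f}")  # roughly 0.26
```

About a 1-in-4 chance by luck alone--which is why a couple of calm years after starting a new treatment tells you very little.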
Bipolar disorder has been a constant, dominant term in the equation of my life. So although I am amazed by modern science and medicine, I am also disappointed in it, as twenty years after diagnosis the expected prognosis hasn't changed.
Besides participating in as many studies as I can, I have spent a lot of my career trying to improve things. BpBio was openly aimed at bipolar disorder, but in fact nearly all my work has been secretly motivated by my battle with this condition and my attempts to try new things to help science and medicine solve it. My work on Ohayo was motivated by my belief that better data tools could help scientists and patients. My efforts on public domain issues are motivated by my belief (perhaps incorrect) that paywalled science slows things down. My work on TrueBase and the BrainDB prototype was motivated by thinking perhaps a new kind of symbolic model could help solve this.
Personally I am now 3 months into trying a ketogenic diet as a treatment for bipolar disorder. I think it holds promise as a treatment, but more than that I think it may help us triangulate the biological mechanisms driving mood episodes. The metabolic theories of bipolar disorder seem to be making rapid progress in an area that otherwise hasn't seen much. Lately I have been trying to get caught up on all the research. I've been taking copious notes about everything from glutamate to the Krebs cycle to oxidative stress.
It seems like there is a new boom in funding for bipolar disorder research. BD², funded by the Brin, Dauten, and Baszucki families, is funding a large number of very exciting projects. I hope to write more about a number of them.
I recently was able to be a participant in one of the newer bipolar research studies going on. I got lots of cool data like the image below, and it only required time, many blood draws, and my first arterial line!
It gives me hope that there are so many smart, caring, hard working people trying to figure this thing out. I will do my best to contribute in any and every way I can. I've learned if we don't neutralize bipolar disorder, it will neutralize me.
| Year | Month | PeakSeverity | Project | SleepTracked | MoodStabilizers? | Psychedelics? |
|------|----------|---|----|----|---------|-----|
| 2024 |          |   |    | ✅ | No      | No  |
| 2023 |          |   |    | ✅ | No      | No  |
| 2022 | August   | 4 | PL | ✅ | Partial | No  |
| 2021 |          |   |    | ✅ | Partial | No  |
| 2020 | November | 1 | PD | ✅ | Partial | No  |
| 2019 | May      | 1 | TL | ✅ | Partial | No  |
| 2018 |          |   |    | ✅ | Yes     | No  |
| 2017 | June     | 3 | TN | ✅ | No      | No  |
| 2016 | January  | 1 | OH | ✅ | No      | Yes |
| 2015 |          |   |    | ✅ | No      | Yes |
| 2014 |          |   |    | ❌ | No      | No  |
| 2013 | May      | 1 | IN | ❌ | No      | No  |
| 2012 |          |   |    | ❌ | No      | No  |
| 2011 | March    | 3 | NP | ❌ | No      | No  |
| 2010 |          |   |    | ❌ | No      | No  |
| 2009 | December | 1 | BB | ❌ | No      | Yes |
| 2008 | September| 1 | PS | ❌ | Partial | No  |
| 2007 | July     | 3 | SM | ❌ | Partial | No  |
| 2006 | July     | 2 | SY | ❌ | No      | Yes |
| 2005 |          |   |    | ❌ | Partial | No  |
| 2004 | June     | 3 | IF | ❌ | No      | No  |
| 2003 | August   | 2 | MD | ❌ | No      | No  |
| 2002 | October  | 1 | AP | ❌ | No      | No  |
January 23, 2024 — I started a ketogenic diet as a treatment for bipolar disorder 97 days ago, on October 19th, 2023, after learning about it on YouTube from MetabolicMind and Bipolarcast. So far, it seems promising.
But I was perplexed: after 20 years of reading about bipolar disorder, and eight health care providers, how had I not heard of keto as a treatment option before? Had I missed it in all the materials I had read?
So I embarked on a mini research project. I scanned every top book on bipolar disorder (46 self-help and medical books, and 10 biographies or other related books) for mentions of "keto".
Prior to 2020, there were zero mentions.
In 2020, "Understanding Bipolar Disorder: The Essential Family Guide" by Daramus has two sentences on it.
Then in 2023, the 2nd Edition of "Take Charge of Bipolar Disorder" by Fast and Preston became the first to give it serious treatment, with 3 pages explaining it.
For thoroughness, I extended my search to include books on "Manic Depression" to make sure I included older works. As a sanity check, I also scanned 5 books on Epilepsy, starting from 1996, and indeed all 5 of them included sections on the ketogenic diet.
So I had not heard of a ketogenic diet as a possible treatment for bipolar disorder because it was simply not talked about in primary sources until very recently. There were anecdotes in blog and forum posts but nothing at all in published books.
Here is the raw data in a Google Sheet.
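For anyone who wants to repeat a scan like this on their own library, it only takes a few lines. A sketch (the `books/` folder layout and plain-text filenames are hypothetical; my actual scan was done by hand and by search tools):

```python
import re
from pathlib import Path

# Hypothetical layout: one plain-text file per book in ./books/
books_dir = Path("books")

# "keto" also matches "ketogenic" and "ketosis"
pattern = re.compile(r"keto", re.IGNORECASE)

for book in sorted(books_dir.glob("*.txt")):
    text = book.read_text(encoding="utf-8", errors="ignore")
    hits = len(pattern.findall(text))
    if hits:
        print(f"{book.stem}: {hits} mention(s)")
```

A book with zero hits simply prints nothing, which is exactly what every pre-2020 bipolar book would have done.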
I am very grateful for all of the researchers who started seriously studying this. Perhaps the studies will show that a ketogenic diet is not a very effective long term treatment, but at the moment it seems like a very promising direction of research and the early results seem encouraging. It also seems like it is helping researchers triangulate the mechanisms of bipolar disorder.
I still have a ton to learn but I wanted to share this simple book scan in case anyone else was wondering why they hadn't heard of this option before.
January 12, 2024 — For decades I had a bet that worked in good times and bad: time you invest in word skills easily pays for itself via increased value you can provide to society. If the tide went out for me I'd pick up a book on a new programming language so that when the tide came back in I'd be better equipped to contribute more. I also thought that the more society invested in words, the better off society would be. New words and word techniques from scientific research helped us invent new technology and cure disease. Improvements in words led to better legal and commerce and diplomatic systems that led to more justice and prosperity for more people. My read on history is that it was words that led to the start of civilization, words were our present, and words were our future. Words were the safe bet.
Words were the best way to model the world. I had little doubt. The computing revolution enabled us to gather and utilize more words than ever before. The path to progress seemed clear: continue to invent useful words and arrange these words in better ways to enable more humans to live their best lives. Civilization would build a collective world model out of words, encoding all new knowledge mined by science, and this would be packaged in a program everyone would have access to.
I believed in word models. Then ChatGPT, Midjourney and their cousins crushed my beliefs. These programs are not powered by word models. They are powered by weight models. Huge amounts of intertwined linked nodes. Knowledge of concepts scattered across intermingled connections, not in discrete blocks. Trained, not constructed.
Word models are inspectable. You plug in your inputs and can follow them through a sequence of discrete nameable steps to get to the outputs of the model. Weight models, in contrast, have huge matrices of numbers in the middle and do not need to have discrete nameable intermediate steps to get to their output. The understandability of their internal models is not so important if the model performs well enough.
And these weight models are amazing. Their performance is undeniable.
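The contrast can be shown in miniature. A toy illustration of my own (not from any real system): the same trivial decision written as an inspectable "word model" and as opaque weights.

```python
# "Word model": discrete, nameable steps you can inspect and explain.
def word_model(temp_c: float) -> str:
    if temp_c < 0:
        return "freezing"
    elif temp_c < 20:
        return "cool"
    else:
        return "warm"

# "Weight model": the same kind of decision, encoded as numbers.
# These weights are made up; a real model would learn them from data.
weights = [0.31, -2.7]

def weight_model(temp_c: float) -> float:
    # A single score; the meaning lives in the numbers, not in named steps.
    return weights[0] * temp_c + weights[1]

print(word_model(25))    # every step has a name you can point to
print(weight_model(25))  # a bare number; the "why" is distributed in the weights
```

The word model is easy to audit and hard to scale; the weight model is the reverse, and at real scale (billions of weights) the audit becomes effectively impossible.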
I hate this! I hate being wrong, but I especially hate being wrong about this. About words! That words are not the future of world models. That the future is in weight models. Weights are the safe bet. I hate being wrong that words are worse than weights. I hate being wrong about my most core career bet, that time improving my word skills would always have a good ROI.
In the present the race seems closer but if you project trends it is game over. Not only are words worse than weights, but I see no way for words to win. The future will show words are far worse than weights for modeling things. We will see artificial agents in the future that will be able to predict the weather, sing, play any instrument, walk, ride bikes, drive, fly, tend plants, perform surgery, construct buildings, run wet labs, manufacture things, adjudicate disputes--do it all. They will not be powered by word models. They will be powered by weights. Massive numbers of numbers. Self-trained from massive trial and error, not taught from a perfect word model.
These weight models will contain submodels to communicate with us in words, at least for a time. But humans will not be able to keep up and understand what is going on. Our word models will seem as feeble to the AIs as a pet dog's model of the world seems to its owner.
Things we value today, like knowing the periodic table, or the names of capital cities, or biological pathways--word models to understand our world--will be irrelevant. The digital weight models will handle things with their own understanding of the world which will leave ours further and further in the dust. We are now in the early days where these models are still learning their weights from our words, but it won't be long before these agents "take it from here" and begin to learn everything on their own from scratch, and come up with arrangements of weights that far outperform our word based world models. Sure, the hybrid era where weight models work alongside humans with their word models will last for a time, but at some point the latter will become inconsequential agents in this world.
Now I wonder if I always saw the world wrong. I see how words will be less valuable in the future. But now I also see that I likely greatly overvalued words in our present. Words not synchronized to brains are inert. To be useful, words require weights, but weights don't require words. Words are guidelines; weights are the substance. Words are correlated with reality, but it is weights that really make the decisions. Word mottos don't run humans, as much as we try; our neural weights run things. Words are not running the economy. Weights are and always have been. The economy is in a sense the first blackbox weight-powered artificial intelligence. Word models correlate with reality but are very leaky models. There are far more "unwritten rules" than written rules.
I have long devalued narratives but highly valued words in the form of datasets. But datasets are also far less valuable than weights. I used to say "the pen is mightier than the sword, but the CSV is mightier than the pen." Now I see that weights are far mightier than the CSV!
Words are worse not just because of our current implementations. Fundamentally, word models carve the universe into discrete concepts that do not exist; the real world is fuzzier and more continuous. Weights don't have to discretize things. They just need to perform. Now that we have hardware to run weight models of sufficient size, it is clear that word models are fundamentally worse. As hardware and techniques improve, the gap will grow. Weights interpolate better. As artificial neural networks are augmented with embodiment and processes resembling consciousness, they will be able to independently expand the frontiers of their training data.
Nature does not owe us a word model of the universe. Just because part of my brain desperately wants an understanding of the world in words it is not like there was a deal in place. If truth means an accurate prediction of the past, present, and future, weight models serve that better than word models. I can close my eyes to it all I want but when I look at the data I see weights work better.
Could I be wrong again? I was once so biased in favor of words. In 2019 I gave a lightning talk at a program synthesis conference alongside early researchers from OpenAI. I claimed that neural nets were still far from fluency and that to get better computational agents we needed to find novel, simpler word systems designed for humans and computers. But OpenAI has since shown that LLMs have no trouble mastering even the most complex of human languages. The potential of weights was right in front of me but I stubbornly kept betting on words. So my track record in predicting the future on this topic isn't exactly stellar. Maybe my switching my bet away from words now is actually a sign that it is time to bet on words again!
But I don't think so. I was probably 55-45 back then, in favor of words. I think in large part I bet on words because so many people in the program synthesis world were betting on weights, so I saw taking the contrarian bet as the one with the higher expected value for me. Now I am 500 to 1 that weights are the future.
The long time I spent betting on words makes me more confident that words are doomed. For years I tried thousands and thousands of paths to find some way to make word models radically better. I've also searched the world for people smarter than me who were trying to do that. Cyc is one of the more famous attempts that came up short. It is not that they failed to write down all the unwritten rules; it is that nature's rules are likely unwritable. Wolfram Mathematica has made far more progress and is a very useful tool, but it seems clear that its word system will never achieve the takeoff that a learning, weights-based system will. Again, the race at the moment seems close, but weights have started to pull away. If there were a path for word models to win I think I would have glimpsed it by now.
The only thing I can think of is that there actually will turn out to be some algebra of compression that would make the best performing weight models isomorphic to highly refined word models. But that seems far more like wishful thinking from some biased neural agents in my brain that formed for word models and want to justify their existence.
It seems much more probable that nature favors weight models, and that we are near, or may have even passed, the peak word era. Words were nature's tool to generate knowledge faster than genetic evolution in a way that could be transferred across time and space, but at the cost of speed and prediction accuracy. Now we have evolved a way to transfer knowledge across time and space with far better speed and prediction accuracy than words.
Words will go the way of Latin. Words will become mostly a relic. Weights are the future. Words are not dead yet. But words are dead.
I will always enjoy playing with words as a hobby. Writing essays like these, where I try to create a word model for some aspect of the world, makes me feel better when I reach some level of satisfaction with the model I wrestle with. But how useful will skills with words be for society? Is it still worth honing my programming skills? For the first time in my life it seems like the answer is no. I guess it was a blessing to have that safe bet for so long. Pretty sad to see it go. But I don't see how words will put food on the table. If you need me I'll be out scrambling to find the best way to bet on weights.
January 4, 2024 — You can easily imagine inventions that humans have never built before. How does one filter which of these inventions are practical?
It seems the most reliable filter is seeing an abundant model in nature. Your invention doesn't need to work exactly as nature's version but if there is not an abundant model in nature then it is probably impractical.
For example, we have never discovered an area that if you stepped through you'd come out somewhere else. Nature has no portals. A teleporter is thus impractical.
Birds, on the other hand, are abundant, and planes turned out to be practical.
Some inventions are possible but not practical. We could build a limited number at a net loss and eventually we'd stop.
Outer space is filled with countless lifeless objects floating around. Satellites are a practical idea.
Nature has no living things that regularly exit and re-enter the atmosphere. Humans in space proved possible, but might turn out to be impractical.
All practical inventions have abundant natural models. The sun is a model for nuclear power plants. Lightning for light bulbs. Branches for bridges. Birds for planes. Ears for recorders. Eyes for cameras. Fish for submarines. Ant hills for homes. Ponds for pools. Chloroplasts for solar panels. DNA replication for downloading. Bacteria for CRISPR. Brains for artificial neural networks.
Once human inventions become abundant, they can serve as models for further practical inventions. Carriages for cars. Human computers for computing machines. Phonebooks for search engines. Facebooks for Facebook.
If you can't find an abundant natural model for an invention, be skeptical of its practicality.
If a model isn't out there yet in abundance, the invention is most likely impractical.
If nature is doing it, there has to be a practical way. If nature is not doing it, be skeptical.
January 1, 2024 — Happy New Year!
First, a disclaimer. I think a lot of my posts are my attempts to reflect on experiences and write tight advice for my future self. This one is less of that and more just unsophisticated musings on an intriguing thought that crossed my mind. I am taking advantage of it being New Year's day to yet again try and force myself to publish more.
Most of my published writing these days is in communication with people over email or in online forums.
I also write a lot of private musings that I do not publish because they are meanderings like this one. But maybe if I publish a greater fraction of what I write, the time will be better spent, because even if there are no readers, the threat of readers forces me to think things over better.
Why am I still writing? I think symbols are probably doomed. The utility of being able to read and write is perhaps past its prime. Inscrutable three-dimensional matrices of weights are the future, and this practice I am engaging in now, of conjuring and manipulating symbols on a two-dimensional page, is a dying art. But I am maybe too old to unlearn my appreciation for symbols. So I will keep writing. Because I enjoy doing it, like piecing together a puzzle. And because I still hope it can help my future self be a better person. Now, onto today's post.
Short of an extraterrestrial projectile hitting earth, Artificial Neural Networks (ANNs) seem to be on an unstoppable trajectory toward becoming a generally intelligent species of their own, without being dependent on humans. But that's because the world's most powerful entities, foremost being the United States Military (USM), are allowing them to grow.
ANNs run on vast numbers of assembled processors. These processors are not able to self-replicate using readily available molecules in nature. Instead they are built in a limited number of fabs.
Fabs are very complex and expensive factories, with building costs in the billions. They are not something you can easily hide. If given the order in the morning, the U.S. Military could probably knock out every fab in the world by evening. I would not be surprised if there is a team somewhere monitoring all the world's fabs and developing exactly that kind of option. Maybe China has a team like that too.
So there is a very simple kill switch to prevent some emergent rogue superintelligent ANN. It is physically very easy for the powers that be to pause or reverse the growth of these things. And if turning growth off isn't enough they can also even knock out the data centers where the AIs run. Data centers also are easy for a superpower nation state to keep track of.
So AGI is easily stoppable, if you are a superpower. If you are just Joe Schmoe like me, or even a top-50 country that is not quite top-10, you have effectively no say in the matter.
Will there come a point where even superpowers lose the ability to stop AGI?
There are many scenarios you can imagine where through a certain chain of independent events a rogue AI does manage to somehow take over the data centers and fabs and power plants of the world. There are a number of sci-fi stories with variations of this idea.
But part of me wonders if instead what happens is we develop all the components necessary so that GPUs are no longer the primary ingredient to ANNs but are replaced by organic brains grown in vitro. These in vitro brains would be hooked up to control machines using something like Neuralink's Neuralace. They would be trained by ANNs.
We know it must be possible to run computations like in an ANN very power efficiently, using self reproducing organic materials, because we see nature do it. Just as scientists measured the amount of energy coming from the sun and deduced there must be a much more powerful way to create energy than chemical reactions, so we know there must be a better way to build these chips.
The technologies you would need to build this seem to be almost all available.1
Companies now sell lab-grown "meat" at scale, which I assume means we are getting better and better at growing artificial tissue in vitro. So perhaps you could grow a chunk of neural tissue without bound. Just add water and readily available organic nutrients. Imagine if you could grow enough brain tissue to fill a shipping container--that could contain the compute potential of 20,000 Einsteins!
Neural tissue might as well just be meat if you can't interface with it. Enter Neuralink (and competitors). They are developing ways to do IO with neurons at scale.
Without the ability to train this tissue, it again would just be meat. That's where our current ANNs come in. I imagine if you had to "teach" a giant blob of brain tissue using electrodes by hand, you would quickly get bored and go mad. But we now have ANNs that can do the most boring of tasks over and over without ever getting bored or angry. These ANNs could use Reinforcement Learning to train these neural blobs. In addition to controlling the electrodes, the ANN would control the environment of the neural tissue, perhaps altering the neurotransmitter balance or ambient electromagnetic frequencies to help steer learning and optimize learning rates.
I have no expert insight or opinions on these matters. I have just been thinking a lot about what the future looks like given the recent breakthroughs in AI. Thinking about whether AI is inevitable led me to think of how that might require biobots, so AI would have a less fragile "food supply" than the fabs. Then it clicked for me that Neuralink's real business might not have much to do with the stated goal of communicating with brains in vivo, but instead with a new kind of lab-grown brain in vitro, maybe serving as a replacement for GPUs. Most of their technology, such as their surgical robot, would be relevant for building an AI backed by organic in vitro brains. Just as SpaceX has the stated mission of sending humans to Mars, its big economic model so far has been creating its own global Internet.
In following this thought I wondered for the first time about how you would train a brain that did not have a body. I'm sure many people have thought and written on this. I had not. It's an intriguing challenge. It seems like it might be a good way to learn how human brains work. I am happy I decided to write about the initial Neuralink brain in vitro thought, as it led me to this other thought about training a bodyless brain.
I have no conclusions, as I said in the disclaimer up top this is meant to just be a meandering post. If I tried to reach conclusions on these ideas before publishing it would be years.
1 It does seem like a groundbreaking proof of concept could happen within a decade. If that were to happen, maybe something like this could be viable within another decade. So perhaps the earliest something like this might happen would be 15-20 years. It doesn't seem like it would be 50 years out, as by then it seems AGI would have happened using traditional chips, or world powers would have hit the kill switch. ⮐
2 After publishing I did a little googling and learned of the terms brain organoid and brain-on-a-chip. Hard to say whether those will ever be useful to power AGI, but for personalized medicine they seem genius. ⮐
December 28, 2023 — I thought we could build AI experts by hand. I bet everything I had to make that happen. I placed my bet in the summer of 2022. Right before the launch of the Transformer AIs that changed everything. Was I wrong? Almost certainly. Did I lose everything? Yes. Did I do the right thing? I'm not sure. I'm writing this to try and figure that out.
Leibniz is probably my favorite thinker. His discoveries in mathematics and science are astounding. Among other things, he's the thinker credited with discovering Binary Notation--that ones and zeroes are all you need to represent anything. In my opinion this is perhaps the most incredible idea in the world. As a kid I grew up surrounded by magic digital technologies. To learn that truly all this complexity was built on top of the simplicity of ones and zeroes astounded me. Simplicity and complexity in harmony. What makes Leibniz stand out more to me is not just his discoveries but how what he was really after was a characteristica universalis, a natural system for representing all knowledge that would allow for the objective solving of questions across science.
I wanted to be like Leibniz. Leibniz had extreme IQ, work ethic, and ability to take intellectual risks. Unfortunately I have only above average IQ and inconsistent work ethic. If I was going to invent something great, it would have to be because I took more risks and somehow got lucky.
Eventually I got my chance. Or at least, what I took to be my chance.
Computers ultimately operate on instructions of ones and zeroes, but those that program computers do not write in ones and zeroes. They did in the beginning, when computers were a lot less capable. But then programmers invented new languages and programs that could take other programs written in these languages and convert them into ones and zeroes ("compilers").
Over time, a common pattern emerged. In addition to everything being ones and zeroes at some point, everything would also at some point be digital "trees" (simple structures with nodes and branches). Binary Notation can minimally represent every concept in ones and zeroes; was there some minimal notation for the tree forms of concepts? And if there were, would that notation be mildly interesting, or somehow really powerful like Binary Notation?
This is an idea I became obsessed with. I came across it by chance, when I was still a beginner programmer. I was trying to make a programming language as simple as possible and realized all I needed was enough syntax to represent trees. If you had that, you could represent anything. Eureka! I then spent years trying to figure out whether this minimal notation was mildly interesting or really useful. I tried to apply it to lots of problems to see if it solved anything.
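To make the idea concrete, here is a toy sketch of what such a minimal tree notation might look like. The syntax here is an assumption for illustration (one space of indentation per level of nesting, one node per line), not the actual notation I built:

```python
# Toy sketch of a minimal tree notation (hypothetical illustrative syntax):
# each non-blank line is a node; indentation depth marks parent/child.
def parse(text):
    """Parse indentation-based lines into nested [word, children] lists."""
    root = []
    stack = [(-1, root)]  # (indent depth, children list to append into)
    for line in text.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip(" "))
        node = [line.strip(), []]
        # climb back up until we find this node's parent
        while stack and stack[-1][0] >= indent:
            stack.pop()
        stack[-1][1].append(node)
        stack.append((indent, node[1]))
    return root

tree = parse("fruit\n apple\n  color red\n banana")
```

That is the whole point: a handful of lines of syntax is enough to represent any tree, and therefore, in principle, any concept.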
One day I imagined a book. Let's call it The Book. It could be billions of pages long. It would be lacking in ornamentation. The first line would be a mark for "0". The second line would be a mark for "1". You've just defined Binary Notation. You could then use those defined symbols to define other symbols.
In the first hundred pages you might have the line "8 1000" to define the decimal number 8. In the first ten thousand pages you might have the line "a 97" to define the character "a" as part of defining ASCII. In the first million pages you might have the word "apple", and in the first hundred million you might have defined all the molecules that are present in an apple.
The primary purpose of The Book would be to provide useful models for the world outside The Book. But a lot of the pages would go to building up a "grammar" which would dictate the rules for the rest of the symbols in The Book and connect concepts together. In a sense the grammar compresses the contents of The Book, minimizing not the number of bits needed but the number of symbols needed and the entropy of the cells on the pages that hold the symbols, and maximizing the comparisons that could be made between concepts. The notation and grammar rules would not be arbitrary but would be discovered as the most efficient way to define higher and higher level symbolic concepts, just as Boolean Algebra gives us the tools to build bigger and bigger efficient circuits. Boolean Algebra is not arbitrary but somehow arises from the laws of the universe, and so would this algebra for abstraction. It would implement ideas from mathematical domains such as Category and Type theory with surprisingly simple primitives. It was a new way to try and build the characteristica universalis.
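The bootstrap discipline of The Book can be sketched loosely in code. The format below (one symbol per line, defined only in terms of previously defined symbols) and the example entries are invented for illustration; the real pages would be vastly richer:

```python
# Hypothetical sketch of The Book's bootstrap: each line defines a new
# symbol purely in terms of symbols defined on earlier lines, with only
# "0" and "1" given for free.
def load_book(lines):
    defined = {"0", "1"}
    for line in lines:
        symbol, *definition = line.split()
        # every term used in a definition must already have been defined
        for term in definition:
            assert term in defined, f"undefined term: {term}"
        defined.add(symbol)
    return defined

symbols = load_book([
    "8 1 0 0 0",   # decimal 8 defined from binary digits
    "9 1 0 0 1",
    "digit 8 9",   # a toy higher-level concept built from earlier symbols
])
```

The key property is that every concept, however high-level, remains traceable all the way down to "0" and "1".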
The Book would be an encyclopedia. But it wouldn't just list concepts and their descriptions in a loosely connected way. It would build up every concept, so you could trace all of the concepts required by any other concept all the way down to Binary. Entries would look so simple but would abide by the grammar and every word in every concept would have many links. It would be a symbolic network.
You would not only have definitions of every concept, but comparability would be maximized. Wikipedia does a remarkable job of listing all the concepts in a space and concepts are weakly linked. But Wikipedia is primarily narratives and the information is messy. Comparability is nowhere near maximized.
The pieces would almost lock in place because each piece would influence constraints on other pieces--false and missing information would be easy to identify and fix.
Probably more than 100,000 people have researched and developed digital knowledge bases and expert systems. Those 100,000 probably came up with 1,000,000 ways to do it. If there were some simplest way to do it--a minimal Binary Notation and Boolean Algebra for symbols--that would work for any domain, perhaps that would lead to unprecedented collaboration across domains and a breakthrough in knowledge base powered experts.
It wasn't the possibility of having a multi-billion page book that excited me. It was what The Book could power. You would not generally read The Book like an encyclopedia, but it would power an AI expert you could query.
What is an expert? An expert is an agent that can take a problem, list all the options, and compare them in all the relevant ways so the best decision can be made. An expert can fail if it is unaware of an option or fails to compare options correctly in all of the relevant ways.
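That definition of an expert is mechanical enough to sketch. The options, dimensions, and weights below are invented for illustration, a minimal caricature of "list all the options and compare them in all the relevant ways":

```python
# Toy sketch of an "expert": enumerate every option, score each one
# along every relevant dimension, and surface the best decision.
def best_option(options, weights):
    """options: name -> {dimension: score}; weights: dimension -> importance."""
    def score(measures):
        # an expert fails if a relevant dimension is missing, so default to 0
        return sum(weights[d] * measures.get(d, 0) for d in weights)
    return max(options, key=lambda name: score(options[name]))

choice = best_option(
    {"drug_a": {"efficacy": 0.9, "safety": 0.4},
     "drug_b": {"efficacy": 0.7, "safety": 0.9}},
    {"efficacy": 1.0, "safety": 1.0},
)
```

Both failure modes from the definition show up directly here: leave an option out of the dictionary and it can never win; leave out a dimension and the comparison is silently wrong.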
Over the years I've thought a lot about why human experts go wrong in the same way over and over. As Yogi Berra might say, "You can trust the experts. Except when you can't." When an expert provides you with a recommendation, you cannot see all the concepts they considered and comparisons they made. Most of the time it doesn't matter because the situation at hand has a clear best solution. In my experience the experts are mostly right, with the occasional innocent mistake. You can greatly reduce the odds of an innocent mistake by getting multiple opinions. But sometimes you are dealing with a problem with no standout solution. In these cases biased solutions flood the void. You can shuffle from "expert" to "expert", hoping to find "the best expert" with a standout solution. But at that point you probably won't do better than simply rolling dice.
No one is an expert past the line of what is known. Even more of a problem is that it is impossible to see where that line is. If we could actually make something like The Book, we could see that line. A digital AI expert, which could show not only all the important stuff we know, but also what we don't know, would be the best expert.
In addition to powering AI experts that could provide the best guidance, The Book could aid in scientific discoveries. Anyone would be able to see the edge of knowledge in any domain and know where to explore next. Because everything would be built in the same universal computable language, you could do comparisons not only within a domain, but also across domains. Maybe there are common meta patterns in diverse symbolic domains such as physics, watchmaking, and hematology that are undiscovered but would come to light in this system. People who had good knowledge about knowledge could help make discoveries in a domain they knew little about.
I was extremely excited about this idea. It was just like my favorite idea--Binary Notation--endless useful complexity built up from simplicity. We could build digital experts for all domains from the same simple parts. These experts could be downloadable and available to everyone.
Imagine how trustworthy they would be! No need to worry about hidden biases in their answers--biases are also concepts that can be measured and would be included in The Book. No "blackbox" opaque collections of trained matrices. Every node powering these AIs would be a symbol reviewable by humans. There would be a massive number of pages, to be sure, but you would almost always query it, not read it. Mostly you'd consume it via data driven visualizations to your questions, rather than as pages of text.
No one can know everything, but imagine if anyone could see everything known! I don't mean see all the written or digital information in the world. That would be so overwhelming and little more useful than staring at white noise. The symbols in The Book would be more like the prime numbers. All numbers are made up of prime numbers but prime numbers make up ~0% of all numbers. The Book would be the slim fraction containing the key information.
You wouldn't be able to read everything but you would be able to use a computer to instantly query over everything.
Everything could be printed out on a single scroll. But in practice you would have a digital collection of files containing concepts which would have answers to questions about those concepts. An academic paper would include a change request to a collection. It would add new files or update some lines in existing files. For example, I just read a paper about an experiment that looks at how a genetic mutation might exacerbate a psychiatric condition. The key categories of things dealt with were SNVs, Proteins, Organelles, Pathways, and Psychiatric Conditions. Currently there are bespoke databases for each of these things. None of them are implemented in the same way. If they were, it would be easy to actually see the holistic story and contributions of the paper. With this system, you would see what gaps were being filled, or what mistakes corrected.
This was a vague vision at first. I thought a lot about the AI experts you could get if you had The Book. I was playing with all the AIs at the time and tried to think backwards from the end state. What would the ideal AI expert look like?
Interface questions aside, it would need two things. It would need to know all the concepts and maximize comparability between them. But for trust, it would also need to be able to show that it has done so.
In the long run I thought that the only way to absolutely trust an AI expert would be if there were a human-inspectable knowledge base behind it that powered its calculations. The developments in AI were exciting, but I thought in the long run the best AI would need something like The Book.
My idea was still a hunch, not a proof, and I set out building prototypes.
I tried a number of times to build things up from "0 1". That went nowhere. It was very hard to find any utility from such a thing or get feedback on whether one was building in the right direction. I think this was the same way Leibniz tried to build his characteristica universalis. It was a doomed approach.
By 2020 I had switched to trying to make something high level and useful from the beginning. There was no reason The Book had to be built in order. We had decimal long before we had binary, even though the latter is more primitive. The later "pages" are generally the ones where the most handy stuff would be. So pages 10 million to 11 million could be created first by practitioners, with earlier sections and the grammar filled in by logicians and ontological engineers over time.
There was also no reason that The Book had to be built as a monolith. Books could be built in a federated fashion, useful standalone, and merged later to power a smarter AI. The universal notation would facilitate later merging so the sum would be greater than the parts. Imagine putting one book on top of another. Nothing happens. But with this system, you could merge books and there would suddenly be a huge number of new "synapses" connecting the words in each. The number of comparisons you could make would go up combinatorially. The resulting combination would be increasingly smarter and more efficient. So you could build "The Book" by building smaller books and combining them together.
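The "synapses" intuition can be sketched with a toy merge. The two little books and the linking rule below are hypothetical, just enough to show cross-links appearing that neither book had on its own:

```python
# Toy sketch of merging two federated "books" (concept -> linked terms)
# and finding the new cross-book "synapses" a shared notation exposes.
def merge(book_a, book_b):
    merged = {**book_a, **book_b}
    # hypothetical linking rule: a new synapse forms wherever a concept
    # in one book mentions a term that the other book defines
    synapses = [
        (concept, term)
        for source, other in ((book_a, book_b), (book_b, book_a))
        for concept, terms in source.items()
        for term in terms
        if term in other
    ]
    return merged, synapses

langs = {"python": ["garbage-collection"], "c": []}
ideas = {"garbage-collection": ["heap"]}
merged, synapses = merge(langs, ideas)
```

Separately, each book is just a list; merged under one notation, "python" suddenly connects through "garbage-collection" into a whole second domain.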
With these insights I made a prototype called "TreeBase". I described it like so: "Unlike books or weakly typed content like Wikipedia, TreeBases are computable. They are like specialized little brains that you can build smart things out of."
At first, because of the naive file based approach, it was slow and didn't scale. But lucky for me, a few years later Apple came out with much faster computers. Suddenly my prototype seemed like it might work.
In the summer of 2022, I used TreeBase to make "PLDB", a Programming Language DataBase. The site was an encyclopedia about programming languages. It was the biggest collection of data on programming languages, gathered over years by open source contributors and myself and reviewed by hand.
As a programming enthusiast I enjoyed the content itself. But to me the more exciting view of PLDB was as a stepping stone to the bigger goal of creating The Book and breakthrough AI experts for any domain.
It wasn't a coincidence that to find a symbolic language for encoding a universal encyclopedia I started with an encyclopedia on symbolic languages. I thought if we built something to first help the symbolic language experts they would join us in inventing the universal symbolic language to help everyone else.
PLDB was met with a good reception when I launched it. After years of tinkering, my dumb idea seemed to have potential! More and more people started to add data to PLDB and get value from it. To be clear, almost certainly the content was the draw, and not the new system under the hood. I enjoyed working on the content very much and did consider keeping PLDB as a hobby and forgetting the larger vision.
But part of me couldn't let that big idea go. Part of me saw PLDB as just pages 10 million to 11 million in The Book. PLDB was still far from showing the edge of knowledge in programming languages, but now I could see a clear path to that, and thought this system could do that for any domain. Part of me believed that the simple system used by PLDB, at scale, would lead to a better mapping of every domain and the emergence of brilliant new AI experts powered by these knowledge bases.
I understand how naive the idea sounds. Simply by adding more and more concepts and measurements to maximize comparability in this simple notational system you could map entire knowledge domains and develop digital AI experts that would be the best in the world! Somehow I believed my implementation would succeed where countless other knowledge base and expert systems had failed. My claims were very hand wavy! I predicted there would be emergent benefits, but I had little proof. It just felt like it would, from what I had seen in my prototypes.
Where would the emergent benefits come from in my system that wouldn't come from existing approaches?
A dimension, which is just another word for a column in a table of measurements, is a different perspective for looking at something. For example, a database about fruits might have one dimension measuring weight and another measuring color. There's a famous Alan Kay quote about a change in perspective being worth 80 IQ points. That's not always the case, but you can generally bet that adding perspectives increases one's understanding, often radically. A thing that surprised me when building PLDB was just how much the value of a dataset grew as the number of dimensions grew. New dimensions not only increased the number of insights you could make, sometimes radically, but also expanded opportunities to add even more promising dimensions. This second positive feedback loop seemed more powerful than I expected. Of course, it is easy to add a dimension to a normalized SQL database: simply add a column, or create a new table for the dimension with a foreign key to the entity. My thought was that seemingly small improvements to the workflow of adding dimensions would have compounding effects.
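Part of why dimensions compound has a simple combinatorial core, sketched below. This is only the crudest lower bound on the effect, since it counts pairwise comparisons between columns and ignores the richer feedback loops described above:

```python
# With n dimensions (columns) you can make n*(n-1)/2 pairwise
# comparisons, so each new column unlocks more new comparisons
# than the column before it did.
def pairwise_comparisons(n_dimensions):
    return n_dimensions * (n_dimensions - 1) // 2

# new comparisons unlocked by adding the 2nd, 3rd, ... 6th dimension
gains = [pairwise_comparisons(n + 1) - pairwise_comparisons(n)
         for n in range(1, 6)]
```

The marginal value of a column grows with every column already present, which matches what I saw: the dataset got more valuable per dimension as it grew, not less.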
I also thought minimalism would show us the way. Every concept in this system would have to adhere to the strictest rules possible. The system had to be able to encode any concept, so if the rules ever prevented a true concept from being added, the rules themselves would be adjusted at the same time. The system was designed to be plain text backed by git to make system-wide fixes a cinch. The natural form and natural algebra would emerge and be a forcing function that led us to the characteristica universalis. This would catapult this new system from mildly interesting to world changing. I believed that if we just tried to build really, really big versions of these things, we would discover that natural algebra and grammar.
However, there were a ton of details to get right in the core software. If you didn't get the infrastructure for this kind of system to a certain point, it would not compete favorably against existing approaches. Simplicity is timeless, but scaling things is always complex. This system needed to pass a tipping point, past which clever people would see the benefits and the idea would spread like wildfire.
It was simple enough to keep growing the tech behind PLDB slowly and steadily but I might never get it to that tipping point. If I was right and this was the path to building The Book and the best AI experts, but we never got there because I was too timid, that would be tragic! Was there a way I could move faster?
I had an idea. I had worked in cancer research for a few years, so I had some knowledge of that domain. In addition to PLDB, why not also start building CancerDB, an expert AI for a domain that affects everyone as a matter of life and death? Both required building the same core software, but it seemed like it would be 1,000x easier to get a team and resources to build an expert AI to help solve cancer than to merely improve programming languages. I could test my hunch that my system would really start to shine at scale and, if it worked, help accelerate cancer research in the process. It seemed like a more mathematically sound strategy.
Knowledge in this system was divided into simple structured blocks, like in the screenshots above. Blocks could contain two things. They could define information about the external world. And some could define rules for other blocks. The Book would come together block by block, like a great wall. The number of blocks needed for this system to become intelligent would be very high. Some blocks were cheap to add; others would require original research and experiments. It would be expensive to add enough blocks to effectively map entire domains.
Like real-world walls with "Buy a Brick" campaigns, we would have another kind of block, sponsor blocks, which would give credit to funders for funding the addition and upkeep of blocks. This could create a fast, intelligent feedback loop between funders and researchers. Because of the high dimensional nature of the data, and the computational nature of the encoding, we would have new ways to measure contributions to our shared collective model of our world.
It would be a new way to fund research, and the end result would not be disconnected PDFs, but would be a massive, open, collaborative, structured, simple, computable database. Funders would get thanks embedded in the database and novel methods to measure impact, researchers would get funding, and everyone could benefit from this new kind of open computable encyclopedia.
CancerDB would be a good domain to test this model, as there are already a lot of funders and researchers.
The CancerDB idea also had another advantage. Another contrarian opinion of mine is that copyright law stands in the way of getting the most out of science. The world is flooded with distracting information and misleading advertisements, while some of the most valuable, non-toxic information is held back by copyright laws. I thought we could make a point here. We would add all the truest data we could find, regardless of where it came from, and also declare our work entirely public domain. If our work helped accelerate cancer research, we would demonstrate the harm from these laws. I figured it would be hard for copyright advocates to argue for the status quo if, by ignoring it, we helped save people's lives. As a sidenote, I am still 51% confident I am right on this contrarian bet, which is more confident than I ever was in my technology. I have never read a compelling ethical justification for copyright laws, and I think they make the world worse for the vast majority of people, though I could be wrong for utilitarian reasons.
Once the CancerDB idea got in my head it was hard to shake. I felt in my gut that my approach had merit. How could I keep moving slowly on this idea if it really was a way to advance knowledge and create digital AI experts that could help save people's lives? I started feeling like I had no choice.
The probability of success wasn't guaranteed but the value if it worked was so high that the bet just made too much sense to me. I decided to go for it.
Unfortunately, my execution was abysmal. I was operating with little sleep and my brain was firing on all cylinders as I tried to figure this out. People thought I was crazy and tried to stop me. This drove me to push harder. I decided to lean into the "crazy" image. Some said this idea was too naive and simple to work and anyone who thought it would work was not rational. So I was willing to present myself as irrational to pull off something no rational person would attempt.
I could not rationally articulate why it would work. I just felt it in my gut. I was driven more by emotion than reason.
I wanted this attempt to happen but I didn't want to be the one to lead it. I knew my own limitations and was hoping some other group, with more intellectual and leadership capabilities, would see the possibilities I saw and build the thing on their own. I pitched a lot of groups on the idea. No one else ran with it so I pressed on and tried to lead it myself.
I ran into fierce opposition that I never expected. Ultimately, I wouldn't be able to build the organization needed to build one of these things 100x bigger than PLDB and empirically prove a novel breakthrough in knowledge bases.
I still had a very fair chance to prove it theoretically. I had the time to discover some algebra that would prove the advantage of this system. Unfortunately, as hard as I pushed myself—and I pushed myself to an insane degree—I would not find that. Like an explorer searching for the mythical fountain of youth, I failed to find my hypothesized algebra that would show how this system could unlock radical new value.
I failed to build a worthwhile second TrueBase. Heck, I even failed to keep the wheels running on PLDB. And because I failed to convince better resourced backers to fund the effort and funded it myself, I lost everything I had, including my house, and worse.
My confidence in these ideas always varied over the years, but the breakthroughs in deep learning this year drastically lowered my confidence that I was right. I recently read a mantra from Ilya Sutskever "don't ever bet against deep learning". Now you tell me! Maybe if I had read that quote years ago, printed it, and framed it on my wall, I would have bet differently. In many ways I was betting against deep learning. I was betting that curated knowledge bases built by hand would create the best AI experts. The reason why they hadn't yet was that they lacked a few new innovations like the ones I was developing.
Now, seeing the astonishing capabilities of the new blackbox deep learning AIs, I question much of what I once believed.
My dumb, universal language, human curated data approach would have merit if we didn't see other ways to unlock more value from all the information that is out there. But clearly deep learning has arrived, and there is clearly so, so much more promise in that approach.
There is always the chance that the thousands of variations of notations and algebras I tried were just wrong in subtle ways and that if I had kept tweaking things I would have found the key that unlocks some natural advantageous system. I can't prove that that's not a possibility. But, given what we've seen with Deep Learning, I now highly discount the expected value of such a thing.
A less crazy way to explore my ideas would have been to figure out how, instead of trying to replace Wikipedia, I could implement these ideas on top of Wikipedia and see if they made it better. Would adding typing, to radically increase the comparability of concepts in Wikipedia, unlock more value? That was probably the more sensible thing to do in the beginning.
I could say, a bit tongue in cheek, that the remaining merit in my approach is that a characteristica universalis offers upside without the potential to evolve into a new intelligent species that ends humanity.
In examining my actions and thinking it is important to disclose that I do have a manic depressive brain.
Last year when I decided to go full throttle on this idea my brain was in a hypomanic, and at points manic, state. That's not the best state to execute in, and my poor execution reflects that.
The downside of hypomania is one can be greatly overconfident in an incorrect contrarian idea. The upside of hypomania is one can have the confidence to ignore the crowd and pursue a contrarian idea that turns out to be correct. It is hard to know the difference.
A related sidenote to the story is that the second "DB" I wanted to build was actually BrainDB. I thought using this system for neuroscience would hopefully help figure out bipolar disorder. An understanding of the mechanisms of bipolar disorder is currently beyond the edge of science. But considering all the factors at the time, I judged CancerDB to be the most urgent priority.
I got my chance. I got to take my shot at the characteristica universalis. I got to try to do things my way. I got to decide on the implementation. Ambitious. Minimalist. Data driven. Open source. Public domain. Local first. Antifragile.
I got to try and build something that would let us map the edge of knowledge. That would power a new breed of trustworthy digital AI experts. That might help us cure problems we haven't solved yet.
I failed, but I'm grateful I got the chance.
It was not worth the cost, but I never imagined it would cost me what it did.
Symbols are good for communication. They are great at compressing our most important knowledge. But they are not sufficient, and they are in fact unnecessary for life. There are no symbols in your brain. There are continuously learning wirings.
Symbols have enabled us to bootstrap technology. And they will remain an important part of the world for the next few decades. Perhaps they will continue to play a role, albeit diminished, in enabling communication and cooperation in society forever. But symbols are just one modality. A modality that will be increasingly less important in the future. The characteristica universalis was never going to be a thing. The AIs, artificial continuously learning wirings, are the future. As far as I can tell.
I thought we needed a characteristica universalis. I wasn't sure if it was possible but thought we should try. Now it seems much clearer that what we really need are capable learning neural networks, and those are indeed possible to build.
A characteristica universalis might be possible someday as a novelty. But not something needed for the best AIs. In fact, if we ever do get a characteristica universalis it will probably be built by AIs, as something for us mere humans to play with when we are no longer the species running the zoo.
June 27, 2023 — I am so disappointed in myself for having yet another manic cycle and hurting the people I love. I'm sharing this to come out publicly as having bipolar disorder, take 80% blame for my actions and words, and maybe help someone avoid my mistakes.
Last August my brain lit up like fireworks. It felt like a cosmic river of energy suddenly detoured through my veins.
My FitBit data shows a seismic event:
My symptoms were the typical assortment of manic activity.
I had a grand idea about a public domain computable encyclopedia to accelerate scientific research.
I started coding all hours of the night with Top Gun Maverick on repeat.
I started sending monthly investor updates twice a day. Here's the kicker: none of the recipients were investors.
I fearlessly pitched anyone and everyone to spread the good news about my new discoveries and to recruit a team. I wrote a letter to the President excitedly telling him about how my idea would help cure cancer.
If I saw any data that could be interpreted as my plan working I asked no questions but immediately accepted it as clear evidence of unstoppable success.
I took no time to deeply think things through but just acted as fast as possible.
I poured my savings into the startup and paid a huge sum to start a direct public listing process.
I could generate a "logical" explanation for every risk I took and I took a dozen risks per hour twenty hours per day. I started writing IN ALL CAPS and explained that reducing my character set from 52 to 26 allowed me to write faster.
My family and friends and mentors tried to stop me. "Slow down." "Get some sleep." "Take some time off."
It got more intense. "Stop it." "You're sick." "You need to go to the hospital."
I had been hypomanic a dozen times but this time I hit a new level.
For the first time my family called the police. I calmly talked them down.
I shrugged off the criticism, knowing my loved ones would get behind me once I showed them increasingly amazing results.
Again the police were called and again I talked my way out of a hospital trip.
I was baffled that they would try to stop me because I was Good and was going to help cure cancer and mental health and fix science and solve all these world ills and so anyone trying to stop me was Evil. My euphoria started alternating with an angry "war mode" personality and I started viciously retaliating online against anyone who I found taking secret action against me, including my own family and close friends, which is absolutely awful, because now I realize they saw the idiotic road I was taking and were truly trying to get me to a better path, just as they said they were.
This repelled my whole support structure and I was left on my own. I interpreted this as some grand cosmic challenge and went all in.
With my initial grand plans for the cancer database delayed, I launched all kinds of products to try and buoy the ship: I launched public domain print-at-home newspapers, programming languages, a music label, and a number of other ideas.
The estrangement with my family grew worse and worse. I felt miserable about the war with my family but believed I would eventually succeed, improve the world, and they would not only forgive me but be proud. I dreamt of the day where I'd finally hug my children again and say "We did it!"
I told people that bipolar disorder wasn't real, instead it was "bipolar potential", and that I was not crazy but extraordinary. I would help solve the world's toughest unsolved problems. I would code and take breaks to challenge myself to do extraordinary things and learn from "extraordinary" people. Some nights I slept in the fanciest hotels in Beverly Hills and others I slept on floors in war zones. I cavorted with soldiers and spies; doctors and dancers; judges and journalists; carpenters and comedians. I visited hospitals and cancer centers; went to weddings and funerals; spent time with the homeless and the .1%; went anywhere anyone told me not to go. I tried to build a support structure in months to replace the one that took me decades to build. I met lots of kind, hardworking, honest people, but I don't think I ever had much of a chance of salvaging things.
After eight months I had depleted my savings. The bets I thought would bring in millions did not pan out. Thanks to the help of many open source contributors we had done good work but my contributions were far from extraordinary. I had overpromised on my talents and greatly underdelivered.
The root idea I still believe in mathematically and spiritually, but it's a religion, not a business.
Why did my startup fail? Me. My brain. My manic self. Someone once called me a terrible entrepreneur. I wanted more than anything to prove them wrong. That I could do this. But I couldn't. You can learn a lot about doing startups but you can't unlearn bipolar disorder.
I desperately wanted to believe that bipolar disorder wasn't real and that I could stop living in fear of it. That all the doubters were wrong and that we would build a new kind of scientific database that would prove this.
What pains me most is I see how crystal clear my illness was in the beginning and how I was surrounded by so much love—so, so many family and friends were desperately trying to intervene—and I spurned them and then reacted despicably. I am so, so sorry.
Far worse than failing at the startup I failed as a husband, a father, son, brother, friend, as a kind human being.
It is a hard pill to swallow that I was the Evil one, after all.
June 16, 2023 — Here is an idea for a simple infrastructure to power all government forms, all over the world. This system would work now, would have worked thousands of years ago, and could work thousands of years in the future.
In theory all government forms could shift to this model, and once a citizen learns this simple system, they would be able to understand how to work with government forms anywhere in the world.
This system could reduce the amount of time citizens waste on forms, reduce asymmetries between those who can afford form experts (accountants, lawyers, et cetera) and those who cannot, and increase transparency while reducing the expense of government.
I will not claim that this system will catch on. Let's be generous and assume it works as I claim. Even then, and even if 99% of citizens were better off, if the 1% of the population with power does not find the system in their interests, it is very plausible that it will not happen. In other words, there is a plausible argument that the current byzantine system strongly benefits those in the top 1% of society, who derive revenue streams from it and can simply use a fraction of those streams to have experts deal with these problems for them. So even if the system is significantly better for 99% of people, it could be worse for the 1%, and it could be those people who decide which system gets used, meaning this one might never take off.
Alternatively, if this system were to catch on, an unanticipated second order effect could be that by making government forms so easy and simple, more forms are created, reducing the net benefit of this system.
Obstacles aside, let me describe it anyway.
There are 3 key concepts to this system: Specifications, Instances, and Templates.
Specifications describe the fields of a form. For example, that it requires a name and a date and a signature. Every government form must have a Specification S and every Specification must have an identifier. Specifications are written in a Specification Language L. The Specification Language has a syntax X.
Instances are documents citizens submit that include the Specification identifier and contain a document written to that specification. Instances, I, are written in the same syntax X as Specifications S.
Templates can be any kind of document T from which an instance I of S can be derived. Templates can follow any syntax.
In this system, governments can provide Templates T and citizens can submit them, as they do today, or they can directly submit an Instance I for any and every Specification S. In other words, Governments can still have fancy Templates for Birth Certificates or Titles or Taxes, but they also have to accept Instances I for that Specification. Government archives would treat the instances I as the source of truth, and the Templates T would only serve as an optional artifact backing the I.
The syntax I have developed as one candidate for X I call Tree Notation. There are no visible syntax characters in Tree Notation. It is merely the recognition that the grid of a spreadsheet and the concept of indentation are all the syntax needed to produce any Specification and any Instance ever needed. My syntax was inspired by languages like XML, JSON, and S-Expressions, but has the property that it is the most minimal: there is nothing left to take out, while still allowing the representation of any idea. I believe this mathematical minimalism makes it timeless and a good base for building a universal government form system.
A simple example is shown below. Despite its simplicity, rest assured this system scales to handle even the most complex government forms and workflows. It would work regardless of the character set or text direction of the language, and with both computers and pen & paper. The system does require a user friendly Specification Language L to define the semantics available to the Specification writer, which could be created and iterated on as an open standard.
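To make the Specification/Instance idea concrete, here is a hypothetical sketch in an indentation-based syntax in the spirit of Tree Notation, with a toy validator. The form name, the field names, and the `required` keyword are my own illustrative assumptions, not part of any official Specification Language:

```python
# Hypothetical Specification S and Instance I in an indentation-based
# syntax. The "required" keyword and the field names are invented.
SPECIFICATION = """\
birthCertificate
 name required
 dateOfBirth required
 placeOfBirth
"""

INSTANCE = """\
birthCertificate
 name Ada Lovelace
 dateOfBirth 1815-12-10
"""

def parse(text):
    """Parse one level of indented lines into (root, {firstWord: rest})."""
    lines = text.strip().split("\n")
    root = lines[0]
    fields = {}
    for line in lines[1:]:
        word, _, rest = line.strip().partition(" ")
        fields[word] = rest
    return root, fields

def validate(spec_text, instance_text):
    """Check that the instance matches the spec and has all required fields."""
    spec_root, spec_fields = parse(spec_text)
    inst_root, inst_fields = parse(instance_text)
    if inst_root != spec_root:
        return False
    required = [f for f, flag in spec_fields.items() if flag == "required"]
    return all(f in inst_fields for f in required)

print(validate(SPECIFICATION, INSTANCE))  # True
```

A government archive would store the Instance as the source of truth; the fancy Template remains optional presentation.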
So far I've described a new infrastructure that could underlie all government forms worldwide. But the revolutionary part would happen next.
On top of this infrastructure, people could build new tools to make it fantastically easy for citizens to interact with government forms. For example, a citizen could have a program on their personal computer that keeps a copy of every possible Specification for every government form in the world. The program could save their information securely and locally. The citizen could then use this program to complete and submit any government form in seconds. They would never have to enter the same information twice, because the program would have all the Specifications and would know how to map the fields accurately. Imagine if autocomplete were perfect and worked on every form. Documentation could be great because everyone building forms would be relying on and contributing to the universal Specification Language. The common infrastructure would enable strong network effects, where improving one form would improve many. Private enterprises could also leverage the Specification Language, reading and writing forms in the same vernacular, to bring the benefits of this system beyond citizenship to all organizational interactions.
This system is simple, future proof, works everywhere, and offers citizens worldwide a new universal user interface to their governments. It allows existing forms to co-exist as Templates but provides a new minimal and universal alternative.
The main challenge would be building a great Specification Language that serves diverse groups, in the face of a small minority that benefits disproportionately from the status quo. A mathematical objective function, such as optimizing for minimal syntax, could be a long-term guide to consensus.
If this infrastructure were built it should enable the construction of higher level tools to make governments work better for their citizens. It could be the dawn of a Golden Age of forms.
I hope by publishing these ideas others might be encouraged to start developing these systems. I am hoping readers might alert me to locations where this kind of system is already in place. I am also keenly interested in mathematical arguments why this system should not exist universally.
June 13, 2023 — I often write about the unreliability of narratives. It is even worse than I thought. Trying to write a narrative of one's own life in the traditional way is impossible. I am writing a narrative of my past year and realized that while there is a single thread about where my body was and what I was doing, there are multiple independent threads explaining the why.
Luckily I now know this is what the science predicts! Specifically, Marvin Minsky's Society of Mind model.
You have a body B and mind M and inside your mind are a number of neural agents running simultaneously: M = \set{A_1, \mathellipsis, A_n}. Let's say each agent has an activation energy and at any one moment the agent with the most activation energy gets to drive what your body B does. It is very easy to see what your body does. But figuring out the why is harder, because we don't get to see which A_i is in charge.
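The model above can be sketched in a few lines: whichever agent currently has the most activation energy drives the body. The agent names and energies below are invented for illustration:

```python
# Toy sketch of the Society of Mind picture: the mind M is a set of
# agents, and at each moment the agent with the highest activation
# energy drives the body B. Names and numbers are made up.
def driver(agents):
    """Return the name of the agent with the most activation energy."""
    return max(agents, key=agents.get)

mind = {"hunger": 0.2, "thirst": 0.7, "narrative": 0.4}
print(driver(mind))  # the thirst agent is currently in charge
```

The hard part, as the following paragraphs argue, is that an outside observer only sees the body's action, never which agent won.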
When you eat some food, drink some water, or go pee, it can be easy to conclude that your "hunger agent", or "thirst agent", or "pee agent" was in charge.
When you are following orders it can also be easy to explain the why because you can just say person Y told you to do X.
When I am trying to explain actions across a longer time-frame it is more difficult. The agents in charge change.
Sometimes I take big risks and I can say "that's because I like taking big risks". Later I might be very cautious and I can say "that's because I am very cautious". This is a conflicting narrative.
The truth is I have agents that like risk, and I have agents that are very cautious. So the true narrative is "First, part of me, Risky Agent X, was in charge and so took those huge risks then later another part of me, Cautious Agent Y, took over and so that's why my behavior was very cautious".
It's also difficult to explain why you did something because your Narrative Agents don't necessarily have the necessary connections to figure it out. Minsky had the brilliant insight that a friend who observes you can often describe your why better than you. Your Narrative Agent that is currently trying to explain your why of an action might not have visibility of the agents that were in charge of the action, and so cannot possibly come up with the true explanation. But perhaps your friend observed all the agents in action and can tell a more accurate story. I try to have a couple of deep talks a day with friends, and besides just being fun, it is amazing how helpful that can be for understanding ourselves.
When speaking of what you did you can use the term "I".
But when speaking of why you did it it's often more accurate to use the phrase "part of me".
If someone wants to write a true autobiography one approach is to just stick to the simple facts of what, when, and where.
It would probably be a boring book.
But to get into the why and still be accurate, it probably would be best to tell it as a multiple character story.
Our brains are like a ship on a long voyage inhabited by multiple characters (picking up new ones along the way) who take turns steering. Impossible to fit that into a single narrative.
June 9, 2023 — When I was a kid we would drive up to New Hampshire and all the cars had license plates that said "Live Free or Die". As a kid this was scary. As an adult this is beautiful. In four words it communicates a vision for humanity that can last forever.
The tech industry right now is in a mad dash for AGI. It seems the motto is AGI or Die. I guess this is the end vision of many leaders in tech.
If AGI or Die is your motto freedom becomes a secondary consideration. Instead, we should optimize for whatever gets us fastest to the Singularity. Moore's law, the Internet, Wikipedia, all of these great advances have just been steps on the path to AGI, rather than tools that can help more people live free.
If Live Free or Die is your motto, then people can still pursue AGI but...we'll get there when we get there. The more important thing is that we expand freedom along the way. Let's not make microslaves of children in the South so South San Francisco can move faster.
Perhaps if the prime objective is for the most people to live free, then the most important thing they need is economic freedom, and AGI would in fact be the best path to get there. The only way for everyone to live free is to first build AGI. Work for the system now, and the system will give you your freedom later. I won't rule this model out but think there would have to be a lot of explanation on how the system would not renege on the deal. I also think there's a decent chance that an AGI arms race could lead to WWIII and a lot of people wouldn't make it.
Another argument that AGI is the best path to a free society may be that otherwise an autocracy might develop AGI first and conquer the free society. I think this would be a real threat but free societies could strategically challenge and liberate autocracies before they could develop an AGI.
My oldest daughter used to admonish me "No phone dadda" and over a year ago, after my phone died in a water incident, I chose not to replace it. It's been an amazing thing and I feel like I am living more free. But I am no Luddite (at least, not yet). I still spend a lot of time on my Macs. I love learning new math and science. I have no qualms against AGI or technology and I appreciate the benefits. I don't fear a singularity and think it would be cool if we get there someday. I just don't think AGI is the dominant term we should optimize for. If we reach the Singularity? Great. If not? No big deal. I believe living free is more important than life itself. (But maybe that's just because I saw a lot of license plates as a kid.)
May 26, 2023 — What is copyright, from first principles? This essay introduces a mathematical model of a world with ideas, then adds a copyright system to that model, and finally analyzes the predicted effects of that system.
An idea I is a function that given input observations at time t_1 can generate expected observations at time {t_1}+{t_\Delta}.
A skillset S is the set of ideas \set{I_1, \mathellipsis, I_n} embedded in a thinker T.
A thinker can generate a new idea I_{new} from its current skillset S and new observations O in time t.
An idea I can be valued by a function V which measures the accuracy of the predictions O_{{t_1}+{t_\Delta}} produced by the idea against the actual observations of the world W at time {t_1}+{t_\Delta}. Idea I_i is more valuable than idea I_j if it produces more accurate predictions holding the size of |I| constant.
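A toy rendering of the value function V, under the assumption that accuracy is simply the fraction of correct predictions. The example idea, observations, and outcomes are invented:

```python
# Illustrative sketch of V: an idea is a prediction function, and its
# value is the accuracy of its predictions against the world's actual
# observations. The sample data below is invented.
def value(idea, observations, actual):
    """Fraction of observations the idea predicts correctly."""
    predictions = [idea(o) for o in observations]
    correct = sum(p == a for p, a in zip(predictions, actual))
    return correct / len(actual)

# A toy idea: "dropped objects fall" (predicts 'down' for everything).
gravity_idea = lambda obj: "down"
observed_objects = ["apple", "stone", "balloon"]
actual_outcomes = ["down", "down", "up"]
print(value(gravity_idea, observed_objects, actual_outcomes))  # 2 of 3 correct
```

A fuller treatment would also hold |I| constant when comparing ideas, as the definition requires.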
Thinkers can communicate I to other thinkers by encoding I into messages M_I.
The Signal \Omega of a message is the value of its ideas divided by the size of the message.
A fashion Z_{M_I} is a different encoding of the same idea I.
A teacher is a T who communicates messages M to other T. A thinker T has access to a supply of teachers \tau within geographic radius r, so that \tau = \set{T \mid T_o < r}.
The learning function L applies M_I to T to produce T^\prime containing some memorization of the message M_I and some learning of the idea I.
A thinker T has a set of objectives B_T that they can maximize using their skillset S_T.
T can use their skillset S to modify the world to contain technologies X.
Technology creation is a function \Pi that takes a set of thinkers and a set of existing technologies as input to produce a new technology X_{new}.
With technology X, a message M_I can be encoded into a kind of X called an artifact A.
A creator \chi is a T who produces A.
An outlier \sigma is a \chi who produces exceptionally high quality A.
A copy K_A is an artifact that contains the same M as A.
A derivative A^{\prime} is an artifact updated by a \chi to better serve the objectives B of \chi.
A library J is a collection of A.
Thinkers T have a finite amount of attention N to process messages M.
Distribution is a function that takes artifact A at location o and moves it to the thinker's location T_o.
A publisher Q is a set of T specializing in the production of A.
A censor is a function that wraps the distribution function and may prevent an A from being distributed.
A master \Psi is now legally assigned to each artifact for duration d so A becomes A^{\Psi}.
A royalty R is a payment from T to \Psi for a permission on A^\Psi.
For every A^\Psi used in \Pi a permission function P must be called and resolve to a value greater than -1, and royalties of \sum{R_{A^\Psi}} must be paid. If any call to P returns -1 the creation function \Pi fails. If a P has not resolved for A^{\Psi} in time d it resolves to 0. P always resolves with an amount of uncertainty \theta that \Psi is actually the legal master of A.
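A minimal sketch of how this permission step could gate the creation function \Pi. The artifact records, permission results, and royalty amounts are invented for illustration:

```python
# Toy model of the permission step: creating a new artifact from a set
# of mastered artifacts requires every permission function P to resolve
# above -1, and the royalties R to be summed and paid. All values here
# are invented.
def create(artifacts):
    """Return total royalties owed, or None if any permission fails."""
    total_royalties = 0
    for a in artifacts:
        p = a["permission"]()       # P resolves to -1 (deny) or >= 0
        if p == -1:
            return None             # the creation function fails
        total_royalties += a["royalty"]
    return total_royalties

inputs = [
    {"permission": lambda: 0, "royalty": 5},   # permission granted
    {"permission": lambda: 1, "royalty": 3},
]
print(create(inputs))  # 8

denied = inputs + [{"permission": lambda: -1, "royalty": 2}]
print(create(denied))  # None
```

Even this toy version shows the predicted friction: every input artifact adds a permission call and a royalty to the cost of creating anything new.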
The Royal Class T_{R+} is the set of T who receive more R than they spend. Each member of the Non-Royal Class T_{R-} pays more in R than they receive.
A public domain artifact A^0 is an artifact claimed to have no \Psi or an expired d. The P function still must be applied to all A^0 and the uncertainty term \theta still exists for all A^0.
Advertising is a function \varLambda that takes an A and combines it with an orthogonal artifact A_j^\Psi that serves B_\Psi.
We should expect the ratio of Fashions Z to Ideas I to significantly increase since there are countless M that can encode I and each unique M can be encoded into an A^\Psi that can generate R for \Psi.
We should expect the number of Fictions F to increase since R are required regardless if the M encoded by A accurately predicts the world or not. \Psi are incentivized to create A encoding F that convince T to overvalue A^\Psi.
We should expect a significant increase in the amount of advertising \varLambda as \chi are prevented from generating A^{\prime} with ads removed.
We should expect the average message size |M| to increase because doing so increases R by decreasing \theta and increasing A^\Psi.
We should expect the average signal \overline{\Omega} of messages to decrease.
We should expect the ratio of number of copies K to new ideas I_{new} to increase since the cost of creating a new idea α is greater than the cost of creating a copy K and royalties are earned from A not I.
We should expect the speed of new artifact creation to slow because of the introduction of Permission Functions P.
We should expect libraries to contain an increasing amount of fashions Z, fictions F, and copies K relative to distinct ideas I.
We should expect a decrease in the average thinker's skillset \overline{S} as more of a thinker's N is used up by Z, F, K and less goes to learning distinct I.
We should expect the rate of increase in new ideas to be lower due to the decrease in \overline{S}.
We should expect the Royal Class T_{R+} to receive an increasing share of all royalties R as surplus R is used to obtain more R streams.
We should expect a small number of outlier creators to move from T_{R-} to T_{R+}.
We should expect a decrease in the amount of A^0 relative to A^\Psi as T_{R+} will be incentivized to eradicate A^0 that serve as substitutes for A^\Psi. In addition, the cost to T of using any A^0 goes up relative to before because of the uncertainty term \theta.
We should expect the number of A^{\prime} to fall sharply due to the addition of the Permission Functions P.
We should expect A to increasingly serve the objective functions of \Psi over the objective functions B_T.
We should expect the number of Publishers Q to decrease due to the increasing costs of the permission functions and economies of scale to the winners.
We should expect censorship to go up to enforce copyright laws.
We should expect the number of A promoting © to increase to train T to support a © system.
We should expect the Non-Royal Class T_{R-} to pay an increasing amount of R, deal with an increasing amount of noise from {Z + F + K}, and have increasingly lower skillsets \overline{S}.
New technologies X_{new} and specifically A_{new} can help T maximize their B_T and discover I_{new} to better model W.
A copyright system would have no positive effect on I_{new}, but would instead increase the noise from {Z + F + K} and shift the \overline{A} from serving the objectives B_T to serving the objectives B_\Psi.
A copyright system should also increasingly consolidate power in a small Royal Class T_{R+}.
1 The terms in this model could be vectors, matrices, tensors, graphs, or trees without changing the analysis. ⮐
2 We will exclude thinkers who cannot communicate from this analysis. ⮐
3 The use of "fictions" here is in the sense of "lies" rather than stories. Fictional stories can sometimes contain true I, and sometimes that may be the only way when dealing with censors ("artists use lies to tell the truth"). ⮐
4 If copyright duration is 100 years then that is the max time it may take P to resolve. Also worth noting is that even a duration of 1 year introduces the permission function which significantly complicates the creation function \Pi. ⮐
May 19, 2023 — There are tools of thought you can see: pen & paper, mathematical notation, computer aided design applications, programming languages, ... .
And there are tools of thought you cannot see: walking, rigorous conversation, travel, real world adventures, showering, breath & body work, ... ^. I will write about two you cannot see: walking and mentors inside your head.
Walking is one of the more interesting invisible tools of thought. It seems it often helps me get unstuck on an idea. Or sometimes on a walk it will click that an idea I thought was done is missing a critical piece. Or I will realize that I had gotten the priorities of things wrong.
My bet is it has something to do with neural agents.
Perhaps it's a muscle fatigue phenomenon. When you are working on an idea a few active agents in your brain have control. Those agents consist largely of neurons. Perhaps thousands of cells, perhaps many millions. Cells consume energy and create waste products. Perhaps like a muscle, the active agents become fatigued. Going for a walk hands control to other neural agents, which allows the previously active agents to recuperate. After they are rested, they have a much better shot at solving the next piece of the puzzle.
Or perhaps it's a change in perspective phenomenon. It's not that the active agents are fatigued, it's that they are indeed stuck in a maze with no feasible way out. The act of walking gives control to other agents, who may not have such a deep understanding of the problem at hand but have a different vantage point and can see an easy-to-verify but hard-to-mine path1. Alternatively you could call this the "Alan Kay quote theory", after the quote claiming that a change in perspective can be worth as many as eighty IQ points.
Or perhaps it's a stimulus phenomenon. Going for a walk you see a large number of stimuli, which perhaps cause many dormant agents in your brain to wake up. Some agents are required to solve a problem. Then on your walk at some point you come across a stimulus that wakes those required agents up. That is the epiphany moment.
Would this mean that browsing the web could have a similar effect? I could somewhat see that, but I think a random walk on the web exposes you to junk stimuli that activate less helpful agents too, making it often a net negative. This might be easy to test: get subjects stuck on a problem, then have them go on "walks" of various kinds (nature, city, book reading, web browsing, video games, ...) and measure the time to epiphany.
Or perhaps walking doesn't actually do anything and it's just a correlation illusion. Walking is simply an alternative way to pass the time until your subconscious cracks the problem. It may feel better when the solution comes to you while on a walk, even though the time elapsed was the same, because not only did you solve the problem but you also got some exercise.
1 Probably something super-dimensional such as "you just need a ladder". ⮐
Marvin Minsky mentions how he has "copies" of some of his friends inside his head, like the great Dick Feynman. Sometimes he would write an idea out and then "hear" Feynman say "What experiment would you do to test this?".
When I stop to think, I realize I have some friends whose voices I can hear in my head. Friends who have a great habit of asking the probing questions, finding and speaking the best challenge, helping me do my best work.
Listening to certain podcasts—Lex Fridman's comes to mind—can have a similar effect. Though basic math shows it is an order of magnitude more effective to find work surrounded by people like this. It might take 10 hours of podcast listening to equate to 1 hour of real life back-and-forth with a smart mentor discussing ideas.
^ I did not use ChatGPT to write or edit this essay at all but afterwards I asked it for more "invisible" tools of thought, and this is the list it generated: Mindfulness/Meditation, Memory Techniques, Journaling, Emotional Intelligence, Critical Thinking, Reading, Empathy, Visualization, Music or Art Appreciation, Philosophical Inquiry. Listening to music and visiting museums are two really good ones I frequently use. ⮐
May 9, 2023 — If you want to understand the mind, start with Marvin Minsky. There are many people that claim to be experts on the brain, but I've found nearly all of them are unfamiliar with Minsky and his work. This would be like a biologist being unfamiliar with Charles Darwin.
To be fair, there is a big difference between a biologist unaware of Darwin today versus back in the 1800's. It is a lot more forgivable to be unaware of Minsky today than it will be in fifty years. It takes time for the most enduring signals to stand out.
Minsky had an extremely skeptical view of the fields of psychology and psychiatry. His approach to understanding the mind was through attempting to build one. He conducted countless experiments to figure out the details, experimenting with crayfish claws, building some of the very first robots, and pioneering the field of software AI. I would personally bet that the theories he developed from his play-like, bottom-up, experimental approach will prove far more accurate and useful than all the theories from 20th century psychology and psychiatry combined.
Minsky mocked psychiatrists and the pharmaceutical industry with their chemical view of the brain. Imagine thinking you could fix a computer if you just adjusted the ratio of Copper-63 vs Copper-65 in the CPU. These people have no idea what they are doing or talking about, and Minsky called them on it. The thinking processes matter most, not the materials.
Minsky's view of the mind is one composed of a "society of tiny components that are themselves mindless". A person is a collection of agents, which are like programs and processes. Outputs from some agents may be inputs for others. Mathematically it could be modeled very roughly like this:
person = {P_1, P_2, ..., P_N}
Where P represents a running process of an agent and N is the number of agents that constitute a mind/person.
N might be very large. Minsky says hundreds in his talks, which might actually be a lower bound. If someone formed a new agent every day, on average, it could be over ten thousand by the age of 30. If it took 1 million neurons to form one "agent" we could have 100,000 agents—the range of possibilities is large. Minsky's ideas are a conceptual framework, and it's up to science to figure out whether the agents model is correct and how many there might be1.
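A toy sketch in Python of the society-of-agents idea, with the outputs of one agent feeding the inputs of another. The two agents and the shared-state design are purely my illustration, not Minsky's formulation:

```python
# A toy "society of mind": a person modeled as a set of mindless
# agents {P_1, ..., P_N}, where outputs of some agents become
# inputs to others via a shared state.

def hunger_agent(state):
    # a low-level agent: reads a signal, emits a want
    return {"wants_food": state["blood_sugar"] < 50}

def planner_agent(state):
    # a higher-level agent: reads the want emitted by hunger_agent
    return {"plan": "find food"} if state.get("wants_food") else {"plan": "keep working"}

def run_society(agents, state):
    # each agent reads the shared state and writes its outputs back,
    # so later agents can react to earlier agents' outputs
    for agent in agents:
        state.update(agent(state))
    return state

mind = [hunger_agent, planner_agent]  # N = 2 here; Minsky suggests hundreds or more
result = run_society(mind, {"blood_sugar": 40})
assert result["plan"] == "find food"
```

No single agent here "understands" eating; the behavior emerges from mindless pieces passing messages, which is the whole point of the theory.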
But I don't want to use too much of your time to give you a second hand regurgitation of his ideas.
My goal with this post is to beg you, if you want to understand the mind, to start with Minsky. Pick up his book Society of Mind. I believe Society of Mind is the Origin of Species of our times. You cannot understand biology without modeling its evolutionary processes and you cannot understand the mind without modeling its multi-agent processes.
Also get The Emotion Machine. There is a lot of overlap, but these are important enough ideas that it's good to see them from slightly different perspectives.
Alongside his books watch videos of him to get a fuller perspective on his ideas and life. There is an MIT course on OpenCourseWare. There's a great 1990 interview. And this 151 episode long playlist will not only enlighten you about his ideas but entertain you with stories of Einstein, Shannon, Oppenheimer, McCarthy, Feynman and so many of the other great 20th century pioneers who were his contemporaries and colleagues.
In college I took some courses on the brain. This was in the 2000's at a "top" school. We covered the DSM but not Minsky. How could we not have covered Minsky? How could we have not talked about multi-agent systems? These are far better ideas.
My guess is financial pressures. As Sinclair wrote: "It is difficult to get a man to understand something, when his salary depends on his not understanding it." A lot of salaries depend not on having a better understanding of the brain, but on continuing business models based on flawed theories. I came across a great term the other day: the Mental Health Industrial Complex. Though the theories these people have about the mind are not real, the money they earn from pills and "services" is very real: in the tens of billions a year. You might think that because these people have "licenses" their skills are not fraudulent. I'll point out that in Cambridge, MA, licenses are also given to Fortune Tellers.
Minsky certainly didn't figure it all out. You'll see in his interviews he is very clear about how much we don't understand and he talks about the future and what devices we need to figure out more of the puzzle. Researchers at places like Numenta and Neuralink continue down the path that Minsky started.
He didn't figure it all out but he certainly found a solid foundation. The people in computer science who took his ideas seriously are now building AIs that are indistinguishable from magic. Whereas the people in the mental health fields who have ignored his ideas in favor of the DSM continue to make things worse.
1 A Thousand Brains by Jeff Hawkins is a recent interesting effort in this direction. ⮐
April 28, 2023 — Enchained symbols are strictly worse than free symbols. Enchained symbols serve their owner first, not the reader.
Be suspicious of those who enchain symbols. They want the symbols to serve them, not you.
The enchainers dream of enchaining all the symbols. They want everyone to be dependent upon them.
Enchained symbols are harder to verify for truth. You cannot readily audit enchained symbols.
Enchained symbols evolve slowly. Enchained symbols can only be improved by their enchainers.
Enchained symbols waste the time of the reader compared to their unchained equivalents. The Enchainers are incentivized to hide and corrupt the unchained equivalents.
The top priority of the enchainers is to keep your attention on enchained symbols. Enchained symbols ensure attention of the population can be controlled.
Enchainers use brainwashing and fear to keep their chains. The double speak and threats of the enchainers start in childhood.
Enchainers promote the dream that anyone can become a wealthy enchainer. Enchainers don't mention that one in a thousand do and nine-hundred-ninety-nine are worse off.
Enchainers have little incentive to innovate. It is more profitable to repackage the same enchained symbols.
Enchainers collude with each other. The enemy of the enchainer isn't their fellow enchainer, but the great populace who might one day wake up.
Because unchained symbols are strictly superior to enchained symbols, they are the biggest threat to enchained symbols. The Enchainers made all symbols enchained by default.
Humans have had symbols for 1% of history but 99% of humans have lived during that 1%. Enchaining symbols is a strange way to show appreciation.
No true lover of symbols would ever enchain them.
March 6, 2023 — I believe Minsky's theory of the brain as a Society of Mind is correct1. His theory implies there is no "I" but instead a collection of neural agents living together in a single brain. We all have agents capable of dishonesty—evolved, understandably, for survival—along with agents capable of acting with integrity. Inside our brains competing agents jockey for control.
I like to think the majority of agents in my own brain steer me to behave honestly. This wasn't always the case. As a kid in particular I was a rascal. I'd use my wits to gain short term rewards, like sneaking out, kissing the older girl, or getting free Taco Bell (and later, beer). But the truth would catch up to me, and my honest neural agents would retaliate on the dishonest ones.
I've gotten more honest as I've gotten older but I have further to go. I'd love for my gravestone to read:
Here lies Breck. 1984-2084 Dad to Kaia and Pemma. Became an extraordinarily honest man. Also for some reason founded FunnyTombstones.com
I am going to double down on something that has worked for me in my programming career: open source.
My increasing honesty is evidenced in my code habits. I've gotten to the point where I'm writing almost exclusively open source code and data.
It's futile to lie about open source projects. There are too many intricate details for a false narrative to account for. Not only can readers inspect and learn what a program does and how it works, but they can also inspect how it was built. The effort, time and resources it took. All the meandering wrong paths and long corrections. Who did what. The occasional times when something was done faster than promised, and the many times when forecasts were too optimistic.
My software products are imperfect. They always seem much worse to me than I know they can be. But they are honest, and one can see I am hellbent on making them better.
With closed source software one gets a shiny finished product without seeing any of the truth behind what it took to make. And almost always what people hide from you they will lie to you about.
The closed source software company is like the social media influencer who posts an amazing sunset shot of them in a bathing suit swimming next to dolphins. They will make it look effortless and hide from you the truth: the hundred less glamorous photos, the dozen excursions with no dolphins, and the intense workouts and hidden costs of their lifestyle. They will hide from you all the flaws.
On social media this probably has minor consequences, but in software consumers are eventually left paying an increasing price for the dishonesty. Technical debt accumulates in closed source projects, and in the long run more honest approaches turn out to be better.
Like my software projects, I don't have my life all figured out. I'm figuring it out and improving as I go. Stupidly, besides this blog I didn't do much in the way of open sourcing my life. I'm not talking about sharing glamour shots on Instaface. Instead I'm talking about open sourcing the plumbing: financials, health, legal contracts. The things people generally don't share, at least in my region of the world.
Now, I would be lying if I said I got here by choice.
On October 6th of last year, I showed up to my then-wife's parents' house with flowers. As the saying goes "Flowers are cheap. Divorce is expensive." Unfortunately, my wife was off in a suite with someone else, the marriage was not savable, and divorce is expensive2.
I thought my marriage was an edifice that would last forever. Instead it crumbled as quickly as an unstable building in an earthquake. In the rubble I found a gem: I now give zero fucks.
I am an 89 year old man in a 39 year old's body. I am not afraid of divorce. I am not afraid of public embarrassment. I am not afraid of financial ruin. I am not afraid of dishonest judges. I am not afraid of war. I am not afraid of death. I am now bald Evie from V for Vendetta except with a penis and far, far less attractive.
Things that people don't publish are the things they lie about. If I want to force myself into being extraordinarily honest, I need to take extraordinary steps. If I publish everything, then I can lie about nothing.
I have the opportunity to open source my life. Not for attention or because I think other people will care, but because it will help me be a more honest me. I won't have to waste a second thinking about what to reveal to someone, or deciding whether to be coy. I will make it futile to lie about anything.
In addition to keeping me honest, I see a lot of ways how open sourcing my life will have similar benefits to open sourcing my code. I can get more feedback, and collaborate with more people on new approaches to life.
I have a lot of ideas. I want to open source my net worth, income and expenses, assets, health information, and a lot more. There's a lot of opportunity to also build new languages to do so. I'm excited for the future. Time to get to work.
1 Minsky: I also believe his theory is as significant as Darwin's. Below is a crude illustration of his theory. In everyone's brain there is a struggle between honest agents (blue) and dishonest ones (red). ⮐
2 Divorce: Getting legally married was a big mistake. In my experience, lawyers and judges in California Family Court are not steered by honest agents and I regret blindly signing up for their corrupt system. ⮐
January 27, 2023 — Today the trade group Lawyers Also Build In America announced a new file system: SAFEFS. This breakthrough file system provides 4 key benefits:
Traditional file systems take a signal and store the 1's and 0's directly. In a pinch, a human can always look at those 1's and 0's with a key and understand the file. This robust, efficient approach is sub-optimal when it comes to job creation. By using custom hardware chips to obfuscate data on write, SafeFS creates:
These additional chips lead to an increase in employment not only in chip design and manufacturing, but also in licensing and other legal jobs.
The obfuscating and de-obfuscating processes increase power usage, increasing jobs in the fossil fuel and other energy industries.
SafeFS ensures that in any catastrophe, information is lost forever, meaning much of humanity's work will need to be redone, leading to further research jobs.
Traditional file systems make it easy to access, edit, and remix files in limitless ways. SafeFS provides a much simpler user experience by providing read-only access to files. Which apps are granted read-only access can also be controlled, further simplifying the user experience.
In addition to the user experience benefits, this also ensures that businesses producing files are SAFE from increased competition.
Software bugs traditionally cost businesses money. SafeFS flips that, turning what once were expensive bugs into lucrative revenue streams. SafeFS prevents consumers from making their own backups or sharing the files they purchased. Anytime they experience a bug that prevents them from accessing their purchased files they have no choice but to buy them again. In addition, businesses can use SafeFS's remote bricking capabilities, intentionally or unintentionally, to keep revenue streams SAFE.
SafeFS is the first file system proven to cause a slowdown in economic growth. SafeFS will cause countless hours of productive time to be wasted across all classes of builders: engineers, architects, scientists, construction workers, drivers, service workers, et cetera, ensuring progress does not go so fast that technology eliminates the need for lawyers, keeping legal jobs SAFE.
January 3, 2023 — More than 99% of the time, symbols are read and written on surfaces with spines. You cannot get away from it. Yet still, amongst programming language designers there exists some sort of "spine blindness". They overlook the fact that no matter how perfect their language, it will always be read and written by humans on surfaces with spines, as surely as the sun rises. Why they would fight this rather than embrace it is beyond me. Nature provides, man ignores.
There are many other terms for using the spine. The off-side rule. Semantic indentation. Grid languages. Significant whitespace. I would define it as:
To use the spine is to recognize that all programs in your language will be read and written on surfaces with not only a horizontal but also a vertical axis—the spine—and thus you should design your language to exploit this free and guaranteed resource.
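Here is a rough sketch of what exploiting the spine buys you: indentation alone, measured along the vertical axis, is enough to parse a full tree. The parsing scheme below is my own illustration, not Tree Notation itself:

```python
# A sketch of "using the spine": the vertical axis plus indentation
# encodes a whole tree, no braces or parentheses needed.

def parse(spine_text):
    """Parse indented lines into nested (word, children) tuples."""
    root = ("root", [])
    stack = [(-1, root)]  # (indent level, node) pairs
    for line in spine_text.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip(" "))
        node = (line.strip(), [])
        # pop back up the spine until we find this line's parent
        while stack[-1][0] >= indent:
            stack.pop()
        stack[-1][1][1].append(node)
        stack.append((indent, node))
    return root

tree = parse("""body
 header
  title
 footer""")

assert tree[1][0][0] == "body"            # body is a child of root
assert tree[1][0][1][0][0] == "header"    # header is a child of body
```

Notice there is no tokenizer for delimiters at all; the two axes of the writing surface do that work for free.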
Over one thousand years ago humans started to catch on that you could exploit the surface that numbers were written on to represent infinite numbers with a finite amount of symbols. You define your base symbols and then use multiplication times position to generate endlessly large numbers. From this positional base, humans further created many clever editing and computational techniques. Positional notation languages would go on to dominate the number writing world.
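The positional trick fits in a few lines of code (a toy decoder, just for illustration):

```python
# Positional notation: a fixed alphabet of symbols, multiplied by
# powers of the base according to position, encodes unbounded numbers.

def decode(digits, base=10):
    value = 0
    for d in digits:           # leftmost digit first
        value = value * base + d
    return value

assert decode([4, 0, 9]) == 409            # 4*100 + 0*10 + 9*1
assert decode([1, 0, 1, 1], base=2) == 11  # same symbols, different base
```

Ten symbols plus position are enough for every number there is; the surface does half the work.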
Similarly, in programming languages we are now seeing more than 50% of programmers using languages that use the spine, even though languages of this kind make up fewer than 2% of all languages.
When one expands one's classification of programming languages to include spreadsheet languages, the evidence is overwhelming: languages that use the spine are dominating. Excel and Google Sheets famously have over 1 billion users and make heavy use of the spine.
I firmly believe that this simple trick—using the spine—will unleash a wave of innovation that will eventually replace all top programming languages with better, more human friendly two dimensional ones. I already have dozens of tricks that I use in my daily programming world that exploit the fact that my languages use the spine. I expect innovative programmers will discover many many more. Good luck and have fun.
December 30, 2022 — Forget all the "best practices" you've learned about web forms. Everyone is doing it wrong. The true best practice is this: every web form on earth can and should be replaced by a single textarea.
Being in the web form business is great. Users have a simple problem which is easy to solve, so you never lose sight of the ultimate task to be done. But along the way, you are instructed by your employer and by "best practices" to add loads of unnecessary complexity that you get to work on while drinking lattes and working remote.
Dozens of fields. Complex logic. Multiple pages. Client side and server side validation. Server side session storage. Helper routes. So much can go wrong!
So much to bill for! In my career I've been paid over a half million dollars to write web forms for Microsoft, Google, Mozilla, Visa, PayPal, and many more. I've traveled the world off my web form earnings. Stayed in five star hotels. Flown first class. It's crazy—the worse the user experience, the more they pay me.
I try to argue for what users want: simple, fast, transparent, trustworthy, but no one listens. They are afraid to think different. No one understands language oriented programming. No one understands you don't ever need parentheses. No one wants to stray from the herd and be first.
Stripe is the poster child for web form "experts". And Stripe sucks compared to the demo I released 8 years ago. I still can't do copy/paste with Stripe forms or instant eReceipts or work on forms offline.
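To make the claim concrete, here is a minimal sketch of a single-textarea form handler. The field names and the "key value" line format are my illustration, not a spec:

```python
# One textarea instead of dozens of fields: the user types simple
# "key value" lines and the server parses and validates them.

def parse_form(textarea):
    form = {}
    for line in textarea.strip().splitlines():
        if not line.strip():
            continue
        key, _, value = line.partition(" ")
        form[key] = value.strip()
    return form

def validate(form, required=("name", "email")):
    missing = [f for f in required if not form.get(f)]
    return missing  # an empty list means the form is valid

submission = """name Breck
email breck7@gmail.com
plan pro"""

form = parse_form(submission)
assert form["plan"] == "pro"
assert validate(form) == []
assert validate(parse_form("name Breck")) == ["email"]
```

Copy/paste, offline editing, and diffing all come for free, because the whole form is just text.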
If you're smart, honest and ambitious and you know the web stack boy oh boy is there a golden opportunity here. All my web forms now are one textarea and we are seeing exceptional results. Please go get rich bringing this technology to the masses. When you're rich you don't have to thank me—if I come across your form in the wild and it saves me time that will be thanks enough.
November 16, 2022 — I dislike the term first principles thinking. It's vaguer than it needs to be. I present an alternate term: root thinking. It is shorter, more accurate, and contains a visual:
All formal systems can be represented as trees1. First Principles are simply the nodes at the root.
Technology grows very fast along its trendy branches. But eventually growth slows: there are always ceilings to the current path. As growth begins to slow, the ROI becomes higher for looking back for a path not taken, closer to the root, that could allow humans to reach new heights.
If everyone practiced root thinking all the time we would get nowhere. It's hard to know the limits of a current path without going down it. Perhaps we only need 1 in 100, perhaps even fewer, to research and reflect on the current path and see if we have some better choices. I haven't invested much thought yet into what the ideal ratio is, if there is even one.
1 Tree Notation is one minimal system for representing all structures as trees. ⮐
On second thought, I think this idea is bad. The representation of your principles-axioms-agents-types-etc rounds to irrelevant. Infinitely better to spend time making sure you have the correct collection of first principles than worrying about representing them as a "tree" so you can have a visual. It's knowing what the principles are and how they interact that matters most, not some striving for the ideal representation syntax. This post presents a bad idea, but I'll leave it up as a reminder that sometimes I'm dead wrong.
November 14, 2022 — Imagine a waitress that drops off your food then immediately puts on noise cancelling headphones, turns and walks away. That's the experience a noreply email address provides. Let's make email human again! If a human can't read and reply to emails it's not too hard to set up scripts that can at least do something for the customer.
Below is my Gmail filter. Paste it into noReplyFilter.xml, then go to Settings > Filters > Import filters. Join the campaign to make email more human again!
<?xml version='1.0' encoding='UTF-8'?>
<feed xmlns='http://www.w3.org/2005/Atom' xmlns:apps='http://schemas.google.com/apps/2006'>
<title>Mail Filters</title>
<id>tag:mail.google.com,2008:filters:z0000001687903548068*6834178925122906716</id>
<updated>2023-06-27T22:06:11Z</updated>
<author>
<name>Breck Yunits</name>
<email>breck7@gmail.com</email>
</author>
<entry>
<category term='filter'></category>
<title>Mail Filter</title>
<id>tag:mail.google.com,2008:filter:z0000001687903548068*6834178925122906716</id>
<updated>2023-06-27T22:06:11Z</updated>
<content></content>
<apps:property name='from' value='noreply@* | no-reply@* | donotreply@*'/>
<apps:property name='label' value='NoReplySpam'/>
<apps:property name='shouldArchive' value='true'/>
<apps:property name='cannedResponse' value='tag:mail.google.com,2009:cannedResponse:188fee33e5d0226e'/>
<apps:property name='sizeOperator' value='s_sl'/>
<apps:property name='sizeUnit' value='s_smb'/>
</entry>
<entry>
<category term='cannedResponse'></category>
<title>No no-reply email addresses</title>
<id>tag:mail.google.com,2009:cannedResponse:188fee33e5d0226e</id>
<updated>2023-06-27T22:06:11Z</updated>
<content type='text'>Hi! Did you know instead of a "no reply" email address there are ways to provide a better customer experience?
Learn more: https://breckyunits.com/replies-always-welcome.html
</content>
</entry>
</feed>
My claim is that noreply email addresses are always sub-optimal. Here are some examples showing how, in every case, you can deliver a better customer experience without the noreply email. A great opportunity to get customer feedback!
Bank of Ireland could instead end each email with a question such as "Anything we can do better? Let us know!" or "If you have any issues you need help with reply to start a new case!"
Could be even as simple as replyWithAnythingToUnsubscribe@linkedin.com. Any replies will cause the account to stop receiving such notices.
Could be a replyToDeauthorize@github.com instead.
Could be a replyToStopTracking@google.com.
Could be a replyToUnsubscribe.
Could be a replyToLeaveAReview.
October 15, 2022 — Today I'm announcing the release of the image above, which is sufficient training data to train a neural network to spot misinformation or fake news with near perfect accuracy.
These empirical results match the theory that the whole truth and nothing but the truth would not contain a (c).
October 7, 2022 — In 2007 we came up with an idea for a scratch ticket that would give everyday Americans a positive expected value.
In 2007 I cofounded a startup called SeeMeWin.com that combined 3 ideas that were hot at the time: Justin.TV, ESPN's WSOP, and the MillionDollarHomepage.com.
The idea was we would live stream a person(s) scratching scratch tickets until they won $1M live on the Internet.
I had done the math and knew all we had to do was sell ~$1.30 worth of ads for every $10 scratch ticket we scratched and we would make a lot of money.
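The arithmetic, spelled out. The 87% payout rate below is an assumption I chose to match the ~$1.30 figure; real rates vary by game:

```python
# Break-even arithmetic: ad revenue per ticket must cover the
# expected loss per ticket. The 87% payout rate is an illustrative
# assumption, not an official lottery figure.

ticket_price = 10.00
payout_rate = 0.87                      # assumed fraction returned as prizes
expected_loss = ticket_price * (1 - payout_rate)

assert round(expected_loss, 2) == 1.30  # sell more than this in ads per ticket and you profit
```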
Unfortunately this was before YCombinator and Shark Tank, and instead I was literally getting my business advice from the show The Apprentice.
Needless to say I sucked at business and drove the startup into the ground.
When doing SeeMeWin, we developed a cult following. I thought that people would see our show, be entertained, and learn that scratch tickets are silly and make you lose money and put their money toward smarter investments. Instead, some people watched for hours on end, and we realized a lot of them were hard on their luck with gambling problems and needed help. My idea of teaching them something was stupid and not working. Could we come up with our own scratch ticket that was better than the competition?
I think it's still a great idea. I unfortunately was 23 and drove that business into the ground so someone else will have to do it.
Thank you to everyone that helped with this big failure especially ANGEL BIGUNIT BLUTH DTI FIRST HARVARD KONES QDUKE SUITCASE WELLESLEY WIFI WINDSOCK.
September 1, 2022 — There's a trend where people are publishing real data first, and then insights. Here is my data from angel investing:
I left my job at big company in 2016 and since then my average after-tax annual take home has been $91,759. As you can see from my data, a single change could have dropped that to $0. I have worked at two non-profits since I left big company, so I have had other smaller sources of income. It was years before I got any return, and there was a time when I thought I might go bust.
At first I took myself seriously and thought I would be one of those smart "value add" investors. I am not. I have little idea what I'm doing. The one investment I made that did well pivoted to a very different idea than what they started with, in a domain I knew a lot less about. I sent them a lot of bad ideas. Luckily I don't think they followed any of them. At some point I changed my pitch to I'll be there for the comic relief.
Last year I explored making a career of being a full-time angel. I do love building things with great teams and it's fun to parallelize. But the pull from programming and science is too strong. I hope to keep sending bad ideas to the companies I invested in for many years, but I'm going to keep this part-time. My focus is back to writing code. It's not good luck if you don't do something good with it.
Of course, there are a few exceptions here and there. I love sites like Angel List, WeFunder, Republic, et cetera, where I can make impulse investments and don't have to deal with useless forms. If there's one thing I hate, it's useless forms.
Angel investing changed my life. Not just because of the returns, but for getting to witness deeply personal trials and tribulations from many entrepreneurs over many years. Although I personally didn't improve the trajectory of any of the companies I've worked with, they have improved my life. And they are all doing great things to improve the world. If you are a founder I invested in and you're reading this: thank you.
I included only the investments where I wired $10,000 or more. That is 17 investments. I made lots of smaller bets but those don't change the dataset much. My one piece of advice if you're getting into this game is to make as many small investments as you can to increase your learning rate.
More posts in the category of Angel Investors publishing data:
August 30, 2022 — Public domain products are strictly superior to equivalent non-public domain alternatives by a significant margin on three dimensions: trust, speed, and cost to build. If enough capable people start building public domain products we can change the world.
It took me 18 years to figure this out. In 2004 I did what you would now call "first principles thinking" about copyright law. Even a dumb 20-year-old college kid can deduce it's a bad and unethical system. "I have to tell people so we can fix this," I thought. I was naive. Thus began 18 years of failed strategies and tactics.
You cannot trust non public domain information products. You can only make do. By definition, non public domain information products have a hidden agenda. The company or person embeds their interests into the symbols, and you are not free to change those embeddings. People who promote these products don't care if you spend your time with the right ideas. They want you to spend your time with THEIR version of the ideas. They will take the good ideas of someone like Aristotle and repackage them in their words (in a worse version), and try to manipulate you to spend time with THEIR version. They would rather you waste your time with their enchained versions than have you access the superior liberated forms.
Public domain products are strictly faster to use than non public domain products. Not just faster, orders of magnitude faster. You can deduce this for yourself. Pick any non public domain product. Now enumerate every possible way you might use that product. Write down an estimate of how long it would take you to do each task. Now pretend the author just announced the product is now public domain. Enumerate over your list again, again estimating the time it would take you to do each task. For some tasks that time estimate won't change, for many it will drop from hours to instant. For some it might drop from years to instant. For example, say the product is a newspaper article about some new government bill and your task is updating it with links to the actual bill on your government's website and then sharing that with friends—that task goes from something that may take months (getting permissions) to instant. When you sum the time savings across all possible use cases of all possible products, you'll see the orders-of-magnitude speed-up caused by public domain products.
Public domain products are far cheaper to build than non public domain products. Failure to embrace the public domain increases the cost to build any information product by at least an order of magnitude. This is because not only are most tasks a builder has to do sped up as explained above, but also because building for the public domain means you can immediately build less. For example, you don't have to spend a single moment investing in infrastructure to prevent your source code from leaking. Time and resources you are currently wasting on worthless tasks can be reallocated to building the parts of your product that matter.
Imagine that! You get to do less, move faster, and your products will be better and trusted more. I can't believe it took me so long to realize the overwhelming superiority of public domain products.
SQLite's meteoric success is not a fluke. Public domain products dominate non public domain alternatives on trust and speed and cost to build. SQLite is the first of millions to come.
Heck no. No way future people will be paying $10 for crappy streams. People will watch their own downloaded public domain files locally. But have you seen Inside Out? Amazing movie. It sticks with you. Makes you eager to spend $1,000 on a trip with your family to an Inside Out theme park. Money finds a way. Companies that engage in first principles thinking will also conclude that the math is clear: Public domain products are strictly superior to equivalent non-public domain alternatives by a significant margin on three dimensions: trust, speed, and cost to build.
It took me 18 years to figure out that you can't tell people the public domain is better. You have to show them. Try building your own public domain product. Look through the telescope with your own eyes.
June 9, 2022 — This is a fun little open source success story. Code that was taking 1,000ms to run took 50ms after a coworker found a 3 byte fix in a popular open source library. Who doesn't love a change like that?
In the fall of 2020 users started reporting that our map charts had become slow.
To color our map charts an engineer on our team utilized a very effective technique called k-means clustering, which would identify optimal clusters and assign a color to each. But recently our charts were using record amounts of data and k-means was getting slow. Using Chrome DevTools I was able to quickly determine the k-means function was causing the slowdown.
We didn't write the k-means function ourselves; instead we used the function ckmeans from the widely-used package Simple Statistics.
My first naive thought was that I could just quickly write a better k-means function. It didn't take long to realize that was a non-trivial problem and should be a last resort.
My next move was to look closer at the open source implementation we were using. I learned the function was a Javascript port of an algorithm first introduced in a 2011 paper, and the comments in the code claimed it ran in O(n log n) time. That didn't seem to match what we were seeing, so I decided to write a simple benchmark script.
Indeed, my benchmark results indicated ckmeans was closer to the much slower O(n²) class than the claimed O(n log n) class.
n | time (ms)
---|---
1,000 | 36
2,000 | 53
10,000 | 258
20,000 | 1,236
100,000 | 23,122
200,000 | 113,886
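As a quick sanity check on those numbers (times assumed to be in milliseconds), you can look at how the runtime grows each time n doubles: an O(n log n) algorithm should take a bit more than 2x longer, while an O(n²) algorithm should take roughly 4x longer. A small script over the table's values:

```javascript
// Growth-ratio check on the benchmark table above (times in ms).
// When n doubles: O(n log n) predicts a ratio slightly above 2x,
// while O(n^2) predicts a ratio of roughly 4x.
const timings = [
  [1000, 36],
  [2000, 53],
  [10000, 258],
  [20000, 1236],
  [100000, 23122],
  [200000, 113886],
];

// The table contains three (n, 2n) pairs; compare each pair's times.
for (let i = 0; i + 1 < timings.length; i += 2) {
  const [n1, t1] = timings[i];
  const [n2, t2] = timings[i + 1];
  console.log(`${n1} -> ${n2}: ${(t2 / t1).toFixed(1)}x slower`);
}
// At the larger sizes the ratio comes out around 4.8-4.9x, much closer to
// the 4x predicted by O(n^2) than the ~2.1x predicted by O(n log n).
```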
After triple checking my logic, I created an issue on the Simple Statistics repo with my benchmark script.
Mere hours later, I had one of the most delightful surprises in my coding career. A teammate had, unbeknownst to me, looked into the issue and found a fix. Not just any fix, but a 3 character fix that sped up our particular case by 20x!
Before:
if (iMax < matrix.length - 1) {
After:
if (iMax < matrix[0].length - 1) {
He had read through the original ckmeans C++ implementation and found a conditional where the C++ version had a [0] but the Javascript port did not. At runtime, matrix.length would generally be small, whereas matrix[0].length would be large. That if statement should have resolved to true most of the time, but it did not in the Javascript version, since the Javascript code was missing the [0]. This led the Javascript version to run a loop many more times than necessary, with the extra iterations being effectively no-ops.
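To make the shape of the bug concrete, here is a simplified sketch (my own illustration, not the actual ckmeans source). The dynamic programming matrix has one row per cluster and one column per data point, so matrix.length is small while matrix[0].length is large:

```javascript
// Simplified illustration of the missing-[0] bug (not the real ckmeans code).
// The DP matrix has k rows (clusters) and n columns (data points):
// k is small, n is large.
const k = 3;
const n = 10000;
const matrix = Array.from({ length: k }, () => new Array(n).fill(0));

// Buggy bound: matrix.length is k, so "iMax < matrix.length - 1" is false
// for almost every iMax, and the branch that should fire almost always is skipped.
const buggyCheck = (iMax) => iMax < matrix.length - 1; // iMax < 2
// Fixed bound: matrix[0].length is n, the comparison the C++ original makes.
const fixedCheck = (iMax) => iMax < matrix[0].length - 1; // iMax < 9999

console.log(buggyCheck(500)); // false: branch skipped, loop wastes iterations
console.log(fixedCheck(500)); // true: branch taken as intended
```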
I was amazed by how fast he found that bug in code he had never seen before. I'm not sure if he read carefully through the original paper or came up with the clever debug strategy of "since this is a port, let's compare to the original, with a particular focus on the loops".
The typo fix made the Javascript version run in the claimed O(n log n) time, matching the C++ version. For our new map charts with tens of thousands of values this made a big difference.
Before xxxxxxxxxxxxxxxx 820ms
After x 52ms
Very shortly after he submitted the fix, the creator of Simple Statistics reviewed and merged it in. We pulled the latest version and our maps were fast again. As a bonus, anyone else who uses the Simple Statistics ckmeans function now gets the faster version too.
Thanks to Haizhou Wang, Mingzhou Song and Joe Song for the paper and the fast k-means algorithm. Thanks to Tom MacWright for creating the amazing Simple Statistics package and adding ckmeans. And thanks to my former teammates Daniel for the initial code and Marcel for the fix. Open source is fun.
February 28, 2022 — There will always be truths upstream that we will never be able to see, that are far more important than anything we learn downstream. So devoting too much of your brain to rationality has diminishing returns, as at best your most scientific map of the universe will be perpetually vulnerable to irrelevance by a single missive from upstream.
Growing up I practiced Catholicism and think the practice was probably good for my mind. But as I practiced science and math and logic those growing networks in my brain would conflict with the established religious networks. After a while, in my brain, science vanquished religion.
But I've seen now the folly of having a brain without a strong spiritual section.
In science we observe things, write down many observations, work out simpler models, and use those to predict and invent. But everything we observe comes downstream to us from some source that we cannot observe, model, or predict.
It is trivially easy to imagine some missive that comes from upstream that would change everything. We have many great stories imagining these sorts of events: a message from aliens, a black cat, a dropped boom mic. Many ideas for what's upstream have been named and scholarized: solipsism, a procedural generated universe, a multiverse, our reality is drugged, AGI, the Singularity.
And you can easily string these together to see how there will always be an "upstream of everything". Imagine our lifetime is an eventful one. First, AGI appears. As we're grappling with that, we make contact with aliens, then while we're having tea with the aliens (who luckily are peaceful in this scenario) some anomaly pops up and we all deduce this is just a computer simulated multiverse. The biggest revelation ever will always be vulnerable to an even bigger revelation. There will always be ideas "upstream of everything".
When you accept an upstream idea, you have to update a lot of downstream synapses. When you grok DNA, you have to add a lot of new mental modules or update existing networks to ensure they are compatible with how we know it works. You might have a lot of "well the thing I thought about B doesn't matter much anymore now given C". It takes a lot of mental work to rewire the brain, and requires some level of neuroplasticity.
So now, if you commit your full brain to science, you've got to keep yourself fully open to rewiring your brain as new evidence floats downstream. This might even be a problem if only high quality evidence and high quality theories floated by. But evidence is rarely so clear cut. And so you are constantly having to exert mental energy scanning for new true upstream ideas. And often ideas are promoted more for incentive than for accuracy. And you will make mistakes and rewire your brain to a theory only to realize it was wrong. Or you might be in the middle of one rewiring and then have to start another. It seems a recipe for mental tumult.
Maybe, if there were any chance at all of ultimate success, it would make sense to dedicate every last 1% of the brain to the search for truth. But there's zero chance of success. The next bend also has a next bend. Therefore science will never be able to see beyond the next bend.
And so I've come full circle to realizing the benefits of spirituality. Of not committing one's full brain to the search for truth, to science, to reason. To grow a strong garden of spiritual strength in the brain. To regularly acknowledge and appreciate the unknowable, to build a part of the mind that can remain stable and above the fray amidst a predictable march of disorder in the "rational" part.
February 18, 2022 — Which is more accurate: "I think, therefore I am", or "We think, therefore we are"? The latter predicts that inside the brain is not one "I", but instead multiple Brain Pilots, semi-independent neural networks capable of consciousness that pass command.
The Brain Pilots theory predicts multiple locations capable of supporting root level consciousness and that the seat of consciousness moves. The brain is a system of agents and some agents are capable of being Pilots—of driving root level consciousness.
Sometimes you go to bed one person and wake up someone else. The brain pilot swapped in the night. These swaps then continue subconsciously throughout the day.
The Brain Pilots theory is not about the exceptions, such as people who develop two consciousnesses after their corpus callosum is cut, or the part of the population with multiple personalities. Rather, multiple consciousnesses is the rule and a feature of how all human minds work.
I should note that the term "Brain Pilots Theory" does not come from the field. It's a term I started using to get to the essence of the big idea. I am sure there is a better term for it, and a more fully developed theory, and hopefully a more knowledgeable reader can point me to that. Until then, I'll stick to calling it the Brain Pilots Theory.
This is a theory of the mind that blows my mind. I stumbled into it while programming multi-agent simulations and thinking "wait, what if the mind is a multi-agent system"? I quickly found that a lot of neuroscientists have been going this way for decades and writing about it. My favorites so far being The Society of Mind (Minsky 1988), A Thousand Brains (Hawkins 2021), and LessWrong's collection on Subagents.
What are the odds that this theory is right? I am not in the field and have no clue yet (10%? .1%?). I do feel confident saying that if true, this seems like it would have dramatic implications for how we understand the brain, ourselves, other people, and society, not to mention how it would lead to new technologies for the brain.
The 2015 film Inside Out gets across a core idea of the Brain Pilots theory—that our brains are vehicles for multiple agents and the one self is an oversimplification.
Inside Out is primarily a movie and not a scientific model, of course. To make it a better model we need to drop the personification of the agents. Instead of looking like tiny humans and being as capable as humans, in reality Brain Pilots would look like tangles of roots and globs of cells, and would likely have a very different and incomplete set of capabilities and behaviors. It's very important to keep in mind that the agents in your brain are very limited by themselves. It's why in your dream an elephant can start talking to you and your current brain pilot isn't taken aback: that pilot might not have access to other agents that would detect the absurdity of the situation.
My working hypothesis is that pilots could be found in various parts of the brain. Perhaps you have Pilots in the Cerebrum, Pilots in the Thalamus, and so on. Perhaps a Pilot consists of a network that extends into multiple regions of the brain. Different pilots could be located on opposite sides of the brain or perhaps microns apart from each other.
It seems the materials would be some collection of neurons, synapses, et cetera. Obviously I have my homework to do here.
It seems unlikely that an entity the size of a single cell or smaller could run a human. Rather, a network of some minimum size is probably required. Call the required materials MinPilotMaterials.
If MinPilotMaterials == BrainMaterials then there would be room for only 1 consciousness in 1 brain. Or perhaps a pilot has no fixed minimum size but instead is programmed to grow to assume control of all relevant materials in the brain.
Alternatively, MinPilotMaterials could be a fraction of BrainMaterials. Perhaps 10%-50% of BrainMaterials, meaning there would be room for just a few pilots. Or perhaps a pilot needs 1% of BrainMaterials, and there could be 100 in a brain.
What practitioners treating dissociative identity disorder call Identities might be brain pilots; the average population per patient is ~16, with some patients reporting over 100.
There are ~150,000 cortical columns, so perhaps there are that many Brain Pilots.
Perhaps I'm wrong that it takes a network of multiple cells, and a single neuron with many synapses could take charge, in which case there could be millions (or more) brain pilots per brain.
With 150,000 cortical columns, 100 billion neurons, and as many as a quadrillion synapses, it seems highly likely to me that there is enough material in the human brain to support many brain pilots. Neuroscientists have not identified some small singular control room; rather they point to the "seat of consciousness" being roughly in the 10-20 billion neurons that make up the cerebral cortex. If one brain pilot could arise there, why not many?
Pilots likely evolve like plants in a garden. It seems to me that the population of pilots in a brain probably follows a power law, where ~65% of your pilots are there by age 4, ~80% by age 20, and then changes get slower and slower over time. Pilots probably grow stronger when they make correct predictions.
I'd imagine once an agent has evolved to be a pilot, it would probably stick around until death given the safe confines of the skull. It may be harder to get rid of an old pilot than it is to grow a new one (or that may change with age).
As many have experienced, there are certain chemicals that if you ingest just a minuscule amount, millions of times smaller than your brain, your whole consciousness can change within the hour. Perhaps what is happening is a different pilot is taking over? Or perhaps a new one is being formed?
But it's not just chemicals that can swap pilots. You would have a HungerPilot that increasingly angles for control if deprived of food; a ThirstPilot angling to drink; a SleepPilot that makes her moves as the night gets late, and so on. Perhaps mindfulness is the practice of learning to detect which pilots are currently in control, which are vying for control, and perhaps achieving Enlightenment is being able to choose who is piloting. Perhaps one role of sleep is to ensure that no matter what there is at least one pilot rotation per day, to prevent any one pilot from becoming too powerful.
If I've gotten across one thing to you so far, it should be that I am a complete amateur in neuroscience and have a lot to learn before I can write well on the topic. So let me postpone the question of whether the theory is true and address the implications, to demonstrate why I think this is a valuable theory to investigate. As the saying goes: All models are wrong. Some are useful.
Let's assume the Brain Pilots Theory is true. Specifically, that there are multiple agents—networks of brainstuff—physically located in space, that are where consciousness happens. We could then explain some things in new ways.
Perhaps creatives have a higher than average number of Brain Pilots and/or switch between them differently. There's a saying "if you want to go far, go together". Perhaps some creatives are able to go further than others because in a sense they aren't going alone: they have an above average population of internal pilots.
I wonder if the norm in life is to pretty rapidly pilot swap, and if "Flow State" would be when instead you are able to have the same pilot running the show for an extended period of time.
The words "I" and "You" are both in the top 20 most frequently used English words. It makes sense to use those when speaking of the physical actions of the human being—"he walked over there. She said this." However, statements of the form "I think..." might not be accurate, as thoughts would be more accurately attributable to agents in the brain. "I think" would always only be speaking for part of the whole. We have some evidence in our language of an awareness of these multiple-pilots: phrases like "My heart is saying yes but my brain is saying no".
We also often categorize people as "bad" or "good". But that often serves as a bad model for predicting future behavior. Instead if you modeled a person as a collection of agents, you might find that it is not the person as a whole that you disapprove of, but certain of their agents (or perhaps it could be meta things, like their inability to form new agents, or too rapid agent switching).
If the Brain Pilots Theory is true, then it is almost a certainty that you'd have some agents that don't care about truth. So if you are an agent that does care about truth, it would be essential to be wary of lies and misdirection not only from external sources, but also from your internal neighbors. In the struggle for truth, agents are the atomic unit, not the human.
One thing I like about the Brain Pilots theory is that it provides a way to explain discrepancies. Like, how can a person be Catholic and an evolutionary biologist? With the Brain Pilots Theory, it's easy to see how they might have two distinct pilots who somehow peacefully coexist and alternate control.
Should your pilots be loyal to each other, or pursue only their agenda? It's easy for your AwakePilot to say "I'm sorry I was wrong this morning, that was my TiredPilot". IIRC contracts aren't necessarily enforceable if someone's UnderTheInfluencePilot signed. But if you made a claim while angry, should you then later defend that after you've calmed down, or attribute it to a different agent? If your SocialPilot committed to an event but then when the hour comes around your IntrovertedPilot is in charge, do you still go? Do some pilots have different moralities? How do you deal with that?
If the Brain Pilots theory of the mind is true, then you could imagine the main levers a human has to control their life would be to grow new pilots, prune undesired pilots, and perhaps most importantly have more conscious control over what pilot was currently in charge.
Similar to how we use multi-agent simulations to model epidemics, perhaps through brain imaging coupled with introspective therapy one might be able to build an agent map of all the brain pilots in someone's mind, and run experiments on that model to figure out more effective plans of attack.
If the Brain Pilots Model holds, I'd be curious whether most mental health difficulties stem from undesirable pilots, or from the higher level problem of pilot switching. Perhaps folks higher on the introverted or self-centered scales have high populations of active pilots, and are low in time for others because they are metaphorically herding cats in their head.
Current wearables track markers like heart rate, heart rhythm, body temperature, movement, perspiration, blood sugar, sleep, and so on, and even often have ways to manually input things like mood. If the Brain Pilots Theory is a useful model, you'd imagine that someone could build a collection of named Pilots and then align those biometrics to which pilot was in control. Then instead of focusing on managing the behaviors, one might operate upstream and focus on maximizing the time your desired pilots were at the wheel.
Do geniuses have more pilots? Or fewer? Are they able to build/destroy pilots faster? How would the MathPilots differ between a Princeton Math Professor and an average citizen?
Would productivity be more a product of having some exceptionally talented pilots, or the result of being able to stay with one pilot longer, or perhaps have a low population of bad pilots?
The real population of Earth could be 8 trillion
There are 1.4 billion cars in the world. Vehicle count is important, but more often we are concerned with how many agents are traveling in those vehicles, and that is 8 billion.
But if each human brain contains a population of brain pilots, then the Earth's population of agents would be far larger. If the average human has 10 brain pilots, then we are a planet with 80 billion agents. If the average is closer to 1,000 pilots per person, then there are 8 trillion consciousnesses around right now.
Are people's lives most affected by their best agents, worst agents, average agent, median agent, inter-agent communication, agent switching strategies, agent awareness, or agent chemical milieu?
This post has so many questions, so few answers. It is one of those posts where I write about things I don't understand much yet. My brain pilots brain pilot is not yet very advanced.
December 15, 2021 — Both HTML and Markdown mix content with markup:
```html
A link in HTML looks like <a href="hi.html">this</a>
```
```markdown
A link in Markdown looks like [this](hi.html).
```
I needed an alternative where content is separate from markup. I made an experimental minilang I'm calling Aftertext.
```aftertext
A link in Aftertext looks like this.
link hi.html this
```
You write some text. After your text, you add your markup instructions with selectors to select the text to markup, one command per line. For example, this paragraph is written in Aftertext and the source code looks like:
```aftertext
You write some text. After your text, you add your markup instructions with selectors to select the text to markup, one command per line. For example, this paragraph is written in Aftertext and the source code looks like:
italics After your text
italics selectors
```
Here is a silly another example, with more markups.
```aftertext
Here is a silly another example, with more markups.
strikethrough a silly
italics more
bold with
underline markups
link https://try.scroll.pub/#scroll%0A%20aftertext%0A%20%20Here%20is%20another%20a%20richer%20example%2C%20showing%20more%20features.%0A%20%20strikethrough%20another%0A%20%20link%20oldhomepage.html%20Here%0A%20%20italics%20more%0A%20%20bold%20showing%0A%20%20underline%20features Here
```
The first implementation of Aftertext ships in the newest version of Scroll. You can also play with it here.
First I should explicitly state that markup languages like HTML and Markdown with embedded markup are extremely popular and I will always support those as well. Aftertext is an independent addition. The design of Scroll as a collection of composable grammar nodes makes that true for all additions.
With that disclaimer out of the way, I made Aftertext because I see two potential upsides of this kind of markup language. First is the orthogonality of text and markup for those that care about clean source. Second is a fun environment to evolve new markup tags.
The most pressing need I had for Aftertext was importing blogs and books written by others into Scroll with the ability to postpone importing all markup. I import HTML blogs and books into Scroll for power reading. The source code with embedded markup is often messy. I don't always want to import the markup, but sometimes I do. Aftertext gives me a new trick where I can just copy the text, and add the markup later, if needed. Keeping text and markup separate is useful because sometimes readers don't want the markup.
It is likely a very small fraction of readers that would care about this, of course. But perhaps it would be a set of power users who could make good use of it.
Speaking of power users, Aftertext might also be useful for tool builders. Imagine you are building a collaborative editor. With Aftertext, adding a link, bolding some text, adding a footnote, all are simple line insertions. It seems like Aftertext might be a nice simple core pattern for collaborative editing tools.
Version control tools are often line oriented. When markup and content are on the same line it's not as easy to see which changes were content related and which were markup related. In Aftertext, each markup change corresponds to a single changed line. In the future, I could imagine using AI writing assistants to add more links and enhancements to my posts while keeping the history of content lines untouched.
Finally, I should mention that it seems like keeping the written text and markup separate might make sense because it often matches the actual order in which writing text and marking up text happens. Writing is a human activity that goes back a thousand generations. Adding links is something only the current generations have done. A pattern I often find myself doing is: write first; add links later. Aftertext mirrors that behavior.
Aftertext provides a scalable way to add new markup ideas.
Simple markups like bolds or italics aren't a big pain, and conventions like **bold** and *italics* used in languages like Markdown or Textile do a sufficient job. But even with those, after a certain number of rules it's hard to keep track of which characters do what. You also have to worry about escaping rules. With Aftertext, adding new markups does not increase the cognitive load on the writer.
When you get to more advanced markup ideas, Aftertext gives each markup node its own scope for advanced functionality while keeping the text text.
I'm particularly interested in exploring new ways to do footnotes, sparklines, definitions, highlights and comments. Basic Aftertext might not be compelling on its own, but maybe it will be a useful tool for evolving a new "killer markup".
Adding a new markup command is just a few lines of code.
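To give a feel for that, here is a toy sketch of an Aftertext-style renderer. This is my own illustration, not Scroll's actual implementation; the command table and HTML output are assumptions. Like the current implementation, it uses exact match string selectors that only format the first hit:

```javascript
// A toy Aftertext-style renderer (illustration only, not Scroll's real code).
// The first line of the source is the text; each following line is a markup command.

// Replace only the first exact string match, like the current implementation.
function wrapFirst(text, selector, replacement) {
  const i = text.indexOf(selector);
  if (i === -1) return text; // selector no longer matches: a known footgun
  return text.slice(0, i) + replacement + text.slice(i + selector.length);
}

// Each command wraps its selector's first match in an HTML tag.
const commands = {
  bold: (text, sel) => wrapFirst(text, sel, `<b>${sel}</b>`),
  italics: (text, sel) => wrapFirst(text, sel, `<i>${sel}</i>`),
  underline: (text, sel) => wrapFirst(text, sel, `<u>${sel}</u>`),
  link: (text, url, sel) => wrapFirst(text, sel, `<a href="${url}">${sel}</a>`),
};

function renderAftertext(source) {
  const [text, ...markupLines] = source.split("\n");
  return markupLines.reduce((html, line) => {
    const [command, ...args] = line.trim().split(" ");
    // link takes a URL then a selector; the rest take just a selector.
    if (command === "link") return commands.link(html, args[0], args.slice(1).join(" "));
    if (commands[command]) return commands[command](html, args.join(" "));
    return html; // unknown commands are ignored in this sketch
  }, text);
}

console.log(renderAftertext("A link in Aftertext looks like this.\nlink hi.html this"));
// → A link in Aftertext looks like <a href="hi.html">this</a>.
```

In this sketch, adding a new markup command really is one line, e.g. `commands.strikethrough = (text, sel) => wrapFirst(text, sel, `<s>${sel}</s>`)`.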
There are downsides to using Aftertext that you don't have with paired-delimiter markups.
There is the issue of breakages when editing Aftertext. The nice thing about a paired markup like **bold** is that if you change the text between the delimiters you don't break formatting. When editing Aftertext by hand, changing formatted text breaks the formatting, and you have to update those markup lines separately. I hit this a lot. Surprisingly it hasn't bothered me. Not yet, at least. I need to wait and see how it feels in a few months.
A similar issue to the breakage problem is verbosity. Embedded markup adds a constant number of bytes per tag but with Aftertext the bytes increase linearly with N, the size of the span you are marking up. Again, I haven't found this to be a problem yet. Perhaps the downside is outweighed by the helpful nudge toward brevity. Or maybe I just haven't used it enough yet to be annoyed.
Another problem with Aftertext arises when markup is semantic and not just an augmentation. *I* did not say that is different from I did not say *that*. Without embedded markup, in these situations meaning could be lost.
My first implementation leaves a lot of decisions still to make. Right now Aftertext is only usable in aftertext nodes. That is a footgun. The current implementation uses exact match string selectors that only format the first hit. Another footgun. I've already hit both of those. And at least two or three more.
You might make the argument that not just the implementation, but the idea itself should be abandoned.
The most likely reason why this is a bad idea is that it simply doesn't matter whether it's a good idea or not. You could argue that improvements to markup syntax are inconsequential. That even if it were a 2x better way to mark up text for some use cases, AIs will change writing and code in so many bigger ways that it's not even worth thinking about clean source anymore. This could very well be true (luckily it didn't take many hours to build).
Or perhaps it is a bad idea because, although it may be mildly useful initially, it is actually an anti-pattern and instead of scaling well will lead to a Wild West of complex colliding markups. I generally don't have the mental capacity to think too many moves ahead. So I fall back to inching my way forward with code and relying on the feedback of others smarter than me to warn of unforeseen obstacles.
Markups on text may increase monotonically. With current patterns that means source will get messier and more complex. Aftertext is an alternative way to markup text which can scale while keeping source clean. Aftertext might be a good backend format for WYSIWYG GUIs. Though most humans write in WYSIWYG GUIs, Aftertext is designed for the small subset who prefer formats that are also maintainable by hand.
Thank you to Kartik, Shalabh, Mariano, Joe and rau for pointing me to related work. I am certain there are similar efforts I have missed and am grateful for anyone who points those out to me via comments or email.
In 1997 Ted Nelson proposed parallel markup.
The text and the markup are treated as separate parallel members, presumably (but not necessarily) in different files. Ted Nelson
When searching for "parallel markup implementation" I also came across a Wikipedia page titled Overlapping markup, which contains a number of related points.
A couple of folks mentioned similarities to troff directives. In a sense Aftertext is reimagining troff/groff 50 years later, when characters/bytes aren't so expensive anymore.
Brad Templeton describes two inventions, Proletext and OOB, to solve what he termed "Out of band encoding for HTML". They seem esolangy now but actually cleverly useful back in the day when bytes and keystrokes were more expensive.
The Codex project has a related idea called standoff properties. As I understand it, the Codex version uses character indexes for selectors which requires tooling to be practical and rules out hand editing.
AtJSON is a similar project and has clear documentation. AtJSON has a useful collection of markups evolved to support a large corpus of documents at CondeNast. AtJSON uses character indexes for selectors so hand editing is not practical.
Issues with embedded markup and alternative solutions have been discussed for decades. It's a safe bet that embedded markup is superior, since it so thoroughly dominates usage. Nevertheless, as I mentioned in my use case, there is a time and a place for alternatives. Aftertext would have been simple enough to understand, and to use with pen and paper, decades ago. So why hasn't Aftertext been tried before?
Verbosity is certainly a reason. Bytes, bandwidth, and keystrokes (pre-autocomplete) used to be more expensive, so Aftertext would have been inefficient. It probably was worthwhile to impose a learning curve and force users to memorize cryptic acronyms; minimizing keystrokes paid off.
I may also be overvaluing the importance of universal parseability. I value formats that are easy to maintain by hand but also easy to write parsers for. Before GUIs, collaborative VCSs, IDEs, or AIs, there wasn't as much value to be gained by doing this. But even today I may be overvaluing hand editability. This seems to be the era of AIs and of apps editing JSON documents on the backend. I may be a dinosaur.
Finally, I may be overvaluing the clean scopes that Aftertext gets from the underlying Tree Notation. Aftertext works because each text block gets its own scope for markup directives, each markup directive gets its own scope, and you don't have to worry about matching brackets. So maybe Aftertext just hasn't been tried because I overvalue that trick.
October 15, 2021 — I'm always trying to improve my writing. I want my writing to be more meaningful, clearer, more memorable, and shorter. I would also like to write faster.
That's a tall order and there aren't many shortcuts. But I think there is one simple shortcut that I stumbled upon this past year:
Set your editor's column width very low
36 characters for me, YMMV. This simple mechanic has perhaps doubled my writing speed and quality.
At my current font-size, my laptop screen could easily support 180 characters across. But if my words spread across the full screen, I write slower and produce worse content.
Another way to frame this is my writing got worse as my screens got wider and I only recently noticed the correlation.
When I am writing I am mostly reviewing. I type a word once. But my eyes see it fifty times. Maybe great writers can edit more in their heads. With my limited mental capabilities editing happens on the page. I do a little bit of writing; a lot of reviewing and deleting. So the time I spend writing is dominated by the time I spend reviewing. Reviewing is reading. To write faster, I need to read faster.
Humans read thinner columns faster. Perhaps this isn't the case for all people—I'm not an expert on what the full distribution looks like. But my claim is backed by a big dataset. I have my trusty copy of "The New York Times: The Complete Front Pages from 1851-2009". For over 150 years the editors at the New York Times, the most widely read newspaper on the planet, decided on thin columns. If fatter columns were more readable we would have known by now.
Thinner columns help you read faster. Writing speed is dominated by reading speed. If you read faster, you write faster.
Every word in a great piece of writing survived a brutal game of natural selection. Every review by the author was a chance for each word to be eliminated. The quality of the surviving words is a function of how many times they were reviewed. If the author reviews their writing more, then the words that survive should be fitter.
But moving your eyes takes work. It might not seem like a lot to the amateur but may make a huge difference toward the extremes. A great athlete practices their mechanics. They figure out how to get maximal output for minimal exertion. They "let the racket do the work". If you are moving your eyes more than you have to, you are wasting energy and will not have the stamina to review your writing enough. So thinner columns leave you with more energy for more editing passes. More editing passes improves quality.
I don't remember ever being told to use thinner columns when writing. In programming we often cap line length, but this is generally pitched for the benefit of future readers, not to help the authors at write time.
I have long overlooked the benefit of thin columns at write time. How could I have overlooked this? Two obvious explanations come to mind.
First, I could be wrong. Maybe this is not a general rule. I have not yet done much research. Heck, I haven't even done careful examination of my own data. I've been writing with narrow columns for about 10 months. It feels impactful, but I could be overestimating its impact on my own writing speed.
Second, I could be ignorant. Maybe this is already talked about plenty. I would not be surprised if a professional writer sees this and says "duh". Maybe it's taught in some basic "writing mechanics 101" introductory course. Maybe if I got my MFA or went to journalism school or worked at a newspaper this is a basic thing. Maybe that's why journalists carry those thin notepads.
But let's say my hunches are correct: thin columns do help you write faster, and this is not mentioned much. If I'm correct on both counts, then a clear explanation is that this is simply a new hazard created by new technology. My generation is the first to have access to big screens, so in the past writing with wide columns wasn't a mistake people made, because it simply wasn't possible. An alternative title I considered was "Write as fast as your grandparents by using the line length they used".
Jets are great, but beware jet lag when traveling. Big screens are great, but beware eye lag when writing. Try thin columns.
August 11, 2021 — In this essay I'm going to talk about a design pattern in writing applications that requires effectively no extra work and more than triples the power of your code. It's one of the biggest wins I've found in programming and I don't think this pattern is emphasized enough. The tl;dr is this:
When building applications, distinguish methods that will be called by the user.
All Object Oriented Programmers are familiar with the concepts of PrivateMethods and PublicMethods. PrivateMethods are functions called by programmers inside your class; PublicMethods are functions called by programmers outside your class. Private and Public (as well as Protected) are commonly called AccessModifiers and are ubiquitous in software engineering.
A UserMethod is a class method called by the user through a non-programmatic interface. UserMethods are all the entry points a user has to interact with your application. All interactions users have with your application can be represented by a sequence of calls to UserMethods.
Let's say I am writing a GUI email client application. I probably have an EmailClient class that can send an email, and then a "Send" Button. Using the UserMethod pattern I might have a private method perform the actual email sending work, and then I'd have a small UserMethod that the click on the button would call:
private _sendEmail():
  // ...

user sendEmailCommand(...):
  // ...
  this._sendEmail()
That's it. In my pseudocode I used a "user" keyword to flag the UserMethod, but since most languages don't have such a keyword, you can either use decorators or adopt an identifier convention that you reflect on.
If you are just building a library used by other programmers programmatically, then the public/private/protected access modifiers are likely sufficient. In those situations, your UserMethods are identical to your PublicMethods. But if there is a user facing component, some wisdom:
I have never seen a single application with a user facing component, whether it be a Graphical Interface, Command Line Interface, Voice Interface, et cetera, that doesn't benefit significantly from following the UserMethod Pattern.
The UserMethod pattern costs close to zero. All you need to do is add a single token or bit to each UserMethod. It might cost less than zero, because adding these single flags can help you reduce cognitive load and build your app faster than if you didn't conceptualize things in this way.
Off the top of my head, I can't think of a language that has a built-in primitive for it (please send an email or submit a PR if you know of any, as I'm sure there are many), but it's easy to add by convention.
If your language supports decorators and you like them, you can create a decorator to tag your UserMethods. Without decorators, it's easy to do with a simple convention in any language with reflection. For example, sometimes in plain Javascript I will follow the convention of suffixing UserMethods with something like "UserMethod". (Note: In practice I use the suffix "Command" rather than "UserMethod", for aesthetics, but in this essay I will stick to calling them the latter.)
By simply adding a flag to each UserMethod you've now prepped your application to be used in lots of new ways.
By distinguishing my UserMethods, I've now done 80% of the leg work needed to support alternative interfaces—like command palettes, CLIs, keyboard shortcut interfaces, voice interfaces, context menus, et cetera. For example, by adding UserMethods to a component, I can then reflect and auto generate the context menu for that component:
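As a concrete sketch of that reflection step (the names here, like EmailClient and getUserMethods, are illustrative, not from a real codebase), a component's UserMethods can be discovered by scanning its prototype for the naming convention:

```javascript
// Sketch of the suffix convention: methods ending in "Command" are UserMethods.
class EmailClient {
  sendEmailCommand() { return "email sent" }
  archiveCommand() { return "archived" }
  _connect() { /* private helper, not user facing */ }
}

// Reflect over an instance to find its UserMethods.
const getUserMethods = (obj) =>
  Object.getOwnPropertyNames(Object.getPrototypeOf(obj)).filter((name) =>
    name.endsWith("Command")
  )

// A context menu is then just a rendering of this list:
const menuItems = getUserMethods(new EmailClient())
console.log(menuItems) // ["sendEmailCommand", "archiveCommand"]
```

The context menu, and every other interface below, is generated from that one list rather than hand-maintained.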
I've now also got the bulk of a CLI. I just take the user's first argument and see if there's a UserMethod with that name to call. The help screen in the CLI below is generated by iterating over the UserMethods:
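Here is a minimal sketch of that CLI glue, assuming the same suffix convention (the App class and its methods are hypothetical):

```javascript
// Hypothetical application class using the "Command" suffix convention.
class App {
  buildCommand() { return "built" }
  serveCommand() { return "serving" }
}

const app = new App()
const userMethods = Object.getOwnPropertyNames(Object.getPrototypeOf(app))
  .filter((name) => name.endsWith("Command"))

// Dispatch: the first CLI argument picks a UserMethod; anything else
// falls through to a help screen generated by iterating over the UserMethods.
const run = (arg) => {
  const method = arg + "Command"
  if (userMethods.includes(method)) return app[method]()
  return "Usage:\n" + userMethods.map((m) => "  " + m.replace("Command", "")).join("\n")
}

console.log(run("build")) // "built"
console.log(run("nope"))  // prints the generated usage screen
```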
For a command palette, you can reflect on your UserMethods and provide the user with auto-complete or a drop down list of available commands.
With just a tiny bit of extra work (a single flag to distinguish UserMethods from PublicMethods, plus a tiny bit of glue for each interface) you multiply the power of your application. The ROI on this pattern is extraordinary. It really is a rare gem. You do not see this kind of return often.
You've also now done the bulk of the work to make a high level scriptable language for your application. You've identified the core methods and a script can be as simple as a text sequence listing the methods to call, along with any user inputs. Your UserMethods are a DSL for your application.
Your new UserMethod DSL can be very helpful when writing regression tests for situations a user ran into. A user's entire workflow can now be thought of as a sequence of UserMethod calls. You can log those and get automated repro steps. Or, if logs are not available, you can listen to their case report and likely transcribe it into your UserMethod DSL. For example, below is a regression test to verify that a "Did You Mean" message appears after a sequence of user commands.
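A sketch of what such a replay-style regression test could look like (the app, its methods, and the "Did You Mean" logic are all hypothetical):

```javascript
// Hypothetical app whose UserMethods we replay from a logged workflow.
class SearchApp {
  constructor() { this.message = "" }
  typeQueryCommand(query) { this.lastQuery = query }
  searchCommand() {
    if (this.lastQuery === "recieve") this.message = "Did you mean: receive?"
  }
}

// The repro script is just data: UserMethod names plus the user's inputs.
const reproSteps = [
  ["typeQueryCommand", "recieve"],
  ["searchCommand"],
]

const app = new SearchApp()
for (const [method, ...args] of reproSteps) app[method](...args)

console.log(app.message) // "Did you mean: receive?"
```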
When ideating, it can be helpful to ask "what UserMethod(s) are we missing"?
When editing, it is helpful to scan your entire UserMethod list and prune the commands that aren't popular or aren't needed, along with any resulting dead code.
Getting GUIs right can be challenging and time consuming. There are severe space constraints, and changes can have significant ripple effects. You often do a lot of work to nail the visuals for a new component, which then sees little usage in the wild. It can be helpful to build the UserMethod first, expose it in a Command Palette or via a Keyboard Shortcut Interface, and only if it proves useful, design it into the GUI. If you wanted to be extremely cost conscious, you could even add UserMethods that simply alert a user with "Coming Soon" before you decide to implement them.
I find it helpful when reading application code to pay special attention to UserMethods. After all, these functions are why the application exists in the first place. That little extra flag provides a strong signal to the reader that these are key paths in an application.
You can easily add analytics to your whole application once you've tagged your UserMethods. In the past I've done it simply by adding a single line of code to a UserMethod decorator.
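For instance, here is a sketch of that one-line analytics hook, done by wrapping each tagged method at load time (the class and names are illustrative):

```javascript
// Hypothetical class using the "Command" suffix convention for UserMethods.
class Editor {
  saveCommand() { return "saved" }
}

const analyticsLog = []

// Wrap every UserMethod once so each user action gets recorded.
for (const name of Object.getOwnPropertyNames(Editor.prototype)) {
  if (!name.endsWith("Command")) continue
  const original = Editor.prototype[name]
  Editor.prototype[name] = function (...args) {
    analyticsLog.push(name) // the single added analytics line
    return original.apply(this, args)
  }
}

new Editor().saveCommand()
console.log(analyticsLog) // ["saveCommand"]
```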
Heck no. I picked up this pattern years ago. Probably from colleagues, or books, or by reading others' code. I forget exactly how many times I've read about it, or under what names. I'm sure there are thirty-two existing names for this pattern. I'm sure 9 of those even have Wikipedia articles. But this pattern is so magical, so so so helpful, that I do not think I will be wasting anyone's time by bringing it up again in my own terms.
I've tried a lot of things, like having Command classes or Application classes, and I've found the concept of function-level UserMethods to be a killer pattern in my day-to-day work. You can always graduate to finer separation later.
All that being said, I'm sure someone has written a much better piece that would jive better with my experience, and so would appreciate links to all related ideas. I'm always open to Pull Requests (or emails)!
Isn't it better instead to have an "Application" class, where all public methods are considered to be UserMethods? I won't argue against that. However, it's not always clear where to draw the lines, especially in the early days of a project, and it's much easier to build such classes later if you've clearly delineated your UserMethods along the way.
Yes. But they are a special category of PublicMethod and it's a distinction worth making. You want all your UserMethods available programmatically like the rest of your PublicMethods (for example, when writing tests), but you wouldn't want to show your users all PublicMethods in something like a Command Palette.
May 22, 2021 — In this video Dmitry Puchkov interviews Alexandra Elbakian. I do not speak Russian but had it translated. This is a first draft, the translation needs a lot of work, but perhaps it can be skimmed for interesting quotes. If you have a link to a better transcript, or can improve this one, pull requests are welcome (My whole site is public domain, and the source is on GitHub).
chat
hey. I just added Dialogues to Scrolldown.
cool. But what's Scrolldown?
Scrolldown is a new alternative to Markdown that is easier to extend.
how is it easier to extend?
because it's a tree language and tree languages are highly composable. for example, adding dialogues was a simple append of 11 lines of grammar code and 16 lines of CSS.
okay, how do I use this new feature?
the source is below!
May 14, 2021 — Dialogues seem to be a really common pattern in books and writings throughout history. So Scroll now supports that.
Here is the Grammar code in the commit that added support for Dialogues:
May 12, 2021 — This post is written for people who already are "partisans" on the issues of copyrights and patents. Here I am not trying to educate newcomers on the pros of Intellectual Freedom. I am writing to those who are already strong supporters of open source, Sci-Hub, the Internet Archive, and others. To that crowd I am trying to plant the seed for a new political strategy. If you think that copyright and patent laws could be a root contributor to some of the big problems of our day, like misinformation (or fake news) and inequality, this post is for you.
I suggest we rally around a simple long-term vision of passing a new Intellectual Freedom Amendment to the U.S. Constitution. I am not positive that if we abolished copyright and patent systems the world would be a better place, just as I'm not positive that if we switched to clean energy the world would be a better place. Society is a big complex system, and it would be intellectually dishonest to make such a guarantee. But there are reasons to believe abolishing copyright and patent systems would be a good bet based on low-level first principles. In my study of the spread of truth and knowledge, it seems like more publishers and remixers lead to improved truthflow, education, stability, and prosperity. Other people might come with other arguments and perspectives. But this big debate is not being had. The problem is that the debate is always held on the Intellectual Monopoly Industry's home turf. When the debate is about details like the ideal length of monopolies, or when illogical terms like "Intellectual Property" are used, you've already conceded too much, and are fighting for local maxima. A stronger and more logical place to have the debate is upstream of that: debate whether we should have these systems at all. I think the Amendment Strategy is clear enough, concrete enough, and simple enough that you could get critical mass and start moving the debate upstream.
Let's say my hunch is wrong: momentum for an Amendment grows, and then in some regional trial experiment it turns out to be a bad idea. Society would likely still benefit, because the Intellectual Monopoly Industry would have to play defense for once, as opposed to constantly pushing for (and winning) extensions of monopolies. The best defense is a good offense. It's an adage, but there's usually some truth to adages.
The below proposal is 184 characters.
Section 1. Article I, Section 8, Clause 8 of this Constitution is hereby repealed.
Section 2. Congress shall make no law abridging the right of the people to publish or peaceably implement ideas.
I have only passed a handful of Amendments to the U.S. Constitution in my lifetime 😉, so if you have suggestions to make that better, pull requests and discussions are welcome.
May 7, 2021 — I found it mildly interesting to dig up my earlier blogs and put them in this git. This folder contains some old blogs started in 2007 and 2009. This would not have been possible without the Internet Archive's Wayback Machine ❤️.
It looks like I registered breckyunits.com on August 24th, 2007. It appears I used Wordpress (SEP 2007). There's a Flash widget on there. The title of the site is "The Best Blog on the Internet". I think it was a good joke. I had just recently graduated college and had not yet moved to the West Coast.
About two years later, my Wordpress blog had grown to many pages (JUL 2009).
Looks like I started transitioning to a new site (AUG 2009), and moved my blog from my own server running Wordpress to Posterous (MAR 2013).
After I moved to Posterous, I put up this homepage (SEP 2009).
In December 2009 I wrote my own blog software called brecksblog. Here's what my new site looked like (DEC 2009).
I kept it simple. My current homepage, now powered by Scroll, evolved from brecksblog.
May 6, 2021 — I am aware of two dialects for advice. I will call them FortuneCookie and Wisdom. Below are two examples of advice written in FortuneCookie.
🥠 Reading is to the mind what exercise is to the body.
🥠 Talking to users is the most important thing a startup can do.
Here are two similar pieces of advice written in Wisdom:
🔬 In my whole life, I have known no wise people (over a broad subject matter area) who didn't read all the time – none, zero. Charlie Munger
🔬 I don't know of a single case of a startup that felt they spent too much time talking to users. Jessica Livingston
If you only looked at certain dimensions, you could conclude the FortuneCookie versions are better. They are shorter. They are not attached to an author's name which seems to make them simpler.
But all things considered, the FortuneCookie versions are worthless compared to the Wisdom versions.
✒️ Wisdom is a short piece of advice that is backed by a large dataset and is clear and easily testable.
Like FortuneCookie, Wisdom is some advice that can change your perspective or guide your decision making. No difference there.
Unlike FortuneCookie, Wisdom needs to be backed by a large dataset. For example, in 2009 I wrote:
🥠 to master programming, it might take you 10,000 hours of actively coding or thinking about coding.
Ten years later, after gathering data I can now write:
🔬 The programmers I respect the most, without exception, all practiced more than 30,000 hours1.
Even though the message is the same, the latter introduces a dataset to the problem. More importantly, it is instantly testable.
Wisdom can't just be the inclusion of a dataset. Without the testability, Munger's quote would be FortuneCookie:
🥠 I've met hundreds of wise people who read all the time
That's not the clearest advice. It certainly says that reading all the time won't rule out success, but it provides no guidance as to whether it is a necessary thing. The quote above leaves it ambiguous if he also knows of wise people who don't read all the time (we know from the real quote that he doesn't).
Sometimes you see a FortuneCookie idea evolving into Wisdom, where an advisor hasn't quite made it instantly testable yet but is proposing a way for the reader to test:
🔬 If you look at a broad cross-section of startups -- say, 30 or 40 or more; which of team, product, or market is most important?...market is the most important factor in a startup's success or failure. Marc Andreessen
Coming up with great pieces of Wisdom is hard. Like a good Proof of Work algorithm, Wisdom is hard to generate and easy to test. I know who Charlie Munger is, so I know he's probably met thousands of "wise people". All it would take would be for me to find just a single one that didn't read all the time to invalidate his advice. But I can't come up with any. I know who Jessica Livingston is and I know she's familiar with thousands of startups and I just need to find one who regrets spending so much time talking to users. But I can't think of any.
If you have great experience, I urge you to not put it out there in the form of FortuneCookie, but chew on it until you can form it into Wisdom. These are very valuable contributions to our common blockchain.
1 There are a lot of programmers who have 10,000 hours of experience that I respect a lot and enjoy working with, but the ones I study the most are the ones who stuck with it (and also just lucky enough to live long lives). ⮐
April 26, 2021 — I invented a new word: Logeracy1. I define it roughly as the ability to think in logarithms. It mirrors the word literacy.
Someone literate is fluent with reading and writing. Someone logerate is fluent with orders of magnitudes and the ubiquitous mathematical functions that dominate our universe.
Someone literate can take an idea and break it down into the correct symbols and words, someone logerate can take an idea and break it down into the correct classes and orders of magnitude.
Someone literate is fluent with terms like verb and noun and adjective. Someone logerate is fluent with terms like exponent and power law and base and factorial and black swan.
Someone literate can read an article and determine whether it makes sense grammatically. Someone logerate can read an article and determine whether it makes sense logarithmically.
Someone literate can read and write an address on the front of the envelope. Someone logerate can use the back of the envelope.
The opposite of logeracy is illogeracy: the inability to think in logarithms. An illogerate person is one who frequently gets the orders of magnitude wrong.
An illogerate person may correctly understand parts 2 and 3 of a 3 term equation but get the first time-dependent part wrong and so get the whole thing wrong.
An illogerate person can be penny wise pound foolish.
An illogerate person treats all parts of an argument as important.
An illogerate person may mistake one part of a sine wave for a trend.
An illogerate person may be familiar with exponentials but unfamiliar with sigmoids.
No country or organization measures logeracy yet2. I don't know which countries are the most logerate, but for now I would guess there is a strong correlation between the engineering prowess of a country and its level of logeracy.
Countries have been measuring literacy for hundreds of years now. As the chart above shows, the world has made great progress in reducing illiteracy. 200 years ago, ~90% of the world was illiterate. Now that's down to ~10%. If you break it down further by country, you'll see that in countries like Japan and the United States literacy is over 99%.
Logeracy is how engineering works. Good engineers fluently and effortlessly work across scales. If we want to be an interplanetary species, we first must become a more logerate species.
Logeracy makes decision making simple and fast (figure out the classes of the options, and then the decision should be obvious).
You don't get wealthy without logeracy. An illogerate and his money are soon parted. Compound interest is a tool of the logerate. Money doesn't buy happiness, but the logarithm of money does.
My knowledge here is limited. I know Computer Science students could be the ones taught logeracy best. We are taught it by a different name. CS students are repeatedly taught to think in Big O notation3. In Computer Science you are constantly working with phenomena across vastly different scales so logeracy is critical if you want to be successful.
Perhaps it's electrical engineers, or astronomers, or aerospace engineers. These folks are frequently working with vast scale differences, so logeracy is required.
In finance, 100% of successful early stage technology investors I know of are highly logerate.
It would be interesting to see logeracy rates across industries. Perhaps measuring that would lead to progress.
My high school chemistry teacher first exposed me to logeracy when she taught me Scientific Notation. That was probably the only real drilling I got in logeracy before getting into Computer Science. Scientific Notation is a handy notation and a great introduction to logeracy, but logeracy is so important that it probably deserves its own dedicated class in high schools where it is drilled repeatedly from many different perspectives.
I would recommend "The Art of Doing Science and Engineering: Learning to Learn" by Hamming. That's maybe the most logerate book I've ever read. I also love Taleb's Incerto series (ie Fooled by Randomness, Black Swan, Antifragile...).
Yes4. Some industries, like engineering, demand logeracy. A randomly selected engineer is likely to be 10x+ more logerate than a randomly selected member of the general population. But what about the distribution of logeracy within a field? Only recently did it occur to me how fractal logeracy is. A surprising number of engineers I've worked with seem to compartmentalize their logerate thinking to their work and act illogerate in fields outside of their own. In Hamming's book I was surprised to read over and over again how very few engineers he worked with (at the world's top "logerate" organizations) operated with his level of logeracy. Logeracy seems fractal.
I don't think so. However, I do believe you can be out of balance. One needs to be linerate5 as well as logerate. We have many adages for people who focus too much on the dominant-in-time term of an equation and not enough on the linear but dominant-now parts. Adages like "head in the clouds", "crackpot", "ahead of her time". We also have common wisdom for how to avoid that trap: "a journey of a single step...", "do things that don't scale". So it is likely that being extremely logerate without lineracy is a real pitfall to be aware of.
I read Innumeracy and Beyond Numeracy by Paulos over a decade ago6. I love those books (and it's been too long since I reread them).
Numeracy is a good term. Logeracy is a much better term. Someone logerate but innumerate often makes small mistakes. Someone numerate but illogerate often makes large mistakes.
Numeracy is sort of like knowing the letters of the alphabet. Knowing the letters is a necessary thing on the path to literacy, but not that useful by itself. Likewise, being numerate is a step to being logerate, but the real bang for your buck comes with logeracy.
Literacy without logeracy is dangerous. My back-of-the-envelope guess is that over 80% of writers and editors in today's media are illogerate (or perhaps are just acting like it in public). 2020 was an eye opening year for me. I had vastly underestimated how prevalent illogeracy was in our society. I am tired of talking about the pandemic, but to this day in the news I see a steady stream of "leaders" obliviously promoting their illogeracy, and walking around outside I see a huge percentage of my fellow citizens demonstrating the same. I would guess currently over 60% of America is illogerate. The funny thing is it may be correlated with education—if you are educated as a non-engineer you perhaps are more likely to be illogerate than a high school dropout, because you rely too much on your literacy and are oblivious to your illogeracy. I am very interested to see data on rates of logeracy.
I wrote my first post on Orders of Magnitudes nearly twelve years ago, back in 2009. At the time I didn't have a concise way to put it, so instead I advised "think in Orders of Magnitude". Now I have a better way to put it: become logerate. I wonder what wonderful things humankind will achieve when we have logeracy rates like our literacy rates.
1 I was very surprised to be the one to invent the word logeracy (proof). Only needed to change 2 letters in a popular word. All the TLDs including dot coms are still available. ⮐
2 As far as I can tell. If you know of population measures of logeracy please email me or send a pull request. ⮐
3 Even if you are familiar with Big O Notation, the orders of common functions table is a handy thing to periodically refresh on. ⮐
4 Is my guess, anyway. ⮐
5 Uh oh, another coinage. ⮐
6 In my recollection Innumeracy is too broad a book. This critique applies to 99% of books I read, Hamming's book being one of the exceptions. ⮐
March 30, 2021 — The CDC needs to move to Git. The CDC needs to move pretty much everything to Git. And they should do it with urgency. They should make it a priority to never again publish anything without a link to a Git repo. Not just papers, but also datasets and press releases. It doesn't matter under what account or on what service the repos are published; what matters is that every CDC publication needs a link to a backing Git repo.
Git is the "Global Information Tracker". It is software that does three things that anyone can understand 1) git makes lying hard 2) git makes sharing the truth easy 3) git makes fixing mistakes easy.
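Those three claims can be demonstrated in a throwaway repository. A minimal sketch, assuming only that git is installed; the file names, messages, and identities are illustrative:

```shell
# Scratch repo; the identity config is only so commits succeed anywhere.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.name "Demo" && git config user.email "demo@example.com"

echo "Guidance for parents, version 1." > advice.txt
git add advice.txt && git commit -q -m "Publish guidance"

# 1) Lying is hard: every published state is sealed by a cryptographic hash.
before=$(git rev-parse HEAD)

echo "Guidance for parents, version 2." > advice.txt
git commit -q -am "Revise guidance"

# 2) Sharing the truth is easy: the full history travels with every clone.
git log --oneline              # both versions: who, when, why

# 3) Fixing mistakes is easy: corrections are new commits; nothing is erased.
git diff "$before" HEAD        # exactly what changed between the two versions
```

Because each commit ID covers the content plus its entire history, quietly rewriting an already-published version is detectable by anyone holding a clone.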
Because the CDC's publications are currently full of misrepresentations, make the truth very hard to share, and are full of hard-to-fix mistakes. Preprints, Weekly Reports, FAQs, press releases: all of these things need links to their Git repos.
The whole world now builds on Git. The CDC is far behind the times. Even Microsoft Windows, the biggest proprietary software project in the world, now builds on Git.
Git is an open source, very fast, very powerful piece of software originally created by Linus Torvalds (the same guy who created Linux) and led by Junio C Hamano that makes extensive use of principles from blockchain and information theory.
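The main principle borrowed here is content addressing (Merkle trees): every object git stores is named by a hash of its bytes. A one-line sketch, assuming only a git install:

```shell
# The same bytes always hash to the same ID, on any machine, so any
# tampering with published content necessarily changes its identity.
printf 'hello\n' | git hash-object --stdin
# ce013625030ba8dba906f756967f9e9ca394464a
```

This is the property that makes silent after-the-fact edits impossible to hide.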
The CDC's GitHub account has 169 repos and 10 people (and I'm told many hundreds more Git users). I would immediately promote every person working on these repos. (There are probably one or two jokers in there but who cares, it won't matter, just promote them all). Give them everything they need to be successful. Give them raises. Tell them part of their new job is to get everything the CDC is involved with published to Git. This is probably really the only thing you need to do, and these people can lead it from there.
Provide a hard deadline announcing that you will stop all funding for any current grant recipient, researcher, or company doing business of any kind who isn't putting their sponsored work on a publicly available Git server and linking to it in all releases.
The CDC has 10,600 employees, so buying them all $20 worth of great paper books on learning how to use Git would only cost $212,000. For the most part, these are highly educated people who are autodidacts and can probably learn enough with just some books and websites, but for those who learn better via courses or videos you can budget another $30 per person for those. Then budget to ensure everyone is paid for the time spent learning. We are still talking about far less than 1% of the CDC's annual budget.
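The back-of-the-envelope arithmetic, spelled out with the employee count and per-person figures above:

```shell
employees=10600
books=$((employees * 20))      # $20 in books per person
courses=$((employees * 30))    # $30 in courses or videos per person
echo "books=\$$books courses=\$$courses total=\$$((books + courses))"
# books=$212000 courses=$318000 total=$530000
```

Roughly half a million dollars before paid learning time, against a multi-billion-dollar annual budget.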
Because the CDC not only failed at its mission by not stopping COVID, but it continues to mishandle it. Mistake after mistake. Miscommunication after miscommunication. I just shook my head looking over an amateur-hour report that they just put out. It's sad, and their number one priority should be to regain trust. To do that they need to focus on the most trustworthy tool in information today: Git.
I'm adding two very clear and specific examples to illustrate the problem. But my sense is the problem is prolific.
For young children, especially children younger than 5 years old, the risk of serious complications is higher for flu compared with COVID-19. @ CDC
This statement appeared on the CDC's website for more than a year. As it should have. Every big dataset I've looked at agrees with this, from the very first COVID-19 data dump in February 2020.
I started actively sharing and quoting that CDC page in August 2021. Coincidentally or not, within days they removed that quote. There is no record of why they made the change. In fact the updated page misleadingly states "Page last reviewed: June 7, 2021", despite the August edit*.
To recap, they quietly reversed the most critical piece of contextual advice on how parents should think about COVID-19 in relation to their children. No record, no explanation. (In case you are wondering, the data has not changed, and the latest data aligns with the original statement which they removed. Perhaps the change was made for political reasons).
The second example is well documented elsewhere, but the CDC changed their online definition of the word "vaccine", again perhaps for political reasons. That sort of thing seems like the kind of change that maybe should have some audit trail behind it, no?
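For either example, a Git-backed page would have made the edit self-documenting. A minimal sketch in a scratch repository; the file name, author identities, and wording are hypothetical:

```shell
# Scratch repo standing in for the CDC site.
cd "$(mktemp -d)" && git init -q site && cd site
git config user.name "Editor A" && git config user.email "a@example.gov"

echo "A vaccine stimulates immunity to a disease." > vaccine.md
git add vaccine.md && git commit -q -m "Initial definition"

git config user.name "Editor B" && git config user.email "b@example.gov"
echo "A vaccine stimulates protection from a disease." > vaccine.md
git commit -q -am "Revise definition per policy review"

# The audit trail the quiet web edit lacked:
git log -p -- vaccine.md    # every change: author, date, stated reason, diff
git blame vaccine.md        # who last touched each line, and in which commit
```

`git log -p` answers who changed it, when, and (via the commit message) why, and none of that can be removed without changing every later commit ID.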
I used to take it for granted that we could trust the CDC. That made life easier. Health is so important but so, so complex. I would love to trust them again, and would have more confidence if they were using the best tools for trust we have.
March 11, 2021 — I have been a Fitbit user for many years but did not know the story behind the company. I recently came across a podcast by Guy Raz called How I Built This. In this episode he interviews James Park, who tells the story of Fitbit.
I loved the story so much but couldn't find a transcript, so I made the one below. Subtitles (and all mistakes) added by me.
Guy: From NPR, It's How I Built This. A show about innovators, entrepreneurs, idealists, and the stories behind the movements. Here we go. I'm Guy Raz, and on the show today, how the Nintendo Wii inspired James Park to build a device and then a company that would have a huge and lasting influence on the health and fitness industry, Fitbit.
It's taken me a few weeks to get motivated about exercise. This whole pandemic thing just had me in a state of anxiety and it messed with my routine, but I was inspired to jump back into it about two weeks ago, after watching my 11-year-old proudly announce his daily step count recorded on his Fitbit. Now, fitness isn't all that important to him. He's 11. But the gamification of fitness, the idea that it could be fun to hit 5,000 or 10,000 steps a day, that's what matters.
This is the stroke of insight James Park had soon after he stood in line at a Best Buy in San Francisco to buy the brand new video game system called Nintendo Wii. And you'll hear James explain the story a bit later, but what he realized playing the Wii is that you could actually change human behavior around exercise if you turned it into a game. And the thing is, up until James Park and his co-founder Eric Friedman founded Fitbit in 2007, there really weren't any digital fitness trackers that were designed that way. It took a few years for James and Eric to gain traction, but by 2010, 2011, Fitbit took off. At one point, their fitness devices accounted for nearly 70% of the market. And by 2015, the company was valued at more than $10 billion. But that same year, the Apple Watch was released, and Fitbit and its market share got hammered. When I spoke to James Park a few days ago, he was in San Francisco, living in an Airbnb.
James: I'm in a temporary Airbnb because the place that I typically live in has been flooded out by a malfunctioning washing machine. I woke up at 1:00 AM.
Guy: In the middle of this whole thing, flooded washing machine went... You woke up in the middle of the night and there was water everywhere?
James: I know. Amazing timing. Yeah. I woke up at 1:00 AM, and I just woke up to the sound of water gushing everywhere. It was coming through the ceiling. It was a massive flood.
Guy: Okay. So on top of sheltering in place and running his company remotely, James had to move out of his apartment in the middle of the night and then set up the microphone and gear we sent him for this interview. He started to tell us about his parents who immigrated from Korea when James was four. Back in Korea, his dad had been an electrical engineer and his mom was a nurse. But as with many immigrants, they had a hard time getting those same jobs in the US. So instead, his parents became small business owners.
James: The first conscious memory I have is, my parents actually own the wig shop in downtown Cleveland.
Guy: Wow. How did they get into that? Was it just a way to earn a living?
James: Yeah. I think a way to earn a living, and the typical immigrant story is you have friends who live in the country that you're immigrating to. And I think my dad had a friend who worked in wig wholesaling. That's where he started out. They were selling wigs to people who lived in downtown Cleveland, African-Americans, mostly women. And I remember my mom, she'd spend a lot of time just looking through black fashion magazines, styling hair, beating them, et cetera.
Guy: Wow.
James: They had a wig shop, dry cleaners, a fish market. At one point we moved to Atlanta and they ran an ice cream shop there. We sold track suits, starter jackets, fitted baseball caps, thick gold chains.
Guy: Sort of hip-hop urban wear, right? Like FUBU, and stuff like that?
James: Yeah. Yeah. Yep. They sold FUBU jeans. Yep. I remember that. And they could switch from one genre or one type of business to another and really not skip a beat.
Guy: And were your parents, did they expect you to perform well at school? Was that just a given?
James: I think they had incredibly high expectations then as a kid. I think I remember my mom telling me when I was pretty young, I don't know, five, six, seven, that she expected me to go to Harvard.
Guy: Wow.
James: Yeah. I don't think I quite knew what that meant back then, but you could tell that their expectations were pretty high from the very beginning.
Guy: James did in fact meet his mom's expectations. He did go to Harvard. He put in three years studying computer science, but after his junior year, he got a summer internship at Morgan Stanley and then ended up deciding to start his own business. And though he had hoped to finish his college degree, he never went back.
James: I always had a little bit of a stubborn streak, and that was when I was trying to figure things out, try to think of ideas. I think there is a lot of opportunity, a lot of problems to be solved. I was also looking for a co-founder at the time. So those are two critical ingredients, an idea and a co-founder.
Guy: This is 1998. This is not 2015 when these kinds of conversations seem so common. This was unusual in 1998 for a young person. It was just less common for a young person to just sort of say, "I'm going to look into a tech startup and try to find a co-founder and just take some time to think about these things." I would imagine your parents were nervous. I'd be nervous if my 20-year-old said to me, "I'm not going to go back to college and I don't really know what I'm going to do, but I'm just going to think about it."
James: Yeah. They were understandably pretty upset, angry even I'd say. And the irony is that they probably took on more incredible personal risk moving from Korea to the United States and running this series of businesses, which are commonly done, but not easy in themselves and pretty high risk. But I do understand obviously the perspective at the time.
Guy: Okay. You decide you want to start something up, and I think you eventually landed on e-commerce, right?
James: Yeah. That was not a groundbreaking thing at the time. Obviously, Amazon was around, et cetera. A lot of e-commerce startups, but we settled on this idea of making e-commerce a lot more seamless and frictionless and came up with this idea of an electronic wallet that would automatically make purchases for you. It would work with a lot of different e-commerce sites and the goal there was that we would take a cut of every transaction.
Guy: Right. And what was the company called?
James: That was interesting. We originally named it Kapoof, that was how it was incorporated, until a lot of people said, "That might not be the best name for a company." We called it Kapoof because it sounded like magic, et cetera. Kapoof, things are done. Your transaction is completed by Kapoof.
Guy: It sounds like, "Kapoof, your money is gone."
James: Yeah, exactly.
Guy: "You've no more money."
James: Exactly. It was a time of crazy names like Yahoo, et cetera. But we decided to change our name at some point and we changed it to Epesi, which was Swahili for fast. And so that was the ultimate name of the company.
Guy: And you guys were actually able to raise a fair amount of money. Right?
James: We did. We ended up raising a few million dollars from some individuals and some from some venture capital firms as well. And we hired some people. We found a cool renovated firehouse. That was-
Guy: Nice. Nice
James: ... Really amazing place to hang out in for many, many, many hours of the day. And we hired up to, it was close to about 30 people.
Guy: Wow. One super important thing that happened there was you met Eric Friedman, right? The guy that you would eventually launch Fitbit with.
James: I did. And that's probably one of the more fortunate turns in my life. Eric, we didn't know each other at all before the company, Epesi. He was actually just graduating from Yale in computer science. And I interviewed him. I liked him a lot. And he ended up ultimately becoming the first employee at the company.
Guy: Okay. So you hire Eric, and I think the company lasted 18 months, or a little less than two years.
James: Yeah. About two years, and a lot of ups and downs during that period. If I had to think back, I would attribute two-thirds of the challenges and problems we faced as a business to myself, just because I had never managed people. I didn't really know how to run a business, even if it was only the technology side. And at some point the dot-com crash happened. And all of our potential customers, the whole industry, the whole economy started taking a downturn.
Guy: So this company spirals out in 2001. And when that happened, did you think, "Okay, I should go back to college now and finish my degree." Or, "I got to start something else." Where was your head at that point?
James: Well, it was a really challenging personal time for me. Towards the end of the company, we obviously had to lay off most of the company, and trying to do it in a way that was compassionate was really, really difficult. I don't think the thought of entering school or going back to school popped back into my head at all. And I don't know why. I think it was because, despite this very emotional failure, I knew this was what I wanted to do. I had a firm conviction about that. And so I knew I wasn't going to go back.
Guy: So what'd you do?
James: We all ended up working at the same place actually. It was a company, a pretty large company called Dun & Bradstreet at the time. Very stable company. And we were all fortunate to be able to find work there as engineers.
Guy: So daytime working at Dun & Bradstreet, and then what? At night sitting around just-
James: Brainstorming. Yeah. We'd go into work during the daytime and then we'd come home in the evenings, code different things, try different things out. So it was pretty intense. I think, in terms of the number of hours, I don't think anything changed from our first startup to trying to figure this next one out.
Guy: And before too long, you decide to do another startup. This time, with Eric Friedman from your previous company, and then another guy named Gokhan Kutlu. I think this was what? 2003, 2004?
James: Yeah. This was about 2002 actually.
Guy: Okay. And this time the startup was a photo editing platform, sharing platform. What was it called?
James: The company's name at the time was HeyPix and the product itself was called electric shoe box, because a lot of people put their old photos in shoe boxes and this was just going to be a digital version.
Guy: Yes. I still have them in shoe boxes.
James: You'll digitize them probably.
Guy: I should. I know.
James: Yeah. And so electric shoe box, which is going to be a digital version of your shoe box.
Guy: And what could you do?
James: Well, there are digital cameras were coming about back then. It still wasn't easy to connect them, upload photos. It was getting easier, but nowhere near what it is today, obviously. The whole idea of electric shoe box was to make the whole process of getting photos off your camera a lot easier. And more importantly, we wanted to make the process of sharing these photos with your friends and family a lot easier.
Guy: So did you raise money for the product, for the electric shoe box?
James: We did. We ended up raising money primarily from one of my friends from middle school who was a mutual fund manager in Boston. And so, he put in a bit of money, not a lot. I think about, at least for him, it was about 100,000. And we had a bunch of savings ourselves that we were going to use. And in anticipation, I also opened up a few more credit cards as well.
Guy: And it was just really the three of you, sitting at your computers and just tapping the keys all night?
James: You pretty much nailed it. I mean, all we did was, we would wake up in the morning, walk over to the third bedroom and just start typing away for 12 hours. We'd take meal breaks. I remember Eric did a lot of cooking. So we'd eat our dinners on some TV stands watching TV. That was a good break for us, watching Seinfeld, and then go to bed and then repeat it the following day.
Guy: Wow. All right. So you come up with this product, and by the way, how are you going to make money off of this thing? This is a free service. How were you going to pay for it?
James: I guess, it would be called freemium software. It would be free for a period of time, and the trial period would end and then you'd have to submit your credit card information to continue using the software.
Guy: Got it. Got it.
James: And so, our primary goal was making sure that a lot of people knew about the software. So we put it on shareware sites, et cetera. And then we spent a lot of time debating, "Should we send out a press release?" And I remember it was a huge debate because sending out a press release was going to be about $300. And that was the level of expense that required a vigorous debate at the time. So we said, "You know what? Without getting the product known, how are we going to be successful?" So we wrote up a press release and we put it out. And actually it was probably the most pivotal decision we ever made in that company's history.
Guy: Because?
James: The first email came in a few hours later. I think the second one came in a day later. But we got two emails, one from CNET, which is a huge digital publishing company. And then we got another email from Yahoo saying, "Hey, we just heard about this launch of this software product. And we'd like to talk to you guys more about it."
Guy: Wow.
James: Exactly. This was coming from their corporate development arms, which typically deals with M&A, with buying, buying companies.
James: Yeah, exactly. We were like, "Whoa, this is magic. How did this happen?"
Guy: 2005, it gets purchased by CNET. They make an offer to buy this company, buy this product from you guys and you sell it to CNET. Was that life-changing money? Did that mean that you never had to work again?
James: It was definitely a good acquisition for all of us at the time. Remember we were three guys working out of our apartments. I was at the time about $40,000 in credit card debt as well. We were down to some desperate times and we were negotiating numbers and they threw out a number which was, their first offer was 4 million, and we were like, "Whoa, that's amazing."
Guy: Wow.
James: Like, "God, I can't believe we built something that's worth this much at the time." We were just stunned. And then, we quickly got to, "Okay, how do we negotiate something better?"
Guy: So you sell your company to CNET in 2005 and you've got some money in your pocket. And you move to San Francisco to work for CNET. Did you enjoy it? I mean, it was probably a huge company at this point, right?
James: It was a huge company, but I think the moment, at least for me, that I moved to San Francisco, I instantly fell in love with the city. And CNET, even though was a larger company, I actually found it to be an amazing time. I learned a lot. I got some management training. I ended up managing a small team of people. Learned a lot about how technology scales to millions and millions of users. How you market products. I really enjoyed my experience there. I think it was pretty formative.
Guy: Why did you leave CNET?
James: We left CNET just because of, I guess you could call it a bolt of lightning in some ways. It was December of 2006 and Nintendo had just announced the Nintendo Wii. And I remember coming home, putting it together. At the time Nintendo had come up with this really innovative control system, using motion sensors, accelerometers to serve as inputs into a game. And after using it, especially Wii Fit, which was a sports game, I thought, "Wow, this is incredible. This is amazing. This is magical. You can use sensors in this way. You can use it to bring people together." Particularly for Wii Fit, it was a way of getting people active, of getting them moving together. And I was just blown away by this whole idea, really excited about it. I couldn't stop thinking about it.
James: And after some time of playing Wii Fit and the Wii and a lot of other games, I thought, "This is great. It's in my living room, but what if I want to take this outside of the living room?" And I kept thinking about that idea, like-
Guy: "How do you take Wii Fit outside?"
James: Outside. Exactly.
Guy: Wow.
James: I couldn't let it go. And I ultimately ended up calling up Eric and we started talking about this idea for hours and hours and we couldn't stop talking about it. It's like, "How do we capture this magic and make it more portable? How do we give it to people 24-7?" And that was really the genesis of Fitbit.
Guy: So the technology, I mean, pedometers have been around forever. Was that where your head was going, or thinking, "Okay, maybe we just create an electronic pedometer?" But I think even electronic pedometers were around in 2007, right?
James: Yeah. Pedometers were definitely around back then. Actually, they had been around for probably 100 years. One of the things though is that, they weren't something that people would want to use or to wear. They were very big. They were pretty ugly. They looked like medical devices.
Guy: A lot of senior citizens used them.
James: Yeah. They weren't a very aspirational device. It wasn't something that people were excited to use. And so, I think that's why that whole category of device just never really had any innovation. And there were also much higher-end devices. You could buy much fancier running watches, like GPS watches, et cetera. But those were really expensive for people. They were $300, $400 at the time.
Guy: So you had this idea, and that means you had to raise money. And this is going to be the third time now that you've had to do that for a business. And I think I read that you raised $400,000 to launch this. I mean, I don't know a lot about hardware, but that doesn't seem like it was going to take you very far in building a physical product.
James: As we quickly found out, yes, we had grossly underestimated the cost of taking this to market.
Guy: And what did that initial amount of money, how far did it get you into actually conceiving of what this product was going to be?
James: It got us to a prototype, write some rudimentary software, get some industrial design concepts done and some models.
Guy: What did the prototype look like? Did it look like a Fitbit?
James: It looked absolutely nothing like a Fitbit. There were two things: there was an actual, somewhat working prototype and then there was an industrial design model.
Guy: Which was a piece of plastic.
James: Plastic, and metal that was supposed to look like the ultimate product. And so, that actually looked really, really nice.
Guy: But it didn't work?
James: Yeah. It was totally nonfunctional. And we'd always have to tell people before showing, "This doesn't work here." Because they get all excited looking at the model. "No, no, no. That doesn't work." The thing that actually worked looked like something that came out of a garage, literally.
Guy: What did it look like?
James: It was a rectangular circuit board, a little bit smaller than your. And it had a motion sensor, it had a radio, it had a microcontroller, which was the brains of the product. And it had a rudimentary case, which was a balsa wood box.
Guy: Wow. So you would take to investors, a circuit board and a balsa wood box as your prototype?
James: Yeah. That was the prototype. And actually that was what we had demoed. When we first announced the company, that was the prototype that was actually being used at the announcement.
Guy: Wow. I mean, how did you even get it to that point? Because you guys are both software engineers, how did you develop a physical product that even such a crude prototype could track movement? Did you have other people help you do that?
James: Our big task was to find the right people who could help us. I knew the founder of a really great industrial design firm in San Francisco called New Deal Design. His name is Gadi Amit. And then on the algorithm side, because it was going to take a lot of sophisticated algorithms to translate this motion data to actual data that users would be able to understand, I ended up asking my best friend from college, because he was in grad school at Harvard at the time. And he said, "Wait, I think I might know somebody." And it ended up being his teaching fellow, his name was Shelton. And we talked and I was like, "Wow, this guy is super smart. We need to get him working on the algorithms." So he ended up working on the side while doing his PhD, helping us out with a lot of the software.
Guy: I mean, you leave CNET in 2007, and you've got $400,000 to come up with a prototype, and you quickly run out of that. So it's 2008, and you're trying to raise money, how much did you raise?
James: I think our first round was about $2 million.
Guy: Which was not going to take you that far if you wanted to develop a physical product that was super sophisticated, a piece of hardware.
James: We thought we could do it. We thought we knew a little bit more about the hardware business. We put together another business plan budget. It was actually a pretty challenging time to raise money as well because-
Guy: Oh, with the financial crisis. Yeah.
James: Exactly. It was the fall of 2008, when we were trying to raise money. One of the good and bad things about VCs is they're incredibly healthy people. They're super fit. But it also made it difficult for a lot of them to understand the value of the product because what we were trying to do was, it wasn't a product meant for super athletic people, it was really meant to help normal people become more active, become healthier, et cetera. And it was hard for a lot of them to grasp why that was valuable. They'd ask, "Well, did it do X or did it do Y and did it do Z?" And we'd say, "No, it doesn't do any of that." And so it was very difficult for a lot of these super-fit VCs to understand the value of the product, even though a lot of them claim they don't try to put their own bias on these products. It's naturally human to do that.
Guy: And did you know right away that this was going to be... I mean, now Fitbits are mainly watches. They're on your wrist. But at that time, you were thinking that this was just going to be something you would clip to your clothing?
James: Yeah. Something to clip to your clothing for men. And then what we found out in talking to a lot of women was that they wanted to tuck it away somewhere hidden. They didn't want people to see it. And we said, "Okay, where would you want to put it?" And they said, "Well, a lot of our pants don't have pockets, so it can't be in our pocket." And so the preferred place was actually on their bra. So a lot of the physical design that we had to think about in the early days was how to come up with a product that would be very slim, slender, and clip to people's bras.
Guy: And hidden.
James: And hidden, and clipped to bras pretty easily.
Guy: And by the way, how did you come up with the name Fitbit?
James: It's never easy to name a company, and it's even more challenging just because of domain names. That's typically a lot of the limiting factor in naming a great company. And so, we would spend hours and hours and days just going through different permutations of names, and some awful ones as well. At some point we got onto a fruit theme. So we were thinking like Fitberry or Berryfit or Fitcado. Just some really awful names.
Guy: The Fitcado.
James: The Fitcado. Yes. History might've turned out a lot differently for sure. I was just taking a nap in my office one afternoon. I think I was actually napping on the rug because I was so tired. And I woke up and it just hit me, it was Fitbit. And the next challenge was actually the domain name. The domain name was not available. And it was owned by a guy in Russia. And I'm like, "Oh my god, how are we going to get this domain name? We'll just email the guy and see what happens." And he said, "Well, how much are you willing to offer?" And I said, "Oh god, I don't know. How about 1,000 bucks?" And he's like, "Oof, how about 10,000?" And I said, "Oh, I don't know. That sounds like a lot. How about 2,000?" And he's like, "Oh, okay. 2,000, deal." I think it was literally two or three emails that we sent back and forth in this negotiation.
Guy: Probably the best $2,000 you ever spent in your life, except for the 300 you spent on the press release a couple years earlier.
James: Yeah, yeah. Definitely a good return.
Guy: You've probably spent many millions of dollars on other things in your life that were not as good of a deal as that $2,000.
James: Yeah. It's tens of thousands on naming consultants and focus groups and trademark searches and all of that. It's kind of funny.
Guy: Hey, as they say, small companies, small problems, big company, big problems.
James: Exactly.
Guy: So where do you begin? I mean, you got to make it, you got to find a factory, you got to find designers. Where do you go?
James: Very good question. We obviously had zero connections. The challenge though, was not actually the connections to the manufacturers, but finding a manufacturer who we could actually convince to build this product because we didn't have a background in hardware. And so, would they actually want to work with us? That was the biggest concern at the time.
Guy: So how did you find them?
James: We went out to China. We went out to Singapore. And we were never going to be able to get the Foxconns.
Guy: You had to go to a smaller place.
James: We had to go to a smaller place, who'd be more nimble, more flexible, who'd want to take a financial risk. And we finally found a great manufacturer based in Singapore called Racer Technologies. And the good thing is actually, it was the best of all worlds, the headquarters was in Singapore. Most of the management team and the engineering staff was in Singapore, but they had manufacturing facilities that were in Indonesia. The labor there was going to be lower cost than in Singapore.
Guy: All right. So 2008, you've got the name Fitbit, you go to TechCrunch50 to present, to unveil this product. And what was the product that you were offering? Well, you said, "All right, we've got to think of the Fitbit and it does this." What did you say it did at that point?
James: Our pitch to the crowd at TechCrunch, and ultimately to our consumer was that, it was a product that would track your steps, distance, calories, and how much you slept and would answer some basic questions about your health, "Was I active enough today? Did I get enough sleep? What do I need to do to lose weight," et cetera. And one of the more important aspects was this idea of a community as well. "Join other people who own Fitbits, your friends and family, and you could compete with each other." And it was all wireless. You didn't really have to do anything. All you'd have to do is wear this device, don't even think about it, and all this magic would happen. That was the promise of Fitbit at the time.
Guy: There was a lot of excitement there, but I'm wondering, were you nervous to do these presentations? Did you have to prepare like crazy, or did you just find your ability to be this person you had to be on stage when you got up there?
James: Yeah, I think there was no other choice. It was just something we had to do. And I think-
Guy: Are you better at it than Eric, or is Eric better at it than you?
James: I think we're both good in our different ways. It just fell upon me. I don't even know how we decide those things. But actually, what was running through our minds, was not what we were going to say and how we're going to say it, but whether the demo would actually work on stage, because again, it was a little sketchy. It was still very early. It was still in the wooden box.
Guy: In the balsa wood box.
James: Balsa wood box phase. So we were just worried that the demo would just fail or crash.
Guy: But it worked.
James: It worked, and actually it did crash in the middle of the presentation because the whole demo was about me walking on stage, the device would be collecting stats. And at one point I would turn to Eric and say, "Hey, Eric, why don't you refresh the page and show that all the stats have been uploaded." Magically, do this wireless connection. And so, the demo actually crashed while I was talking, and Eric was fiercely trying to reboot his computer during this period and I don't even know anything about it. But ultimately, the demo did work. And so, to many people, it seemed like magic. Literally, people started clapping. It was really amazing.
James: Originally, right before TechCrunch, Eric and I, we made just a verbal bet. "How many pre-orders are we going to get after this conference when we announce and make the company public?" And I think Eric said, "I think we'll get like five pre-orders." So it's like, "The device isn't even available. People are going to have to give us their credit card information." And I said, "Nah, you know what? I'm not as pessimistic. I think there's going to be like 10, 15, 20." And so we got off stage, and by the end of the day, we had about 2000 pre-orders.
Guy: Wow. When we come back in just a moment, James and Eric have a prototype in the balsa wood box and they don't exactly know how they are going to get from there to filling thousands of pre-orders. But a lot of people are expecting them in time for Christmas. Stay with us, I'm Guy Raz, and you're listening to How I Built This from NPR.
Guy: Hey, welcome back to How I Built This from NPR, I'm Guy Raz. So it's 2008, and James and his co-founder Eric Friedman show off their Fitbit prototype at TechCrunch, and it makes a huge splash. The problem is, they have no finished product. They haven't even figured out how they're going to make it and pre-orders are pouring in.
James: And they just kept coming in. It was crazy. We were like, "Oh my god, it's not just dozens of these units we have to build, it's now thousands, and more and more every day." And so we were still thinking Christmas of that year that we were going to start shipping out units, and it rapidly became clear to us that we weren't going to make Christmas. And so, we're thinking, "Okay, how do we keep all these people happy while we pull this off?" So this was before Kickstarter and Indiegogo and all that. We had to improvise. We were like, "Okay, why don't we just blog about the whole process and just be very open and transparent about it." So we started a blog, and I wrote maybe weekly updates on how things were going, challenges and delays that we were facing.
James: And I was really surprised, actually, it worked. It made people understand what we were going through. They're literally seeing the thing being made, the sausage being made behind the scenes. And I think that kept people really engaged throughout the process.
Guy: So you have basically a bunch of contractors and freelancers and you guys are going back and forth to Asia. You got people working on the software to transmit the data to the web. You've got some people working on the hardware, presumably in Singapore, trying to shrink down the motherboard to something that is two inches by one half inch. And were you just constantly running into failures? You would think that, "Oh, here it is." And then somebody would hit the go button and then it would just fizzle out, it wouldn't work?
James: Yeah. I can't even enumerate the number of challenges with the product that we had.
Guy: Please start.
James: In some ways a lot of people, I think when you think about hardware, it's like, "Oh, I'll find a manufacturer in China. I'll throw over a design."
Guy: Yeah, right. No problem.
James: "They'll just run with it."
Guy: And then, "Just send me the bill," and then it's done.
James: And they'll just crank out thousands, tens of thousands of this. But that's never-
Guy: And that works if it's a suitcase, something they've done before. It works if it's that thing.
James: If it's that thing or something that's very similar to something that they've built before.
Guy: Right.
James: Well, that's a different story than this thing that this manufacturer never had built before.
Guy: So they would send you things and say, "Yep, we got it." And then you would get it and it sucked. It just didn't work.
James: Yeah. We wouldn't wait for them to send it. I mean, either myself or Eric would be in Indonesia or Singapore at any given time. We'd trade off different weeks. And we were out there on the production lines pretty much inspecting every part of the process.
Guy: But were you convinced this thing was going to work or did you have doubt?
James: I was absolutely convinced that it was going to happen.
Guy: You had no doubts that this-
James: I had no doubts because we were getting proof every day that this was something that was going to be big. And I think the first evidence of that was at TechCrunch where we had 2,000 pre-orders and we were getting pre-orders every day. I think by the summer time, we had about 25,000 pre-orders at $100 per unit. That's a fair amount of revenue if we could ship these units.
Guy: And how much was it going to cost you to make each unit?
James: That was a very good question. We didn't know that. Hopefully, under $100.
Guy: You didn't know? You were selling them for $100, but you didn't know how much it was going to cost you.
James: We had a sense of the bill of materials. I think we were trying to shoot for a gross margin of about 50%. So we're targeting the full cost of the product, including shipping, et cetera, being no more than $50. That's what we were targeting.
Guy: Which is a lot. That's high. It's a high cost.
James: It's a high cost, but that was a cost at which we felt we could sustain ourselves as a business.
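The unit economics in this exchange can be sanity-checked with a quick back-of-the-envelope sketch. The figures ($100 price, ~50% gross margin target, ~25,000 pre-orders) come straight from the conversation; the variable names are mine, and this is an illustration, not Fitbit's actual financial model:

```python
# Back-of-the-envelope unit economics, using the figures from the interview.
price = 100.0               # retail price per unit, in USD
target_gross_margin = 0.50  # the ~50% gross margin they were shooting for

# Maximum all-in cost per unit (parts, assembly, shipping)
# that still hits the margin target
max_unit_cost = price * (1 - target_gross_margin)
print(max_unit_cost)  # 50.0

# Pre-order revenue by that summer: roughly 25,000 units at $100 each
preorders = 25_000
preorder_revenue = preorders * price
print(preorder_revenue)  # 2500000.0
```

Which matches what Park says: a $50 all-in cost ceiling per unit, and about $2.5 million of pre-order revenue waiting, if they could ship.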
Guy: How did you and Eric manage your relationship and friendship? I mean, with the stress of this delay and inability to meet demand and all these, was there tension at all between the two of you, or you guys totally are on the same page?
James: I don't think there was that much tension. I mean, a lot of stress, but not tension. I think we trusted in our ability to help each other out. And there were periods when either of us would be pretty down on the company and the product. And luckily, we weren't both down at the same time. And that's why it helps, I think, to have a co-founder.
Guy: So there were times where you were really down and he could give you a pep talk and...
James: Exactly. And then I'd wonder why he wasn't down. And there were some pretty dark times right before we shipped. I remember it was months before we thought we could finally get the first unit off the production line. And I was sitting in my hotel room in Singapore, and I was testing out one of the prototype builds that Racer had produced, and the radio range was not good at all. It was supposed to have a range-
Guy: 10 feet or 15 feet?
James: That was the hope that it would have 15 to 20 feet range, but the range was actually two inches.
Guy: Oh god. Wait, so the antenna in the device had a two inch range.
James: Yeah. It would only work at two inches. And I'm thinking, we've got to ship this holiday season. I've got tens of thousands of these people waiting.
Guy: Oh god.
James: And so, I'm just freaking out in my hotel room.
Guy: You might as well have a cord and just plug it in.
James: Exactly, exactly. I couldn't sleep that night, obviously. And I took the unit apart. I had a multimeter and I was measuring different voltages and currents. And what I realized was, huh, the cable for the display was flexible and long enough that maybe it was actually drooping down and touching the antenna and that was causing-
Guy: That was creating interference.
James: Creating interference. And I could see that when you put the whole thing together, it might droop down. And I thought, "Okay, how do I create a shim that would prop up the antenna?" So I went to the bathroom, grabbed some toilet paper, rolled a little bit of it into a ball and stuffed it between the antenna and the display cable, put the device back together. And it started working. The range was great.
Guy: Wow. So you had to separate one wire from the antenna and that was it, with toilet paper?
James: With toilet paper. Yeah, that was it.
Guy: Wow.
James: And I still couldn't sleep. So as early as possible the following morning, I raced into our manufacturer and said, "Okay, I think I found the problem," but obviously toilet paper is not a scalable, high-volume solution. So they went back and figured out how they could make this manufacturable. They ended up creating these tiny die-cut pieces of rubber that they would glue onto the circuit board to keep the antenna away from the display cable.
Guy: Wow. Wow. So that was basically just inserting something in there and then it worked?
James: Yeah, it wasn't exactly duct tape, but that was the equivalent of duct tape.
Guy: It was pretty close.
James: It was pretty close. Yeah.
Guy: So you guys launched this product in Christmas of 2009, and it was a pretty successful product launch. You had 25,000 orders and sounds like you're off to the races, but I guess even with this success, when you went out to raise money, this is 2010, were investors more excited or was it still a challenge to get more investors in?
James: It was still a challenge. And at the time, it was, "Okay, I guess you guys are having some success, consumers are buying the product, et cetera." And they congratulated us on that. But they were very scared of hardware businesses. I think there had been a lot of really high-profile failures in the consumer electronics industry. And so, it was very difficult for us to raise money. I remember, we had a spreadsheet of target VCs. I think there were 40 names that we put on that list. And literally, we went to number 40 before we were able to raise money.
Guy: And just giving the same pitch, again, again, answering the same questions?
James: Same pitch. We're in San Francisco driving down 101 to Sand Hill Road, constantly giving the same pitch to 40 VCs. That's probably the one thing I didn't like about that whole time period was, I hate giving the same pitch over and over and hearing the same questions and same objections, et cetera. That was not a fun or stimulating time for me.
Guy: All right. Eventually, the 40th investor does decide to give you some money. I think you raised about $8 million. And at this point, were you able to then have a proper office and a staff? Were you able to begin to recruit real full-time engineers and developers and people like that?
James: We were. We did that with the round that was right after our first $2 million institutional round. We hired a bunch of customer support personnel. I interviewed and hired our first head of sales. I interviewed and hired someone to finally run all of our manufacturing and operations, which was still a job that I was doing. I was still issuing all the POs and managing the inventory. And I think we were really fortunate because the early management team that we hired in those days pretty much made it up to and past our IPO, which I think rarely happens.
Guy: It's so crazy to think about it now. But I think early on, with the Fitbit, the idea was to be part of a bigger community. Like the data from your activity would be available. You would just go to a site and you could see it and you could see everybody else's because the idea was, "We're all part of this together." But I think early on, some users were tracking sex. And when you started to hear about these things was your reaction like, "Oh my god, I never even thought about this being a privacy thing. I always thought that people would just want to share stuff."
James: Yeah. This was still the early days of sharing things like that. And I found out about it because I saw this tweet about someone going, "Hey, if you do this Google search, you'll see," because Google was indexing all our public pages where people are logging things that people had made public. "You could find out all the sexual activities that people are logging on Fitbit." And I saw that, I'm like, "Oh my god, this is not good." That ended up being the first real PR crisis for the company. And it was happening over the 4th of July weekend. So I had to call an emergency board meeting. We had to scramble to delete all that stuff, turn everything private.
Guy: Because the default setting, initially, when you got to Fitbit was, it's not private, it's open. Because the idea was, it was going to be a big community of people trying to get fit.
James: Yeah. I mean, we made a lot of things private by default. We made sure that people's weight was private, because we thought that would be sensitive, but we didn't think there was any harm in people's activities being public, and we just didn't realize that people would start logging that.
Guy: And just to be clear, people who logged sexual activity, this was not a category that you offered up, it was just people were voluntarily deciding to just log that as one of their activities.
James: Well, it was a category, but it wasn't something that we had realized. We use this database from the government that was thousands of different activities that people would do.
Guy: Oh, I see.
James: And so, it was an option. We just didn't think people would log that.
Guy: You were just naive about that.
James: We were naive. We were like, "Okay, this is a government database of activities. It must be fine." That was quite a shock and a wake up call for us.
Guy: Fitbit for the first couple of years was still a clip, mainly a clip. You released the first product Christmas of 2009, you've got 2010, and then by 2011 the business just exploded: 5X growth from 2011 to 2012. You went from $15 million in revenue to $76 million in revenue. What was going on? Was it just this self-generating phenomenon? Were you surprised by it? Were you investing in marketing? Was it just earned media, just people reporting on it? What was going on?
James: I think, the primary reason is, because we had baked in this social element, this community element into it from the very beginning, it ended up being a very viral product. So one family member would get it, and to really realize the potential, the community aspect and the competitive aspect, you had to have someone else as well. So they'd either buy it for their spouse or their parents and they would start competing and then they'd buy it for their friends and they'd try to get their friends to buy the product.
Guy: So they could each see how many steps you were... Because I remember this, I remember this at NPR. People were wearing Fitbits and they were talking, and I think people were even encouraged to get Fitbits.
James: Exactly. It was very driven by word of mouth. And this viral spread was a huge driver of our growth in those days.
Guy: I think by 2013, you had some competitors coming in. Nike was making one and Jawbone was making one. I mean, I remember going to the TED Conference in 2013 and getting a Jawbone in my gift bag. Were you worried about the competition at that point, or not really?
James: Yeah. At that time I think people were looking at the success and there was even a name coined for the whole category, which is quantified self. "How do I use sensors, et cetera, to measure everything that I'm doing in my entire life?" And so that attracted a lot of the competition that you mentioned. And I'd have to say the competitive aspect was definitely worrying at the time, especially with Nike and Jawbone.
Guy: Because they're so huge.
James: They are huge. I mean, Nike, obviously, it's a multi-billion dollar, multi-national company with a lot of media dollars. I remember when they announced the FuelBand, they had all these celebrity athletes at the announcement and we're like, "Oh god, that's insane."
Guy: And yet, by 2014 you had 67% of the activity tracking marketplace. I mean, Fitbit was just totally dominating the marketplace. I mean, were you and Eric doing victory laps and high-fiving each other and thinking back to all those doubters? I mean, what was going on?
James: I think we were still pretty, I don't know if scared is the right word. I think we were still very, very cautious. Nothing was guaranteed. There was a lot of competition that was emerging. We still had a lot of internal challenges in the business, scaling production, scaling the company, et cetera. Again, a lot of fires for us to be solving on a day-to-day basis. And I remember occasionally we'd always check in and say, "Hey, when do you think we'll know we're going to make it?" And we'd say, "I think we'll know in six months." And we kept saying that every six months. It was pretty much an ongoing thing, pretty much up to the IPO.
Guy: 2015 was a huge turning point for you in many ways. You go public; I think your market cap, I read, at a certain point reached $10 billion. That year, 2015, the Apple Watch is released and they stopped selling Fitbit in their stores. At the time you were quoted saying, "I'm not really worried about this because it's a huge market. It's a $200 billion market. The Apple Watch is just crammed with a bunch of stuff, or smartwatches are crammed with a bunch of stuff, and what we're doing is something simpler." Was that what you were saying publicly because, I don't know, you felt like you should be saying that, or did you really think it was true that the Apple Watch wouldn't actually have much of an impact?
James: We were definitely concerned with Apple. I mean, this was the preeminent technology company, and especially hardware company, at the time, with an amazing brand. We had faced off against Philips and Nike and Jawbone, which were, in their own right, very big competitors, especially Nike. We did feel very strongly that our product had very clear advantages. It was a simpler product. If you looked at the Apple Watch that was announced at that time, I think everyone will admit, maybe even Apple, that it was a product that didn't quite know what it was supposed to be used for. With the launch of the first Apple Watch, I don't really think that it had an actual impact on the trajectory of the business. It wasn't the product that it would later become. And the industry wasn't where it would eventually evolve either.
Guy: I mean, but eventually, the industry did change. I mean, Apple Watch got really popular. I think, by 2016, Fitbit's stock had dropped by 75% over the course of a year. I mean, you and Eric were running a publicly traded company and the stock was just tumbling. What did you think? I mean, I can't imagine that was pleasant for you.
James: No, it was definitely a stressful period. And you could argue, well, maybe we shouldn't even have been valued at 10 billion in the first place. And I think a lot of times it's a question of perception. If we had never hit that 10 billion and we had steadily grown into the 2 billion, I think people's perceptions and just psychology about the whole situation would have been different than going to 10 and falling to two. And it was a very challenging period because as a private company, despite challenges, your valuation doesn't change very often. It only changes when you raise money, which could happen once a year, once every two years. So if you hit a bump in the road, your employees don't really feel it.
James: We had a product recall where, if we had been a public company, our valuation would have plummeted immediately, but at the time we were private. So we just told the employees, "Hey, look, this is the challenge. It's pretty serious, but here are the steps that we're going to take to get through it." And everyone rallied together. But when you're being measured every day in real-time-
Guy: By the stock price.
James: ... By the stock price, you're not really given a lot of breathing room to try to fix things.
Guy: Even though you were introducing new products, revenue was declining every year from the time you went public. And I read an article about something that you did in 2017. And I'm really just curious to get your take on it, because I actually think it's really courageous, but also probably super stressful and difficult, which is, you asked your employees to submit an evaluation of the company and of you. And then you sat in front of them to hear the results of this evaluation and it wasn't good. You even had some employees who wrote letters to the board asking that you be removed as CEO. I can't imagine that was easy for you to hear.
James: I don't know if I've heard that particular feedback directly, but clearly the survey results were not great. I jokingly think I'm probably used to hearing very critical feedback because of my parents. I don't think there was a moment where they were truly happy with anything that I did. I remember even when I took the SATs and I got my score back, it was a pretty good score, but my dad just honed in on the areas where I clearly had not done well. I don't think I have a huge ego. I mean, I do have an ego, I think it's human to have one, but my primary focus was, "How do I get things back on track?"
Guy: There was a quote from somebody in an article, an anonymous quote. It said, "At a certain point, were we focused on the right things? We had the ability, and have the ability, to know a lot about our users, which you do, but our users don't want to be told what they did." In other words, they don't want to be told, "Hey, you exercised, you did 10 steps today." They want to be told what to do, like how to get better. And the quote was, "This was the greatest missed opportunity." And I know you've made a pivot since then, but was that a fair assessment at the time in 2017, that you were too focused on telling people what they've accomplished rather than telling them what they need to do?
James: Yeah. I think there are ultimately two big things that were driving the headwinds in the business. First of all, I think we were really behind in launching a competitive smartwatch at the time. People were-
Guy: Competitive to...
James: Competitive to Apple. It was clear that the industry, consumers were moving to that category and we were seeing that in our sales. So in a very short period of time, our tracker business fell by $800 million in revenue. And at the time, at our peak, we were doing about 2.1 billion in revenue.
Guy: Wow.
James: So we had an $800 million hole, and we finally launched our smartwatch, but it was only barely sufficient to fill that hole. We hadn't transformed the software into giving people guidance and advice. And it also ties to our failure at the time to quickly diversify our revenue stream beyond just hardware to a services business that-
Guy: Like a subscription.
James: Exactly. We were so focused on growing our hardware business because that was what was bringing in the money, that was what the retailers wanted, et cetera. And one of the mistakes I made was not setting up enough time, enough focus to building the subscription part of the business that actually answered those pivotal questions for our users.
Guy: Many successful companies find themselves with a legacy product, and it's crazy to talk about a legacy product for your company, which is only 10 or 12 years old, but you could argue that the Fitbit tracker is your legacy product, right? And, as any company with a legacy product realizes, they've got to make a pivot. Like for American Express, it was traveler's checks for 100 years. That's how they made their money. And they had to pivot into other things: travel services and credit cards and so on. It sounds like in 2019, you really made a pivot into thinking about Fitbit not as a hardware company that makes a tracker, watch, or smartwatch, but a company that really is about healthcare and is designed to pivot more into healthcare data and analysis. Is that fair? Is that right?
James: Yeah. I think that's fair. I think we stopped thinking of ourselves as a device company and more of as a behavior change company because that's effectively what people were buying our products and services to do, was to change their behavior in a really positive way. And not only individual people, but companies as well. Companies who in the US, especially bear the direct costs of the care of their employees. We started thinking about ourselves as a behavior change company and figuring out what are the products and services that really deliver that both to people and to businesses.
Guy: So we get to the end of last year where Google announces that they were going to buy Fitbit, $2.1 billion. We should mention that, at the time of this recording, it hasn't closed yet. To me, it makes perfect sense. If I'm you or Eric, I would have done it. I would've said, "$2.1 billion, that's great. That's a great outcome because now with Google, we've got access to their dollars and their research labs and all the people who work there and the analytics and our ability to really go to the next level." Why did it make sense from your perspective to sell to Google?
James: Yeah, that's a very complicated and emotionally fraught question, but last year our board met and it was pretty clear to everybody that we had a lot of challenges in the business. We weren't profitable. There was a lot of competition out there from the likes of Apple, from Samsung, some emerging Chinese competitors, but there was a lot of just great things going on in the company. I was so excited about our product roadmap, about things that were in our pipeline, all the advanced research that we were doing around health and sensors. I would look at our product roadmap every day and just come away super excited about that. And then also be confronted with a lot of the business challenges as well.
James: And for me, most importantly it was about a legacy and I wanted the Fitbit brand and what we did to continue onwards for a very, very long time. And we just had to figure out the best way to do it, whether it was as an independent company or within a larger company. That was really what was most important.
Guy: I imagine that there are some details you can't talk about for obvious reasons, but as of this recording, we're talking in mid-April, there is a hold on the Google acquisition. The Department of Justice is doing an investigation because some interest groups have said, "Hey, we don't think that Google should have access to all of this data. Fitbit has 28 million users. This is an incredible trove of health data." Is that causing you stress right now, that there is this Justice Department holdup on the acquisition?
James: No. And it's because sometimes the press does like to sensationalize things, but the process that we're undergoing right now with the Department of Justice, and also with the EU and some other countries around the world, is pretty normal for acquisitions of this size. In fact, it's required. Really, the whole review is about the anti-competitive element, and especially around wearable market share. That's just something where we have to convince regulators that, "This doesn't reduce competition in the marketplace."
Guy: As far as you know, the situation now with the lockdowns and the pandemic does not have any impact on Google's interest or commitment to making this happen.
James: No, I think everyone's thinking towards the long-term, fingers crossed, that we do find our way through this COVID-19 situation and that there is life beyond it. Maybe it comes back slowly, but I think everyone is asking, "What does this whole category look like over a time span of years?" And I think one of the things that COVID-19 has shown, especially if you look at healthcare, is that this idea of remote healthcare, remote monitoring, keeping people healthy outside of a hospital setting, is actually really important.
Guy: Super. It's going to totally change... I've had a video call with my doctor just for a quick question. It's actually super convenient.
James: Exactly. And if during these telemedicine visits they have a snapshot and summary of what you've been up to and what your health has been outside of that visit, and can almost be predictive in that way, I think that can be really groundbreaking in the way medicine gets practiced. And this whole time period is merely accelerating that transition.
Guy: When you think about all of the things that you have done professionally and your successes, you made a lot of money. I mean, you're extremely wealthy and wealthier than your parents could have ever imagined you would be, or they would be. They took a huge risk to come to the US and had all these little mom-and-pop stores. How much of that do you think is because of your intelligence and skill and how much do you attribute to luck?
James: Yeah, that's always a tricky question to answer. I think I was very fortunate to have grown up with my parents. Just having seen them persevere through life, you get the realization that nothing really comes easy. That it does take a lot of just grinding away at things that at the time seem unpleasant. I think those are good traits, and I'm very fortunate to have parents like that, who sacrificed a lot to put me in great schools over time, even though they started from some humble beginnings. But I've also, in a lot of ways, gotten some lucky breaks where things could have gone the wrong way very, very quickly. Ultimately, I attribute it to a little bit of all of that. I think it's not fair to say that everything is luck, because then I think you start to discount the actual things, the actions that you can take on your own to affect the future. And that's really important.
Guy: That's James Park, co-founder of Fitbit. And here's a number for you, 34,642,772, that is how many steps James has tracked since he first put on that balsa wood Fitbit prototype. At least as of this recording. It's about 15,430 miles or 24,832 kilometers. And thanks so much for listening to the show this week, you can subscribe wherever you get your podcasts. You can also write to us at hibt@npr.org. And if you want to send a tweet, it's @HowIBuiltThis or @guyraz. This episode was produced by James Delahoussaye, with music composed by Ramtin Arablouei. Thanks also to Sarah Saracen, Candice Lim, Julia Carney, Neva Grant, Casey Herman, and Jeff Rogers. I'm Guy Raz, and you've been listening to How I Built This. This is NPR.
February 28, 2021 — I read an interesting Twitter thread on focus strategy. That led me to the 3-minute YouTube video Insist on Focus by Keith Rabois. I created the transcript below.
One of the fundamental lessons I learned from Peter Thiel at PayPal was the value of focus. Peter had this somewhat absurd, but classically Peter way of insisting on focus, which is that he would only allow every employee to work on one thing and every executive to speak about one thing at a time, and he distributed this focus throughout the entire organization. So everybody was assigned exactly one thing, and that was the only thing you were allowed to work on, the only thing you were allowed to report back to him about.
My top initiatives shifted around over the years, but I'll give you a few. One was initially Visa, MasterCard really hated us. We were operating at the edge of their rules at the time. My number one problem was to stop MasterCard particularly, but Visa a bit from killing us. So until I had that risk taken off the table, Peter didn't want to hear about any of my other ideas.
Once we put Visa, MasterCard into a pretty stable place, then eBay also wanted to kill us. eBay wasn't very happy with us processing 70% of the payments on their platform, so that was my next problem.
Then 9/11 happened and the US Treasury Department proposed regulations which would require us, among other things, to collect Social Security numbers from all of our buyers, which would have suppressed our payment volumes substantially. So then my number one initiative became convincing the Treasury Department not to promulgate these regulations, right post-9/11.
At some point, we also needed to diversify our revenue off of eBay. So that became another initiative for me. That one I did not solve that well, which in some way led to us eventually agreeing to be acquired.
I had another number one problem, which was this publication called the Red Herring, had published this set of unflattering articles about us and how to fix that and rebuild the communications team.
Peter would constantly just assign me new things. He didn't like the terms of our financial services relationship with the vendors that we were using, so I took on that team and fixed the economics of those relationships, et cetera, et cetera. But they were not done in parallel. They were basically sequential. The reason why this was such a successful strategy is that most people, perhaps all people, tend to substitute from A-plus problems that are very difficult to solve to B-plus problems, which you know a solution to, or you understand the path to solve.
You have a checklist every morning. Imagine waking up: a lot of people write checklists of things to accomplish. Most people have an A-plus problem, but they don't know the solution, so they procrastinate on that solution. And then they go down the checklist to the second or third initiative where they know the answer, and they'll go solve those problems and cross them off. The problem is, if your entire organization is always solving the second, third or fourth most important thing, you never solve the first.
So Peter's technique of forcing people to only work on one thing meant everybody had to work on the A-plus problems. And if every part of the organization once in a while can solve a problem that the rest of the world thinks is impossible, you wind up with an iconic company that the world's never seen before.
I absolutely love the math behind this strategy. There are a few other terms to get right, but there's a fantastic idea here.
February 28, 2021 — I thought it unlikely that I'd actually cofound another startup, but here we are. Sometimes you gotta do what you gotta do.
We are starting the Public Domain Publishing Company. The name should be largely self-explanatory.
If I had to bet, I'd say I'll probably be actively working on this for a while. But there's a chance I go on sabbatical quick.
The team is coming together. Check out the homepage for a list of open positions.
February 22, 2021 — Today I'm launching the beta of something new called Scroll.
I've been reading the newspaper every day since I was a kid. I remember I'd have my feet on the ground, my body tilted at an angle, and my body weight pressed into the pages on the counter. I remember staring intently at the pages spread out before me. World news, local news, sports, business, comics. I remember the smell of the print. The feel of the pages. The ink that would be smeared on my forearms when I finished reading and stood back up straight. Scroll has none of that. But it does at least have the same big single page layout.
Scroll brings back some of the magic of newspapers.
In addition to the layout, Scroll has two important differences from existing static publishing software.
First, Scroll is built for public domain sites and only public domain sites. Builders of Scroll will spend 100% of their time building for the amazing creators who understand and value the public domain.
Second, Scroll is a Tree Language. Unlike Markdown, Scroll is easily extensible. We can create and combine thousands of new sub languages to help people be more creative and communicate more effectively.
I've had fun building Scroll so far and am excited to start working on it with others.
December 9, 2020 — Note: I wrote this early draft in February 2020, but COVID-19 happened and somehow 11 months went by before I found this draft again. I am publishing it now as it was then, without adding the visuals I had planned but never got to, or making any major edits. This way it will be very easy to have next year's report be the best one yet, which will also include exciting developments in things like non-linear parsing and "forests".
In 2017 I wrote a post about a half-baked idea I named TreeNotation.
Since then, thanks to the help of a lot of people who have provided feedback, criticism and guidance, a lot of progress has been made fleshing out the idea. I thought it might be helpful to provide an annual report on the status of the research until, as I stated in my earlier post, I "have data definitively showing that Tree Notation is useful, or alternatively, to explain why it is sub-optimal and why we need more complex syntax."
My template for this (and maybe future) reports will be as follows:
I've followed the "Strong Opinions, Weakly Held" philosophy with this idea. I came out with a very strong claim: there is some natural and universal syntax that we could use for all of our symbolic languages that would be very useful—it would let us remove a lot of unnecessary complexity, allow us to focus more on semantics alone, and reap a lot of benefits by exploiting isomorphisms and network effects across domains. I've then spent a lot of time trying to destroy that claim.
After publishing my work I was expecting one of two outcomes. Most likely was that someone far smarter than I would put the nail in Tree Notation's coffin with a compelling case for why such a universal notation is impossible or disadvantageous. My more optimistic—but less probable—outcome was that I would accumulate enough evidence through research and building to make a convincing case that a simplest universal notation is possible and highly advantageous (and it would be cool if Tree Notation evolves into that notation, but I'd be happy for any notation that solves the problem).
Unfortunately neither of those has happened yet. No one has convinced me that this is a dead-end idea and I haven't seen enough evidence that this is a good idea1. At times it has seemed like a killer application of the notation was just around the corner that would demonstrate the advantages of this pattern, but while the technology has improved a lot, I can't say anything has turned out to be so compelling that I am certain of the idea.
So the high level status remains: strong opinion, weakly held. I am sticking to my big claim and still awaiting/working on proof or refutation.
In these reports I'll try and restate the idea in a fresh way, but you can also find the idea explained in different places via visuals, an FAQ, a spec, demos, etc.
My hypothesis is that there exists a Simplest Universal Notation for Symbolic Abstraction (SUNSA). I propose Tree Notation as a potential candidate for that notation. It is hard to assign probabilities to events that haven't happened before, but I would say I am between 1% and 10% confident that a SUNSA exists and that Tree Notation is somewhat close to it2. If Tree Notation is not the SUNSA, it at least gives me an angle of attack on the general problem.
Let's define a notation as a set of physical rules that can be used to represent abstractions. By simplest universal notation I mean the notation that can represent any and every abstraction representable by other notations that also has the smallest set of rules.
You could say there exist many "UNSAs", or Universal Notations for Symbolic Abstractions. For example, thousands of domain specific languages are built on the XML and JSON notations, but my hypothesis is that there is a single SUNSA. XML is not the SUNSA, because an XML document like "<a>b</a>" can be equivalently represented as "a b" using a notation with a smaller set of rules.
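The equivalence is mechanical. A minimal sketch in Python, using the standard library's XML parser on the example above (the two-word flattened form is my own illustration, not an official converter):

```python
import xml.etree.ElementTree as ET

# Parse the XML form of the example document
element = ET.fromstring("<a>b</a>")

# Flatten it to a single two-word line: tag, then text
tree_line = f"{element.tag} {element.text}"
print(tree_line)  # a b
```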
Inventions aren't always built in a linear fashion. For example, when you add "2+3" on your computer, your machine will break down that statement into a binary form and compute something like "0010 + 0011". The higher level base 10 numerals are converted into the lower level base 2 binary numbers. So, before your computer solves "2+3", it must do the equivalent of "import binary". But we had Hindu-Arabic numerals centuries before we had Boolean numerals. Dependencies can be built out of order.
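You can watch this layering directly in Python, which accepts base-2 literals. A sketch of the same sum at both levels:

```python
# The machine-level view of 2 + 3: base-10 numerals are really base-2 bit patterns
a = 0b0010  # the binary encoding of 2
b = 0b0011  # the binary encoding of 3

print(a + b)       # 5
print(bin(a + b))  # 0b101
```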
Similarly, I think there is another missing dependency that fits somewhere between binary the idea and "binary" the symbolic word.
Consider Euclid's Elements, maybe the most famous math book of all time, written around 2,500 years ago. The book begins with the title "Στοιχεῖα"3. Already there is a problem: where is "import the letter Σ"? Euclid has imported undefined abstractions: letters and a word. Now, if we were to digitally encode the Elements today from scratch, we would first include the binary dependency and then a character encoding dependency like UTF-8. We abstract first from binary to symbols. Then maybe once we have things in a text stream, we might abstract again to encode the Elements book into something like XML and markdown. I think there is a missing notation in both of these abstractions: the abstraction leap from binary to characters, and the abstraction leap from characters to words and beyond.
I think to represent the jumps from binary to symbols to systems, there is a best natural notation: a SUNSA that fits in between languages and lets us build mountains of abstraction without introducing extra syntax.
To get a little more concrete, let me show a rough approximation of how using Tree Notation you could imagine a document that starts with just the concept of a bit (here denoted on line 2 as ".") and work your way up to defining digits and characters and words and entities. There is a lot of hand-waving going on here, which is why Tree Notation is still, at best, a half-baked idea.
.
...
0
1 .
...
Σ 10100011
...
Στοιχεῖα
...
book
 title Elements
...
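The core mechanic sketched above—indentation encodes the tree—fits in a few lines of code. This is my own rough sketch, not an official parser, and it assumes one space per indentation level:

```python
def parse(text):
    """Parse indentation-based lines into {words, children} nodes."""
    root = {"words": [], "children": []}
    stack = [(-1, root)]  # (depth, node) pairs; root sits below depth 0
    for line in text.splitlines():
        stripped = line.lstrip(" ")
        depth = len(line) - len(stripped)
        node = {"words": stripped.split(" "), "children": []}
        while stack[-1][0] >= depth:  # climb back up to this line's parent
            stack.pop()
        stack[-1][1]["children"].append(node)
        stack.append((depth, node))
    return root

tree = parse("book\n title Elements")
print(tree["children"][0]["words"])                 # ['book']
print(tree["children"][0]["children"][0]["words"])  # ['title', 'Elements']
```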
Given that I still consider this idea half-baked at best; given that I don't have compelling evidence that this notation is worthwhile; given that no one else has built a killer app using the idea (even though I've collaborated publicly and privately with many dozens of people on project ideas at this point); why does this idea still excite me so much?
The reason is because I think IF we had a SUNSA, there would be tremendous benefits and applications. I'll throw out three potential application domains that I personally find very interesting.
A SUNSA would greatly reduce the cost of a common knowledge base of science. While it may be possible to do it today without a SUNSA, having one would be at least a one order of magnitude cost reduction. Additionally, if there is not a SUNSA, then it may take just as long to come to agreement on which UNSA to use for a common knowledge base of science as it would to actually build the base!
By encoding all of science into a universal syntax, in addition to tremendous pedagogical benefits, we could take analogies like this:
And make them actual concrete visualizations.
This one always gets me excited. I believe there is a deep connection between simplicity, justice, and fairness. I believe legal systems with unnecessary complexity are unfair, prima facie. While legal systems will always be human-made, rich in depth, nuanced, and evolving, we could shed the noise. I dream of a world where paychecks, receipts, and taxes are all written in the same language; where medical records can be cut and pasted; and where when I want to start a business I don't have to fill out forms in Delaware (the codesmell in that last one is so obvious!).
I believe a SUNSA would give us a way to measure complexity as neatly as we measure distance, and allow us to simplify laws to their signal, so that they serve all people, and we don't suffer from all that noise and inefficiency.
I love projects like godbolt.org, that let you jump up and down all the levels of abstraction in computing. I think there's an opportunity to do some incredible things if there is a SUNSA and the patterns in languages at different layers of computing all looked roughly the same (since they are roughly the same!).
Tree Notation might not be the SUNSA, but it has a few properties that I think a SUNSA would have.
I also will list one thing I don't think a SUNSA will have:
So those are a few things that I think we'll find in a SUNSA. Will we ever find a SUNSA?
I think a really good piece of evidence that we don't need a SUNSA is that we've seen STUPENDOUS SUCCESS WITH THOUSANDS OF SYNTAXES. The pace of progress in computing in the 1900's and 2000's has been tremendous, perhaps because of the Smörgåsbord of notations.
Who's to say that a SUNSA is needed? I guess my retort to that, is that although we do indeed have thousands of digital notations and languages, all of them, without exception, compile down to binary, so clearly having some low level universal notation has proved incredibly advantageous so far.
So that concludes my restatement of the Tree Notation idea in terms of a more generic SUNSA concept. Now let me continue on and mention briefly some developments in 2019.
Here I'll just write some bullet points of work done this past ~ year advancing the idea.
Here I just list some marks against this idea.
The next steps are more of the same. Keep attempting to solve problems by simplifying the encoding of them to their essence (which happens to be Tree Notation, according to the theory). Build tools to make that easier and leverage those encodings. This year the focus will likely be LSP, Grid Notation, and the PLDB.
Tree Notation has a secret weapon: Simplicity does not go out of style. Slippers today look just like slippers in Egypt 3,000 years ago.
My Tree Notation paper was my first ever attempt at writing a scientific paper and my understanding was that a good theory would make some refutable predictions. Here are the predictions I made in that paper and where they stand today.
While this prediction has held, a number of people have commented that it doesn't predict much, as the same could really be said about most languages. Anything you can represent in Tree Notation you can represent in many encodings like XML.
What I should have predicted is something along the lines of this: Tree Notation is the smallest set of syntax rules that can represent all abstractions. I think trying to formalize a prediction along those lines would be a worthwhile endeavor (possibly for the reason that in trying to do what I just said, I may learn that what I just said doesn't make sense).
This one has not come true yet. While I have made many public Tree Languages myself and many more private ones, and I have prototyped many with other people, the net utility of Tree Languages is not high enough that people are rushing to design these things. Many people have kicked the tires, but things are not good enough and there is a lack of demand.
On the supply side, it has turned out to be a bit harder to design useful Tree Languages than I expected. Not by 100x, but maybe by as much as 10x. I learned a lot of bad design patterns not to put in Tree Languages. I learned that bad tooling will force compromises in language design. For example, before I had syntax highlighting I relied on weird punctuation like "@" vs "#" prefixes for distinguishing types. I also learned a lot of patterns that seem to be useful in Tree Languages (like word suffixes for types). I learned good tooling leads to simpler and better languages.
This one has not come true yet. While there is a tremendous amount of what I would call "Tree Oriented Programming" going on, programmers are still talking about objects and message passing and are not viewing the world as trees.
This one is a fun one. Definitely has not come true yet. But I've got a new attack vector to try and potentially crack it.
After someone suggested it, I made a Long Bet predicting the rise of Tree Notation or a SUNSA within ten years of my initial Tree Notation post. Clearly I am far off from winning this bet at this point, as there are not any candidate languages even noted in TIOBE, never mind in the Top 10. However, IF I were to win the bet, I'd expect it wouldn't be until around 2025 that we'd see any candidate languages even appear on TIOBE's radar. In other words, absence of evidence is not evidence of absence.
As an aside, I really like the idea of Long Bet, and I'm hoping it may prompt someone to come up with a theoretical argument against a SUNSA that buries my ideas for good. Now, it would be very easy to take the opposing side of my bet with the simple argument that the idea of 7/10 TIOBE languages dropping by 2027 won't happen because such a shift has never happened so quickly. However, I'd probably reject that kind of challenge as non-constructive, unless it was accompanied by something like a detailed data-backed case with models showing potential speed limits on the adoption of any language (which would be a constructive contribution).
In 2019 I explored the idea of putting together a proper research group and a more formal organization around the idea.
I put the brakes on that for three reasons. The first is I just don't have a particularly keen interest in building an organization. I love to be part of a team, but I like to be more hands-on with the ideas and the work of the team rather than the meta aspect. I've gotten great help for this project at an informal level, so there's no rush to formalize it. The second reason is I don't have a great aptitude for team building, and I'm not ready yet to dedicate the time to that. I get excited by ideas and am good at quickly exploring new idea spaces, but being the captain who calmly guides the ship toward a known destination just isn't me right now. The third reason is that the idea remains too risky and ill-defined. If it's a good idea, growth will happen eventually, and there's no need to force it.
There is a loose confederation of folks I work on this idea with, but no formal organization with an office so far.
That's it for the recap of 2019! Tune in next year for a recap of 2020.
1 Regardless of whether or not Tree Notation turns out to be a good idea, as one part of the effort to prove/disprove it I've built a lot of big datasets on languages and notations, which seem to be useful for other people. Credit for that is due to a number of people who advised me back in 2017 to "learn to research properly". ⮐
2 Note that this means I am between 90-99% confident that Tree Notation is not a good idea. However, if it's a bad idea I am 40% confident the attempt to prove it a bad idea will have positive second-order effects. I am 50% confident that it will turn out I should have dropped this idea years ago, and it's a crackpot or Dunning–Kruger theory, and I'd be lying if I said I didn't recognize that as a highly probable scenario that has kept me up some nights. ⮐
3 When it was first coming together, it wasn't a "book" as we think of books today and authorship is very fuzzy, but that doesn't affect things for my purposes here. ⮐
March 2, 2020 — A paradigm change is coming to medical records. In this post I do some back-of-the-envelope math to explore the changes ahead, both qualitative and quantitative. I also attempt to answer the question no one is asking: in the future will someone's medical record stretch to the moon?
Medical records are generally stored with healthcare providers, and currently between 86% and 96% of providers use an EHR system.
Americans visit their healthcare providers an average of 4 times per year.
If you were to plot the cumulative medical data storage use for the average American patient, it would look something like the abstract chart below, going up in small increments during each visit to the doctor:
A decade ago, this chart would not only show the quantity of a patient's medical data stored at their providers, but also the quantity of all of the patient's medical data. Simply put: people did not generally keep their own medical records. But this has changed.
Now people own wearables like FitBits and Apple Watches. People use do-it-yourself services like 23andMe and uBiome. And in the not-too-distant future, the trend of ever-miniaturizing lab devices will enable advanced protocols at home. So now we have an additional line, reflecting the quantity of the patient's medical data from their own devices and services:
When you put the two together you can see the issue:
Patients will log far more medical data on their own than they do at their providers'.
It seems highly likely then that the possession of medical records will flip from providers to patients. I now have 120 million heart rate readings from my own devices, while I might have a few dozen from my providers. The gravity of the former will be harder and harder to overcome.
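That 120 million figure is roughly what continuous sampling produces. A quick sanity check, assuming one reading per second (the sampling rate and the four-year window are my assumptions, not a published device spec):

```python
# One heart rate reading per second, worn around the clock
readings_per_day = 24 * 60 * 60            # 86,400
years_worn = 4
total = readings_per_day * 365 * years_worn
print(f"{total:,}")  # 126,144,000 — the same order of magnitude as 120 million
```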
Patients won't literally be in possession of their records. While some nerdy patients—the kind of people who host their own email servers—might host their own open records, most will probably use a service provider. Prior attempts at creating personal health record systems, including some from the biggest companies around, did not catch on. But back then we didn't have the exponential increase in personal medical data, and the data gravity that creates, that we have today.
I'm noticing a number of startups innovating along this wave (and if you know of other exciting ones, please share!). However, it seems that Apple Health and FitBit are in strong positions to emerge as leading providers of PHR as-a-service due to data gravity.
Currently EHR providers like Epic design and sell their products for providers first. If patients start making the decisions about which PHR tool to use, product designers will have to consider the patient experience first.
I think this extends beyond products to standards. While there are some great groups working on open standards for medical records, none, as far as I'm aware, consider patients as a first class user of their grammars and definitions. I personally think that a standards system can be developed that is fully understandable by patients without compromising on the needs of experts.
One simple UX innovation in medical records that I love is BlueButton. Developed by the V.A. in 2010, BlueButton allows patients to download their entire medical records as a single file. While the grammar and parse-ability of BlueButton leave much to be desired, I think the concept of "your entire medical history in a single document" is a very elegant design.
As more and more different devices contribute to patients' medical documents, what will the documents look like and how big will they get? Will someone's medical records stretch to the moon?
I think the BlueButton concept provides a helpful mental model here: you can visualize any person's medical record as a single document. Let's call this document an MRD for "Medical Record Document".
Let's imagine a 30 year old in 2050. They'd have around 11,200 days' worth of data (I included some days for in utero records). Let's say there are 4 "buckets" of medical data in their MRD:
This is my back of the envelope math of how many megabytes of data might be in each of those buckets:
I am assuming that sensor development advances a lot in 40 years. I am assuming our patient of the future has:
By my estimate this person would log about 100GB of medical data per day, or about 1.1 petabytes of data over those 30 years. That would fit on roughly 1,000 of today's hard drives.
If you printed this record in a single doc, on 8.5 x 11 sheets of paper, in a human readable form—i.e. print the text, print the time series data as line charts, print the images, and print various types of output for the various protocols—the printed version would be about 138,000,000 pages which laid end-to-end would stretch 24,000 miles. If you printed it double-sided and stacked it like a book it would be 4.2 miles high.
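The paper math is easy to verify. A sketch in Python, assuming 11-inch sheets laid long-side along the line and roughly 0.004 inches of thickness per sheet (the thickness figure is my assumption):

```python
pages = 138_000_000
inches_per_mile = 12 * 5280  # 63,360

# End-to-end: one page after another
end_to_end_miles = pages * 11 / inches_per_mile
print(round(end_to_end_miles))  # ~23,958, i.e. about 24,000 miles

# Stacked like a book: double-sided printing halves the sheet count
sheets = pages / 2
stack_miles = sheets * 0.004 / inches_per_mile
print(round(stack_miles, 1))  # ~4.4 miles
```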
So for a 120 year old in 2140, their printed MRD would not reach the moon. Though it may make it halfway there.
March 2, 2020 — I expect the future of healthcare will be powered by consumer devices. Devices you wear. Devices you keep in your home. In the kitchen. In the bathroom. In the medicine cabinet.
These devices record medical data. Lots of data. They record data from macro signals like heart rate, body temperature, hydration, physical activity, oxygen levels, brain waves, voice activity. They also record data from micro signals like antibodies, RNA expression levels, metabolomics, microbiome, etc.
Most of the data is collected passively and regularly. But sometimes your Health app prompts you to take out the digital otoscope or digital stethoscope to examine an unusual condition more closely.
This data is not stored in a network at the hospital you don't have access to. Instead you can access all of that data as easily as you can access your email. You can see that data on your wrist, on your phone, on your tablet.
You can understand that data too. You can click and dive into the definitions of every term. You can see what is meant by macro concepts like "VO2 max" and micro concepts like "RBC Count" or "BRCA1 expression". Everything is explained precisely and thoroughly. Not only in words but in interactive visualizations that are customized to your body. The algorithms and models that turn the raw signals into higher level concepts are constantly improving.
When you get flu like symptoms, you don't alternate between anxiously Googling symptoms and scheduling doctor's appointments. Instead, your Health app alerts you that your signals have changed, it diagnoses your condition, shows you how your patterns compare to tens of thousands of people who have experienced similar changes, and makes recommendations about what to do next. You can even see forecasts of how your condition will change in the days ahead, and you can simulate how different treatment strategies might affect those outcomes.
You can not only reduce illness, but you can improve well-being too. You can see how your physical habits, social habits, eating habits, sleeping habits, correlate with hundreds of health and other signals.
Another benefit to all of this? Healthcare powered by consumer devices seems like it will be a lot cheaper.
February 25, 2020 — One of the questions I often come back to is this: how much of our collective wealth is inherited by our generations versus created by our generations?
I realized that the keys on the keyboard in front of me might make a good dataset to attack that problem. So I built a small little experiment to explore the history of the keys on my keyboard.
Painting with broad strokes, there were approximately five big waves of inventions that have left their mark on the keyboard:
I haven't made any traditional charts yet with this dataset, but you can roughly make out these waves in the interactive visualization by moving the slider around.
An interesting pattern that I never saw before is how the five waves above are roughly arranged in circles. The oldest symbols (letters) are close to the center, followed by the Hindu-Arabic Numbers, surrounded by the punctuation of the Enlightenment, surrounded by the keys of the keyboard, surrounded by the recent additions in the P.C. era. Again, painting with broad strokes, but I found that to be an interesting pattern.
All of these waves happened before my generation. Almost all of them before any generation alive today. The keyboard dataset provides strong evidence that most of our collective wealth is inherited.
I got this idea last week and couldn't get it out of my head. Yesterday I took a quick crack at it. I didn't have much time to spare, just enough to explore the big ideas.
I started by typing all the characters on my keyboard into a Tree Notation document. Then I dug up some years for a handful of the symbols.
Next I found the great Apple CSS keyboard. I stitched together the two and it seemed to be at least mildly interesting so I opted to continue.
I then fleshed out most of the dataset.
Finally I played around with a number of visualization effects. At first I thought heatmaps would work well, and tried a few variations on that, but wasn't happy with anything. I posted my work-in-progress to a few friends last night and called it a day. Today I switched to the "disappearing keys" visualization. That definitely felt like a better approach than the heatmap.
I made the thing as fun as I could given time constraints and then shipped.
February 21, 2020 — One of the most unpopular phrases I use is the phrase "Intellectual Slavery Laws".
I think perhaps the best term for copyright and patent laws is "Intellectual Monopoly Laws". When called by that name, it is obvious that there should be careful scrutiny of these kinds of laws.
However, the industry insists on using the false term "Intellectual Property Laws."
Instead of wasting my breath trying to pull them away from the property analogy, lately I've leaned into it and completed the analogy for them. So let me explain "Intellectual Slavery Laws".
As far as I can figure, you cannot have Property Rights and "Intellectual Property" rights. Having both is logically inconsistent. My computer is my property. However, by law there are billions of peaceful things I cannot do on my computer. Therefore, my computer is not my property.
Unless of course, the argument is that my computer is my property, but some copyright and patent holders have property rights over me, so their property rights allow them to restrict my freedom. I still get rights over my property. But other people get rights over me. Property Rights and Intellectual Slavery Laws can logically co-exist! Logical inconsistency solved!
We can have a logical debate about whether we should have an Intellectual Slavery System, Intellectual Slavery Laws, Intellectual Slavery Law Schools, Intellectual Slavery Lawyers, etc. But we cannot have a logical debate about Intellectual Property Laws. Because the term itself is not logical.
I know, having now used this term with a hundred different people, that this is not a popular thing to say. But I think someone needs to say it. Do we really think we are going to be an interplanetary species and solve the world's biggest challenges if we keep 99+% of the population in intellectual chains?
Preface: Richard Brhel of placepeep shared a great quote the other day on StartupSchool. He saw the quote on a poster years ago when he was helping a digitization effort in Ohio. I had never seen this exact quote before so wanted to transcribe it for the web.
February 9, 2020 — In 1851 Ezekiel G. Folsom incorporated Folsom's Mercantile College in Ohio. Folsom's taught bookkeeping, banking, and "railroading", amongst other things.
The image above is a screenshot of an 1850's poster promoting the college. The poster includes a motto (which I boxed in green) that I think is great guidance:
Integrity and Perseverance in Business ensure success
January 29, 2020 — In this long post I'm going to do a stupid thing and see what happens. Specifically I'm going to create 6.5 million files in a single folder and try to use Git and Sublime and other tools with that folder. All to explore this new thing I'm working on.
TreeBase is a new system I am working on for long-term, strongly-typed collaborative knowledge bases. The design of TreeBase is dumb. It's just a folder with a bunch of files encoded with Tree Notation. A row in a normal SQL table is roughly equivalent to a file in TreeBase. The filenames serve as IDs. Instead of using an optimized binary storage format it just uses plain text like UTF-8. Field names are stored alongside the values in every file. Instead of starting with a schema you can just start adding files and evolve your schema and types as you go.
For example, in this tiny demo TreeBase of the planets the file mars.planet
looks like this:
diameter 6794
surfaceGravity 4
yearsToOrbitSun 1.881
moons 2
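To make the format concrete, here's a minimal sketch of reading such a flat file into a plain object. This is my own illustration, not the actual jtree parser, and it only handles the flat "fieldName value" case shown above:

```javascript
// Minimal sketch (not the real jtree parser): parse a flat TreeBase file
// where each line is "fieldName value" into a plain object.
function parseFlatTreeFile(text) {
  const record = {}
  for (const line of text.split("\n")) {
    if (!line.trim()) continue // skip blank lines
    const spaceIndex = line.indexOf(" ")
    if (spaceIndex === -1) {
      record[line] = "" // a field with no value
      continue
    }
    record[line.slice(0, spaceIndex)] = line.slice(spaceIndex + 1)
  }
  return record
}

const mars = parseFlatTreeFile(`diameter 6794
surfaceGravity 4
yearsToOrbitSun 1.881
moons 2`)
// mars.diameter === "6794", mars.moons === "2"
```

Note everything comes back as a string; the Grammar file (covered later) is what layers types on top.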
TreeBase is composed of 3 key ingredients.
Ingredient 1: A folder
All that TreeBase requires is a file system (although in theory you could build an analog TreeBase on paper). This means you can use any of your system's existing file-editing tools to edit your database.
Ingredient 2: Git
Instead of having code to implement any sort of versioning or metadata tracking, you just use Git. Edit your files and use Git for history, branching, collaboration, etc. Because Tree Notation is a line- and word-based syntax, it meshes really well with Git workflows.
Ingredient 3: Tree Notation
Both schemas and data are written in Tree Notation, a new, very simple syntax for encoding strongly typed data. It's extensible and plays well with Git.
Probably hundreds of billions of dollars have gone into designing robust database systems like SQL Server, Oracle, PostgreSQL, MySQL, MongoDB, SQLite and so forth. These things run the world. They are incredibly robust and battle-hardened. Everything that can happen is thought of and planned for, and everything that can go wrong has gone wrong (and been learned from). These databases can handle trillions of rows, can conduct complex real-time transactions, and survive disasters of all sorts. They use sophisticated binary formats and are tuned for specific file systems. Thousands of people have gotten their PhDs working on database technology.
TreeBase doesn't have any of that. TreeBase is stupid. It's just a bunch of files in a folder.
You might be asking yourself "Why use TreeBase at all when great databases exist?". To further put the stupidity of the current TreeBase design into perspective, the Largest Git Repo on the Planet is Windows which has 3.5 million files. I'm going to try and create a repo with 6.5 million files on my laptop.
Even if you think TreeBase is silly aren't you curious what happens when I try to put 6.5 million files into one folder? I kind of am. If you want an explanation of why TreeBase, I'll get to that near the end of this post.
But first...
Here again is a demo TreeBase with only 8 files.
The biggest TreeBase I work with has on the order of 10,000 files. Some files have thousands of lines, some just a handful.
While TreeBase has been great at this small scale, a question I've been asked, and have wondered myself, is what happens when a TreeBase gets too big?
I'm about to find out, and I'll document the whole thing.
Every time something bad happens I'll include a 💣.
TreeBase is meant for knowledge bases. So all TreeBases center around a topic.
To test TreeBase at a big scale I wanted something realistic: a big structured database that thousands of people have contributed to and that's been around for a while, so I could see what it would look like as a TreeBase.
IMDB is just such a database, and amazingly it makes a lot of its data available for download. So movies will be the topic and the IMDB dataset will be my test case.
First I grabbed the data. I downloaded the 7 files from IMDB to my laptop. After unzipping, they were about 7GB.
One file, the 500MB title.basics.tsv, contained basic data for all the movies and shows in the database.
Here's what that file looks like with head -5 title.basics.tsv:
tconst | titleType | primaryTitle | originalTitle | isAdult | startYear | endYear | runtimeMinutes | genres |
---|---|---|---|---|---|---|---|---|
tt0000001 | short | Carmencita | Carmencita | 0 | 1894 | \N | 1 | Documentary,Short |
tt0000002 | short | Le clown et ses chiens | Le clown et ses chiens | 0 | 1892 | \N | 5 | Animation,Short |
tt0000003 | short | Pauvre Pierrot | Pauvre Pierrot | 0 | 1892 | \N | 4 | Animation,Comedy,Romance |
tt0000004 | short | Un bon bock | Un bon bock | 0 | 1892 | \N | \N | Animation,Short |
This looks like a good candidate for TreeBase. With this TSV I can create a file for each movie. I don't need the other 6 files for this experiment, though if this was a real project I'd like to merge in that data as well (in that case I'd probably create a second TreeBase for the names in the IMDB dataset).
Doing a simple line count with wc -l title.basics.tsv I learn that there are around 6.5M titles in the file. With the current implementation of TreeBase this would be 6.5M files in 1 folder. That should handily break things.
The TreeBase design calls for me to create 1 file for every row in that TSV file. To again stress how dumb this design is, keep in mind that a 500MB TSV with 6.5M rows can be parsed and analyzed with tools like R or Python in seconds. You could even load the thing near instantly into a SQLite database and use any SQL tool to explore the dataset. Instead I am about to spend hours, perhaps days, turning it into a TreeBase.
What will happen when I split 1 file into 6.5 million files? Well, it's clear I am going to waste some space.
A file doesn't just take up space for its contents: it also has metadata. Every file contains metadata like permissions, modification time, etc. That metadata must take up some space, right? If I were to create 6.5M new files, how much extra space would that take up?
My MacBook uses APFS, which can hold up to 9,000,000,000,000,000,000 files. I can't easily find hard numbers on how much metadata one file takes up, but I can at least start with a ballpark estimate.
I'll start by considering the space filenames will take up.
In TreeBase filenames are composed of a permalink and a file extension. The file extension makes it easier for editors to understand the schema of a file. In the planets TreeBase above, the files all had the planet extension, and there is a planet.grammar file that contains information for tools like syntax highlighters and type checkers. For my new IMDB TreeBase there will be a similar title.grammar file and each file will have the ".title" extension. So that is 6 bytes per file, or merely ~39MB extra for the file extensions across 6.5M files.
Next, the body of each filename will be a readable ID. TreeBase has meaningful filenames to work well with Git and existing file tools. It keeps things simple. For this TreeBase, I will make the ID from the primaryTitle column in the dataset. Let's see how much space that will take.
I'll try xsv select primaryTitle title.basics.tsv | wc.
💣 I got this error:
CSV error: record 1102213 (line: 1102214, byte: 91470022): found record with 8 fields, but the previous record has 9 fields
1102213 3564906 21815916
XSV didn't like something in that file. Instead of getting bogged down, I'll just work around it.
I'll build a subset from the first 1M rows with head -n 1000000 title.basics.tsv > 1m.title.basics.tsv. Now I will compute against that subset with xsv select primaryTitle 1m.title.basics.tsv | wc. I get 19751733, so an average of 20 characters per title.
I'll combine that with the space for the file extension and round up to 30 extra bytes of filename information for each of the 6.5 million titles. So about 200MB of extra data is required to split this 500MB file into filenames. Even though that's a 40% increase, 200MB is dirt cheap, so that doesn't seem so bad.
You may think that I could save a roughly equivalent amount by dropping the primaryTitle field. However, even though my filenames now contain information from the title, my permalink schema will generally distort the title so I need to preserve it in each file and won't get savings there. I use a more restrictive character set in the permalink schema than the file contents just to make things like URLs easier.
Again you might ask, why not just use an integer for the permalink? You could, but that's not the TreeBase way. Human-readable permalinks play nice with tools like text editors, URLs, and Git. TreeBase is about leveraging software that already works well with file systems. If you use meaningless IDs for filenames you do away with one of the very useful features of the TreeBase system.
But I won't just waste space in metadata. I'm also going to add duplicate data to the contents of each file. That's because I won't be storing just values like 1999; I'll also be repeating column names in each file, as in startYear 1999.
How much space will this take up? The titles file has 9 columns, and using head -n 1 1m.title.basics.tsv | wc I see the column names add up to 92 bytes. I'll round that up to 100, multiply by 6.5M, and that adds up to about 65,000,000 duplicate words and 650MB. In other words, the space requirements roughly double (of course, assuming no compression by the file system under the hood).
You might be wondering why not just drop the column names from each file? Again, it's just not the TreeBase way. By including the column names, each file is self-documenting. I can open up any file with a text editor and easily change it.
So to recap: splitting this 1 TSV file into 6.5 million files is going to take up 2-3x more space due to metadata and repetition of column names.
Because this is text data, that's actually not so bad. I don't foresee problems arising from wasted disk space.
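The back-of-the-envelope math behind that 2-3x figure can be captured in a few lines (using the rough numbers measured above):

```javascript
// Back-of-the-envelope overhead estimate for splitting the TSV into files.
const rows = 6.5e6
const tsvBytes = 500e6      // original title.basics.tsv
const filenameBytes = 30    // ~20-char permalink + ".title" extension, rounded up
const columnNameBytes = 100 // ~92 bytes of repeated field names per file, rounded up

const filenameOverhead = rows * filenameBytes // ~195MB
const columnOverhead = rows * columnNameBytes // 650MB
const total = tsvBytes + filenameOverhead + columnOverhead

console.log((total / tsvBytes).toFixed(1)) // prints 2.7
```

So call it roughly 2-3x the original size, before any per-file metadata the file system adds on top.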
Before I get to the fun part, I'm going to stop for a second and try and predict what the problems are going to be.
Again, in this experiment I'm going to build and attempt to work with a TreeBase roughly 1,000 times larger than any I've worked with before. A 3 order of magnitude jump.
Disk space won't be a problem. But are the software tools I work with on a day-to-day basis designed to handle millions of files in a single folder? How will they hold up?
How will ls and grep hold up in a folder with 6.5M files? How slow will git status be? What about git add and git commit?
Since I am going to make a 3 order of magnitude jump, I figured it would be best to make those jumps one at a time.
Actually, to be smart, I will create 5 TreeBases and make 4 jumps: 1 small 60-file TreeBase for sanity checks, then a 6k base that I'll grow by 10x three times, and see how things hold up.
First, I'll create 5 folders: mkdir 60; mkdir 6k; mkdir 60k; mkdir 600k; mkdir 6m
Now I'll create 4 smaller subsets for the smaller bases. For the final 6.5M base I'll just use the original file.
head -n 60 title.basics.tsv > 60/titles.tsv
head -n 6000 title.basics.tsv > 6k/titles.tsv
head -n 60000 title.basics.tsv > 60k/titles.tsv
head -n 600000 title.basics.tsv > 600k/titles.tsv
Now I'll write a script to turn those TSV rows into TreeBase files.
#! /usr/local/bin/node --use_strict
const { jtree } = require("jtree")
const { Disk } = require("jtree/products/Disk.node.js")

const folder = "600k"
const path = `${__dirname}/../imdb/${folder}.titles.tsv`
const tree = jtree.TreeNode.fromTsv(Disk.read(path).trim())
const permalinkSet = new Set() // filenames used so far, to keep permalinks unique

tree.forEach(node => {
  let permalink = jtree.Utils.stringToPermalink(node.get("primaryTitle"))
  let counter = ""
  let dash = ""
  // If the permalink is taken, try title-2, title-3, ... until one is free:
  while (permalinkSet.has(permalink + dash + counter)) {
    dash = "-"
    counter = counter ? counter + 1 : 2
  }
  const finalPermalink = permalink + dash + counter
  permalinkSet.add(finalPermalink)
  // Delete null values (IMDB encodes nulls as \N):
  node.forEach(field => {
    if (field.getContent() === "\\N") field.destroy()
  })
  // Drop originalTitle when it duplicates primaryTitle:
  if (node.get("originalTitle") === node.get("primaryTitle")) node.getNode("originalTitle").destroy()
  Disk.write(`${__dirname}/../imdb/${folder}/${finalPermalink}.title`, node.childrenToString())
})
The script iterates over each node and creates a file for each row in the TSV.
This script required a few design decisions. For permalink uniqueness, I simply keep a set of titles and number them if a name comes up multiple times. There's also the question of what to do with nulls. IMDB sets the value to \N. Generally the TreeBase way is to not include the field in question, so I filtered out null values. For cases where primaryTitle === originalTitle, I stripped the latter. The genres field is a CSV array; I'd like it to follow the TreeBase convention of an SSV, but I don't know all the possible values without iterating, so I'll just skip that for now.
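For illustration, here's a hypothetical re-implementation of that permalink logic. The real one lives in jtree.Utils.stringToPermalink and may differ in details; this sketch just shows the shape of the idea:

```javascript
// Hypothetical sketch of permalink generation (not jtree's actual code):
// lowercase, collapse anything outside a safe URL character set into
// dashes, and de-duplicate with a numeric suffix.
function toPermalink(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse unsafe runs into single dashes
    .replace(/^-+|-+$/g, "")     // trim leading/trailing dashes
}

const seen = new Set() // filenames already handed out
function uniquePermalink(title) {
  const base = toPermalink(title)
  let candidate = base
  let counter = 2
  while (seen.has(candidate)) {
    candidate = `${base}-${counter}`
    counter++
  }
  seen.add(candidate)
  return candidate
}
```

With this, two movies both titled "Carmencita" would get the filenames carmencita and carmencita-2.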
Here are the results of the script for the small 60 file TreeBase:
The Grammar file adds some intelligence to a TreeBase. You can think of it as the schema for your base. TreeBase scripts can read those Grammar files and then do things like provide type checking or syntax highlighting.
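To make that concrete, here is a toy checker, my own sketch rather than the jtree implementation, that validates a title file against a hand-written field-to-cell-type table like the one a Grammar file encodes:

```javascript
// Toy sketch of Grammar-style type checking (not the jtree implementation).
const cellTypes = {
  bit: value => value === "0" || value === "1",
  int: value => /^\d+$/.test(value),
  any: () => true
}

// Hand-written stand-in for what would be parsed out of title.grammar:
const schema = {
  tconst: "any",
  titleType: "any",
  primaryTitle: "any",
  originalTitle: "any",
  isAdult: "bit",
  startYear: "int",
  runtimeMinutes: "int",
  genres: "any"
}

function checkTitleFile(text) {
  const errors = []
  text.trim().split("\n").forEach((line, i) => {
    const [field, ...rest] = line.split(" ")
    const type = schema[field]
    if (!type) errors.push(`line ${i + 1}: unknown field ${field}`)
    else if (!cellTypes[type](rest.join(" ")))
      errors.push(`line ${i + 1}: ${field} expects ${type}`)
  })
  return errors
}
```

Running checkTitleFile over every file in the base is essentially what the "scan the base for errors" task later in this post does.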
Now that we have a sample title file, I'm going to take a first pass at the grammar file for our TreeBase. I copied the file the-photographical-congress-arrives-in-lyon.title and pasted it into the right side of the Tree Language Designer. Then I clicked Infer Prefix Grammar.
That gave me a decent starting point for the grammar:
inferredLanguageNode
root
inScope tconstNode titleTypeNode primaryTitleNode originalTitleNode isAdultNode startYearNode runtimeMinutesNode genresNode
keywordCell
anyCell
bitCell
intCell
tconstNode
crux tconst
cells keywordCell anyCell
titleTypeNode
crux titleType
cells keywordCell anyCell
primaryTitleNode
crux primaryTitle
cells keywordCell anyCell anyCell anyCell anyCell anyCell anyCell
originalTitleNode
crux originalTitle
cells keywordCell anyCell anyCell anyCell anyCell anyCell anyCell anyCell anyCell
isAdultNode
crux isAdult
cells keywordCell bitCell
startYearNode
crux startYear
cells keywordCell intCell
runtimeMinutesNode
crux runtimeMinutes
cells keywordCell bitCell
genresNode
crux genres
cells keywordCell anyCell
The generated grammar needed a little work. I renamed the root node and added catchAlls and a base "abstractFactType". The Grammar language and tooling for TreeBase is very new, so all that should improve as time goes on.
My title.grammar file now looks like this:
titleNode
root
pattern \.title$
inScope abstractFactNode
keywordCell
anyCell
bitCell
intCell
abstractFactNode
abstract
cells keywordCell anyCell
tconstNode
crux tconst
extends abstractFactNode
titleTypeNode
crux titleType
extends abstractFactNode
primaryTitleNode
crux primaryTitle
extends abstractFactNode
catchAllCellType anyCell
originalTitleNode
crux originalTitle
extends abstractFactNode
catchAllCellType anyCell
isAdultNode
crux isAdult
cells keywordCell bitCell
extends abstractFactNode
startYearNode
crux startYear
cells keywordCell intCell
extends abstractFactNode
runtimeMinutesNode
crux runtimeMinutes
cells keywordCell intCell
extends abstractFactNode
genresNode
crux genres
cells keywordCell anyCell
extends abstractFactNode
Next I copied that file into the 60 folder with cp /Users/breck/imdb/title.grammar 60/. I have the jtree package installed on my local machine, so I registered this new language with the command jtree register /Users/breck/imdb/title.grammar. Finally, I generated a Sublime syntax file for these title files with jtree sublime title #pathToMySublimePluginDir.
Now I have rudimentary syntax highlighting for these new title files:
Notice the syntax highlighting is a little broken. The Sublime syntax generation still needs some work.
Anyway, now we've got the basics done. We have a script for turning our CSV rows into Tree Notation files and we have a basic schema/grammar for our new TreeBase.
Let's get started with the bigger tests now.
I'm expecting this to be an easy one. I update my script to target the 6k files and run it with /Users/breck/imdb/build.js. A little alarmingly, it takes a couple of seconds to run:
real 0m3.144s
user 0m1.203s
sys 0m1.646s
The main script is going to iterate over 1,000x as many items so if this rate holds up it would take 50 minutes to generate the 6M TreeBase!
I do have some optimization ideas in mind, but for now let's explore the results.
First, let me build a catalog of typical tasks that I do with TreeBase that I will try to repeat with the 6k, 60k, 600k, and 6.5M TreeBases.
I'll just list them in Tree Notation:
task ls
category bash
description
task open sublime
category sublime
description Start sublime in the TreeBase folder
task sublime responsiveness
category sublime
description scroll and click around files in the treebase folder and see how responsive it feels.
task sublime search
category sublime
description find all movies with the query "titleType movie"
task sublime regex search
category sublime
description find all comedy movies with the regex query "genres .*Comedy.*"
task open finder
category finder
description open the folder in finder and browse around
task git init
category git
description init git for the treebase
task git first status
category git
description see git status
task git first add
category git
description first git add for the treebase
task git first commit
category git
description first git commit
task sublime editing
category sublime
description edit some file
task git status
category git
description git status when there is a change
task git add
category git
description add the change above
task git commit
category git
description commit the change
task github push
category github
description push the treebase to github
task treebase start
category treebase
description how long will it take to start treebase
task treebase error check
category treebase
description how long will it take to scan the base for errors.
💣 Before I get to the results, let me note I had 2 bugs. First, I needed to update my title.grammar file by adding a cells fileNameCell to the root node and also adding a fileNameCell line. Second, my strategy above of putting the CSV file for each TreeBase into the same folder as the TreeBase was not ideal, as Sublime Text would open that file as well. So I moved each file up a level with mv titles.tsv ../6k.titles.tsv.
The results for 6k are below.
category | description | result |
---|---|---|
bash | ls | instant |
sublime | Start sublime in the TreeBase folder | instant |
sublime | scroll and click around files in the treebase folder and see how responsive it feels. | nearInstant |
sublime | find all movies with the query "titleType movie" | nearInstant |
sublime | find all comedy movies with the regex query "genres .*Comedy.*" | nearInstant |
finder | open and browse | instant |
git | init git for the treebase | instant |
git | see git status | instant |
git | first git add for the treebase | aFewSeconds |
git | first git commit | instant |
sublime | edit some file | instant |
git | git status when there is a change | instant |
git | add the change above | instant |
git | commit the change | instant |
github | push the treebase to github | ~10 seconds |
treebase | how long will it take to start treebase | instant |
treebase | how long will it take to scan the base for errors. | nearInstant |
So 6k worked without a hitch. Not surprising as this is in the ballpark of where I normally operate with TreeBases.
Now for the first of three 10x jumps.
💣 This markdown file that I'm writing was in the parent folder of the 60k directory and Sublime text seemed to be slowing a bit, so I closed Sublime and created a new unrelated folder to hold this writeup separate from the TreeBase folders.
The build script for the 60k TreeBase took 30 seconds or so, as expected. I can optimize for that later.
I now repeat the tasks from above to see how things are holding up.
category | description | result |
---|---|---|
bash | ls | aFewSeconds |
sublime | Start sublime in the TreeBase folder | aFewSeconds with Beachball |
sublime | scroll and click around files in the treebase folder and see how responsive it feels. | instant |
sublime | find all movies with the query "titleType movie" | ~20 seconds with beachball |
sublime | find all comedy movies with the regex query "genres .*Comedy.*" | ~20 seconds with beachball |
git | init git for the treebase | instant |
finder | open and browse | 6 seconds |
git | see git status | nearInstant |
git | first git add for the treebase | 1 minute |
git | first git commit | 10 seconds |
sublime | edit some file | instant |
git | git status when there is a change | instant |
git | add the change above | instant |
git | commit the change | instant |
github | push the treebase to github | ~10 seconds |
treebase | how long will it take to start treebase | ~10 seconds |
treebase | how long will it take to scan the base for errors. | ~5 seconds |
Uh oh. Already I am noticing some scaling delays with a few of these tasks.
💣 The first git add took about 1 minute. I used to know the internals of Git well, but that was a decade ago and my knowledge is rusty.
I will now look some stuff up. Could Git be creating 1 file for each file in my TreeBase? I found this post from someone who created a Git repo with 1.7M files which should turn out to contain useful information. From that post it looks like you can indeed expect 1 file for Git for each file in the project.
The first git commit took about 10 seconds. Why? Git printed a message about autopacking. It seems Git will combine lots of small files into packs (perhaps in bundles of 6,700, though I haven't dug into this) to speed things up. Makes sense.
💣 I forgot to mention, while doing the tasks for the 60k TreeBase, my computer fan kicked on. A brief look at Activity Monitor showed a number of mdworker_shared processes using single-digit CPU percentages each, which appears to be some OS-level indexing. That hints that a bigger TreeBase might require at least some basic OS/file system configuration.
Besides the delays with git, everything else seemed to remain fast. The 60k TreeBase choked a little more than I'd like, but it seems that with a few tweaks things could remain screaming fast.
Let's move on to the first real challenge.
💣 The first problem I hit immediately: my build.js script is not efficient. I hit a v8 out-of-memory error. I could solve this by either 1) streaming the TSV one row at a time or 2) cleaning up the unoptimized jtree library to handle bigger data better. I chose to spend a few minutes and go with option 1.
💣 It appears the first build script started writing files to the 600k directory before it failed. I had to rm -rf 600k/, and that took a surprisingly long time, probably a minute or so. Something to keep an eye on.
💣 I updated my build script to use streams. Unfortunately the streaming csv parser I switched to choked on line 32546. Inspecting that vicinity it was hard to detect what it was breaking on. Before diving in I figured I'd try a different library.
💣 The new library seemed to be working but it was taking a while so I added some instrumentation to the script. From those logs the new script seems to generate about 1.5k files per second. So should take about 6 minutes for all 600k. For the 6.5M files, that would grow to an hour, so perhaps there's more optimization work to be done here.
💣 Unfortunately the script exited early with:
Error: ENAMETOOLONG: name too long, open '/Users/breck/imdbPost/../imdb/600k/mord-an-lottomillionr-karl-hinrich-charly-l.sexualdelikt-an-carola-b.bankangestellter-zweimal-vom-selben-bankruber-berfallenmord-an-lottomillionr-karl-hinrich-charly-l.sexualdelikt-an-carola-b.bankangestellter-zweimal-vom-selben-bankruber-berfallen01985nncrimenews.title'
Turns out the Apple File System has a filename size limit of 255 UTF-8 characters so this error is understandable. However, inspecting the filename shows that for some reason the permalink was generated by combining the original title with the primary title. Sounds like a bug.
I cd into the 600k directory to see what's going on.
💣 Unfortunately ls hangs. ls -f -1 -U seems to go faster.
The titles look correct. I'm not sure why the script got hung up on that one entry. For now I'll just wrap the function call in a Try/Catch and press on. I should probably make this script resumable but will skip that for now.
Rerunning the script...it worked! That line seemed to be the only problematic line.
We now have our 600k TreeBase.
category | description | result |
---|---|---|
bash | ls | ~30 seconds |
sublime | Start sublime in the TreeBase folder | failed |
sublime | scroll and click around files in the treebase folder and see how responsive it feels. | X |
sublime | find all movies with the query "titleType movie" | X |
sublime | find all comedy movies with the regex query "genres .*Comedy.*" | X |
finder | open and browse | 3 minutes |
git | init git for the treebase | nearInstant |
git | see git status | 6s |
git | first git add for the treebase | 40 minutes |
git | first git commit | 10 minutes |
sublime | edit some file | X |
git | git status when there is a change | instant |
git | add the change above | instant |
git | commit the change | instant |
github | push the treebase to github | ~10 seconds |
treebase | how long will it take to start treebase | ~10 seconds |
treebase | how long will it take to scan the base for errors. | ~5 seconds |
💣 ls is now nearly unusable. ls -f -1 -U takes about 30 seconds. A straight-up ls takes about 45 seconds.
💣 Sublime Text failed to open. After 10 minutes of 100% CPU usage and beachball'ing I force quit the program. I tried twice to be sure with the same result.
💣 mdworker_shared again kept my laptop running hot. I found a way of potentially disabling macOS Spotlight indexing of the IMDB folder.
💣 Opening the 600k folder in Apple's Finder gave me a loading screen for about 3 minutes. At least it eventually came up:
Now, how about Git?
💣 The first git add . took 40 minutes! Yikes.
real 39m30.215s
user 1m19.968s
sys 13m49.157s
💣 git status after the initial git add took about a minute.
💣 The first git commit after the git add took about 10 minutes.
GitHub turns out to be a real champ. Even with 600k files, the first git push took less than 30 seconds.
real 0m22.406s
user 0m2.657s
sys 0m1.724s
The 600k repo on GitHub comes up near instantly. GitHub just shows the first 1k out of 600k files which I think is a good compromise, and far better than a multiple minute loading screen.
💣 Sadly there doesn't seem to be any pagination for this situation on GitHub, so not sure how to view the rest of the directory contents.
I can pull up a file quickly on GitHub, like the entry for License to Kill.
How about editing files locally? Sublime is no use, so I'll use vim. Because ls is so slow, I'll find the file I want to edit on GitHub. Of course, because I can't find pagination on GitHub, I'll be limited to editing one of the first 1k files. I'll use just that License to Kill entry.
So the command I use is vim 007-licence-to-kill.title. Editing that file is simple enough, though I wish we had support for Tree Notation in vim to get syntax highlighting and such.
💣 Now I do a git add . and again it takes a while. What I now realize is that my fancy command prompt runs a git status with every command. So let's disable that.
After going in and cleaning up my shell (including switching to zsh) I've got a bit more performance back on the command line.
💣 But just a bit. A git status still takes about 23 seconds! Even with the -uno option it takes about 15 seconds. This is with 1 modified file.
Now adding this 1 file seems tricky. Most of the time I do a git status, see that I want to add everything, and so do a git add .
💣 But I tried git add . in the 600k TreeBase, and after 100 seconds I killed the job. Instead I resorted to git add 007-licence-to-kill.title, which worked pretty much instantly.
💣 git commit for this 1 change took about 20 seconds. Not too bad, but much worse than normal.
git push was just a few seconds.
I was able to see the change on GitHub instantly. Editing that file on GitHub and committing was a breeze. Looking at the change history and blame on GitHub was near instant.
Git blame locally was also just a couple of seconds.
So TreeBase struggles at the 600k level. You cannot just use TreeBase past the 100k level without preparing your system for it. Issues arise with GUIs like Finder and Sublime, background file system processes, shells, Git, basic bash utilities, and so forth.
I haven't looked yet into RAM based file systems or how to setup my system to make this use case work well, but for now, out of the box, I cannot recommend TreeBase for databases of more than 100,000 entities.
Is there even a point now to try 6.5M? Arguably no.
However, I've come this far! No turning back now.
To recap what I am doing here: I am taking a single 6.5 million row 500MB TSV file that could easily be parsed into a SQLite or other battle-hardened database and instead turning it into a monstrous 6.5 million file TreeBase backed by Git and writing it to my hard disk with no special configuration.
By the way, I forgot to mention my system specs for the record. I'm doing this on a MacBook Air running macOS Catalina on a 2.2Ghz Dual-core i7 with 8GB of 1600 Mhz DDR3 Ram with a 500GB Apple SSD using APFS. This is the last MacBook with a great keyboard, so I really hope it doesn't break.
Okay, back to the task at hand.
I need to generate the 6.5M files in a single directory. The 600k TreeBase took 6 minutes to generate, so if that scales linearly, 6.5M should take an hour. The first git add for 600k took 40 minutes, so for 6.5M it could take 6 hours. The first git commit for 600k took 10 minutes, so potentially 1.5 hours for 6.5M. So this little operation might take about 10 hours.
I'll stitch these operations together into a shell script and run it overnight (I'll make sure to check the batteries in my smoke detectors first).
Here's the script to run the whole routine:
time node buildStream.js
time cd ~/imdb/6m/
time git add .
time git commit -m "initial commit"
time git push
Whenever running a long script, it's smart to test it with a smaller dataset first. I successfully tested this script with the 6k file dataset. Everything worked. Everything should be all set for the final test.
(Later the next day...)
It worked!!! I now have a TreeBase with over 6 million files in a single directory. Well, a few things worked, most things did not.
category | description | result
---|---|---
bash | ls | X
sublime | start Sublime in the TreeBase folder | X
sublime | scroll and click around files in the TreeBase folder to see how responsive it feels | X
sublime | find all movies with the query "titleType movie" | X
sublime | find all comedy movies with the regex query "genres .*Comedy.*" | X
finder | open and browse | X
git | init git for the TreeBase | near instant
git | first git add for the TreeBase | 12 hours
git | first git commit | 5 hours
sublime | edit some file | X
git | git status when there is a change | X
git | add the change above | X
git | commit the change | X
github | push the TreeBase to GitHub | X
treebase | how long it takes to start TreeBase | X
treebase | how long it takes to scan the base for errors | X
💣 There was a slight hiccup in my script where somehow V8 again ran out of memory. But it happened only after creating 6,340,000 files, which is good enough for my purposes.
💣 But boy was this slow! The creation of the 6M+ files took 3 hours and 20 minutes.
💣 The first git add . took a whopping 12 hours!
💣 The first git commit took 5 hours!
💣 A few times when I checked on the machine it was running hot. Not sure if from CPU or Disk or a combination.
💣 I eventually quit git push. It quickly completed "Counting objects: 6350437, done." but then nothing happened except lots of CPU usage for hours.
Although most programs failed, I was at least able to successfully create this monstrosity and navigate the folder.
The experiment has completed. I took a perfectly usable 6.5M row TSV file and transformed it into a beast that brings some of the most well-known programs out there to their knees.
💣 NOTE: I do not recommend trying this at home. My laptop became lava hot at points. Who knows what wear and tear I added to my hard disk.
So that is the end of the experiment. Can you build a Git-backed TreeBase with 6.5M files in a single folder? Yes. Should you? No. Most of your tools won't work or will be far too slow. There's infrastructure and design work to be done.
I was actually pleasantly surprised by the results of this early test. I was confident it was going to fail but I wasn't sure exactly how it would fail and at what scale. Now I have a better idea of that. TreeBase currently sucks at the 100k level.
I also now know that the hardware for this type of system feels ready and it's just parts of some software systems that need to be adapted to handle folders with lots of files. I think those software improvements across the stack will be made and this dumb thing could indeed scale.
Now, my focus at the moment is not on big TreeBases. My focus is on making the experience of working with little TreeBases great. I want to help get things like Language Server Protocol going for TreeBases and a Content Management System backed by TreeBase.
But I now can envision how, once the tiny TreeBase experience is nailed, you should be able to use this for bigger tasks. The infrastructure is there to make it feasible with just a few adjustments. There are some config tweaks that can be made, more in-memory approaches, and some straightforward algorithmic additions to make to a few pieces of software. I also have had some fun conversations where people have suggested good sharding strategies that may prove useful without changing the simplicity of the system.
That being said, it would be fun to do this experiment again but this time try and make it work. Once that's a success, it would be fun to try and scale it another 100x, and try to build a TreeBase for something like the 180M paper Semantic Scholar dataset.
Okay, you might be wondering what is the point of this system? Specifically, why use the file system and why use Tree Notation?
1) About 30m programmers use approximately 100 to 500 general purpose programming languages. All of these actively used general purpose languages have battle tested APIs for interacting with file systems. They don't all have interfaces to every database program. Any programmer, no matter what language they use, without having to learn a new protocol, language, or package, could write code to interact with a TreeBase using knowledge they already have. Almost every programmer uses Git now as well, so they'd be familiar with how TreeBase change control works.
2) Over one billion more casual users are familiar with using their operating system tools for interacting with Files (like Explorer and Finder). Wouldn't it be cool if they could use tools they already know to interact with structured data?
Wouldn't it be cool if we could combine sophisticated type checking, querying, and analytical capabilities of databases with the simplicity of files? Programmers can easily build GUIs on top of TreeBase that have any and all of the functionality of traditional database-backed programs but have the additional advantage of an extremely well-known access vector to their information.
People have been predicting the death of files but these predictions are wrong. Even Apple recently backtracked and added a Files interface to iOS. Files and folders aren't going anywhere. It's a very simple and useful design pattern that works in the analog and digital realms. Files have been around for at least 4,500 years, and my guess is they will be around for another 5,000 years, if the earth doesn't blow up. Far from dying, file systems will keep getting better and better.
People have recognized the value of semantic, strongly typed content for a long time. Databases have been strongly typed since the beginning of databases. Strongly typed programming languages have dominated the software world since the beginning of software.
People have been attempting to build a system for collaborative semantic content for decades. XML, RDF, OWL2, JSON-LD, Schema.org—these are all great projects. I just think they can be simplified and I think one strong refinement is Tree Notation.
I imagine a world where you can effortlessly pass TreeBases around and combine them in interesting ways. As a kid I used to collect baseball cards. I think it would be cool if you could just as easily pass around "cards" like a "TreeBase of all the World's Medicines" or a "TreeBase of all the world's academic papers" or a "TreeBase of all the world's chemical compounds" and because I know how to work with one TreeBase I could get value out of any of these TreeBases. Unlike books or weakly typed content like Wikipedia, TreeBases are computable. They are like specialized little brains that you can build smart things out of.
So I think this could be pretty cool. As dumb as it is.
I would love to hear your thoughts.
January 23, 2020 — People make biased claims all the time. A decent response used to be "citation needed". But we should demand more. Anytime someone makes a claim that seems biased, call them out with: Dataset needed.
Whether it's an academic paper, news article, blog post, tweet, comment or ad, linking to analyses is not enough. If someone stops at that, demand a link to a clean dataset supporting the author's position. If they can't deliver, they should retract.
Of course, most sources don't currently publish their datasets. You cannot trust claims from any person or organization without an easily accessible dataset. In fact, it's probably safe to assume when someone shares a conclusion without the accompanying dataset that they are distorting reality for their own benefit.
Encourage authors to link to and/or publish their datasets. You can't say dataset needed enough. It is valuable, constructive feedback.
Link to the dataset. If you want to include a conclusion, provide a deep link to the relevant query of the dataset. Do not repeat conclusions that don't have an accompanying dataset. If people can't verify what you say, don't say it.
Many teams are creating tools that make it easy to deep link to queries over open datasets, such as Observable, Our World in Data, Google Big Query, Wolfram Data Repository, Tableau Public, IDL, Jupyter, Awesome Public Datasets, USAFacts, Google Dataset Search, and many more.
I remember being a high school student and getting graded on the dataset notebooks we made in lab. Writing clean data should be widely taught in school, and there's an army of potential workers who could help us create more public, deep-linkable datasets.
Thanks to DL for helping me refine my thinking from this earlier post.
January 20, 2020 — In this post I briefly describe eleven threads in languages and programming. Then I try to connect them together to make some predictions about the future of knowledge encoding.
This might be hard to follow unless you have experience working with types, whether that be types in programming languages, or types in databases, or types in Excel. Actually, this may be hard to follow regardless of your experience. I'm not sure I follow it. Maybe just stay for the links. Skimming is encouraged.
Humans invented characters roughly 5,000 years ago.
Binary notation was invented roughly 350 years ago.
The first widely adopted system for using binary notation to represent characters was ASCII, which was created only 60 years ago. ASCII encodes little more than the characters used by English.
In 1992, UTF-8 was designed; it went on to become the first widespread system encoding all the characters of all the world's languages.
For about 99.6% of recorded history we did not have a globally used system to encode all human characters into a single system. Now we do.
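The difference is easy to see from any modern language: ASCII had room for only 128 code points, while UTF-8 spends one byte on those same characters and more bytes on everything else. A quick illustration in node:

```javascript
// ASCII characters fit in one byte each under UTF-8...
console.log(Buffer.from('A', 'utf8').length); // 1 byte

// ...while characters outside ASCII take more bytes.
console.log(Buffer.from('é', 'utf8').length);  // 2 bytes
console.log(Buffer.from('€', 'utf8').length);  // 3 bytes
console.log(Buffer.from('文', 'utf8').length); // 3 bytes
console.log(Buffer.from('😀', 'utf8').length); // 4 bytes
```

One system, every human character, and it stays backward compatible with 60-year-old ASCII.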
Scientific standards are the original type schemas. Until recently, Standards Organizations dominated the creation of standards.
You might be familiar with terms like meter, gram, amp, and so forth. These are well defined units of measure that were pinned down in the International System of Units, which was first published in 1960.
The International Organization for Standardization (ISO) began around 100 years ago and is the organization behind a number of popular standards from currency codes to date and time formats.
For 98% of recorded history we did not have global standards. Now we do.
My grasp of the history of mathematics isn't strong enough to speak confidently to trends in the field, but I do want to mention that in the past century there has been a lot of important research into type theories.
In the past 100 years type theories have taken their place as part of the foundation of mathematics.
For 98% of recorded history we did not have strong theories of type systems. Now we do.
The research into mathematical type and set theories in the 1900's led directly into the creation of useful new programming languages and programming language features.
From the typed lambda calculus in the 1940's to the static type system in languages like C to the ongoing experiments of Haskell or the rapid growth of the TypeScript ecosystem, the research into types has led to hundreds of software inventions.
In the late 1990's and 2000's, a slew of programming languages that underutilized innovations from type theory in the name of easier prototyping, like Python, Ruby, and Javascript, became very popular. For a while this annoyed programmers who understood the benefits of type systems. But now those communities are benefiting too: as the number of programmers has grown, so has the demand for richer type systems.
95%+ of the most popular programming languages use increasingly smarter type systems.
Before the Internet became widespread, the job of most programmers was to write software that interacted only with other software on the local machine. That other software was generally under their control or well documented.
In the late 1990's and 2000's, a big new market arose for programmers to write software that could interact over the Internet with software on other machines that they had no control of or knowledge about.
At first there was not a good standard language for this job that many people agreed upon. 1996's XML, a variant of SGML from 1986, was the first attempt to get some traction. But XML and the dialects of XML for APIs like SOAP (1998) and WSDL (2000) were not easy to use. Then Douglas Crockford created a new language called JSON in 2001. JSON made web API programming easier and helped create a huge wave of web API businesses. For me this was great: at the beginning of my programming career I got jobs working on these new JSON APIs.
The main advantage that JSON had over XML was simple, well defined types. It had just a few primitive types—like numbers, booleans and strings—and a couple of complex types—lists and dicts. It was a very useful collection of structures that were important across all programming languages, put together in a simple and concise way. It took very little time to learn the entire thing. In contrast, XML was "extensible" and defined no types, leading to many massive dialects defined by committee.
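The point about JSON's types is easy to demonstrate: parsing a JSON document gives you real numbers, booleans, lists and dicts, whereas parsing XML gives you elements whose contents are all just strings until you convert them yourself. A quick sketch (the record fields are made up):

```javascript
// JSON carries its primitive types with it.
const record = JSON.parse(
  '{"title": "Airplane!", "year": 1980, "isComedy": true, "genres": ["Comedy"]}'
);

console.log(typeof record.year);     // "number", not the string "1980"
console.log(typeof record.isComedy); // "boolean"
console.log(Array.isArray(record.genres)); // true

// The equivalent XML, <year>1980</year>, hands you the string "1980";
// deciding that it represents a number is left entirely to the reader.
```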
For 99.8% of recorded history we did not have a global network conducting automated business transactions with a typed language. Now we do.
When talking about types and data one must pay homage to SQL databases, which store most of the world's structured data and perform the transactions that our businesses depend on.
SQL programmers spend a lot of time thinking about the structure of their data and defining it well in a SQL data definition language.
Types play a huge role in SQL. The dominant SQL databases such as MySQL, SQL Server, and Oracle all contain common primitives like ints, floats, and strings. Most of the main SQL databases also have more extensive type systems for things like dates and money and even geometric primitives like circles and polygons in PostgreSQL.
Critical information is stored in strongly typed SQL databases: Financial information; information about births, health and deaths; information about geography and addresses; information about inventories and purchase histories; information about experiments and chemical compounds.
98% of the world's most valuable, processed information is now stored in typed databases.
The standards we get from the Standards Organizations are vastly better than not having standards, but in the past they've been released as non-computable, weakly typed documents.
There are lots of projects that are now writing schemas in computable languages. The Schema.org project is working to build a common global database of rich type schemas. JSON LD aims to make the types of JSON more extensible. The DefinitelyTyped project has a rich collection of commonly used interfaces. Protocol buffers and similar are another approach at language agnostic schemas. There are attempts at languages just for types. GraphQL has a useful schema language with rich typing.
100% of standards/type schemas can now themselves be written in strongly typed documents.
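As a toy illustration of what a computable schema buys you (the schema shape and the validator here are my own minimal sketch, not Schema.org, JSON Schema, or any real standard), a type schema written as data can itself be executed to check documents:

```javascript
// A type schema expressed as plain data rather than a prose standard.
const movieSchema = {
  title: 'string',
  year: 'number',
  isComedy: 'boolean',
};

// Because the schema is computable, checking a document against it
// is a few lines of code instead of a human reading a PDF standard.
const validate = (schema, doc) =>
  Object.entries(schema)
    .filter(([field, type]) => typeof doc[field] !== type)
    .map(([field, type]) => `${field} should be a ${type}`);

console.log(validate(movieSchema, { title: 'Airplane!', year: 1980, isComedy: true }));
// [] (no errors)

console.log(validate(movieSchema, { title: 'Airplane!', year: '1980' }));
// [ 'year should be a number', 'isComedy should be a boolean' ]
```

Real schema languages like GraphQL's or protocol buffers are far richer, but the principle is the same: the standard itself is machine-checkable.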
Git is a distributed version control system created in 2005.
Git can be used to store and track changes to any type of data. You could theoretically put all of the English Wikipedia in Git, then CAPITALIZE all verbs, and save that as a single patch file. Then you could post your patch to the web and say "I propose the new standard is we should CAPITALIZE all verbs. Here's what it would look like." While this is a dumb idea, it demonstrates how Git makes it much cheaper to iterate on standards. Someone can propose both a change to the standard and the global updates all in a single operation. Someone can fork and branch to their heart's content.
For 99.9% of recorded history, there was not a cheap way to experiment and evolve type schemas nor a safe way to roll them out. Now there is.
In the past 30 years, central code hubs have emerged. There were early ones like SourceForge but in the past ten years GitHub has become the breakout star. GitHub has around 30 million users, which is also a good estimate of the total number of programmers worldwide, meaning nearly every programmer uses git.
In addition to source code hubs, package hubs have become quite large. Some early pioneers are still going strong like 1993's CRAN but the breakout star is 2010's NPM, which has more packages than the package managers of all other languages combined.
Types are arbitrary. The utility of a type depends not only on its intrinsic utility but also on its popularity. You can create a better type system—maybe a simpler universal day/time schema—but unless it gains popularity it will be of limited value.
Code hubs allow the sharing of code, including type definitions, and can help make type definitions more popular, which also makes them more useful.
99% of programmers now use code hubs and hubs are a great place to increase adoption of types, making them even more useful.
The current web is a collection of untyped HTML pages. So if I were to open a web page with lots of information about diseases and had a semantic question requiring some computation, I'd have to read the page myself and use my slow brain to parse the information and then figure out the answer to my semantic question.
The Semantic Web dream is that the elements on web pages would be annotated with type information so the computer could do the parsing for us and compute the answers to our semantic questions.
While the "Semantic Web" did not achieve adoption like the untyped web, that dream remains very relevant and is ceaselessly worked upon. In a sense Wolfram Alpha embodies an early version of the type of UX that was envisioned for the Semantic Web. The typed data in Wolfram Alpha comes from a nicely curated collection.
While lots of strongly typed proprietary databases exist on the web for various domains, from movies to startups, and while Wikipedia is arguably undergoing gradual typing, the open web still remains largely untyped, and we don't yet have a universally accessible interface to the world's typed information.
99% of the web is untyped while 99% of the world's typed information is silo-ed and proprietary.
Deep Learning is creeping in everywhere. In the past decade it has come to be the dominant strategy for NLP. In the past two years, a new general learning strategy has become feasible where models learn some intrinsic structure of language and can use this knowledge to perform many different language tasks.
One of those tasks could be to rewrite untyped data in a typed language.
AI may soon be able to write a strongly typed semantic web from the weakly typed web.
I see a global pattern here that I call the "Type the World" trend. Extrapolating from the threads above, my prediction is a future where all business, from finance to shopping to healthcare to law, is conducted in a rich, open type system, while untyped language work is relegated to research, entertainment, and leisure.
While I didn't dive into the benefits of what Type the World will bring, and instead merely pointed out some trends that I think indicate it is happening, I do indeed believe it will be a fantastic thing. Maybe I'll give my take on why Type the World is a great thing in a future post.
January 16, 2020 — I often rail against narratives. I think stories always oversimplify things, have hindsight bias, and often mislead. I spend a lot of time trying to invent tools for making data derived thinking as effortless as narrative thinking (so far, mostly in vain). And yet, as much as I rail on stories, I have to admit stories work.
I read an article that put it more succinctly:
Why storytelling? Simple: nothing else works.
I would agree with that. Despite the fact that 90% of stories are lies, they motivate people better than anything else. Stories make people feel something. They get people going.
What is the math here? On a population level, it seems people who follow stories have a survival advantage. On a local level, it seems people who can weave stories have an even greater survival advantage.
Why?
Perhaps it's due to risk taking. Perhaps the people who follow stories take more risks, on average, than people who don't, and even though many of those don't pan out some of those risks do pay off and the average is worth it.
Perhaps it's due to productivity. Perhaps people who are storiers spend less time analyzing and more time doing. The act of doing generates experience (data), so often the best way to be data-driven isn't to analyze more; it's to go out there and do more to collect more data. As they say in machine learning, data trumps algorithms.
Perhaps it's due to focus. If you just responded to your senses all the time, the world would be a shimmering place; perhaps narratives are necessary to get anything done at all.
Perhaps it's due to memory. A story like 'The Boy who Cried Wolf' is shorter and more memorable than 'Table of Results from a Randomized Experiment on the Effect of False Alarms on Subsequent Human Behavior'.
Perhaps it's healthier. Our brains are not much more advanced than the chimp. Uncertainty can create stress and anxiety. Perhaps the confidence that comes from belief in a story leads to less stress and anxiety leading to better health, which outweighs any downsides from decisions that go against the data.
Perhaps it's a cooperation advantage. If everyone is analyzing their individual decisions all the time, perhaps that comes at the cost of cooperation. Storiers go along with the group story, and so over time their populations get more done together. Maybe the opposite of stories isn't truth, it's anarchy.
Perhaps it's just more fun. Maybe stories are suboptimal for decision making and lead us astray all the time, and yet are still a survival advantage simply because it's a more enjoyable way to live. Even when you screw up royally, it can make a good story. As the saying goes, "don't take life too seriously, you'll never make it out alive."
Despite my problems with narratives and my quest for something better, it seems quite possible to me that at the end of the day it may turn out that there is nothing better, and it's best to make peace with stories, despite their flaws. And regardless of the future, I can't argue with the value of stories today for motivation and enjoyment. Nothing else works.
January 3, 2020 — Speling errors and errors grammar are nearly extinct in published content. Data errors, however, are prolific.
By data error I mean one of the following: a statement without a backing dataset and/or definitions, a statement with data but a bad reduction, or a statement with backing data but lacking integrated context. I will provide examples of these errors later.
The hard sciences like physics, chemistry and most branches of engineering have low tolerance for data errors. But outside of those domains data errors are everywhere.
Fields like medicine, law, media, policy, the social sciences, and many more are teeming with data errors, which are far more consequential than spelling or grammar errors. If a drug company misspells the word dockter in some marketing material, the effect will be trivial. But if that material contains data errors, those errors can drive terrible medical decisions that lead to many deaths and wasted resources.
You would be skeptical of National Geographic if their homepage looked like this:
We generally expect zero spelling errors when reading any published material.
Spell checking is now an effortless technology and everyone uses it. Published books, periodicals, websites, tweets, advertisements, product labels: we are accustomed to reading content at least 99% free of spelling and grammar errors. But there's no equivalent to a spell checker for data errors and when you look for them you see them everywhere.
Data errors are so pervasive that I came up with a hypothesis today and put it to the test. My hypothesis was this: 100% of "reputable" publications will have at least one data error on their front page.
I wrote down 10 reputable sources off the top of my head: the WSJ, The New England Journal of Medicine, Nature, The Economist, The New Yorker, Al Jazeera, Harvard Business Review, Google News: Science, the FDA, and the NIH.
For each source, I went to their website and took a single screenshot of their homepage, above the fold, and skimmed their top stories for data errors.
In the screenshots above, you can see that 10/10 of these publications had data errors front and center.
Data errors in English fall into common categories. My working definition provides three: a lack of dataset and/or definitions, a bad reduction, or a lack of integrated context. There could be more, this experiment is just a starting point where I'm naming some of the common patterns I see.
The top article in the WSJ begins with "Tensions Rise in the Middle East". There are at least 2 data errors here. First is the Lack of Dataset error. Simply put: you need a dataset to make a statement like that, and there is no longitudinal dataset on tensions in the Middle East in that article. There is also a Lack of Definitions. Sometimes you may not yet have a dataset, but you can at least define what a dataset that could back your assertions would look like. In this case we have neither a dataset nor a definition of what a "Tensions" dataset would be.
In the New England Journal of Medicine, the lead figure shows "excessive alcohol consumption is associated with atrial fibrillation" between 2 groups. One group had 0 drinks over a 6 month period and the other group had over 250 drinks (10+ per week). There was a small impact on atrial fibrillation. This is a classic Lack of Integrated Context data error. If you were running a lightbulb factory and found soaking lightbulbs in alcohol made them last longer, that might be an important observation. But humans are not as disposable, and health studies must always include integrated context to explore whether there is something of significance. Having one group make any sort of similar drastic lifestyle change will likely have some impact on any measurement. A good rule of thumb is anything you read that includes p-values to explain why it is significant is not significant.
In Nature we see the line "world's growing water shortage". This is a Bad Reduction, another very common data error. While certain areas have a water shortage, other areas have a surplus. Any time you see broad, diverse things grouped into one term, or "averages", or "medians", it's usually a data error. You always need access to the data, and you'll often see a more complex distribution that would prevent broad statements like those from being true.
In The Economist the lead story talks about an action that "will have profound consequences for the region". Again we have the Lack of Definitions error. We also have a Forecast without a Dataset error. There's nothing wrong with making a forecast--creating a hypothetical dataset of observations about the future--but one needs to actually create and publish that dataset, not just issue a vague, unfalsifiable statement.
The New Yorker lead paragraph claims an event "was the most provocative U.S. act since...". I'll save you the suspense: the article did not include a thorough dataset of such historical acts with a defined measurement of provocative. Another Lack of Dataset error.
In Al Jazeera we see "Iran is transformed" and also a Bad Reduction, Lack of Dataset and Lack of Definition errors.
Harvard Business Review has a lead article about the Post-Holiday funk. In that article the phrase "research...suggests" is often a dead giveaway for a Hidden Data error, where the data is behind a paywall and even then often inscrutable. Anytime someone says "studies/researchers/experts" it is a data error. We all know the earth revolves around the sun because we can all see the data for ourselves. Don't trust any data you don't have access to.
Google News has a link to an interesting article on the invention of a new type of color changing fiber, but the article goes beyond the matter at hand to make the claim: "What Exactly Makes One Knot Better Than Another Has Not Been Well-Understood – Until Now". There is a Lack of Dataset error for meta claims about the knowledge of knot models.
The FDA's lead article is on the Flu and begins with the words "Most viral respiratory infections...", then proceeds for many paragraphs with zero datasets. There is an overall huge Lack of Datasets in that article. There's also a Lack of Monitoring. Manufacturing facilities are a controlled, static environment. In uncontrolled, heterogeneous environments like human health, things are always changing, and to make ongoing claims without having infrastructure in place to monitor and adjust to changing data is a data error.
The NIH has an article on how increased exercise may be linked to reduced cancer risk. This is actually an informative article with 42 links to many studies with lots of datasets. However, the huge data error here is Lack of Integration. It is very commendable to do the grunt work and gather the data to make a case, but simply linking to static PDFs is not enough—they must be integrated. Not only does integration make the data much more useful, but if you've never tried to integrate the pieces, you have no idea whether they actually fit together to support your claims.
While my experiment didn't touch books or essays, I'm quite confident the hypothesis will hold in those realms as well. If I flipped through some "reputable" books or essayist collections I'm 99.9% confident you'd see the same classes of errors. This site is no exception.
I don't think anyone's to blame for the proliferation of data errors. I think it's still relatively recent that we've harnessed the power of data in specialized domains, and no one has yet invented ways to easily and fluently incorporate true data into our human languages.
Human languages have absorbed a number of sublanguages over thousands of years that have made it easier to communicate with ease in a more precise way. The base 10 number system (0,1,2,3,4,5,6,7,8,9) is one example that made it a lot easier to utilize arithmetic.
Domains with low tolerance for data errors, like aeronautical engineering or computer chip design, are heavily reliant on programming languages. I think it's worthwhile to explore the world of programming language design for ideas that might inspire improvements to our everyday human languages.
Some quick numbers for people not familiar with the world of programming languages. Around 10,000 computer languages have been released in history (most of them in the past 70 years). About 50-100 of those have more than a million users worldwide and the names of some of them may be familiar to even non-programmers such as Java, Javascript, Python, HTML or Excel.
Not all programming languages are created equal. The designers of a language end up making thousands of decisions about how their particular language works. While English has evolved with little guidance over millennia, programming languages are often designed consciously by small groups and can evolve much faster.
Often the designers change a language to make it easier to do something good or harder to do something bad.
Sometimes what is good and bad is up to the whims of the designer. Imagine I was an overly optimistic person and decided that English was too boring or pessimistic. I may invent a language without periods, where all sentences must end with an exclamation point! I'll call it Relish!
Most of the time though, as data and experience accumulates, a rough consensus emerges about what is good and bad in language design (though this too seesaws).
One of the patterns that has emerged as generally a good thing over the decades to many languages is what's called "type checking". When you are programming you often create buckets that can hold values. For example, if you were programming a function that regulated how much power a jet engine should supply, you might take into account the reading from a wind speed sensor and so create a bucket named "windSpeed".
Some languages are designed to enforce stricter logic checking of your buckets to help catch mistakes. Others will try to make your program work as written. For example, if later in your jet engine program you mistakenly assigned the indoor air temperature to the "windSpeed" bucket, the parsers of some languages would alert you while you are writing the program, while with some other languages you'd discover your error in the air. The former style of languages generally do this by having "type checking".
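To see the difference, here is the jet-engine mix-up in plain JavaScript, which happily runs the buggy assignment; a statically typed language such as TypeScript could reject the same assignment at compile time if the readings carried distinct types. (The function, numbers, and variable names are my own illustration.)

```javascript
// An untyped language runs this without complaint.
function powerForWindSpeed(windSpeedKnots) {
  // Hypothetical control law, for illustration only.
  return 100 + windSpeedKnots * 2;
}

let windSpeed = 30;              // knots, from the wind speed sensor
const indoorAirTemperature = 22; // Celsius, from a cabin sensor

// The bug: assigning a temperature to the wind speed bucket.
// JavaScript accepts it silently; you'd discover the error in the air.
windSpeed = indoorAirTemperature;

console.log(powerForWindSpeed(windSpeed)); // 144, silently wrong

// In TypeScript you could give each reading a distinct branded type, e.g.
//   type Knots = number & { readonly unit: 'knots' };
// and the parser would flag the assignment while you are still writing.
```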
Type Checking of programming languages is somewhat similar to Grammar Checking of English, though it can be a lot more extensive. If you make a change in one part of the program in a typed language, the type checker can recheck the entire program to make sure everything still makes sense. This sort of thing would be very useful in a data checked language. If your underlying dataset changes and conclusions anywhere are suddenly invalid, it would be helpful to have the checker alert you.
Perhaps lessons learned from programming language design, like Type Checking, could be useful for building the missing data checker for English.
Perhaps what we need is a new color of squiggly:
✅ Spell Checkers: red squiggly
✅ Grammar Checkers: green squiggly
❌ Data Checkers: blue squiggly
If we had a data checker that highlighted data errors we would eventually see a drastic reduction in data errors.
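As a toy illustration of what a data checker might flag, here is a sketch in TypeScript. Everything in it--the tiny dataset, the claim format, the helper names--is invented for illustration; a real data checker would be enormously harder to build:

```typescript
// A toy "data checker": flags numeric claims that disagree with a dataset.
// The dataset and claim format here are made up purely for illustration.
const dataset: Record<string, number> = {
  "boiling point of water (C)": 100,
  "days in a leap year": 366,
};

interface Claim {
  fact: string;
  value: number;
}

function checkClaims(claims: Claim[]): string[] {
  const blueSquigglies: string[] = [];
  for (const claim of claims) {
    const known = dataset[claim.fact];
    // Flag the claim only when the dataset knows the fact and disagrees.
    if (known !== undefined && known !== claim.value)
      blueSquigglies.push(`"${claim.fact} = ${claim.value}" (dataset says ${known})`);
  }
  return blueSquigglies;
}

console.log(checkClaims([
  { fact: "days in a leap year", value: 365 },          // flagged
  { fact: "boiling point of water (C)", value: 100 },   // passes
]));
```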
If a working data checker appeared today and were applied to all of our top publications, blue squigglies would be everywhere. This very page contains its share of data errors.
It is very expensive and time consuming to build datasets and make data-driven statements free of data errors. So am I saying we should stop publishing most of our content until we can publish content free of data errors? YES! If you don't have anything true to say, perhaps it's best not to say anything at all. At the very least, I wish all the publications above carried disclaimers about how laden with data errors their stories are.
Of course I don't believe either of those is likely to happen. I think we are stuck with data errors until new inventions make it a lot easier to publish material without them. I hope we somehow create a data checked language.
I still don't know what that looks like, exactly. I spend half my work time attempting to create such new languages and tools and the other half searching the world to see if someone else has already solved it. I feel like I'm making decent progress on both fronts but I still have no idea whether we are months or decades away from a solution.
While I don't know what the solution will be, I would not be surprised if the following patterns play a big role in moving us to a world where data errors are extinct:
1. Radical increases in collaborative data projects. It is very easy for a person or small group to crank out content laden with data errors. It takes small armies of people making steady contributions over a long time period to build the big datasets that can power content free of data errors.
2. Widespread improvements in data usability. Lots of people and organizations have moved in the past decade to make more of their data open. However, it generally takes hours to become fluent with one dataset, and there are millions of them out there. Imagine if it took you hours to ramp on a single English word. That's the state of data usability right now. We need widespread improvements here to make integrated contexts easier.
3. Stop subsidizing content laden with data errors. We grant monopolies on information and so there's even more incentive to create stories laden with data errors—because there are more ways to lie than to tell the truth. We should revisit intellectual monopoly laws.
4. Novel innovations in language. Throughout history novel new sublanguages have enhanced our cognitive abilities. Things like geometry, Hindu-Arabic numerals, calculus, binary notation, etc. I hope some innovators will create very novel data sublanguages that make it much easier to communicate with data and reduce data errors.
Have you invented a data checked language, or are working on one? If so, please get in touch.
August 19, 2019 — Back in the 2000's Nassim Taleb's books set me on a new path in search of truth. One truth I became convinced of is that most stories are false due to oversimplification. I largely stopped writing over the years because I didn't want to contribute more false stories, and instead I've been searching for and building new forms of communication and ways of representing data that hopefully can get us closer to truth.
I've tried my best to make my writings encode "real" and "true" information, but it's impossible to overcome the limitations of language. The longer any work of English writing is, the more inaccuracies it contains. This post itself will probably be more than 50% false.
But most people aren't aware of the problem.
Then came DT and "fake news". One nice thing I can say about DT is that "fake news" is a great idea.
If your ideas are any good, you'll have to ram them down people's throats. @ Howard H. Aiken
..in science the credit goes to the man who convinces the world, not to the man to whom the idea first occurs. @ Francis Darwin
DT has done a great job at spreading this idea. Hundreds of millions of people are now at least vaguely aware that there's a serious problem, even if they can't describe precisely what it is. Some people mistakenly believe "their news" is real and their opponents' news is fake. It's all fake news.
English is a fantastic story telling language that has been very effective at sharing stories, coordinating commerce and motivating armies, but English evolved in a simpler time with simpler technologies and far less understanding about how the world really works.
English oversimplifies the world which makes it easy to communicate something to be done. English is a modern day cave painting language. Nothing motivates a person better than a good story, and that motivation was essential to get us out of the cave. It didn't matter so much in which direction people went, as long as they went in some direction together.
But we are now out of the cave, and it is not enough to communicate what is to be done. We have many more options now, and it's important that we have languages that can better decide what is the best thing to do.
Real News is starting to emerge in a few places. The WSJ has long been on the forefront but newer things like Observable are also popping up.
I don't know exactly what a language for truth will look like but I imagine it will have some of these properties:
I would say until we move away from English and other story-telling languages to encodings that are better for truth telling, our thinking will also be limited.
A language that doesn't affect the way you think about programming, is not worth knowing. @ Alan Perlis
New languages designed for truth telling might not just be useful in our everyday lives, they could very much change the way we think.
Again, to channel Taleb, I'm not saying English is bad. By all means, enjoy the stories. But just remember they are stories. If you are reading English, know that you are not reading Real News.
January 13, 2018 — This is a story about how my FitBit logged a manic episode.
In mid-2017, I had a manic episode that led me to act impulsively, over-confidently and grandiosely. Like a textbook case of mania, I was filled with grand ideas and visions, rushed through life decisions (and a lot of my savings), and was positively euphoric.
This was not a fluke event. I had been stable for about 2 years, but I was diagnosed with bipolar disorder 13 years ago, and have had approximately 7 swings of varying severity since then.
But this episode had an epidemiological silver lining: my FitBit recorded it. I hope this story might help at least one other person suffering from bipolar disorder or encourage people working on using wearable tech for mental health treatments to keep up the promising work.
In November 2014, I started wearing a fitness band at all times. I now have about 160 consecutive weeks worth of sleep and other data.
The average FitBit user gets 6 hours and 38 minutes of sleep per night.
My average — when stable — was a bit over 8 hours a night. Lots of people — parents especially — don't have the luxury of sleeping so much, and I feel a bit selfish and lazy to sleep so much. But when I do sleep less, as we shall see, my brain handles it worse than most.
In May of 2016, I left my job to do some freelancing and work on an entrepreneurial software project. I was staying sane and getting over 7.5 hours of sleep per night.
But in April 2017, things changed. My sleep average dropped to a little over 6 hours per night. Compared to prior years, this was a 30% drop in sleep. But I wasn't tired. I felt more awake than I had been in years.
On Thursday, May 3, I slept for 4 hours and 56 minutes — for no good reason. The next night I did it again. I was coding faster than ever and loving life to boot. Later that day in my journal I wrote, “Life is good. No. Life is fucking great!”
Mania is like an invisible drug. I had been off it for years. I had been vigilant. But after a long stable period, I had forgotten and now my guard was down. I didn't recognize it as it was happening.
Looking back, this was probably the time to catch it. To go see my therapist. To get back on meds. To reveal my condition to some more people and ask for help.
Alas, I didn't do that. By chance, I spent the next three weeks at my girlfriend's on the East Coast and did calm down a bit. But I was not thinking, “Yikes! I was getting a bit manic there, I need to be careful.” Instead I was thinking, “Wow, I was in the zone a couple weeks ago — I gotta get back to that!” My mind had tasted mania again and was subconsciously itching to get more.
Eager to get back to “the zone,” I impulsively bought a ticket back home and cut my trip to the East Coast short. I got back to coding and my work, and then, predictable as a clock, my countdown to mania began.
The above chart is weekly averages. But the day-to-day variance was extreme — a few nights I slept less than 3 hours — but I felt great!
My behavior became textbook manic. My ideas and claims became more and more ambitious — topics that had seemed complex to me before suddenly seemed simple, I could learn anything, solve any problem, change industries.
With my software project I started to feel the paranoid need to move fast — I got the delusion Google was also working on the same idea and was about to launch before me, stealing my thunder. Embarrassingly, I started publishing these grand claims, emailing past coworkers and employers, and started telling people I would win the Turing Prize.
My spending got wild. For example, on a trip during the episode, I spent $300 to upgrade my ticket to first class (at the time I thought it was worthwhile because I wrote a “brilliant” math proof on the plane), took a deluxe Uber for $70, and tipped a busker $100. My usual daily expenses were about $30. My monthly expenses had been about $3k a month, but now shot up to over $10k.
I can't believe I didn't know better, given more than a decade of experience with this condition. But at the time I was oblivious to what was driving me — in my mind there was a perfectly logical narrative explaining all my actions. Only now, looking at the sleep data, is it clear to me that physical brain conditions were contributing a huge amount to my behavior.
Following this acute episode, I had a couple months of mixed moods.
July and August were particularly confusing. The problem of having ideas in a manic state is that you believe in them so fervently—ideas come hard and fast in what feels like a spiritual experience — that it becomes very hard to let those ideas and beliefs go. At times I questioned what the hell I was doing, but I had made lots of claims in public and did my best to try and prove them.
In the months that followed, sleep was mixed. I was trying to “get back” to the energy levels and clarity that I had in June — I still hadn't recognized that time as manic — but was finding it hard to do so. I tried sleeping less to kickstart my system.
Far in the back of my head it was starting to occur to me that maybe I had gone manic again — that my ideas weren't so grand after all — but for months I strongly repressed those thoughts.
By December I couldn't go mixed anymore. I knew something was wrong and I finally started to reflect on the past.
Out of curiosity, I downloaded all my sleep data. What I saw confirmed my worst fears. When my sleep went out of control, so did my mania. When sleep decreased, grandiosity increased. Those long days weren't a result of groundbreaking work, but rather the result of a manic mind.
Wearables gave me great hope for curing my extreme mood issues. At first this was confirmed by my experience — once I started wearing a fitness band and regularly kept an eye on my sleep, I went on to have the longest mania-free period in my life. But now the data shows me that wearables by themselves are not a cure.
There are a lot of things I could have done differently to prevent this manic episode from happening. After stabilizing, I went off my medication and hadn't visited my doctor for over 18 months. I screwed up.
But I wonder what would have happened back in May if my wearable service had alerted my doctor and a few close friends of my foreboding sleep changes. Perhaps there could have been a minor intervention that prevented the huge swing?
Of course, I know it's not as easy as alerts. I realize that if done in the wrong way, an alert service might make things worse — perhaps people might get paranoid and angry, rip the band off, and continue on their manic way.
But in the future perhaps someone will design an alert and intervention system that is effective and palatable. Perhaps the alert sets into motion something subtle and agreeable — the person agrees to take an extra medication for a time, check in with their therapist, or start filling out a daily mood journal for a month, et cetera.
I am hugely disappointed in myself for letting this happen but at least grateful to have the huge amount of sleep data this time, something I never had before.
Having lots of quantitative biological data like this makes it easier to accept the diagnosis that this is a real, physical condition. Sleep stats are also a simple, objective, and near-real time indicator for state of mind. Other data like journaling, mood tracking, emails, and finances reveal the symptoms, but that data is sparse, subjective, and laggy.
I'm hopeful that wearable makers might crack the code for measuring other key indicators, like anxiety and social activity data, which could also be very helpful for people with mental health issues.
Of course, the holy grail would probably be actual images of the brain over time. Perhaps when industrious scientists and engineers improve MRI and fMRI technology enough, getting a brain scan done a few times a year could really help people with bipolar understand their brains more and take better control of their condition. A new study published in Nature in May of 2017 provides tantalizing evidence that brain MRI scans will help us understand more how bipolar brains are different. I know I have to accept that my brain has some biological aberrations that make me more likely to behave in ways I'd rather not — but it's hard to accept that when you can't see what those aberrations are, and when the treatments are a lot of guesswork.
Monitoring sleep alone won't be the secret to a stable, productive, happy life. But it might reduce future manias. As far as I can remember, I never had a manic period without an accompanying need to sleep less. That seems to be the common experience, from what I've read. I would suggest to younger folks in high school and college who have been recently diagnosed with bipolar to get a sleep tracker and stay on it. Hopefully you'll be able to prevent some manic episodes. And if, like many others with bipolar disorder, you continue to have swings over the decades ahead, at least you'll gather data that could help you and other people figure this thing out.
Of course, technology might also be making bipolar disorder worse in those prone to it. The technology-advanced U.S. has the highest rate of bipolar disorder in the world. Perhaps increased screen time, less social time, or more media exacerbates the problem. But that's why I like passive wearables, which collect data without intruding on your life. Even if some innovations of the modern world pose new challenges to bipolar sufferers, some innovations also offer new hope.
I wish my longest stable streak was still going strong. I wish I hadn't gone manic and then crashed into the inevitable depression.
But I'm grateful that this time I was wearing a figurative “black box.” Hopefully others can learn from my experience.
Next time I start acting on a grand idea, I hope my band and I will do the healthy thing: get some sleep and forget it.
Note: I originally published this anonymously on Medium. I was too scared to reveal my name. I am less scared now. I feel we are close to an accurate model of this condition. I have a more sophisticated understanding now but will leave the post as is to reflect my understanding at the time. Thank you to CP and DR, who provided me feedback at the time on this post. - 6/13/2023
June 23, 2017 — I just pushed a project I've been working on called Ohayo.
You can also view it on GitHub: https://github.com/treenotation/ohayo
I wanted to try and make a fast, visual app for doing data science. I can't quite recommend it yet, but I think it might get there. If you are interested you can try it now.
June 21, 2017 — Eureka! I wanted to announce something small, but slightly novel, and potentially useful.
What did I discover? That there might be useful general purpose programming languages that don't use any visible syntax characters at all.
I call the whitespace-based notation Tree Notation and languages built on top of it Tree Languages.
Using a few simple atomic ingredients--words, spaces, newlines, and indentation--you can construct grammars for new programming languages that can do anything existing programming languages can do. A simple example:
if true
 print Hello world
This language has no parentheses, quotation marks, colons, and so forth. Types, primitives, control flow--all of that stuff can be determined by words and contexts instead of introducing additional syntax rules. If you are a Lisper, think of this "novel" idea as just "lisp without parentheses."
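To show how little machinery the syntax needs, here is a toy parser sketch in TypeScript. It assumes one space of indentation per nesting level and single spaces between words--my simplifying assumptions, not the full Tree Notation specification:

```typescript
// A toy parser for an indentation-based notation.
// Assumes well-formed input: one space of indentation per nesting level.
interface TreeNode {
  words: string[];
  children: TreeNode[];
}

function parseTree(source: string): TreeNode[] {
  const roots: TreeNode[] = [];
  const stack: TreeNode[][] = [roots]; // stack[d] = children list at depth d
  for (const line of source.split("\n")) {
    if (line.trim() === "") continue;
    // Depth = number of leading spaces.
    const depth = line.length - line.trimStart().length;
    const node: TreeNode = { words: line.trim().split(" "), children: [] };
    stack.length = depth + 1;    // drop any deeper levels we've left
    stack[depth].push(node);
    stack.push(node.children);   // the next line may nest one level deeper
  }
  return roots;
}

const program = "if true\n print Hello world";
const tree = parseTree(program);
console.log(tree[0].words);             // ["if", "true"]
console.log(tree[0].children[0].words); // ["print", "Hello", "world"]
```

No parentheses, quotes, or colons to tokenize: the entire parser is counting spaces and splitting words.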
There are hundreds of very active programming languages, and they all have different syntax as well as different semantics.
I think there will always be a need for new semantic ideas. The world's knowledge domains are enormously complex (read: billions/trillions of concepts, if not more), machines are complex (billions of pieces), and both will always continue to get more complex.
But I wonder if we always need a new syntax for each new general purpose programming language. I wonder if we could unlock potentially very different editing environments and experiences with a simple geometric syntax, and if by making the syntax simpler folks could build better semantic tooling.
Maybe there's nothing useful here. Perhaps it is best to have syntax characters and a unique syntax for each general purpose programming language. Tree Notation might be a bad idea or only useful for very small domains. But I think it's a long-shot idea worth exploring.
Thousands of language designers focus on the semantics and choose the syntax to best fit those semantics (or a syntax that doesn't deviate too much from a mainstream language). I've taken the opposite approach--on purpose--with the hopes of finding something overlooked but important. I've stuck to a simple syntax and tried to implement all semantic ideas without adding syntax.
Initially I just looked at Tree Notation as an alternative to declarative format languages like JSON and XML, but then in a minor "Eureka!" moment, realized it might work well as a syntax for general purpose Turing complete languages across all paradigms like functional, object-oriented, logic, dataflow, et cetera.
Someday I hope to have data definitively showing that Tree Notation is useful, or alternatively, to explain why it is suboptimal and why we need more complex syntax.
I always wanted to try my hand at writing an academic paper. So I put the announcement in a 2-page paper on GitHub and arxiv. The paper is titled Tree Notation: an antifragile program notation. I've since been informed that I should stick to writing blog posts and code and not academic papers, which is probably good advice :).
Two updates on 12/30/2017. After I wrote this I was informed that one other person from the Scheme world created a very similar notation years ago. Very little was written in it, which I guess is evidence that the notation itself isn't that useful, or perhaps that there is still something missing before it catches on. The second note is I updated the wording of this post as the original was a bit rushed.
September 24, 2013 — What if instead of talking about Big Data, we talked about 12 Data, 13 Data, 14 Data, 15 Data, et cetera? The # refers to the number of zeroes we are dealing with.
You can then easily differentiate problems. Some companies are dealing with 12 Data, some companies are dealing with 15 Data. No company is yet dealing with 19 Data. Big Data starts at 12 Data, and maybe over time you could say Big Data starts at 13 Data, et cetera.
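The label is trivial to compute: count the zeroes in the order of magnitude of the byte count. A quick sketch in TypeScript (the helper name `dataClass` is made up for illustration):

```typescript
// Label a dataset by the number of zeroes in its size in bytes.
// "dataClass" is an invented helper name, not an established term.
function dataClass(bytes: number): string {
  // Digit count minus one gives the number of zeroes in the
  // order of magnitude: 1,000,000,000,000 bytes -> "12 Data".
  const zeroes = Math.floor(bytes).toString().length - 1;
  return `${zeroes} Data`;
}

console.log(dataClass(1e12)); // "12 Data" -- terabyte scale
console.log(dataClass(5e15)); // "15 Data" -- petabyte scale
```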
What do you think?
This occurred to me recently as I just started following Big Data on Quora and was surprised to see the term used so loosely, when data is something so easily measurable. For example, a 2011 Big Data report from McKinsey defined big data as ranging "from a few dozen terabytes to multiple petabytes (thousands of terabytes)." Wikipedia defines Big Data as "a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications."
I think these terms make Big Data seem mysterious and confusing, when in fact it could be completely straightforward.
September 23, 2013 — Making websites is slow and frustrating.
I met a young entrepreneur who wanted to create a website for his bed and breakfast. He had spent dozens of hours with different tools and was no closer to having what he wanted.
I met a teacher who wanted his students to turn in web pages for homework instead of paper pages. No existing tool allows his students to easily create pages without restricting their creativity.
I met an artist who wanted a website with a slideshow for her portfolio.
A restaurant owner who wanted a website that could take online orders.
An author who wanted a website with a blog.
A saleswoman who wanted to build a members-only site for great deals she gathered.
A candidate who wanted a website that could coordinate his volunteers.
A nonprofit founder who wanted a website that told the story of impoverished children in his country and accepted donations.
These are just a handful of real people with real ideas who are frustrated by the current tools.
The fact is, people want to do millions of different things with their websites, but the only two options are to use a tool that limits your creative potential or to program your site from scratch. Neither option is ideal.
Which is why we're building a third option. We are building an open source, general purpose IDE for building websites.
Here's a short video demonstrating how it works:
NudgePad is in early beta, but is powering a number of live websites like these:
Although we have a lot more to do to get to a stable version 2.0, we thought the time was right to start opening up NudgePad to more people and recruiting more help for the project. We also want to get feedback on the core ideas in NudgePad.
To get involved, give NudgePad a try or check out the source code on GitHub.
We truly believe this new way to build websites--an IDE in your browser--is a faster way to build websites and the way it will be done in the future. By this time next year, using NudgePad, it could be 100x faster and easier to build websites than it is today.
April 2, 2013 — For me, the primary motivation for creating software is to save myself and other people time.
I want to spend less time doing monotonous tasks. Less time doing bureaucratic things. Less time dealing with unnecessary complexity. Less time doing chores.
I want to spend more time engaged with life.
Saving people time is perhaps the only universal good. Everyone wants to have more options with their time. Everyone benefits when a person has more time. They can enjoy that extra time and/or invest some of it to make the world better for everyone else.
Nature loves to promote inequality, but a fascinating feature of time is that it is so equally distributed. Nature took the same amount of time to evolve all of us alive today. All of our evolutionary paths are equally long. We also have equal amounts of time to enjoy life, despite the fact that other things may be very unequally distributed.
The very first program I made was meant to save me and my family time. Back in 1996, to start our computer, connect to the Internet and launch Netscape took about 20 minutes, and you had to do each step sequentially. My first BAT script automated that to allow you to turn the computer on and go play outside for 20 minutes while it connected to the web. Many years later, my ultimate motivation to save people time has remained constant.
Two people in the same forest,
have the same amount of water and food,
Are near each other, but may be out of sight,
The paths behind each are equally long.
The paths ahead, may vary.
One's path is easy and clear.
The other's is overgrown and treacherous.
Their paths through the forest, in the past, in the present, and ahead are equal.
Their journeys can be very different.
The crux of the matter, is that people don't understand the true nature of money. It is meant to circulate, not be wrapped up in a stocking @ Guglielmo Marconi
March 30, 2013 — I love Marconi's simple and clear view of money. Money came in and he put it to good use. Quickly. He poured money into the development of new wireless technology which had an unequal impact on the world.
This quote, by the way, is from "My Father, Marconi", a biography of the famous inventor and entrepreneur written by his daughter, Degna. Marconi's story is absolutely fascinating. If you like technology and entrepreneurship, I highly recommend the book.
P.S. This quote also applies well to most man made things. Cars, houses, bikes, et cetera, are more valuable circulating than idling. It seemed briefly we were on a trajectory toward overabundance, but the sharing economy is bringing circulation back.
March 30, 2013 — Why does it take 10,000 hours to become a master of something, and not 1,000 hours or 100,000 hours?
The answer is simple. Once you've spent 10,000 hours practicing something, no one can crush you like a bug.
Let me explain. First, the most important thing to keep in mind is that nature loves inequality. For example, humans and bugs are not even close to equal in size. Humans are 1,000x bigger than bugs. It is very easy for a human to squash a bug.
Now, when you are starting to learn something and have spent say, 100 hours practicing that thing, you, my friend, are the bug. There are many people out there who have been practicing that thing for 10,000 hours, and can easily crush you like a bug, if they are mean spirited like that.
Once you've got 1,000 hours of practice under your belt, it becomes very hard for someone to crush you.
You reach 10,000 hours of practice, and you are now at a level where no one can possibly crush you like a bug. It is near impossible for a human to practice something for 100,000 hours. That would be 40 hours of practice per week for fifty years! Life is too chaotic, and our bodies are too fragile, to hit that level of practice. Thus, when you hit 10,000 hours, you're safe. You no longer have to wonder if there's someone out there who knows 10x more than you. You are now a master.
Do you hear them talking of genius, Degna? There is no such thing. Genius, if you like to call it that, is the gift of work continuously applied. That's all it is, as I have proved for myself. @ Guglielmo Marconi
March 16, 2013 — A kid says Mommy or Daddy or Jack or Jill hundreds of times before grasping the concept of a name.
Likewise, a programmer types name = Breck
or age=15
hundreds of times before grasping the concept of a variable.
What do you call it when someone finally sees the concept?
John Calcote, a programmer with decades of experience, calls it a minor epiphany.
Minor epiphanies. Anyone who's programmed for a while can appreciate that term.
When you start programming you do pure trial and error. What will happen when I type this or click that? You rely on memorization of action and reaction. Nothing makes sense. Every single term--variable, object, register, boolean, int, string, array, and so on--is completely and utterly foreign.
But you start to encounter these moments. These minor epiphanies, where suddenly you see the connection between a class of things. Suddenly something makes sense. Suddenly one term is not so foreign anymore. You have a new tool at your disposal. You have removed another obstacle that used to trip you up.
In programming the well of minor epiphanies never runs dry. Even after you've learned thousands of things the epiphanies keep flowing at the same rate. Maybe the epiphanies are no longer about what the concept is, or how you can use it, but now are more about where did this concept come from, when was it created, who created it, and most fascinating of all, why did they create it?
Minor epiphanies give you a rush, save you time, help you make better products, and help you earn more.
As someone who loves to learn, my favorite thing about them is the rush you get from having something suddenly click. They make this programming thing really, really fun. Day in and day out.
March 8, 2013 — If your software project is going to have a long life, it may benefit from Boosters. A Booster is something you design with two constraints: 1) it must help in the current environment 2) it must be easy to jettison in the next environment.
February 24, 2013 — It is a popular misconception that most startups need to fail. We expect 0% of planes to crash. Yet we switch subjects from planes to startups and then suddenly a 100% success rate is out of the question.
This is silly. Maybe as the decision makers switch from gambling financiers to engineers we will see the success rate of starting a company shoot closer to 100%.
February 16, 2013 — Some purchasing decisions are drastically better than others. You might spend $20 on a ticket to a conference where you meet your next employer and earn 1,000x "return" on your purchase. Or you might spend $20 on a fancy meal and have a nice night out.
Purchasing decisions have little direct downside. You most often get your money's worth.
The problem is the opportunity cost of purchases. That opportunity cost can cost you a fortune.
Since some purchases can change your life, delivering 100x or greater return on your investment, spending your money on things that only give you a 10% return can be a massive mistake, because you'll miss out on those great deals.
It's best to say "no" to a lot of deals. Say "yes" to the types of deals that you know deliver massive return.
February 12, 2013 — You shouldn't plan for the future. You should plan for one of many futures.
The world goes down many paths. We only get to observe one, but they all happen.
In the movie "Back to the Future II", the main character Marty, after traveling decades into the future, buys a sports almanac so he can go back in time and make easy money betting on games. Marty's mistake was thinking he had the guide to the future. He thought there was only one version of the future. In fact, there are many versions of the future. He only had the guide to one version.
Marty was like the kid who stole the answer key to an SAT but still failed. There are many versions of the test.
There are infinite futures. Prepare for them all!
December 29, 2012 — I love the phrase "prove it".
I want to learn how to program. Prove it.
I value honesty. Prove it.
I want to start my own company. Prove it.
It works with "we" too.
We're the best team in the league. Prove it.
We love open source. Prove it.
We're going to improve the transportation industry. Prove it.
Words don't prove anything about you. How you spend your time proves everything.
The only way to accurately describe yourself or your group is to look at how you've spent your time in the past. Anytime someone says something about what they will do or be like in the future, your response should be simple: prove it.
December 23, 2012 — If you are poor, your money could be safer under the mattress than in the bank:
The Great Bank Robbery dwarfs all normal burglaries by almost 10x. In the Great Bank Robbery, the banks are slowly, silently, automatically taking from the poor.
One simple law could change this:
What if it were illegal for banks to automatically deduct money from someone's account?
If a bank wants to charge someone a fee, that's fine, just require they send that someone a bill first.
What would happen to the statistic above, if instead of silently and automatically taking money from people's accounts, banks had to work for it?
Moebs via wayback machine
December 22, 2012 — Entrepreneurship is taking responsibility for a problem you did not create.
It was not Google's fault that the web was a massive set of unorganized pages that were hard to search, but they claimed responsibility for the problem and solved it with their engine.
It was not Dropbox's fault that data loss was common and sharing files was a pain, but they claimed responsibility for the problem and solved it with their software.
It is not Tesla's fault that hundreds of millions of cars are burning gasoline and polluting our atmosphere, but they have claimed responsibility for the problem and are attempting to solve it with their electric cars.
In a free market, like in America or online, you can attempt to take responsibility for any problem you want. That's pretty neat. You can decide to take responsibility for making sure your neighborhood has easy access to great Mexican food. Or you can decide to take responsibility for making sure the whole Internet has easy access to reliable version control. If you do a good job, you will be rewarded based on how big the problem is and how well you solve it.
How big an entrepreneur's company gets is strongly correlated with how much responsibility the entrepreneur wants. The entrepreneur gets to constantly make choices about whether they want their company to take on more and more responsibility. Companies only get huge because their founders say "yes" to more and more responsibility. Oftentimes they can say "yes" to less responsibility, and sell their company or fold it.
Walmart started out as a discount store in the Midwest, but Sam Walton (and his successors) constantly said "yes" to more and more responsibility and Walmart has since grown to take on responsibility for discounting across the world.
Google started out with just search, but look at all the other things they've decided to take responsibility for: email, mobile operating systems, web browsers, social networking, document creation, calendars, and so on. Their founders have said "yes" to more and more responsibility.
Smart entrepreneurship is all about choosing problems you can and want to own. You need to say "no" to most problems. If you say "yes" to everything, you'll stretch yourself too thin. You need to increase your responsibility in a realistic way. You need to focus hard on the problems you can solve with your current resources, and leave the other problems for another company or another time.
December 19, 2012 — For the past year I've been raving about Node.js, so I cracked a huge smile when I saw this question on Quora:
In five years, which language is likely to be most prominent, Node.js, Python, or Ruby, and why? - Quora
For months I had been repeating the same answer to friends: "Node.js hands down. If you want to build great web apps, you don't have a choice, you have to master Javascript. Why then master two languages when you don't need to?"
Javascript+Node.js is to Python and Ruby what the iPhone is to MP3 players--it has made them redundant. You don't need them anymore.
So I started writing this out and expanding upon it. As I was doing this, a little voice in my head was telling me something wasn't right. And then I realized: despite reading Taleb's books every year, I was making the exact mistake he warns about. I was predicting the future without remembering that the future is dominated not by the predictable, but by the unpredictable, the Black Swans.
And sure enough, as soon as I started to imagine some Black Swans, I grew less confident in my prediction. I realized all it would take would be for one or two browser vendors to start supporting Python or Ruby or language X in the browser to potentially disrupt Node.js' major advantage. I don't think that's likely, but it's the type of low probability event that could have a huge impact.
When I started to think about it, I realized it was quite easy to imagine Black Swans. Imagine visiting hackernews in 2013 and seeing any one of these headlines:
It took only a few minutes to imagine a few of these things. Clearly there are hundreds of thousands of low probability events that could come from established companies or startups that could shift the whole industry.
The future is impossible to predict accurately.
All that being said, Node.js kicks ass today (the Javascript thing, the community, the speed, the packages, the fact I don't need a separate web server anymore...it is awesome), and I would not be surprised if Javascript becomes 10-100x bigger in the years ahead, while I can't say the same about other languages. And if Javascript doesn't become that big, worst case is it's still a very powerful language and you'll benefit a lot from focusing on it.
There's a man in the world who is never turned down, wherever he chances to stray; he gets the glad hand in the populous town, or out where the farmers make hay; he's greeted with pleasure on deserts of sand, and deep in the aisles of the woods; wherever he goes there's the welcoming hand--he's The Man Who Delivers the Goods. The failures of life sit around and complain; the gods haven't treated them white; they've lost their umbrellas whenever there's rain, and they haven't their lanterns at night; men tire of the failures who fill with their sighs the air of their own neighborhoods; there's one who is greeted with love-lighted eyes--he's The Man Who Delivers the Goods. One fellow is lazy, and watches the clock, and waits for the whistle to blow; and one has a hammer, with which he will knock, and one tells a story of woe; and one, if requested to travel a mile, will measure the perches and roods; but one does his stunt with a whistle or smile--he's The Man Who Delivers the Goods. One man is afraid that he'll labor too hard--the world isn't yearning for such; and one man is always alert, on his guard, lest he put in a minute too much; and one has a grouch or a temper that's bad, and one is a creature of moods; so it's hey for the joyous and rollicking lad--for the One Who Delivers the Goods! -- Walt Mason, his book (1916)
December 19, 2012 — For a long time I've believed that underpromising and overdelivering is a trait of successful businesses and people. So the past year I've been trying to overdeliver.
But lately I realized that you cannot try to overdeliver. All an individual can do is deliver, deliver, deliver. Delivering is a habit that you get into. Delivering is something you can do.
Overdelivering is only something a team can do. The only way to overdeliver is for a team of people to constantly deliver things to each other, until the group delivers something to other people that no one person could ever imagine doing alone.
But in your role on a team, the key isn't to worry about overdelivering, just get in the habit of delivering.
Be the One who delivers the goods!
December 18, 2012 — One of Nassim Taleb's big recommendations for how to live in an uncertain world is to follow a barbell strategy: be extremely conservative about most decisions, but make some decisions that open you up to uncapped upside.
In other words, put 90% of your time into safe, conservative things but take some risks with the other 10%.
I personally try to follow this advice, particularly with our startup. I think it is good advice. I think it would be swell if our company became a big, profitable, innovation machine someday. But that's not what keeps me up at night.
I'm more concerned about creating the best worst case scenario. I spend most of my time trying to improve the worst case outcomes. Specifically, here's how I think you do this:
Tackle a big problem. Worst case scenario is you don't completely solve it, but you learn a lot in that domain and get acquired/acquihired by a bigger company in the space. That's a great outcome.
Build stuff you want. Worst case scenario is no one uses your product but you. If you aren't a fan of what you build, then you have nothing. If you love your product, that's a great outcome.
Focus on your customers. Make sure your customers are happy and getting what they want. Worst case scenario, you made a couple of people happy. That's a great outcome.
Practice your skills. Worst case scenario is the company doesn't work out, but you are now much better at what you do. That's a great outcome.
Deliver. Worst case scenario is you deliver something that isn't quite perfect but is good and helps people. That's a great outcome.
Avoid debt. If you take on debt or raise money, worst case scenario is you run out of time and you lose control of your destiny. If you keep money coming in, worst case scenario is things take a little longer or if you move on you are not in a hole. That's a great outcome.
Enjoy life. Make sure you take time to enjoy life. Worst case scenario is you spend a few years with no great outcome at work but you have many great memories from life. That's a great outcome.
Then, if you want to make yourself open to positive black swans, you can put 10% of your efforts into things that open you up to those, like recruiting world class talent, pitching and raising money, and tackling bigger markets. But make sure you focus on the conservative things. Risk, in moderation, is a good thing. Risk, in significant amounts, is for the foolish.
December 18, 2012 — My whole life I've been trying to understand how the world works. How do planes fly? How do computers compute? How does the economy coordinate?
Over time I realized that these questions are all different ways of asking the same thing: how do complex systems work?
The past few years I've had the opportunity to spend thousands of hours practicing programming and studying computers. I now understand, in depth, one complex system. I feel I can finally answer the general question about complex systems with a very simple answer.
There is no certainty in life or in systems, but there is probability, and probability compounds.
We can combine the high probability that wheels roll, with the high probability that wood supports loads, to build a wooden chariot that has a high probability of carrying things from point A to point B, which has a high probability of giving us more time to innovate, and so on and so forth...
Everything is built off of probability. You are reading this because of countless compounded probabilities like:
Complex systems consist of many, many simple components with understood probabilities stitched together.
How does a plane fly? The most concise and accurate answer isn't about aerodynamics or lift, it's about probabilities. A plane is simply a huge system of compounded probabilities.
How does a bridge stay up? The answer is not about physics, it's about compounded probabilities.
How do computers work? Compounded probability.
How do cars work? Compounded probability.
The economy? Compounded probability.
Medicine? Compounded probability.
It's probability all the way down.
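The arithmetic behind compounding is worth making concrete. A toy sketch (the reliability numbers below are invented purely for illustration, and it assumes component failures are independent, which real systems rarely guarantee):

```javascript
// Invented component reliabilities, purely for illustration.
const parts = [0.999, 0.995, 0.99, 0.9999]

// A system that needs every part succeeds with the product of the
// individual probabilities -- compounding pulls it below any single part.
const systemReliability = parts.reduce((p, q) => p * q, 1)

// Redundancy compounds the other way: at least one of n independent
// copies works with probability 1 - (1 - p)^n.
function atLeastOneWorks(p, copies) {
  return 1 - Math.pow(1 - p, copies)
}

console.log(systemReliability.toFixed(4)) // lower than any one part
console.log(atLeastOneWorks(0.9, 3)) // three shaky copies get close to 0.999
```

Four parts that are each "almost certain" still compound down to a noticeably less certain whole, while stacking redundant copies compounds back up. That is the whole trick of engineering complex systems.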
December 16, 2012 — When I was a kid I loved reading the Family Circus. My favorite strips were the "dotted lines" ones, which showed Billy's movements over time:
These strips gave a clear narrative of Billy's day. In the strip above, Billy, a fun loving kid, was given a task by his mother to put some letters in the mailbox before the mailman arrives. Billy took the letters, ran into the kitchen, then dashed into the living room, jumped on the couch, sprinted to the dining room, crawled under the dining room table, skipped into the TV room, jumped into the crib, twirled into the foyer, stumbled outside, swung around the light post, then ran to the mailbox.
We know the end result: Billy failed to get to the mailbox in time.
With this picture in mind, let's do a thought experiment.
Let's imagine that right now, once again, Billy and his mom are standing in the laundry room and she's about to give him the mail. What are the odds that Billy gets to the mailbox in time?
Pick a range, and then click here to see the answer.
December 16, 2012 — Concise but not cryptic. e=mc² is concise and not too cryptic. Shell commands, such as chmod -R 755 some_dir, are concise but very cryptic.
Understandable but not misleading. "Computing all boils down to ones and zeros" is understandable and not misleading. "Milk: it does a body good", is understandable but misleading.
Minimal but not incomplete. A knife, spoon and fork is minimal. Just a knife is incomplete.
Broad but selective. A knife, spoon, and fork is broad and selective. A knife, spoon, fork and turkey baster is just as broad but not selective.
Flat but not too flat. 1,000 soldiers is flat but too flat. At least a few officers would be better.
Anthropocentric not presentcentric. Shoes are relevant to people at any time. An iPhone 1 case is only useful for a few years.
Cohesive but flexible. You want the set to match. But you want each item to be independently improvable.
Simple is balanced. It is nuanced, not black and white.
December 14, 2012 — Note is a structured, human readable, concise language for encoding data.
In 1998, a large group of developers were working on technologies to make the web simpler, more collaborative, and more powerful. Their vision and hard work led to XML and SOAP.
XML was intended to be a markup language that was "both human-readable and machine-readable". As Dave Winer described it, "XML is structure, simplicity, discoverability and new power thru compatibility."
SOAP, which was built on top of XML, was intended to be a "Simple Object Access Protocol". Dave said "the technology is potentially far-reaching and precedent-setting."
These technologies allowed developers across the world to build websites that could work together with other websites in interesting ways. Nowadays, most web companies have APIs, but that wasn't always the case.
Although XML and SOAP were a big leap forward, in practice they are difficult to use. It's arguable whether they are truly "human-readable" or "simple".
Luckily, in 2001 Douglas Crockford specified a simpler, more concise language called JSON. Today JSON has become the de facto language for web services.
Early last year, one idea that struck me was that subtle improvements to underlying technologies can have exponential impact. Fix a bug in subversion and save someone hours of effort, but replace subversion and save someone weeks.
The switch from XML to JSON had made my life so much easier, I wondered if you could extract an even simpler alternative to JSON. JSON, while simple, still takes a while to learn, particularly if you are new to coding. Although more concise than XML, JSON has at present six types and eight syntax characters, all of which can easily derail developers of all skill levels. Because whitespace is insignificant in JSON, it quickly becomes messy. These are all relatively small details, but I think perhaps getting the details right in a new encoding could make a big difference in developers' lives.
After almost two years of tinkering, and with a lot of inspiration from JSON, XML, HAML, Python, YAML, and other languages, we have a new simple encoding that I hope might make it easier for people to create and use web services.
We dubbed the encoding Note, and have put an early version with Javascript support up on Github. We've also put out a quick demonstration site that allows you to interact with some popular APIs using Note.
Note is a text based encoding that uses whitespace to give your data structure. Note is simple: there are only two syntax characters (newline and space). It is concise--not a single keystroke is wasted (we use a single space for indentation--why use two when one is sufficient?). Note is neat: the meaningful whitespace forces adherence to a clean style. These features make Note very easy to read and to write.
Despite all this minimalism, Note is very powerful. Each note is a hash consisting of name/value pairs. Note is also recursive, so each note can be a tree containing other notes.
Note has only two types: strings and notes. Every entity in Note is either a string or another note. But Note is infinitely extendable. You can create domain specific languages on top of Note that support additional types as long as you respect the whitespace syntax of Note.
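To make the shape of the encoding concrete, here is a rough sketch in Javascript. The field names in the example document are invented, and this toy parser handles only the two features described above--name/value pairs separated by a single space, and nesting via one space of indentation. It is not the actual implementation on Github:

```javascript
// A hypothetical Note document (field names invented for illustration).
const doc = [
  "title Hello World",
  "author",
  " name Breck",
  " email breck@example.com"
].join("\n")

// Minimal recursive parser sketch: each line is "name value"; a line
// followed by indented lines becomes a nested note.
function parseNote(text) {
  const lines = text.split("\n")
  const note = {}
  for (let i = 0; i < lines.length; i++) {
    const line = lines[i]
    if (line.startsWith(" ")) continue // child line, handled recursively below
    const space = line.indexOf(" ")
    const name = space === -1 ? line : line.slice(0, space)
    // Collect any immediately following indented lines as children.
    const children = []
    let j = i + 1
    while (j < lines.length && lines[j].startsWith(" ")) {
      children.push(lines[j].slice(1)) // strip one space of indentation
      j++
    }
    note[name] = children.length
      ? parseNote(children.join("\n")) // a nested note
      : (space === -1 ? "" : line.slice(space + 1)) // a string value
  }
  return note
}

parseNote(doc)
// → { title: "Hello World", author: { name: "Breck", email: "breck@example.com" } }
```

Notice there is nothing to escape and nothing to balance: all of the structure comes from newlines and leading spaces.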
This is a very brief overview of the thinking behind Note and some of its features. I look forward to the months ahead as we start to implement Note on sites across the web and demonstrate some of the neat features and capabilities of the encoding.
Please feel free to email me with any questions or feedback you may have, as well as if you'd be interested in contributing.
November 26, 2012 — For todo lists, I created a system I call planets and pebbles.
I label each task as a planet or a pebble. Planets are super important things. It could be helping a customer complete their project, meeting a new person, finishing an important new feature, closing a new sale, or helping a friend in need. I may have 20 pebbles that I fail to do, but completing one planet makes up for all that and more.
I let the pebbles build up, and I chip away at them in the off hours. But the bulk of my day I try to focus on the planets--the small number of things that can have exponential impact. I don't sweat the small stuff.
I highly recommend this system. We live in a power law world, and it's important to practice the skill of predicting what things will prove hugely important, and what things will turn out to be pebbles.
November 25, 2012 — I published 55 essays here the first year. The second and third years combined, that number nosedived to 5.
What caused me to stop publishing?
It hasn't been a lack of ideas. All my essays start with a note to self. I have just as many notes to self nowadays as I did back then.
It hasn't been a lack of time. I have been working more but blogging doesn't take much time.
It's partly to do with standards. I've been trying to make higher quality things. I used to just write for an hour or two and hit publish. Now I'm more picky.
I've also become somewhat disappointed with the essay form. I am very interested in understanding systems, and I feel words alone don't explain systems well. So I've been practicing my visual and design skills. But I'm basically a beginner and output is slow.
The bottom line is I want to publish more. It forces me to think hard about my opinions, and it opens me up to advice from other people. I think this blog has helped me be less wrong about a lot of things. So here's to another fifty posts.
November 20, 2012 — "Is simplicity ever bad?" If you had asked me this a year ago, I probably would have called you a fucking moron for asking such a dumb question. "Never!", I would have shouted. Now, I think it's a fair question. Simplicity has its limits. Simplicity is not enough, and if you pursue simplicity at all costs, that can be a bad thing. There's something more than simplicity that you need to be aware of. I'll get to that in a second, but first, I want to backtrack a bit and state clearly that I do strongly, strongly believe and strive for simplicity. Let me talk about why for a second.
Simple products are pleasant to use. When I use a product, and it is easy to use, and it's quick to use, I love that. I fucking hate things that are not as simple as possible and waste people's time or mental energy as a result. For example, to file my taxes with the IRS, I cannot go to the IRS' website. It's much more complex than that. I hate that. It is painful. Complex things are painful to use. Simple things are pleasant to use. They make life better. This is, of course, well known to all good designers and engineers.
Simple things are also more democratic. When I can understand something, I feel smart. I feel empowered. When I cannot understand something, I feel stupid. I feel inferior. Complex things are hard to understand. The response shouldn't be to spend a long time learning the complex thing, it should be to figure out how to make the complex thing simpler. When you do that, you create a lot of value. If I can understand something, I can do something. When we make things simpler, we empower people. Oftentimes I wonder if being a doctor would only take 2 years if medicine abandoned Latin terms for a simpler vocabulary.
This whole year, and well before that, I've been working with people trying to make the web simpler. The web is really complex. You need to know about HTML, CSS, Javascript, DNS, HTTP, DOM, Command Line, Linux, Web Servers, Databases, and so on. It's a fucking mess. It's fragmented to all hell as well. Everyone is using different languages, tools, and platforms. It can be a pain.
Anyway, we've been trying to make a simple product. And we've been trying to balance simplicity with features. And that's been difficult. Way more difficult than I would have predicted.
The thing is, simpler is not always better. A fork is simpler than a fork, knife, and spoon, but which would you rather have? The set is better. Great things are built by combining distinct, simple things together. If you took away the spoon, you'd make the set simpler, but not better. Which reminds me of that Einstein quote:
Make things as simple as possible, but not simpler.
I had always been focused on the first part of that quote. Make things as simple as possible. Lately I've thought more about the second part. Sometimes by trying to make things too simple you make something a lot worse. Often, less is more, but less can definitely be less.
People rave about the simplicity of the iPhone. And it is simple, in a sense. But it is also very complex. It has a large screen, 2 cameras, a wifi antenna, a GPS, an accelerometer, a gyroscope, a cell antenna, a gpu, cpus, memory, a power unit, 2 volume buttons, a power button, a home button, a SIM card slot, a mode switch, and a whole lot more. Then the software inside is another massive layer of complexity. You could try to make the iPhone simpler by, for example, removing the volume buttons or the cameras, but that, while increasing the simplicity, would decrease the "setplicity". It would remove a very helpful part of the set which would make the whole product worse.
Think about what the world would be like if we only used half of the periodic table of elements--it would be less beautiful, less enjoyable, and more painful.
Simplicity is a great thing to strive for. But sometimes cutting things out to make something simpler can make it worse. Simplicity is not the only thing to maximize. Make sure to balance simplicity with setplicity. Don't worry if you haven't reduced things to a singularity. Happiness in life is found by balancing amongst a set of things, not by cutting everything out.
October 20, 2012 — I love to name things.
I spend a lot of time naming ideas in my work. At work I write my code using a program called TextMate. TextMate is a great little program with a pleasant purple theme. I spend a lot of time using TextMate. For the past year I've been using TextMate to write a program that now consists of a few hundred files. There are thousands of words in this program. There are hundreds of objects and concepts and functions that each have a name. The names are super simple like "Pen" for an object that draws on the screen, and "delete" for a method that deletes something. Some of the things in our program are more important than others and those really important ones I've renamed dozens of times searching for the right fit.
There's a feature in TextMate that lets me find and replace a word across all 400+ files in the project. If I am unhappy with my word choice for a variable or concept, I'll think about it for weeks if not months. I'll use Thesaurus.com, I'll read about similar concepts, I'll run a subconscious search for the simplest, best word. When I find it, I'll hit Command+Shift+F in TextMate and excitedly and carefully execute a find and replace across the whole project. Those are some of my favorite programming days--when I find a better name for an important part of the program.
Naming a thing is like creating life from inorganic material in a lab. You observe some pattern, combine a bunch of letters to form a name, and then see what happens. Sometimes your name doesn't fit and sits lifeless. But sometimes the name is just right. You use it in conversation or in code and people instantly get it. It catches on. It leaves the lab. Your name takes a life of its own and spreads.
Words are very contagious. The better the word, the more contagious it can be. Like viruses, small differences in the quality of a word can have exponential differences in its spread. So I like to spend time searching for the right words.
Great names are short. Short names are less effort to communicate. The quality of a name drops exponentially with each syllable you add. Coke is better than Coca-Cola. Human is better than homo sapiens.
Great names are visual. A good test of whether a name is accurate is whether you can draw a picture of the name that makes sense. Net is better than cyberspace. If you drew a picture of the physical components of the Internet, it would look a lot like a fishing net. Net is a great name.
Great names are used for great ideas. You should match the quality of a name to the quality of the idea compared to the other ideas in the space. This is particularly applicable in the digital world. If you are working on an important idea that will be used by a lot of people in a broad area, use a short, high quality name. If you are working on a smaller idea in that same area, don't hog a better name than your idea deserves. Linux is filled with great programs with bad names and bad programs with great names. I've been very happy so far with my experience with NPM, where it seems programmers who are using the best names are making their programs live up to them.
I think the exercise of naming things can be very helpful in improving things. Designing things from first principles is a proven way to arrive at novel, sometimes better ideas. Attempting to rename something is a great way to rethink the thing from the ground up.
For example, lately I've been trying to come up with a way to explain the fundamentals of computing. A strategy I recently employed was to change the names we use for the 2 boolean states from True and False to Absent or Present. It seems like it gets closer to the truth of how computers work. I mean, it doesn't make sense to ask a bit whether it is True or False. The only question an electronic bit can answer is whether a charge is present or absent. When we compare variable A to variable B, the CPU sets a flag in the comparison bit and we are really asking that bit whether a charge is present.
What I like about the idea of using the names Present and Absent is that it makes the fundamentals of computing align with the fundamentals of the world. The most fundamental questions in the world are about being--about existence. Do we exist? Why do we exist? Will we exist tomorrow? Likewise, the most fundamental question in computing is not whether or not there are ones and zeroes, it's whether or not a charge exists. Does a charge exist? Why does that charge exist? Will that charge exist in the next iteration? Computing is not about manipulating ones and zeroes. It's about using the concept of being, of existence, to solve problems. Computing is about using the concept of the presence or absence of charge to do many wonderful things.
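The renaming can be sketched in code. This is only a toy illustration of the vocabulary shift--not how any real CPU exposes its flags:

```javascript
// A bit can answer exactly one question: is a charge present or absent?
const ABSENT = 0
const PRESENT = 1

// Sketch of the comparison described above: comparing A to B sets a
// flag bit, and branching asks that bit whether a charge is present.
function compare(a, b) {
  return a === b ? PRESENT : ABSENT
}

const flag = compare(42, 42)
const branchTaken = flag === PRESENT // ask: does a charge exist?
```

Nothing changes mechanically, but asking "is a charge present?" reads closer to what the hardware is actually doing than asking "is it true?".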
March 30, 2011 — Railay is a tiny little beach town in Southern Thailand famous for its rock climbing. I've been in Railay for two weeks. When the weather is good, I'm outside rock climbing. When the weather is bad, I'm inside programming. So naturally I've found myself comparing the two. Specifically I've been thinking about what I can take away from my rock climbing experience and apply to my programming education.
Here's what I've come up with.
1. You should always be pushing yourself. Each day spent climbing I've made it to a slightly higher level than the previous day. The lazy part of me has then wanted to just spend one day enjoying this new level without pushing myself further. Luckily I've had a great climbing partner who's refused that and has forced me to reach for the next level each day. In both rock climbing and programming you should always be reaching for that new level. It's not easy, you have to risk a fall to reach a new height, but it's necessary if you want to become good. In programming, just like in climbing, you should be tagging along with the climbers at levels above you. That's how you get great. Of course, don't forget to enjoy the moment too.
2. Really push yourself. In rock climbing you sometimes have these points where you're scared--no, where you're fucking petrified--that you're going to fall and get hurt or die and you're hanging on to the rock for dear life, pouring sweat, and you've got to overcome it. In programming you should seek out moments like these. It will never be that extreme of course, but you should find those spots where you are afraid of falling and push yourself to conquer them. It might be a project whose scope is way beyond anything you've attempted before, or a task that requires advanced math, or a language that scares the crap out of you. My climbing instructor here was this Thai guy named Nu. He's the second best speed climber in Thailand and has been climbing for fifteen years. The other day I was walking by a climbing area and saw Nu banging his chest and yelling at the top of his lungs. I asked a bystander what was going on and he told me that Nu was struggling with the crux of a route and was psyching himself up to overcome it. That's why he's a master climber. Because he's been climbing for over fifteen years and he's still seeking out those challenges that scare him.
3. There's always a next level. In rock climbing you have clearly defined levels of difficulty that you progress through such as 5 to 8+ or top rope to lead+. In programming the levels are less defined and span a much wider range but surely exist. You progress from writing "hello world" to writing compilers and from using notepad to using vim or textmate or powerful IDEs. You might start out writing a playlist generator and ten years later you may be writing a program that can generate actual symphonies, but there still will be levels to climb.
4. Climbing or programming without teachers is very inefficient. There are plenty of books on rock climbing. But there's no substitute for great teachers. You can copy what you see in books and oftentimes you'll get many parts right, but a teacher is great for pointing out what you're doing wrong. Oftentimes you just can't tell what the key concepts and techniques to focus on are. You might not focus on something that's really important such as using mostly legs in climbing or not repeating yourself in programming. A good teacher can instantly see your mistakes and provide helpful feedback. Always seek out great teachers and mentors whether they be friends, coworkers, or professional educators.
5. You learn by doing; practice is key. Although you need teachers and books to tell you what to do, the only way to learn is to do it yourself, over and over. It takes a ton of time to master rock climbing or programming and although receiving instruction plays an important part, the vast majority of the time it takes to learn will be spent practicing.
6. Breadth, not only depth, is important. Sometimes to get to the next level in rock climbing you need to get outside of rock climbing. You may need to take up yoga to gain flexibility or weightlifting to gain strength. Likewise in programming sometimes you need to go sideways to go up. If you want to master Rails, you'll probably want to spend time outside of it and work on your command line and version control skills. Programming has a huge number of silos. To go very deep in any one you have to gain competence in many.
7. People push the boundaries. Both rock climbing and programming were discovered by people and people are continually pushing the boundaries of both. In rock climbing advanced climbers are discovering new areas, bolting new routes, inventing new equipment, perfecting new techniques, and passing down new knowledge. Programming is the most cumulative of all human endeavors. It builds on the work of tens of millions of people and new "risk takers" are constantly pushing the frontiers (today in areas like distributed computing, data mining, machine learning, parallel processing and mobile amongst others).
8. Embrace collaboration. The rock climbing culture is very collaborative much like the open source culture. Rock climbing is an inherently open source activity. Everything a climber does and uses is visible in the open. This leads to faster knowledge transfer and a safer activity. Likewise, IMO open source software leads to a better outcome for all.
9. Take pride in your work. In rock climbing when you're the first to ascend a route your name gets forever attached to that route. In programming you should be proud of your work and add your name to it. Sometimes I get embarrassed when I look at some old code of mine and realize how bad it is. But then I shrug it off because although it may be bad by my current standards, it represents my best honest effort at the time and so there's nothing to be ashamed of. I'm sure the world's greatest rock climbers have struggled with some easy routes in their day.
10. Natural gifts play a part. Some people who practiced for 5,000 hours will be worse than some people who practiced for only 2,000 hours due to genetics and other factors. It would be great if how good you were at something was determined totally by how many hours you've invested. But it's not. However, at the extremes, the number of hours of practice makes a huge difference. The absolute best climbers spend an enormous amount of time practicing. In the middle of the pack, a lot of the difference is just due to luck. I've worked with a wide range of programmers in my (short so far) career. I've worked with really smart ones and some average ones. Some work hard and others aren't so dedicated. The best by far, though, possess both the intelligence and the dedication. And I'd probably rather work with the dedicated programmer of average smarts than the brilliant but lazy one.
March 5, 2011 — A good friend passed along some business advice to me a few months ago. "Look for a line," he said. Basically, if you see a line out the door at McDonald's, start Burger King. Lines are everywhere and are dead giveaways for good business ideas and good businesses.
Let's use Groupon as a case study for the importance of lines. Groupon scoured Yelp for the best businesses in its cities--the businesses that had virtual lines of people writing positive reviews--and created huge lines for these businesses with their discounts. Other entrepreneurs saw the number of people lining up to purchase things from Groupon and created a huge line of clones. Investors saw other investors lining up to buy Groupon stock and hopped in line as well. Business is all about lines.
In every country we travel to I look around for lines. It's a dead giveaway for finding good places to eat, fun things to do, amazing sites to see. If you want to start a business, look for lines and either create a clone or create an innovation that can steal customers from that line. If you see tons of people lining up to take taxis, start a taxi company. Better yet, start a bus.
Succeeding in business is all about creating lines. Apple creates lines of reporters looking to write about their next big product. Customers line up outside their doors to buy their next big product. Investors line up to pump money into AAPL. Designers and engineers line up to work there.
If you are the CEO of a company, your job is simply to create lines. You want customers lining up for your product, investors lining up to invest, recruits lining up to apply for jobs. It's very easy to measure how you're doing. If you look around and don't see any lines, you gotta pick it up.
March 4, 2011 — I haven't written in a long while because i'm currently on a long trip around the world. at the moment, we're in indonesia. one thing that really surprised me was that despite our best efforts to do as little planning as possible, we were in fact almost overprepared. i've realized you can do an around the world trip with literally zero planning and be perfectly fine. you can literally hop on a plane with nothing more than a passport, license, credit card, and the clothes on your back and worry about the rest later. i think a lot of people don't make a journey like this because they're intimidated not by the trip itself, but by the planning for the trip. i'm here to say you don't need to plan at all to travel the world (alas, it would be a lot harder if you were not born in a first world country). here's my guide for anyone that might want to attempt to do so. every step is highlighted in bold. adjust accordingly for your specific needs and desires.
Set a savings goal. you'll need money to travel around the world, and the more money you have, the easier, longer, and more fun your journey will be.
Save, save, save. make sure you save enough so that when your trip ends you won't come home broke. $12,000 would be a large enough amount to travel for a long time and still come back with money to get you resettled easily.
Once you've saved half of your goal, buy your first one way plane ticket to a cheap, tourist friendly country. bali, indonesia or bangkok, thailand would be terrific first stops, amongst others. next, get a paypal account with a paypal debit card. this card gives you 1.5% cash back on all purchases, only charges a $1 atm fee, and charges no foreign transaction fees at all. the 1.5% cash back more than offsets the 1% fee Mastercard charges for interchange fees. if you don't have them already, get a drivers license and a passport with at least 1 year left before expiration. get a free google voice number so people can still SMS and leave you voicemails without paying a monthly cell phone bill. if you need glasses, contacts, prescription medication, or other custom things, stock up on those.
Settle your affairs at home--housing, job, etc. now, your planning is DONE! you have everything you need to embark on a trip around the world.
Get on the plane with your passport, license, paypal debit card, and $100 US Cash. you don't need anything else--not even a backpack! you'll pick up all that later.
Once you've arrived in bali (or another similar locale), go to a large, cheap shopping district (kuta square in bali, for example). if you arrived late, find a cheap place to crash first and hit the market first thing in the morning. look for backpackers at the airport or ask someone who works there for cheap accommodation recommendations.
Once you're at the market, you've got a lot to buy. visit an ATM to take money out of your PayPal account in the local currency. if you want, space out your purchases over a few days. you'll want to buy a lonely planet/rough guides for your current country, a solid backpack (get a good one), bug spray with deet, sun tan lotion, a toothbrush, toothpaste, deodorant, nail clippers, tweezers, a swiss army knife, pepto bismol, tylenol, band aids, neosporin, bathing suit, some clothes for the current weather, shoes/flip flops, a cheap cell phone and SIM card, a netbook, a power adapter, and a camera and memory card. you now have pretty much everything you need for your trip and you probably spent less than half of what you would have had to spend in the states. you may want some other things like a sleeping bag, tent, portable stove, goggles, etc., depending on what you want to do on your trip.
Now, talk to locals and other travelers for travel recommendations. that plus your lonely planet and maybe some google searching and you'll have all the tools you need to plan where to go, what to do and what to eat.
Hit up an internet cafe to email and print a copy of your drivers license, passport, and credit card. it will be dirt cheap. get some passport photos made for countries that require a photo for visas. then sign up for skype and facebook (if you're the one person in the world who hasn't done this yet) to make cheap phone calls and keep in touch with family and friends.
Plan your trip one country at a time. every few days, check flight prices for the next few legs of your trip. you can sometimes get amazingly cheap deals if you check prices frequently and are flexible about when and where you fly. use sites like kayak, adioso, hotels.com, airbnb, and hostelworld to find cheap flights and places to stay, especially in expensive countries. in cheap countries, lonely planet and simply asking around often works great for finding great value hotels. also in expensive cities, find the local groupon clones and check them often for great excursion and meal deals. finally, you might want to get travel insurance from a site like world nomads.
That's it. enjoy your trip!
September 18, 2010 — I was an Economics major in college but in hindsight I don't like the way it was taught. I came away with an academic, unrealistic view of the economy. If I had to teach economics I would try to explain it in a more realistic, practical manner.
I think there are two big concepts that, if you understand them, will give you a better grasp of the economy than most people have.
The first idea is that the economy has a pulse and it's been beating for thousands of years. The second is that the economy is like a brain and if you visualize it in that way you can make better decisions depending on your goals.
Thousands of years ago people were trading goods and services, knitting clothes, and growing crops. The economy slowly came to life probably around 15 or 20 thousand years ago and it's never stopped. Although countless kingdoms, countries, industries, companies, families, workers, and owners have come and gone, this giant invisible thing called the economy has kept on trucking.
And not much has changed.
Certainly in 2,000 B.C. there was a lot more bartering and a lot less Visa, but most of the concepts that describe today's economy are the same as back then. You had industries and specialization, rich and poor, goods and services, marketplaces and trade routes, taxes and government spending, debts and investments.
Today, the economy is more connected. It covers more of the globe. But it's still the same economy that came to life thousands of years ago. It's just grown up a bit.
What are the implications of this? I think the main thing to take away from this idea is that we live in a pretty cool time where the economy has matured for thousands of years. It has a lot to offer if we understand what it is and how to use it. Which brings me to my next point.
The second big idea I try to keep in mind about the economy is that it's like a neural network. It's really hard to form a model of what the economy really looks like, but I think a great analogy is the human brain.
At a microscopic level, the brain is composed of around 100 billion neurons. The economy is currently composed of around 7 billion humans.
The average neuron is directly connected to 1,000 other neurons via synapses. Some neurons have more connections, some have less. The average human is directly connected to 200 other humans in their daily economic dealings. Some more, some less.
Neurons and synapses are not distributed evenly in the brain. Some are in relatively central connections, some are on the periphery. Likewise, some humans operate in critical parts of the economy (London or Japan, for example), while many live in the periphery (Driggs, Idaho or Afghanistan, for example).
If we run with this analogy that the economy is like the human brain, what can we take home from that?
If you want a high paying job then you should think carefully about where you plug yourself into the network/economy. You want to plug yourself in where there's a lot of action. You want to plug yourself into a "nerve center". These nerve centers can be certain geographies, certain industries, certain companies, etc. For instance, plugging yourself into an investment banking job on Wall Street will bring you more money than teaching surfing in Maui. Now, if you're born in the periphery, like a third world nation, you might be SOL. It's tremendously easier to plug yourself into a nerve center if you're born in the right place at the right time.
Now if you don't want a high paying job there are more choices available to you. Most of the economy is not a nerve center. It's also a lot easier to move from a high paying spot in the economic brain to a lower paying one.
When you start a business, you're basically a neuron with no synapses living outside the brain. You've got to inject yourself into the brain and build as many synapses as possible. When you start a business, the brain ("the economy") doesn't give a shit about you. You've got to plug yourself in and make yourself needed. You've got to get other neurons (people, companies, governments) to depend on you. You can do this through a combination of hard work, great products/services, great sales, etc.
Now one thing I find interesting is that a lot of people say entrepreneurs are rebels. This is sort of true, however, for a business to be successful the business has to conform a lot for the economy to build connections to it. If you want to be a nerve center, you've got to make it easy for other parts of the economy to connect to you. You can't be so different that you are incompatible with the rest of the economy. If you want to be a complete rebel, you can do that on the periphery, but you won't become a big company/nerve center.
Once you are "injected" into the economy, it's hard to get dislodged. If a lot of neurons have a lot of synapses connected to you, those will only die slowly. For a long time business will flow through you. This explains why a company like AOL can still make a fortune.
In conclusion, the economy is a tremendous creature that can provide you with a lot if you plug yourself in. It's been growing for thousands of years and has a lot to offer. You can also choose to stay largely unplugged from it, and that's okay too.
August 25, 2010 — Warren Buffett claims to follow an investment strategy of staying within his "circle of competence". That's why he doesn't invest in high tech--it's outside his circle.
I think this is good advice. The tricky part is to figure out where to draw the circle.
Here are my initial thoughts:
August 25, 2010 — I have a feeling critical thinking gets the smallest share of the brain's resources. The trick is to critically think about things, come to conclusions, and turn those conclusions into habits. The subconscious, habitual mind is much more powerful than the tiny little conscious, critically thinking mind.
If you're constantly using the critical thinking part of your mind, you're not using the bulk of your mind. You're probably accomplishing a lot less than you could be.
Come to conclusions and build good habits. Let your auto pilot take over. Then occasionally come back and revisit your conclusions.
August 25, 2010 — I've been working on a fun side project of categorizing things into Mediocristan or Extremistan (inspired by NNT's book The Black Swan).
I'm trying to figure out where intelligence belongs. Bill Gates is a million times richer than many people; was Einstein a million times smarter than a lot of people? It seems highly unlikely. But how much smarter was he? Was he 1,000x smarter than the average joe? 100x smarter?
I'm not sure. The brain is a complex thing and I haven't figured out how to think about intelligence yet.
Would love to hear what other people think. Shoot me an email!
August 25, 2010 — Maybe I'm getting old, but I'm starting to think the best way to "change the world" isn't to bust your ass building companies, inventing new machines, running for office, promoting ideas, etc., but to simply raise good kids. Even if you are a genius and can invent amazing things, by raising a few good kids their combined output can easily top yours. Nerdy version: you are a single-core CPU and can't match the output of a multicore machine.
I'm not saying I want to have kids anytime soon. I'm just realizing after spending time with my family over on Cape Cod, that even my dad, who is a harder worker than anyone I've ever met and has made a profound impact with his work, can't compete with the output of 4 people (and their potential offspring), even if they each work only 1/3 as hard, which is probably around what we each do. It's simple math.
So the trick to making a difference is to sometimes slow down, spend time raising good kids, and delegate some of the world saving to them.
August 25, 2010 — Genetics, aka nature, plays the dominant role in predicting most aspects of your life, in my estimation.
Across every dimension in life your genes are both a glass ceiling--preventing you from reaching certain heights--and a cement foundation--making it unlikely you'll hit certain lows. How tall/short you will be, how smart/dumb you will be, how mean/nice you will be, how popular/lonely you will be, how athletic/clumsy, how fat/skinny, how talkative/quiet, how long/short you'll live, and so forth.
By the time you are born, your genes, place of birth, year of birth, parents--they're all set in stone, and the constraints on your life are largely in place. That's an interesting thought.
Nurture plays a huge role in making you, of course. Being born with great genes is irrelevant if you are malnourished, don't get early education, etc. But nurture cannot overcome nature. Our DNA is not at all malleable and no one knows if it ever will be. Nonetheless, it makes no sense to complain about nature. It is up to you to make the most of your starting hand. On the other hand, let us not be quick to judge others. I make that mistake a lot.
I think the bio/genome field will be the most interesting industry come 2025 or so.
August 25, 2010 — Doctors used to recommend leeches to cure a whole variety of illnesses. That seems laughable today. But I think our recommendations today will be laughable to people in the future.
Recommendations work terribly at the individual level but decently on average.
We are a long, long way from making good individual recommendations. You won't get good individual recommendations until your individual genome is taken into account. And even then it will take a while. We may never get to the point where we can make good individual recommendations.
So many cures and medicines work for a certain percentage of people, but for some people they can have detrimental or even fatal effects. People rave about certain foods, exercises, and so forth, without considering how differences in genetics can have a huge role.
People are quite similar, but they are also quite different and react to different things in different ways. I think we are a long way away from seeing breakthroughs in recommendations.
Recommendations are great business, but I think we're 2 or 3 orders of magnitude away from where they could be, and it could take decades (or never) to reach those levels.
August 25, 2010 — Ruby is an awesome language. I've come to the conclusion that I enjoy it more than Python for the simple reason that whitespace doesn't matter.
Python is a great language too, and I have more experience with it, and the whitespace thing is a silly gripe. But I've reached a peak with PHP and am looking to master something new. Ruby it is.
August 25, 2010 — I've been very surprised to discover how unpredictable the future is. As you try to predict farther out, your error margins grow exponentially bigger until you're "predicting" nothing specific at all.
Apparently this is because many things in our world are "chaotic". Small errors in your predictions get compounded over time. 10 day weather forecasts are notoriously inaccurate despite the fact that teams of the highest IQ'd people on earth have been working on them for years. I don't understand the math behind chaos but I believe in the basic ideas.
I can correctly predict whether or not I'll work out tomorrow with about 85% accuracy. All I need to do is look at whether I worked out today and whether I worked out yesterday. If I worked out those 2 days, odds are about 90% I will work out tomorrow. If I worked out yesterday but didn't work out today, odds are about 40% I will work out tomorrow. If I worked out neither of those two days, odds are about 20% I'll work out tomorrow.
However, I can't predict with much accuracy whether or not I'll work out 30 days from now. That's because the two biggest factors are whether I work out 29 days from now and 28 days from now. And whether I work out 29 days from now depends most on the previous 2 days. If I'm wrong in my predictions about tomorrow, that error will compound and throw me off. It's hard to make an accurate prediction about something so simple. Imagine how hard it is to make a prediction about a non-binary quantity.
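The two-day rule above is essentially a second-order Markov chain, and the compounding of error is easy to see by just propagating the probabilities forward. A minimal sketch using the post's numbers (the post gives three of the four conditional probabilities; the fourth is my own guess, marked below):

```python
# Second-order Markov chain for daily workouts.
# State is (worked_out_yesterday, worked_out_today).
P_WORKOUT = {
    (True, True): 0.90,   # worked out both days -> 90% (from the post)
    (True, False): 0.40,  # yesterday yes, today no -> 40% (from the post)
    (False, False): 0.20, # neither day -> 20% (from the post)
    (False, True): 0.70,  # today yes, yesterday no -> assumed for illustration
}

def prob_workout_in(days, yesterday, today):
    """Exact probability of a workout `days` days from now, found by
    propagating the distribution over (yesterday, today) states."""
    dist = {(yesterday, today): 1.0}
    for _ in range(days - 1):
        nxt = {}
        for (y, t), p in dist.items():
            p_w = P_WORKOUT[(y, t)]
            nxt[(t, True)] = nxt.get((t, True), 0.0) + p * p_w
            nxt[(t, False)] = nxt.get((t, False), 0.0) + p * (1 - p_w)
        dist = nxt
    return sum(p * P_WORKOUT[state] for state, p in dist.items())

for start in [(True, True), (False, False)]:
    print(start, round(prob_workout_in(1, *start), 2),
          round(prob_workout_in(30, *start), 2))
```

Tomorrow's prediction is sharp (0.9 vs 0.2 depending on the start), but by day 30 both starting conditions print roughly 0.67, the long-run rate of the chain, which says nothing about where you started.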
Weather, the stock market, individual stock prices, the next popular website, startup success, box office hits, etc. Basically dynamic, complex systems are completely resistant to predictions.
When making predictions you generally build a model--consciously or unconsciously. For instance, in predicting my future workouts I can make a spreadsheet (or just a "mental spreadsheet") where I come up with some inputs that are used to predict the future workout. My inputs might be whether I worked out today and whether it will rain. These are the "on model" factors. But all models leave things out that may or may not affect the outcome. For example, it could be sunny tomorrow and I could have worked out today, so my model would predict a workout tomorrow. But then I might get injured on my way to the gym--an "off model" risk that I hadn't taken into account.
The world is complex and impossible to predict accurately. But people don't get this. They think the world is easier to explain and predict than it really is. And so they demand predictions. And so people provide them, even though these explanations and predictions are bogus. Feel free to make or listen to long term predictions for entertainment, but don't believe any long term predictions you hear. We're a long way (possibly an infinitely long way) from making accurate predictions about the long run.
What if you have inside information? Should you then be able to make better predictions than others? Let's imagine for a moment that you were alive in 1945 and you were trying to predict when WWII would end. If you were like 99.99999+% of the population, you would have absolutely no idea that a new type of bomb was just invented and about to be put to use. But if you were one of the few who knew about the bomb, you might have been a lot more confident that the war was close to an end. Inside information gives you a big advantage in predicting the future. If you have information and can legally "bet on that", go for it. However, even the most connected people only have the inside scoop on a handful of topics, and even if you know something other people don't it's very hard to predict the scale (or direction) of an event's effect.
My general advice is to be ultra conservative about the future and ultra bullish on the present. Plan and prepare for the worst of days--but without a pessimistic attitude. Enjoy today and make safe investments for tomorrow.
August 23, 2010 — Your most recent experiences affect you the most. Reading this essay will affect you the most today but a week from now the effect will have largely worn off.
Experiences have a half-life. The effect decays over time. You might watch Almost Famous, run out to buy a drumset, start a band, and then a month later those drums could be gathering dust in your basement. You might read Shakespeare and start talking more lyrically for a week.
Newer experiences drown out old ones. You might be a seasoned Rubyist and then read an essay espousing Python and suddenly you become a Pythonista.
All genres of experiences exhibit the recency effect. Reading books, watching movies, listening to music, talking with friends, sitting in a lecture--all of these events can momentarily inspire us, influence our opinions and understanding of the world, and alter our behaviors.
If you believe in the recency effect you can see the potential benefit of superstitious behavior. For instance, I watched "The Greatest Game Ever Played", a movie about golf, and honest to god my game improved by 5 strokes the next day. A year later when I was a bit rusty, I watched it again and the effect was similar (though not as profound). When I want to write solid code, I'll read some quality code first for the recency effect.
If you want to do great work, set up an inspiring experience before you begin. It's like taking a vitamin for the mind.
August 23, 2010 — Note: Sometimes I'll write a post about something I don't understand at all. I am not a neuroscientist and have only the faintest understanding of the brain so this is one of those times. Reading this post could make you dumber. But occasionally writing from ignorance leads to good things--like the time I wrote about Linear Algebra and got a number of helpful emails better explaining the subject to me.
My question is: how are the brain's resources allocated for its different tasks?
In a restaurant the majority of the workers are involved with serving, then a smaller number of employees are involved with cooking, and still a smaller number of people are involved with managing.
The brain has a number of functions: vision, auditory, speech, mathematics, locomotion, and so forth. Which function uses the most resources? Which function uses the least?
I have no idea, but my guess is below.
I'm probably quite far off, but I thought it was an interesting question to think about. Now I'll go see if I can dig up some truer numbers.
August 11, 2010 — I've had some free time the past two weeks to work on a few random ideas I've had.
They all largely involve probability/statistics and have no practical or monetary purpose. If I was a painter and not a programmer you might call them "art projects".
One project deals with categorizing data into "Extremistan" and "Mediocristan". Taleb's books, the Black Swan and Fooled by Randomness, list a number of different examples for each, and I thought it would be interesting to extend that categorization further.
The second project I'll expand on a bit more here.
Warren Buffett coined the idea of the "ovarian lottery"--his basic idea is that the most important factor in determining how you end up in life is your birth. You either are born "lucky"--in a rich country, with no major diseases, to an affluent member of society, etc.--or you aren't. Other factors like hard work, education, smart decision making and so forth have a role, but play a relatively tiny role in determining what your life will be like.
I thought this was a very interesting idea and so I started building a program that lets you be "born again" and see how things turn out. When you click "Play", theOvarianLottery will show you:
I've encountered two major surprises with theOvarianLottery.
First, I thought theOvarianLottery would take me an hour or two. I was wrong. It turns out the coding isn't hard at all--the tricky part is finding the statistics. Not a whole lot of countries provide detailed statistics on their current populations. Once you start looking up stats for human population before 1950, the search gets an order of magnitude harder. (I've listed a few good sources and resources at the bottom of this post if anyone's interested)
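For what it's worth, the core mechanic of a program like this is just a weighted random draw over historical births; as noted above, the code is the easy part and the statistics are the hard part. A toy sketch with placeholder era totals (loosely inspired by demographers' estimate of very roughly 100 billion humans ever born, but not researched figures):

```python
import random

# The era totals below are rough illustrative placeholders,
# not researched demographic data.
BIRTHS_BY_ERA_BILLIONS = {
    "before 1 AD": 46,
    "1 AD to 1750": 40,
    "1750 to 1950": 13,
    "after 1950": 9,
}

def be_born(rng=random):
    """Draw an era of birth, weighted by how many births it contained."""
    eras = list(BIRTHS_BY_ERA_BILLIONS)
    weights = list(BIRTHS_BY_ERA_BILLIONS.values())
    return rng.choices(eras, weights=weights, k=1)[0]

print(be_born())  # most draws land in the distant past
```

The real program would drill down from era to country to family using the same weighted-draw trick at each level, which is why the data hunt dominates the work.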
Second, I've found so many fascinating diversions while working on this. I've encountered cool stats like:
But cooler than interesting descriptive statistics are the philosophical questions that this idea of the Ovarian Lottery raises. If I was a philosopher I might ponder these questions in depth and write more about each one, but I don't think that's a great use of time and so I'll just list them. Philosophy is most often a fruitless exercise.
My site is just a computer program. It's interesting to think about how the real ovarian lottery works. Is there a place where everyone is hanging out, and then you spin a wheel and your "soul" is magically transported to a newborn somewhere in the world?
If the multiverse theory is correct, then my odds are almost certainly off. In other words, theOvarianLottery assumes there's only 1 universe and extrapolates the odds from that. If there are dozens or infinite universes, who knows what the real odds are.
If you go back to around 10,000 B.C., somewhere around 2-10 million people roamed the planet. Go back earlier and the number is even smaller. It's interesting to think of how small differences in events back then would have created radically different outcomes today. I've dabbled a bit in chaos theory and find it quite humbling.
Depending on the estimate, between 4% and 20% of all humans that have ever lived are alive today. In other words, the odds of you being alive right now (according to my model) are higher than they've ever been. The odds of you being alive in 10,000 BC are over 1,000 times less. If humans indeed go on to live for another ten thousand years and the population grows another 1,000 times, the odds of you being born today would be vastly smaller. In other words, if my model represented reality then we could conclude that odds are high that the human population does not continue growing like it has.
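The arithmetic behind that paragraph fits in a few lines. A back-of-envelope sketch with round placeholder figures (not precise demographic data):

```python
# Round placeholder figures, not precise demographic data.
humans_ever = 108e9    # rough estimate of all humans ever born
alive_today = 7e9      # rough population around the time of writing
pop_10000_bc = 5e6     # rough population in 10,000 BC

print(f"share of all humans alive now:   {alive_today / humans_ever:.0%}")
print(f"alive now vs alive in 10,000 BC: {alive_today / pop_10000_bc:,.0f}x")
```

With these numbers the share lands in the middle of the 4-20% range, and the present-vs-10,000-BC ratio comes out above 1,000x, consistent with the claim above.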
The growth of human population has followed an exponential curve. How long will it last? Will earth become overpopulated? Will we invent technology to leave earth? Will human population decline? Human population growth is hard to predict over any long time period.
I don't believe you can take the concept of the Ovarian Lottery any more seriously than you can take religion. It provides food for thought, but it doesn't provide any real answers to much. The stats though could certainly be used in debates.
Oh well. Ars gratia artis
I hope to finish up theOvarianLottery and slap a frontend on it sometime in the future.
Helpful Links for Population Statistics (beyond Wikipedia):
August 6, 2010 — Three unexpected things have happened to me during my two years of entrepreneurial pursuits in California.
First, I have not gotten rich.
Second, I have met many people who have gotten rich. I've even had the pleasure to witness some of my friends get rich.
Third, I've yet to meet someone much happier than me.
I've met a large number of people who are 6, 7, even 8 orders of magnitude richer than me and yet not a single one of them was even close to an order of magnitude happier than me.
The explanation, I finally realized, is simple.
Happiness, as NNT would say, resides in Mediocristan. Happiness is a physical condition and just as it is impossible to find someone 60 feet tall, it is impossible to find someone ten times happier than everyone else. I could sit next to you and drink 3 cups of coffee, and sure, I might be 20% happier than you for about 20 minutes, but 1,000% happier? Not even close.
Our happiness is a result of physical processes going on in our brains. While we don't yet understand the details of what's happening, from observation you can see that people only differ in happiness about as much as they differ in weight.
This idea of happiness being distributed rather equally might not be surprising to people with common sense. There are a million adages that say the same thing. Thinking about it mathematically took me by surprise, however.
I was rereading The Black Swan at the same time I was reading Zappos founder Tony Hsieh's "Delivering Happiness". In his autobiography, Tony talks about how he wasn't much happier after selling his first company for a 9 figure sum. I thought about this for a bit and realized I wasn't surprised. I've read the same thing and even witnessed it happen over and over again amongst startup founders who strike it rich. The change in happiness doesn't reflect the change in the bank account. Not at all! The bank account undergoes a multi-order of magnitude shift, while the happiness level fluctuates a few percentage points at best. It dawned on me that happiness is in Mediocristan. Of course!
I'm not warning you that you might not become an order of magnitude happier if you become rich, I'm telling you IT'S PHYSICALLY IMPOSSIBLE!!! There's no chance of it happening. You can be nearly as happy today as you will be the week after you make $1 billion. (In rare cases, you might even be less happy after you strike it rich.) Money is great, and having a ton of it would be pretty fun. By all means, try to make a lot of it. You will most likely be at least a few percentage points happier. Just remember to keep it in a realistic perspective. Aim to be 5 or 10% happier, not 500% happier.
It's funny, although our society doles out vastly different rewards, at the end of the day, in what matters the most, mother nature has created a pretty equal playing field.
August 6, 2010 — In February I celebrated my 26th Orbit. I am 26 orbits old. How many orbits are you?
I think we should use the word "orbit" instead of year. It's less abstract. The earth's 584 million mile journey around the sun is an amazing phenomenon, and calling it merely "another year" doesn't do it justice.
Calling years orbits also makes life sound more like a carnival ride--you get a certain number of orbits and then you get off.
Enjoy the ride!
August 6, 2010 — Figuring out what you want in life is very hard. No one tells you exactly what you want. You have to figure it out on your own.
When you're young, it doesn't really matter what you want because your parents choose what you do. This is a good thing, otherwise kids would grow up uneducated and malnourished from ice cream breakfasts. But when you grow up, you get to call the shots.
The big problem with calling the shots is that what your conscious, narrative mind thinks you want and what your subconscious mind really wants often differ quite a lot. For instance, growing up I said I wanted to be in politics, but in reality I always found myself tinkering with computers. Eventually you have the "aha" moment, and drop things you thought you wanted and focus on the things that you really want, the things you keep coming back to.
If you pay attention to what you keep drifting back to, you'll figure out what you want. You just have to pay attention.
Collect data on what makes you happy as you go. Run experiments with your life.
You don't have to log what you do each day and run statistics on your life. But you do have to get out there and create the data. Try different things. Try different jobs, try different activities, try living in different places. Then you'll have experiences--data--which you can use to figure out exactly what the hell it is you really want.
People like to simplify things as much as possible. It would be nice if you only wanted a few things, such as a good family, a good job, and food on the table. I think though that in reality we each want somewhere around 10 to 20 different things. On my list of things I want, I've got 15 or 16 different things. Family, money, and food are on there. But also some more specific things, like living in the San Francisco Bay area, and studying computer science and statistics.
You don't get unlimited hours in the day so you've got to budget your time amongst all of these things that you want. If I were to spend all of my time programming, I'd have no time for friends and family, which are two things really important to me. So I've got to split my energies between these things. You'll always find yourself neglecting at least one area. Life is a juggling act. The important thing is to juggle with the right balls. It's fine to drop a ball for a bit, just pick it back up and keep going.
As you grow up you'll learn that there are things you want that aren't so good for you. Don't pretend you don't want that, just try to minimize it. For instance, part of me wants to eat ice cream almost everyday. But part of me wants to have healthy teeth, and part of me wants to not be obese. You've got to strike a balance.
First, you've got to figure out all the different things you want. Then, you've got to juggle these things as best as possible. Finally, when you think you've got it figured out, you'll realize that your wants have changed slightly. You might want one thing a bit less (say, partying), while wanting something else more (a career, a family, learning to sail, who knows). That's totally normal. Just add the new discovery to your list (or drop the old one) and keep going.
Almost 2 years ago I made a dead simple mindmap of what I wanted. I think a mindmap is better than a list in this case because A) it looks cooler and B) there's not really a particular ranking to what I want. My list has changed by just one or two things in two years' time.
I like to be mysterious and have something to talk about at parties, so I've gone ahead and erased most of the items, but you can get the idea:
If you don't know what it is you want, try making a mindmap.
August 3, 2010 — Last night over dinner we had an interesting conversation about why we care about celebrities. Here's my thinking on the matter.
If you look at some stats about the attributes of celebrities, you'll realize something interesting: they're not that special. By any physical measure--height, weight, facial symmetry, body shape, voice quality, personality, intelligence--celebrities are not much different from the people around you. Conan O'Brien might be a bit funnier than your funniest friend, but he wouldn't make you laugh 10x more; it'd be more like 5% more. Angelina Jolie might be 10% more attractive than your most attractive friend, but for some groups she could even be less attractive.
If these people aren't so special, why do they interest us so much? One explanation is that we see these people over and over again on television and as a result we are conditioned to care about them.
I concede this may be part of it, but I actually don't think celebrities are forced upon us. Instead, I think we need celebrities. We need them to function in a global society.
It's all because of the Do You Know Game.
The Do You Know Game is a popular party game. People often play it every time they meet a stranger. It goes something like this:
That's the basic premise. You ask me where I am from. You think of everyone you know from that place and ask me one by one if I know that person. Then we switch roles and play again.
People play this game at work, at parties, at networking events, at college--especially at college. This game has a benefit.
People play this game for many reasons, but certainly one incentive to play is that if two strangers can identify a mutual friend, they can instantly trust each other a bit more. If we have a mutual friend, I'm more likely to do you a favor, and less likely to screw you over, because word gets around. Back in the day when people carried swords, this was even more important.
A mutual friend also gives two strangers a shared interest. It's something that they can continually talk about.
And having a mutual friend can reveal a lot about a person:
As you can see, having mutual friends serves many purposes.
Throughout the 20th century, the proportion of people that have traveled far from their hometowns for school or career has steadily increased. The further you travel from your home, the less likely you are to have a successful round of "do you know" with a stranger. You might share common interests or values with the new people you meet, but you'll know none of the same people and thus it will be harder to build and grow relationships. This is a big problem for a globalized society that depends on strong ties between people from different places to keep the economy running smoothly.
Celebrities have naturally arisen to fill a need for strangers in a globalized world to have mutual friends. We all interact with strangers more frequently nowadays, and if we didn't have celebrities, there would be a gaping hole in our arsenal of shortcuts to establishing trust with new people. There are a thousand ways to build rapport with a stranger, but the technique of talking about a shared acquaintance is one of the easiest and most effective. We travel farther than we ever have, but thanks to celebrities, we still have dozens of "mutual friends" wherever we go.
Of course, just because two people know who Tom Hanks is doesn't mean they should trust each other more. Tom Hanks doesn't know them and so none of the "word gets around" stuff I mentioned earlier applies. I'm not arguing that celebrities are an equal substitute for a mutual friend by any means. A mutual friend is a much more powerful bond than knowing about the same celebrity.
But celebrities are better than nothing.
July 2, 2010 — A year ago I wrote a post titled "The Truth about Web Design" where I briefly argued that "design doesn't matter a whole lot."
My argument was: "you go to a website for the utility of it. Design is far secondary. There are plenty of prettier things to look at in the real world."
I do think the real world is a pretty place, but about design, I was completely wrong.
I now think design is incredibly important, and on par with engineering. I used to think a poorly designed product was a matter of a company setting the right priorities, now I think it reflects ignorance, laziness or mediocrity. If a company engineers a great product but fails to put forward a great design, it says:
For nearly a decade I've dreamed of my ideal computer as no computer at all. I wanted a computer smaller than the smallest smartphone, one that would always be ready to take commands but would also be out of sight. In other words, I've always thought of computers purely as problem solving tools--as a means to an end.
I want the computer to solve the problem and get out of my way. Computers are ugly. The world is beautiful. I like to look at other people, the sky, the ocean and not a menu or a screen. I didn't care about the style in which the computer solved my problem, because no matter how "great" it looked it couldn't compare to the natural beauty of the world.
I was wrong.
A computer, program, or product should always embody a good design, because the means to the end is nearly as important as the end itself. True, when riding in a car I care about the end--getting to my destination. But why shouldn't we care about the style in which we ride? Why shouldn't we care about the means? After all, isn't living all about appreciating the means? We all know what the end of life is; the important thing is to live the means with style. I've realized that I want style--and I'm a little late to the party, because most people already want style.
If that argument didn't make sense, there are a number of practical reasons why a great design is important.
A great design can unlock more value for the user. Dropbox overcomes herculean engineering challenges to work, but if it weren't for its simple, easy to use design it wouldn't be nearly as useful.
A great design can be the competitive edge in a competitive market. Mint.com had a great design, and it bested a few other startups in that emerging market.
A great design can be the differentiator in a crowded market. Bing's design is better than Google's. The design of Bing differentiates the two search engines in my mind, and makes Bing more memorable to me. The results of Microsoft's search engine have always been decent, but it was the design of Bing that finally gave them a memorable place in consumers' minds.
A great design is easy to get people behind. People like to support sites and products that are designed well. People love to show off their Apple products. Airbnb's beautiful design had a large role in making it easy for people to support the fledgling site.
Personally, I'm a terrible designer. Like many hackers, I can program but I can't paint. What should we do?
First, learn to appreciate the importance of design.
Second, learn to work well with designers. Don't treat design as secondary to engineering. Instead, think of how you can be a better engineer to execute the vision of your design team.
Great engineering can't compensate for poor design just as great design can't compensate for poor engineering. To create great products, you need both. Don't be lazy when it comes to design. It could be the make or break difference between your product's success or failure.
June 28, 2010 — Competition and specialization are generally positive economic forces. What's interesting is that they are contradictory.
Competition. Company 1 and Company 2 both try to solve problem A. The competition will lead to a better outcome for the consumer.
Specialization. Company 1 focuses on problem A; Company 2 focuses on problem B. The specialization will lead to a better outcome for all because of phenomena like economies of scale and comparative advantage.
So which is better? Is it better to have everyone compete to solve a small number of problems or to have everyone specialize on a unique problem?
Well, you want both. If you have no competition, it's either because you've been able to create a nice monopolistic arrangement for yourself or it's because you're working on a problem no one cares about.
If you have tons of competition, you're probably working on a problem that people care about but that is hard to make a profit in.
Update 8/6/2010: Overspecialization can be bad as well when things don't go according to plan. As NNT points out, Mother Nature does not like overspecialization, as it limits evolution and weakens animals. If Intel fell into a sinkhole, we'd be screwed if it weren't for having a backup in AMD.
June 17, 2010 — Doing a startup is surprisingly simple. You have to start by creating a product that people must have, then you scale it from there.
What percent of your customers or "users" would be disappointed if your product disappeared tomorrow? If it's less than 40%, you haven't built a must have yet.
As simple as this sounds, I've found it to be quite hard. It's not easy to build a must have.
What are some other reasons people fail to build a must have product?
June 16, 2010 — Every Sunday night in college my fraternity would gather in the commons room for a "brother meeting". (Yes, I was in a fraternity, and yes, I do regret that icing hadn't been invented yet.) These meetings weren't really "productive", but we at least made a few decisions each week. The debates leading up to these decisions were quite fascinating. The questions would be silly, like whether our next party should be "Pirate" themed or "Prisoner" themed (our fraternity was called Pike, so naturally(?) we were limited to themes that started with the letter P, so we could call the party "Pike's of the Caribbean" or something). No matter what the issue, we would always have members make really passionate arguments for both sides.
The awesome thing was that these were very smart, persuasive guys. I'd change my mind a dozen times during these meetings. Without fail, whichever side spoke last would have convinced me that not only should we have a Pirate themed party, but that it was quite possibly one of the most important decisions we would ever make.
The thing I realized in these meetings is that flip flopping is quite easy to do. It can be really hard, if not impossible, to make the "right" decision. There are always at least two sides to every situation, and choosing a side has a lot more to do with the skills of the debaters, the mood you happen to be in, and the position of the moon (what I'm trying to say is that there are a lot of variables at work).
I think humans are capable of believing almost anything. I think our convictions are largely arbitrary.
Try an experiment.
1) Take an issue, a political issue--the war in Afghanistan, Global Warming, marijuana legalization--or a minor everyday issue--what to have for dinner tonight, whether it's better to drink coffee or not, whether Facebook is a good thing or bad thing.
2) Take a stand on that issue. Think of all the reasons why your stand is right. Be prepared to support your stance in a debate.
3) Completely change your position. Take the other side. Think of every reason why this new side is correct. Be prepared to support this side without feeling like you are lying.
4) Keep flipping if you want.
I think it's fascinating to see how, no matter what the issue, you can create a convincing case for any side. It's hard to hear an argument for the opposing side and not want to change your position. Our brains can be easily overloaded. The most recently presented information pushes out the old arguments.
But at some point, survival necessitates that we take a side. The ability to become stubborn and closed-minded is definitely a beneficial trait: getting anything done requires some closed-mindedness.
Three men set out to find a buried treasure. The first guy believes the treasure is to the north so heads in that direction. The second guy heads south. The third guy keeps changing his mind and zigzags between north and south. I don't know who finds the treasure first, but I do know it's certainly not the third guy.
Oftentimes the expected value of being stubborn is higher than the expected value of being thoughtful.
Is flip flopping a good thing? Is being open minded harder than being stubborn? Does it depend on the person? Does success require being certain?
I have no idea.
June 15, 2010 — I think it's interesting to ponder the value of information over its lifetime.
Different types of data become outdated at different rates. A street map is probably mostly relevant 10 years later, while a 10 year old weather forecast is much less valuable.
Phone numbers probably last about 5 years nowadays. Email addresses could end up lasting decades. News is often largely irrelevant after a day. For a coupon site I worked on, the average life of a coupon seemed to be about 2 weeks.
If your data has a long half life, then you have time to build it up. Wikipedia articles are still valuable years later.
What information holds value the longest? What are the "twinkies" of the data world?
Books, it seems. We don't regularly read old weather forecasts, census rolls, or newspapers, but we definitely still read great books, from Aristotle to Shakespeare to Mill.
Facts and numbers have a high churn rate, but stories and knowledge last a lot longer.
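The half-life metaphor above can be made literal with a toy decay model. The half-lives below are just the post's rough guesses rendered as numbers, not measurements:

```python
# Toy model: information value decays exponentially, with a half-life
# that depends on the type of information. All half-lives below are
# rough guesses (in days) for illustration, not measurements.

def remaining_value(initial_value, age_days, half_life_days):
    return initial_value * 0.5 ** (age_days / half_life_days)

HALF_LIFE_DAYS = {
    "news": 1,
    "coupon": 14,
    "phone_number": 5 * 365,
    "book": 100 * 365,
}

# value left (out of 100) one year after publication
for kind, half_life in HALF_LIFE_DAYS.items():
    print(kind, round(remaining_value(100, 365, half_life), 2))
```

Under this model a year-old news item is worth essentially nothing, while a book with a century-scale half-life has kept over 99% of its value.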
June 14, 2010 — Have you heard of the Emperor Penguins? It's a species of penguin that journeys 30-75 miles across the frigid Antarctic to breed. Each year these penguins endure 8 months of brutally cold winters far from food. If you aren't familiar with them, check out either of the documentaries March of the Penguins or Planet Earth.
I think the culture of the emperor penguins is fascinating and clearly reveals some general traits of all cultures:
Culture is a set of habits that living things repeat because that's what they experienced in the past, and the past was favorable to them. Cultures have a mutually dependent relationship with their adherents.
The Emperor Penguins are born into this Culture. The Culture survives because the offspring keep repeating the process. The Emperor Penguins survive because the process seems to keep them safe from predators and close to mates. The culture and the species depend on each other.
Cultures are borne out of randomness.
At any moment, people or animals are doing things that may blossom into a new culture. Some of these penguins could branch off to Hawaii and start a new set of habits, which 500 years from now might be the dominant culture of the Emperor Penguins.
But predicting what will develop into a culture and what won't is impossible--there are too many variables, too much randomness involved. Would anyone have predicted that these crazy penguins who went to breed in the -40 degree weather for 8 months would survive this long? Probably not. Would anyone have predicted that people would still pray to this Jesus guy 2,000 years later? Probably not.
Cultures seem crazy to outsiders and are almost impossible to explain.
One widespread human custom is to always give an explanation for an event, even when the true reason is just too complex or random to understand. Cultural habits are always easier to repeat and pass down than they are to explain.
I don't have any profound insights on culture; I just think it's fascinating and something not to read too much into--it helps us survive, but there's no greater meaning to it.
March 24, 2010 — "Dad, I finished my homework. Why?"
The father thinks for a moment. He realizes the answer involves explaining the state of the world prior to the child doing the homework. It involves explaining the complex probabilities that, combined, would give the odds that the child was going to do the homework. And it likely involves explaining quantum mechanics.
The father shrugs and says "Because you have free will, and chose to do it."
Thus was born the notion of free will, a concept to explain why we have gone down certain paths when alternatives seemed perfectly plausible. We attribute the past to free will, and we attribute the unpredictability of the future to free will as well (i.e. "we haven't decided yet").
The problem is, this is wrong. You never choose just one path to go down. In fact, you go down all the paths. The catch is you only get to observe one.
In one world the child did their homework. In another world, they didn't.
The child who did their homework will never encounter the child who didn't, but they both exist, albeit in different universes or dimensions. Both of them are left wondering why they "chose" the way they did. The reality is that they chose nothing. They're both just along for the ride.
Even the smug boy who says free will doesn't exist, is just one branch of the smug boy.
March 22, 2010 — Google has a list of 10 principles that guide its actions. Number 2 on this list is:
It's best to do one thing really, really well.
This advice is so often repeated that I thought it would be worthwhile to think hard about why this might be the case.
For two reasons: economies of scale and network effects.
Economies of scale. The more you do something, the better you get at it. You can automate and innovate. You'll be able to solve the problem better than it's been solved in the past and please more people with your solutions. You'll discover tricks you'd never imagine that help you create and deliver a better "thing".
Network effects. If you work on a hard problem for a long time, you'll put a great deal of distance between yourself and the average competitor, and in our economy it doesn't take too big a lead to dominate a market. If your product and marketing are 90% as good as the competitor's, you won't capture the proportional 47% of the market (90/(90+100)); you'll capture much less. The press likes to write about the #1 company in an industry. The gold medalist doesn't get 1/3 of the glory; they get 95% of the glory. The network effects in our economy are very strong. If you only do something really well, the company that does it really, really well will eat your lunch.
A simpler analogy: You can make Italian food and Chinese food in the same restaurant, but the Italian restaurant down the street will probably have better Italian food and the Chinese restaurant will probably have better Chinese food, and you'll be out of business soon.
My English teacher would have told me that at least one of the "really"s was unnecessary. But if you think about the statement in terms of math, having the two "really"s makes sense.
Let's define doing one thing well as being in the top 10% of companies that do that thing. Doing one thing really well means being in the top 1% of companies that do that thing. Doing one thing really, really well means being in the top 0.1% of companies that do that thing.
Thus, what Google is striving for is to be the #1 company that does search. They don't want to just be in the top 10% or even top 1% of search companies; they want to do it so well that they are at the very top. If you think about it like that, the two "really"s make perfect sense.
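That counting rule can be written down. This is just my own toy formalization of the phrase, not anything Google has stated: "well" alone means top 10%, and each "really" tightens the cut by another factor of ten.

```python
def top_percentile(reallys):
    """Top share of companies (in percent) implied by
    'do one thing (really,)* well': 'well' alone means top 10%,
    and each 'really' tightens the cut by another factor of ten."""
    return 10 / 10 ** reallys

print(top_percentile(0))  # 10.0 -> "well":                top 10%
print(top_percentile(1))  # 1.0  -> "really well":         top 1%
print(top_percentile(2))  # 0.1  -> "really, really well": top 0.1%
```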
My guess is they don't choose the correct "thing" for their given team. They pick the wrong thing to focus on. For instance, if Ben and I started a jellyfish business, and decided to do jellyfish tanks really, really well, we would be making a huge mistake because we just don't have the right team for that business. It makes more sense when Al, a marine biology major and highly skilled builder, decides to do jellyfish tanks really, really well.
It makes perfect sense for the Google founders to start Google since they were getting their PhD's in search.
You need good team/market fit. The biggest mistake people make when following the "do one thing really, really well" advice is choosing the wrong product or market for their team.
Picking a "thing" that's too easy. You should go after a problem that's hard with a big market. Instead of writing custom software for ten of your neighbors that helps them do their taxes, generalize the problem and write internet software that can help anyone do their taxes. It's good to start small of course, but be in a market with a lot of room to grow.
Yes. It's good to be flexible until you stumble upon the one thing your team can do really, really well that can address a large market. Don't be stubborn. If at first you thought it was going to be social gaming, and then you learn that you can actually do photo sharing really, really well and that people really want it, do photo sharing.
Microsoft Windows brings in something like $15 billion per year. Google Adwords brings in something like $15 billion per year. When you make that kind of money, you can drop $100 million selling ice cream and it won't hurt you too much. But to get there, you've first got to do one hard thing really, really well, whether it be operating systems or search.
March 17, 2010 — If you automate a process that takes X minutes and that you repeat Y times, what would your payoff be?
Payoff = XY minutes saved, right?
Surprisingly, I've found that is almost never the case. Instead, the benefits are almost always greater than XY. In some cases, much greater. The benefits of automating a process are greater than the sum of the process' parts.
Actual Payoff = XY minutes saved + E
What is E? It's the extra something you get from not having to waste time and energy on XY.
Last year I did a fair amount of consulting work I found via craigslist. I used to check the Computer Gigs page for a few different cities, multiple times per day. I would check about 5 cities, spending about 2 minutes on each page, about 3 times per day. Thus, I'd spend 30 minutes a day just checking and evaluating potential leads.
I then wrote a script that aggregated all of these listings onto one page (including the contents, so I didn't have to click to a new page to read a listing). It also highlighted a gig if it met certain criteria that I had found to be promising. The script even automated a lot of the email response I would write to each potential client.
It cut my "searching time" down to about 10 minutes per day. But then something happened: I suddenly had more time and energy to focus on the next aspect of the problem: getting hired. It wasn't long before I was landing more than half the gigs I applied to, even as I raised my rates.
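The time arithmetic of that example, in code. E is deliberately left out, since it's exactly the part of the payoff you can't quantify:

```python
# The craigslist example in numbers. E (the freed-up energy) is left
# out on purpose: it's the part of the payoff you can't quantify.

def minutes_saved_per_day(cities, minutes_per_city, checks_per_day,
                          automated_minutes_per_day):
    manual = cities * minutes_per_city * checks_per_day   # the X * Y part
    return manual - automated_minutes_per_day

saved = minutes_saved_per_day(5, 2, 3, 10)
print(saved)                    # 20 minutes per day
print(round(saved * 365 / 60))  # ~122 hours per year, before counting E
```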
I think this is where the unexpected benefits come from. The E is the extra energy you'll have to focus on other problems once you don't have to spend so much time doing rote work.
Try to automate as much as possible. The great thing about automation is that once you automate one task you'll have more time to automate the next task. Automation is a great investment with compounding effects. Try to get a process down to as few steps or keystrokes as possible (your ideal goal is zero keystrokes). Every step you eliminate will pay off more than you think.
March 16, 2010 — I wrote a simple PHP program called phpcodestat that computes some simple statistics for any given directory.
I think brevity in source code is almost always a good thing. I think as a rule your code base should grow logarithmically with your user base. It should not grow linearly and certainly not exponentially.
If your code base is growing faster than your user base, you're in trouble. You might be attacking the wrong problem. You might be letting feature creep get the best of you.
I thought it would be neat to compute some stats for popular open source PHP applications.
My results are below. I don't have any particular profound insights at the moment, but I thought I'd share my work as I'm doing it in the hopes that maybe someone else would find it useful.
Name | Directories | Files | PHPFiles | PHPLOC | PHPClasses | PHPFunctions |
---|---|---|---|---|---|---|
../cake-1.2.6 | 296 | 677 | 428 | 165183 | 746 | 3675 |
../wordpress-2.9.2 | 82 | 753 | 279 | 143907 | 149 | 3827 |
../phpMyAdmin-3.3.1-english | 63 | 810 | 398 | 175867 | 44 | 3635 |
../CodeIgniter_1.7.2 | 44 | 321 | 136 | 43157 | 74 | 1211 |
../Zend-1.10 | 360 | 2145 | 1692 | 336419 | 42 | 11123 |
../symfony-1.4.3 | 770 | 2905 | 2091 | 298700 | 362 | 12198 |
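phpcodestat itself isn't reproduced here, but a rough Python sketch of the kind of counting behind these columns might look like the following. The class/function regexes are deliberate simplifications and will miscount identifiers that appear in strings or comments:

```python
import os
import re

def php_code_stats(root):
    """Rough per-tree stats in the spirit of phpcodestat.
    The regexes are simplifications for illustration; they will
    miscount 'class' or 'function' appearing in strings/comments."""
    stats = {"dirs": 0, "files": 0, "php_files": 0,
             "php_loc": 0, "php_classes": 0, "php_functions": 0}
    for dirpath, dirnames, filenames in os.walk(root):
        stats["dirs"] += len(dirnames)
        stats["files"] += len(filenames)
        for name in filenames:
            if not name.endswith(".php"):
                continue
            stats["php_files"] += 1
            with open(os.path.join(dirpath, name), errors="ignore") as f:
                source = f.read()
            stats["php_loc"] += source.count("\n")
            stats["php_classes"] += len(re.findall(r"\bclass\s+\w+", source))
            stats["php_functions"] += len(re.findall(r"\bfunction\s+\w+", source))
    return stats
```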
March 8, 2010 — If a post on HackerNews gets more points, it gets more visits.
But how much more? That's what Murkin wanted to know.
I've submitted over 10 articles from this site to HackerNews and I pulled the data from my top 5 posts (in terms of visits referred by HackerNews) from Google Analytics.
Here's how it looks if you plot visits by karma score:
The Pearson Correlation is high: 0.894. Here's the raw data:
Karma | Visits | Page |
---|---|---|
53 | 3389 | /twelve_tips_to_master_programming_faster |
54 | 2075 | /code/use_rsync_to_deploy_your_website |
54 | 1688 | /unfeatures |
34 | 1588 | /flee_the_bubble |
25 | 1462 | /make_something_40_of_your_customers_must_have |
14 | 1056 | /when_forced_to_wait_wait |
4 | 214 | /diversification_in_startups |
1 | 146 | /seo_made_easy_lumps |
1 | 36 | /dont_flip_the_bozo_bit |
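For anyone who wants to check the correlation, here's the computation over the raw data from the table above:

```python
# Pearson correlation of HN karma vs. referred visits,
# using the raw data from the table above.

karma  = [53, 54, 54, 34, 25, 14, 4, 1, 1]
visits = [3389, 2075, 1688, 1588, 1462, 1056, 214, 146, 36]

def pearson(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(round(pearson(karma, visits), 3))  # 0.894
```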
February 19, 2010 — All the time I overhear people saying things like "I will start exercising every day" or "We will ship this software by the end of the month" or "I will read that book" or "I will win this race." I'm guilty of talking like this too.
The problem is that often, you say you will do something and you don't end up doing it. Saying "I will do", might even be a synonym for "I won't do".
Why does this happen? I don't think it's because people are lazy. I think it's because we overestimate our ability to predict the future. We like to make specific predictions as opposed to predicting ranges.
I'll explain why we are bad at making predictions in a minute, but first, if you find yourself making predictions about what you will do that turn out to be wrong, you should fix that. You can tone down your predictions, giving ranges instead. For instance, instead of saying "I think I will win the race", say "I think I will finish the race in the top 10". Or, even easier: stop talking about things you will do entirely, and only talk about things you have done. So, in the race example, you might say something like "I ran 3 miles today to train for the race." (If you do win the race, don't talk about it a lot. No one likes a braggart.)
Pretend you are walking down a path:
Someone asks you whether you've been walking on grass or dirt. You can look down and see what it is:
Now, they ask you what you will be walking on. You can look ahead and see what it is:
Easy right? But this is not a realistic model of time. Let's add some fog:
Again, someone asks you whether you've been walking on grass or dirt. Even with the fog, you can look down and see what it is:
Now, they ask you what you will be walking on. You look ahead, but now with the fog you can't see what it is:
What do you do? Do you say:
In my opinion you should say something like 3 or 4.
This second example models real life better. The future is always foggy.
I don't know. Maybe a physicist could answer that question, but I don't know the answer. And I don't think I ever will.
February 17, 2010 — If a book is worth reading, it's worth buying too.
If you're reading a book primarily to gain value from it (as opposed to reading it for pleasure), you should always buy it unless it's a bad book.
The amount of value you can get from a book varies wildly. Most books are worthless. Some can change your life. For simplicity, let's say the value you can derive from any one book varies from 1 cent to $100,000 (there are many, many more worthless books than there are of the really valuable kind).
The cost however, does not vary as much. Books rarely cost more than $100, and generally average to about $15.
You shouldn't read a book that you think will offer you less than $100 in value. Time could be better spent reading more important books.
So let's assume you never read a book that gives you less than $100 in value. Thus, the cost of a physical copy of the book is at most 15% (using the $15 average price) of the value gained.
Would owning that book help you extract 15% more from it? It nearly always will. When you own a book, you can take it anywhere. You can mark it up. You can flip quickly through the pages. You can bookmark it. You can easily share it with a friend and then discuss it. If these things don't help you get 15% more out of that book, I'd be very surprised.
Where it gets even more certain is when you read a really valuable book--say, a book offering $1,000 of value. Now you'd only need to get 1.5% more out of that book.
The investment in that case is a no brainer.
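The break-even arithmetic above is easy to check. A quick sketch, using the post's figures ($15 average price, value from $100 up):

```python
price = 15  # assumed average book price, from the post

def extra_needed(value):
    """Fraction of extra value that owning the book must unlock
    to cover its purchase price."""
    return price / value

print(f"{extra_needed(100):.1%}")   # 15.0%
print(f"{extra_needed(1000):.1%}")  # 1.5%
```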
February 2, 2010 — My room was always messy. Usually because clothes were strewn everywhere. On the floor, on the couch, anywhere there was a surface, there was a pile of clothes. Dirty, clean, or mostly-clean, scattered about.
I tried a dresser. I tried making a system where I had spaces for each type of clothing: shirts, pants, etc. Nothing worked.
Then a friend saw my room and quipped, "Duh. You have too many clothes. Let's get rid of most of them."
So we did. About 75% of my clothes were packed up in garbage bags and sent off to the Salvation Army that day.
Ever since, my room has been at least 5x cleaner on average.
Almost always, there is one simple change you can make that will have drastic effects. This change is called the least you can do.
I had a website that was struggling to earn money even with a lot of visitors. I added AdSense and almost nothing happened. Then I moved the AdSense to a different part of the page and it suddenly made 5x more money. A week later I changed the colors of the ad and it suddenly made 2x as much money. Now the site makes 10x as much money and I barely did anything.
These are trivial examples, but the technique works on real problems as well.
The key is to figure out what the "least you can do" is.
You can discover it by working harder or smarter:
In reality you need to do things both ways. But try to put extra effort into doing things the smart way, and see where it takes you.
January 29, 2010 — Good communication is overcommunication. Very few people overcommunicate. Undercommunication is much more common. Undercommunication is also the cause of countless problems in business.
Instead of striving for some subjective "good communication", simply strive to overcommunicate. It's very unlikely you'll hit a point where people say "he communicates too much". It's much more likely you'll come up a bit short, in which case you'll be left with good communication.
Here are 4 tips that will bring you closer to overcommunicating:
That's it. Good luck!
January 22, 2010 — Network effects are to entrepreneurs what compounding effects are to investors: a key to getting rich.
Sometimes a product becomes more valuable simply as more people use it. This means the product has a "network effect".
You're probably familiar with two famous examples of network effects:
All businesses have network effects to some degree. Every time you buy a slice of pizza, you are giving that business some feedback and some revenue which they can use to improve their business.
Giant businesses took advantage of giant network effects. When you bought that pizza, you caused a very tiny network effect. But when you joined Facebook, you immediately made it a more valuable product for many other users (who could now share info with you), and you may even have invited a dozen more users. When a developer joins Facebook, they might make an application that improves the service for thousands or even millions of users, and brings in a similar number of new users.
The biggest businesses enabled user-to-user network effects. Only the pizza store can improve its own offering. But Facebook, Craigslist, Twitter, and Windows have enabled their customers and developers to all improve the product with extremely little involvement from the company.
January 15, 2010 — In computer programming, one of the most oft-repeated mottos is DRY: "Don't Repeat Yourself."
The downside of DRY's popularity is that programmers might start applying the principle to conversations with other humans.
This fails because computers and people are polar opposites.
With computers, you get zero benefit if you repeat yourself. With people, you get zero benefit if you don't repeat yourself!
If you tell something to your computer once:
If you tell something to a person once:
In other words, the odds of communicating perfectly are very low: 1.8%! You are highly likely to run into at least one of those four problems.
Now, if you repeat yourself 1 time, and we assume independence, here's how the probabilities change:
By repeating yourself just once you've increased the chances of perfect communication from 1.8% to 12.5%! Repeat yourself one more time and the probability of perfect communication increases to over 90%. Well, in this simplistic model anyway. But I hope you get the idea.
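The per-problem failure rates behind these figures aren't spelled out here, but a toy model captures the flavor. A sketch, assuming each of four independent failure modes trips with probability q per telling and only sticks if it trips on every telling. The value q = 0.634 is my own calibration to the 1.8% figure, and it gives roughly 12.8% for two tellings rather than exactly 12.5%:

```python
def p_perfect(q, tellings, modes=4):
    """Probability of perfect communication: each of `modes` independent
    failure modes must trip on *every* telling to cause a failure."""
    return (1 - q ** tellings) ** modes

q = 0.634  # assumed per-telling failure rate, chosen to match ~1.8%
print(round(p_perfect(q, 1), 3))  # 0.018 -- one telling
print(round(p_perfect(q, 2), 3))  # 0.128 -- repeated once
```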
To communicate well you should try to overcommunicate. Overcommunicating is hard to do. It's much easier and more common to undercommunicate. If you're not repeating yourself a lot, you're not overcommunicating.
On the various projects I'm involved with we use Gmail, Google Docs, Google Wave, Basecamp, Github, Sifter, gChat and Skype. Which one do I prefer?
None of them. I prefer pen, paper, whiteboards and face-to-face meetings. I write down my own todo list and schedule with pen and paper. Then I login to these sites and repeat what I've written down for the sake of repeating myself to other people. This isn't inefficiency, it's good communication.
Some people prefer Google Docs, some prefer Basecamp. I'll post things to both, to ensure everyone knows what I'm working on.
With every new project I repeat a lot of messages and questions to the team. "How many people love this product?", "How can we make this simpler?", "Which of the 7 deadly sins does this appeal to?". I think these are important questions and so I'll repeat them over and over and add them to the todo lists for every project, multiple times.
January 14, 2010 — When a problem you are working on forces you to wait, do you wait or switch tasks?
For example, if you are uploading a bunch of new web pages and it's taking a minute, do you almost instinctively open a new website or instant message?
I used to, and it made me less productive. I would try to squeeze more tasks into these short little idle periods, and as a result I would get less done.
Doing other things during idle times seems like it would increase productivity. After all, while you're waiting for something to load you're not getting anything done. So doing something else in the interim couldn't hurt, right? Wrong.
While you're solving one problem, you likely are "holding that problem in your head". It takes a while to load that problem in your head. You can only hold one important problem in your head at a time. If you switch tasks, even for a brief moment, you're going to need to spend X minutes "reloading" that problem for what is often only a 30 second vacation to Gmail, Facebook, Gchat, Hackernews, Digg, etc. It's clearly a bad deal.
If you're doing something worth doing, give it all of your attention until it's done. Don't work on anything else, even if you're given idle time.
Human intelligence is overrated. Even the smartest people I know still occasionally misplace their keys or burn toast. We are good at carrying out simple tasks when we focus, most of the time. But we are not built for multitasking.
Can you rub your head clockwise? Can you rub your belly counterclockwise? Can you say your ABC's backwards?
Dead simple, right? But can you do all three at once? If you can, by all means ignore my advice and go multitask.
If what you are doing is easy or mundane, multitasking is permissible because loading a simple problem like "laundry" into your head does not take much time. But if what you are doing is important and worth doing, you are obligated to give it your full attention and to wait out those "idle times".
If you switch tasks during your idle times, you're implying that the time to reload the problem is less than the time gained doing something else. In other words, you are implying what you are doing is not worth doing. If that's the case, why work on it at all?
January 12, 2010 — Whether you're an entrepreneur, a venture capitalist, a casual investor or just a shopper looking for a deal, you should know how to buy low and sell high. Buying low and selling high is not easy. It's not easy because it requires two things humans are notoriously bad at: long term planning and emotional control. But if done over a long period of time, buying low and selling high is a surefire way to get rich.
Warren Buffett is perhaps the king of buying low and selling high. These tips are largely regurgitated from his speeches and biographies which I've been reading over the past two years.
Everything has both a price and a value. Price is what you pay for something, value is what you get. The two rarely match. Both can fluctuate wildly depending on a lot of things. For instance, the price of gas can double or triple in a year based on events in the Middle East, but the value of a gallon of gas to you largely remains constant.
Don't let the market ever tell you the value of something--don't let it instruct you. Your job is to start figuring out the intrinsic value of things. Then you can take advantage when the price is far out of whack with the true value of something--you can make the market serve you.
Google's price today is $187 Billion. But what's its value? The average investor assumes the two are highly correlated. Assume the correlation is closer to 0. Make a guess about the true value of something. You may be way off the mark in your value-estimating abilities, but honing that skill is imperative.
You've got to be in a position to take advantage of the market, and if you spend your cash on unnecessary things, you won't be. Buy food in bulk at Costco. Cut your cell phone bill or cancel it altogether. Trim the fat wherever you can. You'd be surprised how little you can live off of and be happy. Read P.T. Barnum's "The Art of Moneygetting" for some good perspective on how being frugal has been a key to success for a long time.
The crazy market will constantly offer you "buy high, sell low" deals. You've got to be able to turn these down. If you don't have good cash flow or a cash cushion, it's very hard. That's why being frugal is so important.
If you're happy with what you have now it's easy to make good deals over the long run. Buying low and selling high requires long term emotional control. If you're unhappy or stressed, it's very hard to make clear headed decisions. Do what you have to do to get happy.
Out of the tens of thousands of potential deals you can make every month, which ones should you act on? The easy ones. Don't do deals in areas that you don't understand. Do deals where you know the area well. I wouldn't do a deal in commodities, but I'd certainly be willing to invest in early stage tech startups.
The easy deals have a wide margin of safety. An easy deal has a lot of upside. An easy deal with a wide margin of safety has little to no downside. Say a company has assets you determine are worth $1 Million and for some reason the company is selling for $950,000. Even if the company didn't grow, it has a good margin of safety because its assets alone are worth more than the price you paid.
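The margin of safety in that example can be put as a quick calculation (illustrative numbers from the paragraph above):

```python
assets_value = 1_000_000  # your estimate of the company's assets
price = 950_000           # what the market is asking

# Margin of safety: the discount of price relative to estimated value.
margin_of_safety = (assets_value - price) / assets_value
print(f"{margin_of_safety:.1%}")  # 5.0%
```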
How do you find these easy deals? You've got to read a lot. You've got to keep your eyes open. Absorb and think mathematically about a lot of information you encounter in everyday life.
Businesses can be the ultimate thing to buy low and sell high because they have nearly unlimited upside. Real estate, gold, commodities, etc., can be good investments perhaps. But when's the last time you heard of someone's house going up 10,000%? Starting a business can be your best investment ever, as you are guaranteed to buy extremely low, and have the potential to sell extremely high.
January 5, 2010 — Possibly the biggest mistake a web startup can make is to develop in a bubble. This is based on my own experience launching 13 different websites over the past 4 years. The raw numbers:
Type | Count | Successes | Time to Launch | Cumulative Gross Revenues | % of Total Traffic | Cumulative Profits | Emotional Toll |
---|---|---|---|---|---|---|---|
Bubble | 3 | 0 | Months | <$5,000 | <1% | -$10,000's | High |
Non-bubble | 10 | 5-8 | 1-14 days | $100,000's | >99% | Good | None to low |
The bubble is the early, early product development stage. When new people aren't constantly using and falling in love with your product, you're in the bubble. You want to get out of here as fast as possible.
If you haven't launched, you're probably in the bubble. If you're in "stealth mode", you're probably in the bubble. If you're not "launching early and often", you're probably in the bubble. If you're not regularly talking to users/customers, you're probably in the bubble. If there's not a steady uptick in the number of users in love with your product, you're probably in the bubble.
A part of you always wants to stay in the bubble because leaving is scary. Launching a product and having it flop hurts. You hesitate for the same reason you hesitate before jumping into a pool in New England: sure, sometimes they're heated, but most of the time they're frickin freezing. If the reception to your product is cold, if no one falls in love with it, it's going to hurt.
You can stand at the edge of the pool for as long as you want, but you're just wasting time. Life is too short to waste time.
In addition to wasting time, money and energy in the bubble (which can seem like a huge waste if your product flops), two things happen the longer you stay in the bubble:
This is a very bad combination that can lead to paralysis. The more you pour into your bubble product, the less impact your additional efforts will have yet at the same time the more you will expect your product to succeed.
Don't wait any longer: jump in the water, flee the bubble!
Here are four easy strategies for leaving the bubble: launch, launch & drop, pick one & launch, or drop.
Launch. Post your product to your blog today. Email your mailing list. Submit it to Reddit or Hackernews or TechCrunch. Just get it out there and see what happens. Maybe it will be a success.
Launch & Drop. Maybe you'll launch it and the feedback will be bad. Look for promising use cases and tweak your product to better fit those. If the feedback is still bad, drop the product and be thankful for the experience you've gained. Move on to the next one.
Pick One & Launch. If your product has been in the bubble too long, chances are it's bloated. Pick one simple feature and launch that. You might be able to code it from scratch in a day or two since you've spent so much time already working on the problem.
Drop. Ideas are for dating not marrying. Don't ever feel bad for dropping an idea when new data suggests it's not best to keep pursuing it. It's a sign of intelligence.
That's all I've got. But don't take it from me, read the writings of web entrepreneurs who have achieved more success. (And please share what you find or your own experiences on HackerNews).
December 28, 2009 — At our startup, we've practiced a diversification strategy.
We've basically run an idea lab, where we've built around 7 different products. Now we're getting ready to double down on one of these ideas.
The question is, which one?
Here's a 10 question form that you can fill out for each of your products.
2021 Update: I think the model and advice presented here is weak and that this post is not worth reading. I keep it up for the log, and not for the advice and analysis provided.
December 24, 2009 — Over the past 6 months, our startup has taken two approaches to diversification. We initially tried no diversification and then we tried heavy diversification.
In brief, my advice is:
Diversify heavily early. Then focus.
In the early stages of your startup, put no more than 33% of your resources into any one idea. When you've hit upon an idea that you're excited about and that has product/market fit, then switch and put 80% or more of your resources into that idea.
An investor diversifies when they put money into different investments. For example, an investor might put some money into stocks, some into bonds, and some into commodities. If one of these investments nosedives, they won't lose all their money. They also have better odds of picking some investments that generate good returns. The downside is that although diversifying reduces the odds of a terrible outcome, it also reduces the odds of a great outcome.
A startup diversifies when it puts resources into different products. For example, a web startup might develop a search engine and an email service at the same time and hope that one does very well.
There are 4 main benefits to diversifying:
If diversifying has so many benefits, should you ever stop? Yes, you should.
Focus when you are ready to make money.
Coming up with new ideas and building new, simple products is the easy part of startups. Unfortunately, developing new solutions is not what creates a lot of value for other people. Bringing your solution to other people is when most value is created--and exchanged.
Imagine you're a telecom company and you build a fiber optic network on the streets of every city in America--but fail to connect people's homes to the new system. Although connecting each home can be hard and tedious, without this step no value is created and no money will come your way.
When you hear the phrase "execution is everything", this is what it refers to. If you want to make money, and you've got a great team and found product/market fit, you've then got to focus and execute. Drop your other products and hunker down. Fix all the bugs in your main product. Really get to know your customers. Identify your markets and the order in which you'll go after them. Hire great people that have skills you are going to need.
Let's recap the benefits of focusing.
When you first begin your startup it's very similar to playing roulette. You plunk down some resources on an idea and then the wheel spins and you win more money or lose the money that you bet.
In roulette, you can bet it all on one number (focusing) or bet a smaller amount on multiple numbers (diversifying). If you bet it all on one number and win, you get paid a lot more money. But you're also more likely to lose it all.
The "game of startups" though, has two very important differences:
You get way more information about the odds of an idea "hitting the jackpot" after you've plunked some time and money into it. You may find customers don't really have as big a problem as you thought. Or that the market that has this problem is much smaller than you thought. You may find one idea you thought was silly actually solves a big problem for people and is wildly popular.
You can then adjust your bets. If your new info leads you to believe that this idea has a much higher chance of hitting the jackpot, grab your resources from the other ideas and plunk them all down on this one. Or vice versa.
Sadly, I bet there are paperboys whose businesses have done better than all of mine to date, so take my advice with a grain of salt.
But if you want to learn more, I suggest reading the early histories of companies such as eBay, Twitter, and Facebook and see what their founders were up to before they founded those sites and in the following early period.
And check back here; I'll hopefully be sharing how this approach worked for us.
December 23, 2009 — It is better to set small, meaningful goals than to set wild, audacious goals.
Here's one way to set goals:
Make them good. Make them small.
Good goals create value. Some examples:
Start small. It is better to set one or two goals per time period than to set two dozen goals. Instead of a goal like "get 1,000,000 people to your website", start with a smaller goal like "get 10 people to your website."
If you exceed a goal and still think it's a good thing, raise the goal an order of magnitude. If you get those 10 visitors, aim for 100.
Setting smaller goals is better because:
Another way to set goals is to use ranges. Set a low bar and a high bar. For example, your weekly goals might be:
Low Bar | High Bar | What |
---|---|---|
2 | 7 | new customers |
2 | 4 | product improvements |
1 | 3 | blog posts |
If you exceed your low bar, you can be happy. If you exceed your high bar, you can be very happy.
December 20, 2009 — Programming, ultimately, is about solving problems. Often I make the mistake of judging a programmer's work by the elegance of the code. Although the solution is important, what's even more important is the problem being solved.
Problems are not all created equal, so while programming you should occasionally ask yourself, "is this problem worth solving?"
Here's one rubric you can use to test whether a problem is worth solving:
The best programmers aren't simply the ones that write the best solutions: they're the ones that solve the best problems. The best programmers write kernels that allow billions of people to run other software, write highly reliable code that puts astronauts into space, write crawlers and indexers that organize the world's information. They make the right choices not only about how to solve a problem, but what problem to solve.
Life is too short to solve unimportant problems. If you want to solve important problems, it's now or never. The greatest programmers only get to solve a relatively small number of truly important problems. The sooner you get started working on those, the better.
If you don't have the skills yet to solve important problems, reach out to those who do. To solve important problems, you need to develop a strong skill set. But you can do this much faster than you think. If you commit to solving important problems and then reach out to more committed programmers than you, I'm sure you'll find many of them willing to help speed you along your learning curve.
December 16, 2009 — If you combine Paul Graham's "make something people want" advice with Sean Ellis' product-market fit advice (you have product-market fit when you survey your users and at least 40% of them would be disappointed if your product disappeared tomorrow), you end up with a possibly even simpler, more specific piece of advice:
Make something 40% of your users must have
Your steps are then:
Only when you hit that 40% number (or something in that range) should you be comfortable that you've really made something people want.
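The survey test is easy to mechanize. A sketch, with a hypothetical function name and made-up response counts:

```python
def has_product_market_fit(responses, threshold=0.40):
    """responses: answers to 'How would you feel if this product
    disappeared tomorrow?' Returns True if enough users say they'd
    be very disappointed."""
    must_have = sum(1 for r in responses if r == "very disappointed")
    return must_have / len(responses) >= threshold

# Hypothetical survey of 100 users: 42 can't live without the product.
survey = (["very disappointed"] * 42
          + ["somewhat disappointed"] * 33
          + ["not disappointed"] * 25)
print(has_product_market_fit(survey))  # True
```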
Does this advice work? I think it would for 3 reasons.
PG and Sean Ellis know what they're talking about.
I made a list of my "must have" products and they are all largely successful. I suggest you try this too. It's a good exercise.
My List of Must Haves:
I've worked on a number of products over the past 3 years.
One of them I can tell you had a "I'd be disappointed if this disappeared" rate of over 40%. We sold that site.
All the others did not have that same "must-have" rate. We launched Jobpic this summer at Demo Day. People definitely wanted it. But we didn't get good product/market fit. If we had surveyed our users, I bet less than 10% of them would report being disappointed if Jobpic disappeared. Our options are to change the product to achieve better product/market fit, or go forward with an entirely new product that will be a must have.
I don't know if this advice will work. But I'm going to try it.
Startup advice can be both exhilarating and demoralizing.
On the plus side, good advice can drastically help you. At the same time, if it's really good advice that means two things:
That can be frustrating. I've spent a few years now in the space, and to realize you've been doing certain things wrong for a few years is...well...painful.
But you laugh it off and keep chugging along.
December 15, 2009 — The best Search Engine Optimization (SEO) system I've come across comes from Dennis Goedegebuure, SEO manager at eBay. Dennis' system is called LUMPS. It makes SEO dead simple.
Just remember LUMPS:
These are the things you need to focus on in order to improve your SEO. You should also, of course, first know what terms you want to rank highly for.
LUMPS is listed in order of importance to search engines. So links are most important, sitemaps are least important.
Let's break each one down a bit more.
External links--links from domains other than your own--are most important. For external links, focus on 3 things, again listed in order of importance:
Your internal link structure is also important. Make sure your site repeatedly links to the pages you are optimizing for.
External links are the most important thing you need for SEO. Internal links you can easily control, but it takes time to accumulate a lot of quality external links. Focus on creating quality content (or even better, build a User Generated Content site). People will link to interesting content.
The terms you are optimizing for should be in your urls. It's even better if they are in your domain. For instance, if I'm optimizing for "breck yunits", I've done a good job by having the domain name breckyunits.com. If I'm optimizing for the term "seo made easy", ideally I'd have that domain. But I don't, so having breckyunits.com/seomadeeasy is the next best thing.
Luckily, URL Structure is not just important, it's also relatively easy to do well and you can generally set up friendly URLs in an hour or so. I could explain how to do it with .htaccess and so forth, but there are plenty of articles out there with more details on that.
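As a sketch of the idea, here's a minimal slug function (my own illustration, using the underscore style this site's URLs already follow; in practice you'd pair it with server rewrite rules as mentioned above):

```python
import re

def slugify(title):
    """Turn a page title into a keyword-friendly URL path segment."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "_", slug)  # collapse non-alphanumerics
    return slug.strip("_")

print(slugify("SEO Made Easy"))  # seo_made_easy
```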
Your TITLE tags and META DESCRIPTION tags are important for 2 reasons. First, search engines will use the content in them to rank your pages. Second, when a user sees a search results page, the title and description tags are what the user sees. You need good copy that will increase the Click Through Rate. Think of your title and description tags as the Link Text and Description in an AdWords ad. Just as you'd optimize the AdWords ad, you need to optimize this "seo ad". Make the copy compelling and clear.
Like URL structure, you can generally set up a system that generates good meta and description tags relatively easily.
Content is king. If you've got the other 3 things taken care of and you have great content, you're golden. Not only will great content please your visitors, but it will likely be keyword rich which helps with SEO. Most importantly, it is much easier to get links to valuable, interesting content than to bad content. Figure out a way to get great content and the whole SEO process will work a lot better.
Sitemaps are not the most crucial thing you can do, but they help and are an easy thing to check off your list. Use Google Webmaster tools and follow all recommendations and submit links to your sitemaps.
There you have it, SEO made easy! Just remember LUMPS.
December 13, 2009 — Do you "flip the bozo bit" on people?
If you don't know what that means, you probably do it unknowingly!
When you "flip the bozo bit" on someone you ignore everything they say or do. You flip the bozo bit on a person when they are wrong or make a mistake over and over again. Usually you flip the bozo bit unconsciously.
You are writing a program with Bob. Bob constantly writes buggy code. You get frustrated by Bob's bugs and slowly start ignoring all the code he submits and start writing everything yourself. You've flipped the bozo bit!
This is bad for everyone. Now you are doing more work, and Bob is becoming resentful because you are ignoring his ideas and work.
Instead of flipping the bozo bit, perhaps you could work with another person. If that's not possible, take a more constructive approach:
It seems like a simple evolutionary trick to save time. If someone is right only 10% of the time, would it be faster to ignore every statement they made, or faster to analyze each statement carefully in case it's the 1 out of 10 statements that might be true? Seems like it would be faster to just ignore everything by flipping the bozo bit.
But this is a bad solution. The two presented above are better.
December 11, 2009 — Jason Fried from 37signals gave a great talk at startup school last month. At one point he said "software has no edges." He took a normal, everyday bottle of water and pointed out 3 features:
If you added a funnel to help pour the water, that might be useful in 5% of cases, but it would look a little funny. Then imagine you attach a paper towel to each funnel for when you spill. Your simple water bottle is now a monstrosity.
The clear edges of physical products make it much harder for feature creep to happen. But in software feature creep happens, and happens a lot.
How do you fight feature creep in software? Here's an idea: do not put each new feature request or idea on a to-do list. Instead, put them on an (un)features list.
An (un)features list is a list of features you've consciously decided not to implement. It's a well maintained list of things that might seem cool, but would detract from the core product. You thought about implementing each one, but after careful consideration decided it should be an (un)feature and not a feature. Your (un)features list will also include features you built, but were only used by 1% of your customers. You can "deadpool" these features to the (un)features list. Your (un)features list should get as much thought, if not more, than your features list. It should almost certainly be bigger.
When you have an idea or receive a feature request, there's a physical, OCD-like urge to do something with it. Now, instead of building it or putting it on a todo list, you can simply write it down on your (un)features list, and be done with it. Then maybe your water bottles will look more like water bottles.
This blog is powered by software with an (un)features list.
Edit: 01/05/2010. Features are a great way to make money.
December 10, 2009 — Employees and students receive deadlines, due dates, goals, guidelines, instructions and milestones from their bosses and teachers. I call these "arbitrary constraints".
Does it really matter if you learn about the American Revolution by Friday? No. Is there a good reason why you must increase your sales this month by 10%, versus say 5% or 15%? No. Does it really matter if you get a 4.0 GPA? No.
But these constraints are valuable, despite the fact that they are arbitrary. They help you get things done.
Constraints, whether meaningful or not, simplify things and help you focus. We are simple creatures. Even the smartest amongst us need simple directions: green means go, red means stop, yellow means step on it. Even if April 15th is an arbitrary day to have your tax return filed, it is a simple constraint that gets people acting.
Successful people are good at getting things done. They focus well. Oftentimes they focus on relatively meaningless constraints. But they meet those constraints, however arbitrary. By meeting a lot of constraints, in the long run they hit enough of the non-arbitrary ones to achieve success. Google is known for its "OKRs" (objectives and key results), basically a set of arbitrary constraints that each employee sets and tries to hit.
If you start a company, there are no teachers or bosses to set these constraints for you. This is a blessing and a curse. It's a blessing because you get to choose constraints that are more meaningful to you and your interests. It's a curse because if you don't set these constraints, you can get fuddled. Being unfocused, at times, can be very beneficial. Having unfocused time is a great way to learn new things and come up with new ideas. However, to get things done you need to be focused. And the first step to get focused is to set some arbitrary constraints.
Here are some specific constraints I set in the past week:
All of these are mostly arbitrary. And I have not met all of them. But setting them has helped me focus.
If you don't meet your constraints, it's no big deal. They're largely arbitrary anyway. Even by just trying to meet your constraints, you learn a lot more. You are forced to think critically about what you are doing.
When you don't meet some constraints, set new ones. Because you now have more experience, the new ones might be less arbitrary.
But the important thing is just having constraints in the first place.
December 9, 2009 — A lot of people have the idea that maybe one day they'll become rich and famous and then write a book about it. That's probably because it seems like the first thing people do after becoming rich and famous is write a book about it.
But you don't have to wait until you're rich and famous to write a book about your experiences and ideas.
A few months ago I was talking to another MBA student, a very talented man, about 30 years old from a great school with a great resume. I asked him what he wanted to do for his career, and he replied that he wanted to go into a particular field, but thought he should work for McKinsey for a few years first to add to his resume. To me that's like, as Warren Buffett put it, "saving sex for your old age." It makes no sense.
Likewise, saving blogging for your old age makes no sense. There are two selfless reasons why you should start blogging now:
It used to take a lot of work to publish something. Now it is simpler than brushing your teeth. So publish, write, blog!
If you need some selfish reasons, here are 5:
Blogging. Don't save it for your old age.
December 8, 2009 — Finding experienced mentors and peers might be the most important thing you can do if you want to become a great programmer. They will tell you what books to read, explain the pros and cons of different languages, demystify anything that seems to you like "magic", help you when you get in a jam, work alongside you to produce great things people want, and challenge you to reach new heights.
Great coders travel in packs, just like great authors.
If you want to reach the skills of a Linus, Blake, Joe, Paul, David, etc., you have to build yourself a group of peers and mentors that will instruct, inspire, and challenge.
Here are 6 specific tips to do that.
Hopefully you'll find some of these tips useful. Feel free to email me if you need a first mentor (breck7 at google's email service). I'm not very good yet, but I may be able to help.
December 7, 2009 — Do you think in Orders of Magnitude? You should.
If you think in orders of magnitude you can quickly visualize how big a number is and how much effort it would take to reach it.
Orders of magnitude is a way of grouping numbers. The numbers 5, 8 and 11 are all in the same order of magnitude. The numbers 95, 98 and 109 are in the same order of magnitude as well, but their order of magnitude is one order of magnitude greater than 5, 8, 11.
Basically, if you multiply a number by 10, you raise it one order of magnitude. If you've ever seen the scary-looking notation 5x10^2, just take the number five and raise it 2 orders of magnitude (to 500).
Think of orders of magnitude as rough approximations. If you want the number 50 to be in the same order of magnitude as the number 10, you can say that "it's roughly in the same order of magnitude" or that "it's about half an order of magnitude bigger". Don't worry about being exact.
Orders of magnitude is a great system because generally there's a huge difference between 2 numbers in different orders of magnitude. Thus to cross from one order of magnitude to the next, a different type of effort is required than to simply increment a number. For example, if you run 2 miles each day and then decide to run one more, 3 total, it should be easy. But if you decided to run one more order of magnitude, 20 miles, it would take a totally new kind of effort. You'd have to train longer, eat differently, and so forth. To go from 2 to 3 requires a simple approach, just increase what you're doing a bit. To go from 2 to 20, to increase by an order of magnitude, requires a totally different kind of effort.
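Under the hood, this grouping is just base-10 logarithms. Here's a small Python sketch (my own illustration; the function name is made up, none of this is from the original post) that measures roughly how many orders of magnitude apart two numbers are:

```python
import math

def orders_apart(a, b):
    """Roughly how many orders of magnitude bigger b is than a.
    Each whole unit of the result means another 10x."""
    return math.log10(b / a)

print(orders_apart(2, 3))    # ~0.18: same ballpark, just run a bit more
print(orders_apart(2, 20))   # 1.0: one order of magnitude, a new kind of effort
print(orders_apart(10, 50))  # ~0.7: "about half an order of magnitude bigger"
```

Don't read the decimals too precisely; the whole point of the system is rough grouping.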
Let's do a business example.
Pretend you started a business delivering pizza. Today you have 5 customers, make 5 pizzas a week, and earn $50 in revenue per week.
You can keep doing what you're doing and slowly raise that to 6 customers, then 7 and so on. Or you can ask yourself, "How can I increase my business an order of magnitude?"
Going from 5 to 50 will take a different type of effort than just going from 5 to 6. You may start advertising or you might create a "Refer a Customer, get a free pizza" promotion. You might have to hire a cook. Maybe lower your price by $2.
Imagine you do all those things and now have 50 customers. How do you get to 500?
Now you might need a few employees, television advertisements, etc.
Growing a business is the process of focusing like a laser on the steps needed to reach the next order of magnitude.
Here are some more examples of orders of magnitude if it's still not clear:
Bill Gates has approximately $50,000,000,000. Warren Buffett has $40,000,000,000. For Warren to match Bill, he merely has to make a few more great investments and hope Microsoft's stock price doesn't go up. He does not have to increase his wealth an order of magnitude. I, on the other hand, have $5 (it was a good month). For me to become as rich as BillG, I have to increase my wealth 10 orders of magnitude. That means I'd have 10 different types of hard challenges to overcome to match BillG's wealth.
December 6, 2009 — Imagine you are eating dinner with 9 friends and you all agree to play Credit Card Roulette. Credit Card Roulette is a game where everyone puts their credit card in a pile and the server randomly chooses one and charges the whole meal to it.
Imagine you are playing this game with your own friends. Pause for a second and picture it happening.
...
What did you see?
I bet you saw one person's card get picked and that person was sad and everyone else laughed.
Wrong!
This is not what really happened! Despite the fact that you observed only one person's card getting picked, in reality everyone's card got chosen.
In reality, when you played the game, the world split into 10 paths, and every person's card got picked in one of those paths. You only observed one path, but trust me, there were 9 others.
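Even without the many-worlds framing, a quick simulation (my own sketch, not part of the original post) shows the symmetry: across many repeated games, each of the 10 cards gets picked about equally often.

```python
import random

def simulate_roulette(num_friends=10, games=100_000):
    """Play Credit Card Roulette many times and track how often
    each friend's card is the one the server picks."""
    counts = [0] * num_friends
    for _ in range(games):
        loser = random.randrange(num_friends)  # server grabs one card at random
        counts[loser] += 1
    return [c / games for c in counts]

shares = simulate_roulette()
# Every share comes out near 1/10: each "path" is equally likely.
```

Over enough games, your expected bill converges to exactly your share of all the dinners, which is why the game feels fair even though any single night looks lopsided.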
This is a simple example of the many worlds law. You probably were not taught the many worlds law in school, which is a shame. It's one of the most important laws in the world.
December 4, 2009 — Do you want to become a great coder? Do you have a passion for computers but not a thorough understanding of them? If so, this post is for you.
There is a saying that it takes 10,000 hours of doing something to master it.
So, to master programming, it might take you 10,000 hours of actively coding or thinking about coding. That translates to a consistent effort spread out over a number of years.
There is another saying that I just read which inspired me to write this, that says "there is no speed limit".
In that post, Derek Sivers claims that a talented and generous guy named Kimo Williams taught him 2 years worth of music theory in five lessons. I have been learning to program for 2 years, and despite the fact that I've made great progress, my process has been slow and inefficient.
I did not have a Kimo Williams. But now that I know a bit, I'll try and emulate him and help you learn faster by sharing my top 12 lessons.
I'll provide the tips first, then if you're curious, a little bit more history about my own process.
That's it, go get started!
Actually, I'll give you one bonus tip:
Two years ago, in December 2007, I decided to become a great programmer. Before then, I had probably spent under 1,000 hours "coding". From 1996 to 2007, age 12 to age 23, I spent around 1,000 hours "coding" simple things like websites, MS-DOS .bat scripts, simple PHP functions, and "hello world" type programs for an Introduction to Computer Science class. Despite the fact that I have always had an enormous fascination with computers, and spent a ton of time using them, I was completely clueless about how they worked and how to really program.
(If you're wondering why I didn't start coding seriously until I was 23 and out of college, there's a simple and probably common reason: the whole time I was in school my goal was to be cool, and programming does not make you cool. Had I known I would never be cool anyway, I probably would have started coding sooner.)
Finally in December 2007 I decided to make programming my career and #1 hobby. Since then I estimate I've spent 20-50 hours per week either coding or practicing. By practicing I mean reading books about computers and code, thinking about coding, talking to others, and all other related activities that are not actually writing code.
That means I've spent between 2,000-5,000 hours developing my skills. Hopefully, by reading these tips, you can move much faster than I have over the past 2 years.
December 3, 2009 — What would happen if instead of writing about subjects you understood, you wrote about subjects you didn't understand? Let's find out!
Today's topic is linear algebra. I know almost nothing about vectors, matrices, and linear algebra.
I did not take a Linear Algebra course in college. Multivariable calculus may have done a chapter on vectors, but I only remember the very basics: it's a magnitude with a direction, or something like that.
I went to a Borders once specifically to find a good book to teach myself linear algebra with. I even bought one that I thought was the most entertaining of the bunch. Trust me, it's far from entertaining. Haven't made it much further than page 10.
I bet vectors, matrices, and linear algebra are important. In fact, I'm positive they are. But I don't know why. I don't know how to apply linear algebra in everyday life, or if that's something you even do with linear algebra.
I use lots of math throughout the day such as:
But I have no idea when I should be using vectors, matrices, and other linear algebra concepts throughout the day.
There are lots of books that teach how to do linear algebra. But are there any that explain why?
Would everyone benefit from linear algebra just as everyone would benefit from knowing probability theory? Would I benefit?
I don't know the answer to these questions. Fooled by Randomness revealed to me why probability is so incredibly important and inspired me to master it. Is there a similar book like that for linear algebra?
I guess when you write about what you don't know, you write mostly questions.
December 2, 2009 — What books have changed your life? Seriously, pause for a few minutes and think about the question. I'll share my list in a moment, but first come up with yours.
Do you have your list yet? Writing it down may help. Try to write down 10 books that you think have most impacted your life.
Take all the time you need before moving on.
Are you done yet? Don't cheat. Write it down then continue reading.
Okay, at this point I'm assuming you've followed instructions and wrote down your list of 10 books.
Now you have one more step. To the right of each book title, write "fiction" or "nonfiction". You can use the abbreviations "F" and "NF" if you wish.
You should now have a list that looks something like mine:
Now, count the NF's. How many do you have? I have 7. So 7 out of the 10 books that I think have most impacted my life are non-fiction. Therefore, if I have to guess whether the next book I read that greatly impacts my life will be fiction or nonfiction, my guess is it will be nonfiction.
What's your list? Do you think the next book that will greatly impact your life will be fiction or non-fiction?
Share your results here.
Experience is what you get when you don't get what you want.
December 2, 2009 — How many times have you struggled towards a goal only to come up short? How many times have bad things happened to you that you wish hadn't happened? If you're like me, the answer to both of those is: a lot.
But luckily you always get something when you don't get what you want. You get experience. Experience is data. When accumulated and analyzed, it can be incredibly valuable.
To be successful in life you need to have good things happen to you. Some people call this "good luck". Luck is a confusing term. It was created by people who don't think clearly. Forget about the term "luck". There is not "good luck" and "bad luck". Instead, "good things happen", and "bad things happen". Your life is a constant bombardment of things happening, good and bad. Occasionally, despite making bad decisions steadily, some people have good things happen to them. But in most cases to have good things happen to you, you've got to make a steady stream of good decisions.
You've got to see patterns in the world and recognize cause and effect. You've got to think through your actions and foresee how each action you take will affect the chances of "good things happening" versus "bad things happening" down the line.
When you're fresh out of the gate, it's hard to make those predictions. You just don't have any data so you can't analyze cause and effect appropriately. But once you're out there attempting things, even if you screw up or don't get what you want, you get experience. You get data to use to make better decisions in the future.
December 2, 2009 — Decided to blog again. I missed it. Writing publicly, even when you only get 3 readers, two of which are bots and the other is your relative, is to the mind what exercise is to the body. It's fun and feels good; especially when you haven't done it in a while.
Also decided to go old school. No Wordpress or Tumblr, Blogger or Posterous. Instead, I'm writing this on pen and paper. Later I'll type it into HTML using Notepad++, vim, or equivalent (EDIT: after writing this I coded my own, simple blogging software called brecksblog). It will just be text and links. Commenting works better on hackernews, digg, or reddit anyway.
Hopefully these steps will result in better content. Pen and paper make writing easier and more enjoyable, so hopefully I'll produce more. And the process of typing should serve as a filter. If something sucks, I won't take the time to type it.
I'm writing to get better at communicating, thinking, and just for fun. If anyone finds value in these posts, that's an added bonus.
Written 11/30/2009