Lambda Calculus
In 1936, Alonzo Church invented a universal model of computation called “lambda calculus.” This system expresses computation as reductions on lambda expressions, which are basically just functions and variables. The system is simple but incredibly expressive, and serves as the foundation for programming languages such as Haskell and Idris.
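To get a feel for what “computation as reduction” means, here is a tiny sketch of my own (not from the post), mirroring one beta reduction in Haskell:

-- The lambda term λx. x is the identity function, written as a Haskell lambda.
identity :: a -> a
identity = \x -> x

-- Applying it performs a single beta reduction: (λx. x) 5 → 5.
five :: Int
five = (\x -> x) 5  -- evaluates to 5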
The significance of having a universal model of computation is that it provides a way of solving any problem that can be expressed in the system, and changes the problem from “how can I calculate this?” to “can I express this properly?” Lambda calculus is also Turing complete, and, even more impressively, it was invented in the 1930s independently of Turing’s work.
My Infinity is Bigger Than Yours
What is Infinity?
We are going to start this post by trying to figure out what infinity is. What do you think of when you think of infinity? Most people know it as this uncountable number or thing (or is it…?). There is no beginning or end; it just goes on forever.
Semigroups and Monoids
In functional programming there are two concepts that are mentioned a lot that may sound intimidating: the semigroup and the monoid (not monad). These concepts come from abstract algebra and feature heavily in category theory, a branch of math that aims to reason about the entirety of math through structures called “categories”. While monoids and semigroups sound complex, they are a simple but powerful abstraction that allows you to generically combine values of a data type.
Semigroups
Before we get into monoids, we first need to define a semigroup, since every monoid is a semigroup (the monoids form a strict subset of the semigroups). A semigroup is basically a set with an associative binary operation defined on it, often written as multiplication. In Haskell, we don’t have a multiplication operation, but rather the <> operator. Both of these allude to the same thing: a method of coalescing elements of a set in an associative manner. Note that all I mean by “associative operation” is that a <> (b <> c) = (a <> b) <> c, so the grouping shouldn’t matter when chaining these operations together (just like with multiplication). Associativity says nothing about the order of the operands; that would be commutativity.
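To make this concrete, here is a small sketch of my own in Haskell: lists form a semigroup where <> is concatenation, and the associativity law is easy to check by hand.

-- Lists form a semigroup: (<>) is concatenation, and concatenation is associative.
associativityHolds :: Bool
associativityHolds =
  ([1] <> ([2] <> [3])) == (([1] <> [2]) <> [3 :: Int])
-- Both groupings evaluate to [1,2,3], so this is True.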
Introducing oars
The Motivation Behind oars
For a while now, I have been working on a library called oars (orthogonal arrays rust).
The need for this library was born out of the work we were doing for my Bachelor’s thesis and the related EGSR paper. The team and I were implementing orthogonal array construction methods and using Art Owen’s techniques to create point sets from these orthogonal arrays that are suitable for Monte Carlo integration. At first, we implemented these construction methods and relied on code review and visualizations to ensure that the points were correct (i.e. valid orthogonal arrays). As I attempted to implement more complex methods, I found it increasingly difficult to catch small errors and to verify my work in general. I decided to create something new that would help with verification and allow for faster iteration.
A Memory and Space Constant Shuffling Algorithm
Andrew Kensler, a researcher at Pixar, introduced an interesting technique for generating a random permutation of an array in his 2013 paper, Correlated Multi-Jittered Sampling.
First, let’s look at the naive way of generating a permutation. You construct an array of the elements from 0 to n - 1, and then you randomly shuffle them. Then your resulting array (let’s call it A) will have the permuted value for i at A[i].
from random import shuffle

n = 10
permutation = list(range(n))
shuffle(permutation)  # shuffles in place and returns None

The bright side of this is that it’s really easy to implement and fairly easy to access: you simply subscript the array at whichever number you want to permute and you get the permutation for that number. The downside is that it’s O(n) in both space and time. At the very least, you need to create an array of n elements, which is O(n) space and O(n) time, and then shuffle the array, which is also O(n) time with the Fisher-Yates algorithm that Python’s shuffle uses.
Using the Latest LLVM Release on macOS
macOS is really frustrating in how it handles its libraries and compilers. It also ships an unspecified version of LLVM, which generally isn’t the latest stable release. With a little tweaking, however, you can use the latest version of LLVM or GCC on your Mac and reliably use it for your C and C++ tooling.
Installation
First, you need to install the latest version of LLVM. Most people nowadays are using Homebrew; if you don’t have it, you can install LLVM from source, which takes a lot of time to compile. If you have brew, you can install LLVM with:
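brew install llvm

Note that the llvm formula is keg-only, so brew won’t link it into your default paths; it prints the PATH and compiler flag additions you need when the install finishes.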
Beautiful PDFs from Your Markdown Notes
This is a short post on how I write my notes in Markdown and convert them to beautiful PDFs using pandoc and the eisvogel template. I really like Markdown: it’s simple and has just enough formatting features to be useful and expressive. It’s quick, too. I would not write notes in real time with LaTeX (way too slow), but it’s perfectly doable with Markdown.
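Once you have pandoc and the eisvogel template installed, the conversion is a single command, something along these lines (notes.md is a placeholder name for your notes file):

pandoc notes.md -o notes.pdf --template eisvogel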
You can turn Markdown files into PDFs that look like this:
My Neovim Development Setup
It’s been a while since I wrote about my Neovim setup. Since my last post, my nvim config has grown to be a little more sophisticated, and I finally worked out autocompletion and linting for all of the languages I work with.
Here’s what my editor looks like:
I have posted my full Neovim configuration on GitHub.
Split up your init.vim
I had a horribly long init.vim file before. Long files are ugly and clunky to manage, so I highly recommend splitting your init.vim into more manageable chunks. The way you do this is by sourcing each chunk from your main init.vim file. Suppose we have one file for our deoplete settings and another for our language client settings. Our init.vim could look something like this:
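" init.vim: one sourced file per concern (these file names are just examples)
source ~/.config/nvim/deoplete.vim
source ~/.config/nvim/language_client.vim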
All possible encodings from a numerical mapping
I haven’t done interview prep in a while, and I decided to get back into it after I saw a practice problem that caught my eye. I got it from this mailing list.
The problem
Suppose we have a mapping of letters to numbers, or an encoding, such that a = 1, b = 2, …, z = 26. Given a string of digits, we want to find all of the possible ways it could be decoded back into letters.
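To sketch the idea (my own code, not the post’s, and it assumes the a = 1, …, z = 26 mapping above): at each step we can peel off one digit, or two digits if they form a number of at most 26, and recurse on the remainder.

import Data.Char (chr, ord)

-- Every way to decode a digit string into letters, assuming a = 1 .. z = 26.
decodings :: String -> [String]
decodings [] = [""]
decodings ('0':_) = []  -- no letter maps to 0, so this branch is a dead end
decodings (d:ds) = single ++ double
  where
    single = [ letter [d] : rest | rest <- decodings ds ]
    double = case ds of
      (d2:ds') | read [d, d2] <= (26 :: Int) ->
        [ letter [d, d2] : rest | rest <- decodings ds' ]
      _ -> []
    letter s = chr (ord 'a' + read s - 1)

For example, decodings "12" yields ["ab", "l"].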
Parallel Microservices with Dependencies
The problem
Right now, I’m a software engineering intern on the infrastructure team at Blend Labs. We run a standard microservice architecture on Kubernetes (k8s) on AWS instances. Internal apps make it convenient to run and deploy microservices by abstracting away a lot of the k8s details. It’s simple: you just register an image or a git repo and let our tools do the rest.