2021-05-03 20:48:52 -07:00

## Lazy

Heyo! This sounds like a fun project (and I'm not at all biased by my own choice of final project). I'll be linking stuff I mention here without second thought; you may very well be aware of the stuff I link, but it's easier to just dump all the information into this post. Here are a few ideas for you:

• You talk about linear algebra and vectors, as well as supporting simple extensions like dot products. Matrices and vectors are particularly interesting, in my view, because they can be indexed by their size! For instance, the 2x2 identity matrix is not at all the same as the 3x3 identity matrix. Operations that work for one of these matrices do not necessarily work for the other: the simplest example is probably matrix multiplication. If you choose to include the size of the matrix in its type information (à la `Matrix(3,3)`, which I think is vastly superior to having a basic `Matrix` type), you run into an issue: what's the type of the `multiply` operation? The best way to formulate it is probably as `(Matrix n m, Matrix m k) -> Matrix n k`. But note that `n`, `m`, and `k` are not type parameters, but values! When types can depend on values like this, you arrive at dependent types, which is one of the axes of the lambda cube. It would be interesting to see this feature implemented in your type system!
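
To make the idea concrete, here's a minimal sketch of size-indexed matrices, approximated in Haskell with type-level naturals rather than full dependent types (the names `Matrix`, `multiply`, and `identity` are mine, not a proposed API):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE ScopedTypeVariables #-}

import Data.List (transpose)
import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal)

-- A matrix tagged with its dimensions at the type level. (The sizes are
-- not checked against the list contents here; a real implementation would
-- hide the constructor behind size-checked smart constructors.)
newtype Matrix (n :: Nat) (m :: Nat) = Matrix [[Double]]
  deriving (Eq, Show)

-- Multiplying an (n x m) matrix by an (m x k) matrix gives an (n x k)
-- matrix; the shared dimension m is enforced by the type checker.
multiply :: Matrix n m -> Matrix m k -> Matrix n k
multiply (Matrix a) (Matrix b) =
  Matrix [[sum (zipWith (*) row col) | col <- transpose b] | row <- a]

-- The identity matrix reads its own size from its type.
identity :: forall n. KnownNat n => Matrix n n
identity = Matrix [[if i == j then 1 else 0 | j <- [1 .. size]] | i <- [1 .. size]]
  where size = natVal (Proxy :: Proxy n)

main :: IO ()
main = print (multiply (Matrix [[1, 2], [3, 4]] :: Matrix 2 2) (identity :: Matrix 2 2))
```

With this setup, multiplying a `Matrix 2 3` by a `Matrix 2 2` is a compile-time error rather than a runtime one.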

• If you already carry around type information for your matrices, another interesting question is: what if I wanted a different algorithm for a different matrix size? For instance, there's a fairly simple definition of a matrix determinant for a 2x2 matrix, while the general case of a matrix determinant is a little bit more involved. C++ has this feature (changing implementation depending on the type) in the form of template specialization; more generally, this is an example of terms (functions, for instance) depending on types (the type of a matrix). This, I believe, is an example of ad hoc polymorphism, which falls under the y-axis of the lambda cube linked above. If you are feeling particularly adventurous, you can try implementing this!
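
Sketching the same idea in Haskell: type classes let the implementation vary with the type, so a 2x2 determinant can use the direct formula while every other size falls back to the general algorithm (all names here are illustrative, and the overlapping-instance trick is one of several ways to do this):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE KindSignatures #-}

import GHC.TypeLits (Nat)

newtype Matrix (n :: Nat) (m :: Nat) = Matrix [[Double]]

class Det a where
  det :: a -> Double

-- Specialized instance for the 2x2 case: just ad - bc.
instance Det (Matrix 2 2) where
  det (Matrix [[a, b], [c, d]]) = a * d - b * c
  det _ = error "malformed 2x2 matrix"

-- General instance: Laplace (cofactor) expansion along the first row.
instance {-# OVERLAPPABLE #-} Det (Matrix n n) where
  det (Matrix m) = laplace m
    where
      laplace [[x]] = x
      laplace (r : rs) =
        sum [sign j * (r !! j) * laplace (minor j rs) | j <- [0 .. length r - 1]]
      laplace [] = 1
      sign j = if even j then 1 else -1
      minor j rows = [take j row ++ drop (j + 1) row | row <- rows]

main :: IO ()
main = do
  print (det (Matrix [[1, 2], [3, 4]] :: Matrix 2 2))  -- dispatches to the 2x2 formula
  print (det (Matrix [[1, 2, 3], [4, 5, 6], [7, 8, 10]] :: Matrix 3 3))  -- general case
```

The `OVERLAPPABLE` pragma lets GHC pick the more specific 2x2 instance when both match, which is essentially what C++ does with template specialization.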

• I'm not sure if you should be worried about parsing when defining language extensions. A more interesting question is: what is an extension, and how much power does it have? Would writing an extension correspond to writing a module in Haskell, or a class in Java? Or would it be more along the lines of Lisp libraries, which can (if I remember correctly) modify the entire language when loaded? If you want something like the latter, then you may be interested to know that Lisp is homoiconic; that is, programs are represented as ordinary data structures, so they can effectively manipulate their own code and definitions. Perhaps extensions could build upon this to, well, extend the language. For your ideas about extensible notation, you could check out Agda's and Coq's (in that order) notation mechanisms, since they both seem quite powerful and fairly ergonomic.
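
As a toy model of the latter (purely illustrative, not a design proposal): if programs are plain data, an "extension" can be any rewrite over the syntax tree:

```haskell
-- A tiny expression type for a hypothetical DSL; all names are made up.
data Expr
  = Num Double
  | Add Expr Expr
  | Mul Expr Expr
  | Neg Expr
  deriving (Eq, Show)

-- In this model, an "extension" is just a function over the syntax tree:
-- the program is ordinary data that extensions can inspect and rewrite.
type Extension = Expr -> Expr

-- Example extension: eliminate Neg by rewriting -x as (-1) * x, so that
-- later passes only ever need to handle Num, Add, and Mul.
desugarNeg :: Extension
desugarNeg (Neg e)   = Mul (Num (-1)) (desugarNeg e)
desugarNeg (Add a b) = Add (desugarNeg a) (desugarNeg b)
desugarNeg (Mul a b) = Mul (desugarNeg a) (desugarNeg b)
desugarNeg e         = e

main :: IO ()
main = print (desugarNeg (Add (Num 1) (Neg (Num 2))))
```

Lisp gets this essentially for free because source code already *is* the data structure; in a typed DSL you'd have to decide how much of the tree extensions are allowed to see and change.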

• "The output type will be a Maybe value, so a run time error such as accessing an undefined variable will result in a Nothing being returned." Consider using `Either LanguageError` (with `LanguageError` being a type you define), since it'll help you convey more helpful information to the user. I also recommend taking a look at the `Reader` and `Except` monad transformers: the former can be used to carry context such as a stack trace, and the latter to cleanly short-circuit evaluation when an error occurs.
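
A minimal sketch of what that stack could look like, with `LangError` standing in for whatever error type you define (here `Reader` just carries the variable environment; the same idea extends to carrying a call stack for error reports):

```haskell
import Control.Monad.Except (Except, runExcept, throwError)
import Control.Monad.Reader (ReaderT, ask, runReaderT)
import qualified Data.Map as Map

-- A dedicated error type conveys far more than Nothing ever could.
data LangError = UndefinedVar String
  deriving (Eq, Show)

type Env = Map.Map String Double

-- Reader supplies the environment; Except short-circuits on failure.
type Eval a = ReaderT Env (Except LangError) a

data Expr = Num Double | Var String | Add Expr Expr

eval :: Expr -> Eval Double
eval (Num n) = pure n
eval (Var x) = do
  env <- ask
  case Map.lookup x env of
    Just v  -> pure v
    Nothing -> throwError (UndefinedVar x)
eval (Add a b) = (+) <$> eval a <*> eval b

run :: Env -> Expr -> Either LangError Double
run env e = runExcept (runReaderT (eval e) env)

main :: IO ()
main = do
  print (run (Map.fromList [("x", 1)]) (Add (Var "x") (Num 2)))  -- Right 3.0
  print (run Map.empty (Var "y"))  -- Left (UndefinedVar "y")
```

The nice part is that `throwError` anywhere inside `eval` aborts the whole evaluation, and the caller gets back *which* variable was undefined rather than a bare `Nothing`.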

## Purple Cobras

• A totality checker sounds fascinating! Since you're writing a functional DSL, recursion will probably be the bread and butter of your programs. You probably know this, but the Idris and Coq totality checkers use structural recursion to ensure totality: you can only recurse on values that were "unpacked" from the current arguments. This is fairly restrictive; for instance, the following definition of `interleave` would be rejected by Coq's totality checker (if I remember correctly; I last encountered this a couple of years ago):

```haskell
interleave [] xs = xs
interleave (y:ys) xs = y : interleave xs ys
```

The issue in the above example is that `xs` is not unpacked from `y:ys`.

An alternative approach I've heard of is using SMT and SAT solvers to compute various invariants about your code, and I suspect that termination may be one of them. If you can somehow assert (and verify via SMT/SAT brute force) that there's a function of your arguments that strictly decreases across recursive calls (for instance, the sum of the lengths of `xs` and `ys` in the above example), you can convince yourself that the recursion is not infinite, and therefore that the function terminates.
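
As a dynamic illustration of what such a checker would prove statically (the `measure` name and the runtime check are mine; a real checker would discharge the inequality once, symbolically, rather than testing it on every call):

```haskell
-- The measure a solver might find for interleave: the combined length
-- of both arguments.
measure :: [a] -> [a] -> Int
measure xs ys = length xs + length ys

-- interleave, instrumented to check at runtime that every recursive
-- call strictly decreases the measure. Here the check always succeeds,
-- which is exactly the fact a solver would need to verify.
interleave :: [a] -> [a] -> [a]
interleave [] xs = xs
interleave (y : ys) xs
  | measure xs ys < measure (y : ys) xs = y : interleave xs ys
  | otherwise = error "measure failed to decrease"

main :: IO ()
main = print (interleave [1, 3] [2, 4 :: Int])
```

Note that the measure compares the *swapped* arguments of the recursive call against the current ones, which is precisely the step a purely structural checker can't follow.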

• I'm sure you're aware of this, since Ben was the one who suggested this week's reading group paper, but linear types would be excellent for this! Modifying pixels in a buffer can be done in-place if that buffer is linearly typed, since you are then guaranteed that the effect of your destructive in-place update is not observable anywhere else. I think this would be particularly useful when you work on composition: while the "low level" details of what changes and what doesn't may be lost at the abstraction boundaries of the shader programs being composed, a linearly typed output from one shader would give the next shader in the pipeline enough information to keep using in-place operations.

• It seems like not worrying about "undefined" values falls out of using a functional language: you certainly can't read an uninitialized variable in something like vanilla Haskell. What exactly do you mean, then, about avoiding the use of undefined values in your language? I'm not sure I understand this objective.