diff --git a/content/blog/00_compiler_intro.md b/content/blog/00_compiler_intro.md
index 84c6ae9..5ae24aa 100644
--- a/content/blog/00_compiler_intro.md
+++ b/content/blog/00_compiler_intro.md
@@ -40,7 +40,7 @@ Let's go over some preliminary information before we embark on this journey.
 #### The "classic" stages of a compiler
 Let's take a look at the high level overview of what a compiler does. Conceptually,
 the components of a compiler are pretty cleanly separated.
-They are as gollows:
+They are as follows:
 
 1. Tokenizing / lexical analysis
 2. Parsing
diff --git a/content/blog/01_compiler_tokenizing.md b/content/blog/01_compiler_tokenizing.md
index edfcecc..f602922 100644
--- a/content/blog/01_compiler_tokenizing.md
+++ b/content/blog/01_compiler_tokenizing.md
@@ -103,7 +103,7 @@ generate a state machine, and convert it into code to simulate that state machin
 that code as part of our compiler. This way, we have a state machine "hardcoded"
 into our tokenizer, and no conversion of regex to DFAs needs to be done at runtime.
 
-#### The Practice
+### The Practice
 Creating an NFA, and then a DFA, and then generating C++ code are all cumbersome. If we had
 to write code to do this every time we made a compiler, it would get very repetitive, very fast.
 Fortunately, there exists a tool that does exactly this for us - it's called `flex`. Flex