Notes on a Property Graph Query Language Parser with Goyacc

This past week, I implemented pgql-go, a parser and AST definition in Go. When you are implementing a graph database, picking a query language can be tricky. This research is part of that journey for Itergia Core.

Looking at GraphQL, openCypher, PGQL and Gremlin, none of them felt like the obvious choice. Gremlin seems to be a lower-level language; perhaps a better way to think about it is as the output of a query planner: the execution plan. For now, I wanted to stay at a high level, so I ignored Gremlin.

    GraphQL is, well, barely a graph query language. It seems to be lacking recursion and transitive queries. Ignored.

Caveat emptor: I'll be highlighting things I think are wrong and should change. It may sound very negative. There are many things that are right which I won't bring up at all. The point is not to dissuade anyone from using any of the tools I bring up, but to document issues that I can compare against issues I find elsewhere later. And indeed, if you are aware of the weaknesses of a tool, perhaps it's better to use that tool than something where no one has yet found the faults.

    SQLish Languages

openCypher (from Neo4j) and PGQL (from Oracle) are virtually the same, both trying to be readable in English, like SQL. The edge/vertex matching syntax looks very similar in both. From the looks of it, both camps are trying to get ISO to standardize GQL, a graph query language that extends SQL. PGQL has somewhat wider support for data types, and its syntax felt better integrated into SQL, so I decided I wanted to study it more. Is there an existing parser in Go? No? Then that's a research project.

    Goyacc and ANTLR

I’ve used Bison (the GNU Yacc) many times for C/C++. (And PLY was my favorite the last time I had to write a parser in Python.) The Go community prefers writing recursive-descent parsers in plain Go. However, since PGQL most likely is not an LL(1) grammar (given its SQL inheritance), a plain-Go parser would not be fun to write. One common parser generator capable of handling grammars beyond LL(1) is ANTLR, whose ALL(*) algorithm computes lookahead adaptively. It can target Go, and is used in cloudprivacylabs/openCypher. The benefit of ANTLR is that it outputs a parse tree without much code, unlike Yacc. The drawback is that you don’t have as much control over the output. It also supports EBNF, which is a nice extension to BNF (and indeed what the PGQL specification is written in).

I felt that Goyacc was the more Goish thing to use, so I decided to go with that for this project. When making this decision, you have to keep in mind that your grammar will be much more verbose once converted from EBNF to plain BNF. For a research project, that might even provide additional insights, since it forces you to work with the specification more closely. As an example, an optional keyword like

    PropertiesAreAllColumns: 'PROPERTIES' 'ARE'? 'ALL' 'COLUMNS' ExceptColumns?
    

    Becomes

PropertiesAreAllColumns: 'PROPERTIES' OptAre 'ALL' 'COLUMNS' OptExceptColumns

OptAre: /* empty */ | 'ARE'

OptExceptColumns: /* empty */ | ExceptColumns
    

    There is no difference in expressivity, just how verbose it is to write and maintain.
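
Mechanically, the build step is simple and fits the usual go:generate workflow; the file names and symbol prefix below are illustrative, not the ones in pgql-go:

//go:generate goyacc -o parser.go -p pgql grammar.y

The -p flag renames the generated yy* symbols, which matters if you keep more than one parser in a package.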

    Thoughts on Goyacc

    Goyacc is mostly obvious, and follows the Yacc tradition. There are, however, three things worth pointing out:

1. Position support is missing. There is no out-of-band place for the lexer/scanner/tokenizer to put input position information, and the error messages don’t include it. This is strange, since Goyacc’s own error messages do show positions. One would think that if you know token positions are useful, you would add them to the API to promote the best practice. Instead, you have to include them in every member of the %union used by the lexer. An ugly hack is to use the last-read input token as the error position. This often works, but if you use the built-in error non-terminal to gracefully recover from syntax errors, the last-read token will be too far ahead. (See the sketch after this list.)
2. %union is a struct. In Yacc, this directive actually produces a C union, which means it doesn’t take up more space just because you add more types. In Goyacc, it produces a struct (and indeed you can use %struct instead). This means every type you add wastes space in every entry of the parser’s value stack. For small parsers in short-lived processes, that might be acceptable. For serving infrastructure, with many parses going on simultaneously, it’s not great. At the same time, Goyacc helps keep track of type conflicts if you make a mistake, so using a single type and multiplexing on top of it would feel iffy. In the end, this was a research project, and ending up with 20 types was fine. (In cases where I needed slices for something, I reused them for the singleton productions as well, to reduce storage overhead.)
3. There is no %param or prefix code. In Bison, you can pass an additional parameter to the parsing function, and you can add a snippet of code that runs at the top of it. Neither exists in Goyacc, which makes building re-entrant (thread-safe) parsers more difficult. It’s also difficult to get the resulting AST out of the parsing function! The only solution seems to be to downcast (type-assert) the lexer, which is the only parameter to the function; the sketch after this list shows the pattern. This is safe, but with the lack of prefix code, it might not be very performant. In the PGQL code, the downcast was only needed to store the resulting AST, which happens only once.

There is a StackOverflow question about this. Both iant@ and rsc@ of Go have said no.
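
Here is a minimal sketch of how those workarounds fit together in Go. The names are hypothetical (not the ones in pgql-go), and yySymType stands in for the struct that goyacc generates from %union (assuming the default yy prefix):

package parser

import "fmt"

// Pos is the input position the lexer attaches to every token, since
// Goyacc has no out-of-band channel for position information.
type Pos struct {
	Line, Col int
}

// yySymType is normally generated by goyacc from the %union declaration;
// this stub only illustrates that every member has to carry a Pos.
type yySymType struct {
	pos Pos
	str string
}

// lexer implements the yyLexer interface that the generated parser expects.
type lexer struct {
	src     []byte
	pos     Pos
	lastPos Pos         // position of the most recently returned token
	result  interface{} // the AST; yyParse itself only returns an int
	err     error
}

// Lex scans the next token into lval and records its position.
func (l *lexer) Lex(lval *yySymType) int {
	// ... scan l.src, fill in lval, advance l.pos ...
	lval.pos = l.pos
	l.lastPos = l.pos
	return 0 // EOF in this stub
}

// Error is called by the generated parser. The best position available is
// that of the last token read, which can be too far ahead if the grammar
// recovers via the error non-terminal.
func (l *lexer) Error(s string) {
	l.err = fmt.Errorf("%d:%d: %s", l.lastPos.Line, l.lastPos.Col, s)
}

In the grammar’s top-level action, the lexer is the only thing in scope to hang the result on, so the downcast happens there:

Query: SelectStatement { yylex.(*lexer).result = $1 }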

In summary, I’d start with ANTLR next time. Writing the lexer for PGQL was pretty easy, but the Goyacc issues, combined with the specification being written in EBNF and ANTLR handling lexing for you… it all adds up to Goyacc being too simplistic.

    Thoughts on PGQL

I followed the 1.5 specification. The documentation is both dense and vague. The EBNF grammar is embedded in it, but there are no details on, e.g., minimum required integer/decimal sizes. There are plenty of examples, which I extracted and used as compliance tests. The grammar is ambiguous in many places, which makes me wonder if it’s actually used anywhere. It would be more useful if the spec grammar itself were ready for AST generation. See e.g.

    PathSpecification: LabelPredicate | PathPredicate
    
    PathPredicate: ':' Label GraphPatternQuantifier?
    
    LabelPredicate: ColonOrIsKeyword Label ( '|' Label )*
    ColonOrIsKeyword: ':' | 'IS'
    

Because GraphPatternQuantifier is optional, PathPredicate is ambiguous with LabelPredicate. Once you read more, you notice that in PathPredicate, the intent is that Label is actually a reference to a path macro, not a label. All in all, there were over 30 reduce/reduce conflicts to resolve. The more than 50 shift/reduce conflicts were mostly resolved by implementing the specified operator precedence rules. I added comments in the grammar wherever I had to resolve a conflict.
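
To make the ambiguity concrete, consider a reachability pattern like this (a constructed fragment based on my reading of the grammar, not one of the spec’s examples):

(n) -/:knows/-> (m)

Here :knows reduces as either a LabelPredicate with a single label, or a PathPredicate without a quantifier referencing a path macro named knows; the token stream alone cannot tell them apart.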

The specification itself is buggy in some places.

The grammar also does not support multiple statements. Statements don’t even end in semicolons. This is strange to me, since e.g. bulk-loading a database would require reading multiple statements from the same stream.
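
A top-level production along these lines, which is not part of PGQL 1.5 but sketches what I would have expected, would make reading from a stream straightforward:

StatementList: Statement ';' | StatementList Statement ';'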

    The Language

    I think the language itself is fine. Not obviously right, not obviously wrong. It is complex, however. I feel that -/.../-> and ALL () ->+ () and -[...]-> should be unified somehow. Yes, they are used to describe different queries, but I’d like to see a hierarchy of syntax, rather than three completely separate branches.

The CREATE PROPERTY GRAPH statement is a way to make a relational database look like a graph database. I’m not sure why we need to involve labels as an abstraction over SQL table names. It feels like something an API gateway like Apollo would need to tie different backends/databases together, not something the backends themselves need. But I could opt to just make that implicit, so it doesn’t matter.

For my purposes, I’d like a language that is more focused on object property accesses, and not so much on edges and vertices. The whole MATCH thing feels needlessly strange. I just want to navigate from person.location to a Location, and from there via location.occupants to fan out to many Persons.
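
Something like this hypothetical syntax, which is not PGQL and exists only in my head, is what I have in mind:

person.location            /* navigate to one Location */
person.location.occupants  /* fan out to many Persons */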

    That said, hopefully someone finds a use for a PGQL 1.5 parser in Go.