
Fast Concordance: Instant concordance on a corpus of >1,200 books

49 points | 5 days | iafisher.com

simonw 15 hours ago

This is a neat brute-force search system - it uses goroutines, one for each of the 1,200 books in the corpus, and has each one do a regex search against the in-memory text for that book.
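
In outline, that fan-out looks something like the sketch below (my names and structure, assuming the corpus is already in memory as a []string; this is not the project's actual code):

    package main

    import (
        "fmt"
        "regexp"
        "sync"
    )

    // searchCorpus scans every document concurrently, one goroutine per
    // document, and collects the matched substrings.
    func searchCorpus(corpus []string, pattern *regexp.Regexp) []string {
        var (
            mu      sync.Mutex
            wg      sync.WaitGroup
            results []string
        )
        for _, text := range corpus {
            wg.Add(1)
            go func(text string) {
                defer wg.Done()
                for _, loc := range pattern.FindAllStringIndex(text, -1) {
                    mu.Lock()
                    results = append(results, text[loc[0]:loc[1]])
                    mu.Unlock()
                }
            }(text)
        }
        wg.Wait()
        return results
    }

    func main() {
        corpus := []string{"call me ishmael", "it was the best of times"}
        fmt.Println(searchCorpus(corpus, regexp.MustCompile(regexp.QuoteMeta("the"))))
    }

The real code additionally trims each match to a CONTEXT_LENGTH window on either side, which is what the snippet below is doing.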

Here's a neat trick I picked up from the source code:

    indices := fdr.rgx.FindAllStringSubmatchIndex(text, -1)

    for _, pair := range indices {
        start := pair[0]
        end := pair[1]
        leftStart := max(0, start-CONTEXT_LENGTH)
        rightEnd := min(end+CONTEXT_LENGTH, len(text))

        // TODO: this doesn't work with Unicode
        if start > 0 && isLetter(text[start-1]) {
            continue
        }

        if end < len(text) && isLetter(text[end]) {
            continue
        }

An earlier comment explains this:

    // The '\b' word boundary regex pattern is very slow. So we don't use it here and
    // instead filter for word boundaries inside `findConcordance`.
    // TODO: case-insensitive matching - (?i) flag (but it's slow)
    pattern := regexp.QuoteMeta(keyword)

So instead of `\bWORD\b` it does the simplest possible match, then checks whether the character one index before the match, or one index after it, is also a letter. If either is, it skips the match.
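
Here's the same trick as a self-contained sketch (ASCII-only, like the original; the helper names are mine):

    package main

    import (
        "fmt"
        "regexp"
    )

    // isLetter is an ASCII-only check, mirroring the original's limitation.
    func isLetter(b byte) bool {
        return (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z')
    }

    // findWholeWord returns the start index of each occurrence of keyword in
    // text that is not embedded inside a longer word.
    func findWholeWord(text, keyword string) []int {
        rgx := regexp.MustCompile(regexp.QuoteMeta(keyword)) // plain literal, no \b
        var out []int
        for _, pair := range rgx.FindAllStringIndex(text, -1) {
            start, end := pair[0], pair[1]
            if start > 0 && isLetter(text[start-1]) {
                continue // a letter right before the match: not a word start
            }
            if end < len(text) && isLetter(text[end]) {
                continue // a letter right after the match: not a word end
            }
            out = append(out, start)
        }
        return out
    }

    func main() {
        // Prints [0 13]; the match inside "theme" is filtered out.
        fmt.Println(findWholeWord("the theme of the play", "the"))
    }

Presumably the win is that a plain literal lets Go's regexp engine fall back to a fast substring scan, while \b forces extra work at every candidate position.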

never_inline 5 hours ago

Spinning 1K goroutines per request doesn't feel right to me for some reason.

Isn't trigram search supposed to be better?

https://swtch.com/~rsc/regexp/regexp4.html
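
For anyone skimming, the core of the trigram approach in that article is a map from every 3-byte substring to the documents containing it; intersecting the posting lists for the query's trigrams yields the only documents worth scanning. A rough sketch of just that filtering step (my names, not Russ Cox's or the project's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // trigramIndex maps each 3-byte substring to the IDs of the documents
    // that contain it.
    type trigramIndex map[string][]int

    func buildIndex(docs []string) trigramIndex {
        idx := trigramIndex{}
        for id, doc := range docs {
            seen := map[string]bool{}
            for i := 0; i+3 <= len(doc); i++ {
                t := doc[i : i+3]
                if !seen[t] {
                    seen[t] = true
                    idx[t] = append(idx[t], id)
                }
            }
        }
        return idx
    }

    // candidates intersects the posting lists for every trigram of the query;
    // only the surviving doc IDs need the real regex or substring scan.
    func candidates(idx trigramIndex, query string, numDocs int) map[int]bool {
        cand := map[int]bool{}
        if len(query) < 3 {
            for id := 0; id < numDocs; id++ {
                cand[id] = true // query too short to filter: scan everything
            }
            return cand
        }
        for _, id := range idx[query[:3]] {
            cand[id] = true
        }
        for i := 1; i+3 <= len(query); i++ {
            next := map[int]bool{}
            for _, id := range idx[query[i:i+3]] {
                if cand[id] {
                    next[id] = true
                }
            }
            cand = next
        }
        return cand
    }

    func main() {
        docs := []string{"call me ishmael", "it was the best of times"}
        idx := buildIndex(docs)
        for id := range candidates(idx, "ishmael", len(docs)) {
            fmt.Println(id, strings.Contains(docs[id], "ishmael"))
        }
    }

With an index like that, each query would only have to scan the surviving candidates instead of all ~1,200 books, at the cost of building and holding the index.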

2b3a51 16 hours ago

It is, indeed, impressively fast. The results seem to be sorted by the author's first name. Is that a deliberate choice?

est 6 hours ago

It's very fast, and the way the results align on the keyword looks super cool.

drivebyhooting 13 hours ago

It seems to work at the word level.

Why not use a precomputed posting list?
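
A word-level inverted index built once at start-up would turn a query into a map lookup; roughly (a sketch with a naive ASCII tokenizer, not the project's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // posting records where a word occurs: which document and at what byte
    // offset.
    type posting struct{ doc, offset int }

    // buildPostings builds the word -> postings map once, so queries don't
    // have to scan the corpus at all.
    func buildPostings(docs []string) map[string][]posting {
        idx := map[string][]posting{}
        for id, doc := range docs {
            offset := 0
            for _, field := range strings.Fields(doc) {
                pos := strings.Index(doc[offset:], field) + offset
                word := strings.ToLower(strings.Trim(field, ".,;:!?\"'"))
                idx[word] = append(idx[word], posting{id, pos})
                offset = pos + len(field)
            }
        }
        return idx
    }

    func main() {
        docs := []string{"Call me Ishmael.", "It was the best of times"}
        idx := buildPostings(docs)
        fmt.Println(idx["ishmael"]) // [{0 8}]
    }

The trade-off is that this only answers whole-word queries as tokenized, while the brute-force regex scan handles arbitrary substrings and phrases for free.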

mrkeen 6 hours ago

Yeah, I can't figure out whether this is something the author stands by or just a project to mess around with goroutines. And it's unfair to criticise it if it isn't meant to be good.

> The server reads all the documents into memory at start-up. The corpus occupies about 600 MB, so this is reasonable, though it pushes the limits of what a cloud server with 1 GB of RAM can handle. With 2 GB, it's no problem.

1,200 books pushing the limits of a 1 GB server? Whole-internet search engines are older than 1 GB servers.

> queries that take 2,000 milliseconds from disk can be done in 800 milliseconds from memory. That's still too slow, though, which is why fast-concordance uses [lots of threads]

No query should ever take either of those amounts of time. And the "optimisation" is just to use more threads, which other consumers could have used to run their own searches but now can't.

https://www.pingdom.com/blog/original-google-setup-at-stanfo...